Why Should You Pamper Your ML With The Hardware It Needs?
Why do we need to purchase new cooking utensils when we acquire a new oven or microwave? Simply because the older 'hardware' no longer serves our purpose, in this case a delicious dish, as the newer 'machine' cannot work with it. The same analogy applies to the question of whether Machine Learning really needs its own specific hardware.
Learning About Machine Learning:
Machine Learning applies the capacity of Artificial Intelligence to machines, giving them the life-like ability to learn and improve from experience. ML thus focuses on developing intelligent computing systems that can access and assess data, and use what they learn to enhance their own capabilities.
The Machine Learning program determines how data is accessed and how patterns in it are identified, so that better decisions can be made automatically, without human intervention.
ML programs can be classified by how their algorithms 'learn', and are broadly categorized as supervised, unsupervised, semi-supervised and reinforcement algorithms (more on this in another article); a minimal supervised example is sketched below.
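To make 'learning from experience' concrete, here is a minimal supervised-learning sketch. The choice of scikit-learn, the decision-tree model and the Iris dataset are illustrative assumptions on my part, not anything prescribed above:

```python
# A minimal supervised-learning sketch: the model "learns" a pattern
# from labelled examples, then predicts on data it has never seen.
# Assumes scikit-learn is installed (pip install scikit-learn).
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)  # features and labels
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

model = DecisionTreeClassifier().fit(X_train, y_train)  # learn from experience
print("accuracy on unseen data:", model.score(X_test, y_test))
```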
So, Why Specific Hardware For Machine Learning?
Since Intel developed the first commercial processor chip in 1971, the CPU has been the brain behind the processing of instructions submitted by other hardware and software. Processors evolved to perform a few very specific but highly complex tasks, as they were originally designed with only one core (one CPU) executing one operation at a time. IBM's first dual-core processor, released in 2001, could focus on two tasks at once. Though some modern server processors have dozens of cores, most computers have only a few.
Machine Learning systems derive their power from performing enormous numbers of simple computations at once. This overwhelms the CPU, which excels at a few large, complex tasks rather than millions of tiny ones that never let up. Ask crypto-currency miners how difficult it is to mine using modern-day CPUs. Another factor is that ML algorithms churn through tiny pieces of data, and tons of them at that, which simply drains the CPU.
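To see the scale of those "simple, tiny" computations, note that a single dense layer of a neural network is just a matrix multiplication, i.e. a flood of multiply-add operations. A small sketch, with sizes that are illustrative assumptions:

```python
# Why ML workloads overwhelm a CPU: one dense layer is one matrix
# multiplication, i.e. hundreds of millions of tiny multiply-adds.
# The sizes below are illustrative assumptions, not from the article.
import numpy as np

batch, n_in, n_out = 256, 1024, 1024
x = np.random.rand(batch, n_in).astype(np.float32)   # input activations
w = np.random.rand(n_in, n_out).astype(np.float32)   # layer weights

y = x @ w  # one forward pass through one layer

# Each output element needs n_in multiplies and n_in adds:
ops = 2 * batch * n_in * n_out
print(f"{ops:,} floating-point operations for a single layer")  # ~537 million
```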
GPUs And Machine Learning:
The solution lies in deploying GPUs (Graphics Processing Units) to run ML algorithms. GPUs have been fit for this role since Nvidia shipped software (CUDA) that opened its chips up to general-purpose computation. They are a standard ingredient of any computing recipe thanks to their parallel architecture, which excels at executing large sets of simple instructions simultaneously. Beyond their original video-game applications, GPUs have become the go-to processor for crypto-miners, and they serve Machine Learning requirements as well.
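A hedged sketch of what this parallelism buys in practice, assuming PyTorch is installed and a CUDA-capable Nvidia GPU is present (both are assumptions for illustration, not stated above):

```python
# Comparing the same matrix multiplication on CPU and GPU with PyTorch.
# Matrix sizes are illustrative assumptions.
import time
import torch

a = torch.rand(4096, 4096)
b = torch.rand(4096, 4096)

t0 = time.perf_counter()
_ = a @ b                                  # runs on the CPU
print("CPU matmul:", time.perf_counter() - t0, "s")

if torch.cuda.is_available():
    a_gpu, b_gpu = a.cuda(), b.cuda()      # copy data to GPU memory
    torch.cuda.synchronize()               # wait for the copies to finish
    t0 = time.perf_counter()
    _ = a_gpu @ b_gpu                      # thousands of cores work in parallel
    torch.cuda.synchronize()               # wait for the kernel to finish
    print("GPU matmul:", time.perf_counter() - t0, "s")
```

The GPU typically finishes far sooner because each of its many simple cores computes only a small slice of the result, which is exactly the shape of most ML workloads.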
ASICs: The Unicorn
This is the specialist, designed to execute only one particular, highly defined task, forever. Application-Specific Integrated Circuits (ASICs) are processors designed to execute a single application, e.g., a traffic light measuring the frequency of vehicles at different times of day. Crypto-miners mine through the sheer brute force of ASICs at carrying out a task as simple as guessing a number, a zillion times over; a toy sketch of that guessing game follows below. Gate-array and full-custom are popular ASIC designs.
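For intuition, here is that "guess a number" task boiled down to a toy Python version of proof-of-work hashing. The difficulty value and block data are illustrative assumptions; a real mining ASIC does this in silicon, billions of times per second:

```python
# Toy proof-of-work: brute-force a nonce until the SHA-256 digest
# starts with enough zeros. Difficulty here is an illustrative assumption.
import hashlib

def mine(block_data: str, difficulty: int = 5) -> int:
    """Guess nonces until the hash has `difficulty` leading hex zeros."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1  # a simple task, repeated over and over again

print("winning nonce:", mine("example block"))
```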
Tensor Processing Units (TPUs): Google’s Accelerator For Machine Learning Applications
TPUs are custom ASICs designed and developed by Google for its TensorFlow framework. TPUs accelerate linear-algebra computation, something ML thrives on. Google's TPUs are installed in its data centers and are available for rent over the cloud.
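A hedged sketch of what renting those TPUs looks like from TensorFlow, assuming a Cloud TPU VM or a Colab TPU runtime; the empty resolver argument and the tiny Keras model are illustrative assumptions:

```python
# Connecting TensorFlow to a Cloud TPU and placing a model on it.
import tensorflow as tf

# Empty string works on Cloud TPU VMs; elsewhere pass the TPU's name/address.
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu="")
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)

strategy = tf.distribute.TPUStrategy(resolver)  # replicates work across TPU cores
with strategy.scope():                          # variables are created on the TPU
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(10),
    ])
    model.compile(
        optimizer="adam",
        loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    )
```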
So, how does one decide whether a GPU will suffice or a TPU is needed? Some ready pointers are given below, though they are simplistic at best.
| When to use a GPU? | When to use a TPU? |
| --- | --- |
| TensorFlow is not used for modelling | No custom TensorFlow operations in the main training loop |
| TensorFlow models with operations not available on Cloud TPU | Models dominated by matrix computation |
| Source code does not exist or cannot be changed | Training runs lasting weeks or months |
| Medium to large models | Large to very large models |
Machine Learning combined with neural networks challenges existing computing technologies because of how it handles data and derives meaning from it, so existing hardware cannot maximize the potential ML can attain. We have explored only the processing requirements here; there are many more hardware-specific considerations that Machine Learning asks of us, and I shall dwell upon them in upcoming articles.