The artificial intelligence (AI) market is booming, leading to a massive rise in the demand for customized chips.
However, supply is falling far short of demand.
Custom chips, long found in devices such as smartphones and computers, are now needed for a widening range of applications, from face-recognition hardware to AI to the Internet of Things (IoT).
Chips can broadly be divided into two categories: general-purpose and custom. General-purpose chips, such as those manufactured by Intel and AMD, cater to many use cases, from image processing to multi-threaded workloads.
Synopsys puts it like this in a blog post:
“AI workloads are massive, demanding a significant amount of bandwidth and processing power. As a result, AI chips require a unique architecture consisting of the optimal processors, memory arrays, security and real-time data connectivity.
Traditional CPUs typically lack the processing performance needed, but are ideal for performing sequential tasks. GPUs, on the other hand, can handle the massive parallelism of AI’s multiply-accumulate functions and can be applied to AI applications. GPUs can serve as AI accelerators, enhancing performance for neural networks and similar workloads”.
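For readers unfamiliar with the term, the "multiply-accumulate functions" mentioned above are the basic arithmetic of neural networks. The short Python sketch below (an illustration, not drawn from the Synopsys post) shows how a single dense layer reduces to many independent multiply-accumulate sums, which is exactly the kind of work a GPU can run in parallel while a CPU grinds through it sequentially.

```python
# Illustrative sketch only: a single dense neural-network layer expressed as
# multiply-accumulate (MAC) operations. Every output value is an independent
# sum of products, so the work parallelizes naturally on GPU-style hardware.
import numpy as np

def dense_layer(inputs, weights, bias):
    # Output neuron j computes: sum_i(inputs[i] * weights[i, j]) + bias[j].
    # Each of these sums is independent of the others, so they can all be
    # computed at the same time rather than one after another.
    return inputs @ weights + bias

# Example: 512 inputs feeding 256 output neurons -> 512 * 256 = 131,072 MACs
rng = np.random.default_rng(0)
x = rng.standard_normal(512)
w = rng.standard_normal((512, 256))
b = rng.standard_normal(256)
y = dense_layer(x, w, b)
print(y.shape)  # (256,)
```

Real models chain thousands of such layers over far larger matrices, which is why the quote above describes AI workloads as demanding so much bandwidth and parallel processing power.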
The rise of generative AI has been one of the main factors behind the increased demand for custom chips. Generative AI tools, which have exploded in the last year, can generate custom content in the form of text, images, video, or other media in response to prompts.
Organizations like Amazon, Microsoft, and Google realize that custom chips are critical for generative AI and have been focusing on developing them in-house, especially since the dominant player, NVIDIA, has already sold out until 2024.
While established companies like Amazon join the fray with chips such as Inferentia, many startups are racing to develop chips of their own.
For example, D-Matrix is a startup that raised $110 million to develop an inference-computing platform. According to Sasha Ostojic, a partner at Playground Global, which backs D-Matrix, "D-Matrix is the company that will make generative AI commercially viable."
According to Jensen Huang, founder and CEO of NVIDIA, “A trillion dollars of installed global data center infrastructure will transition from general purpose to accelerated computing as companies race to apply generative AI into every product, service and business process. We are significantly increasing our supply to meet surging demand for them.”
Given the situation, big companies have been investing in in-house chip development, and many startups have been raising capital to develop inference-computing platforms and chips to cater to the market.
The Bottom Line
The race for custom chips could split the technology world, at least for a time, into two camps: those with a supply of custom chips or the ability to develop them in-house, and those without.
Concentrating generative AI in the hands of a few companies could amount to a cartelization of the technology.
Governments worldwide, especially the major powers, must remain wary of generative AI development being concentrated among so few players.
Until then, let the chips fall where they may.