
Could the Global Chip Shortage Stunt the Growth of A.I. Technology?

Introduction

The origins of artificial intelligence can be traced back to the 1950s and Alan Turing’s exploration of its mathematical possibility. Turing discussed how intelligent machines could be built and how their intelligence could be tested, but he was writing at a time when technology could not meet AI’s basic requirements. Computers of the era could execute commands but not store them, meaning they could not remember what they had done, a prerequisite for artificial intelligence. Computing was also very pricey, with computer rental costs running up to $200,000 per month! Only big universities and technology institutions could explore this new area of computing, and funding was hard to come by, with a proof of concept usually required before any money was released.

Allen Newell, Cliff Shaw, and Herbert Simon kick-started the practical pursuit of AI with the Logic Theorist, a program created to mimic human problem-solving skills. The program set off an important conversation on artificial intelligence that concluded with the sentiment that AI was achievable. AI research then went through a series of ups and downs, but commendable progress was made in the late 70s and significant advances followed from the 80s onward, especially in the 90s. Computers finally had the storage and processing capabilities whose earlier absence had held back AI research: algorithms became more sophisticated, storage grew adequate, and processing speeds reached a level that could meet AI’s requirements. Today, we live in the age of big data, able to collect and process amounts of information that would be impractical for a human being to handle. The utility AI affords has seen it applied in transport and automobiles, entertainment, marketing, banking, technology, sports, and fitness, among other fields. AI has nearly limitless applications, and the future looks bright as ever more capable algorithms are developed. It has reached the point where AI is seen as the next big step in human development. Given AI’s potential to drastically change our experience of life, it is important to consider what the global shortage of computer chips could mean for the field.

The effects of the global chip shortage can be seen all over, from high consumer-device prices to empty new-car lots. AI algorithms are usually processor-hungry, which makes the industry especially exposed to the shortage. The seeds of the present supply bottleneck were sown early; the COVID-19 pandemic only brought the problem to light. A bottleneck here means difficulty supplying a commodity even when there is considerable demand for it. Mark Lapedus of Semiconductor Engineering predicted an imbalance between the demand for and supply of chip fabrication equipment for 200mm wafers, which are often used to manufacture older chips; newer chips are made on 300mm gear and wafers. Many specialists point to the surge in demand for IoT devices as the primary cause of the shortage. IoT devices use older chips that can be made from 200mm wafers, and the shift toward them led to the underproduction of 300mm ones. This surge was closely followed by the COVID-19 pandemic, which brought an explosion in demand for chips and a simultaneous shrinking of the workforce. Stay-at-home orders disrupted chip supply chains while increasing the demand for technology, escalating the unfolding shortage.

The pandemic also exposed the deeper cause of the shortage: the concentration of manufacturing, over the last few decades, in only a few factories in Korea and Taiwan. Chip manufacturing is expensive and requires specialized equipment, so only the largest companies can do it. As of 2020, TSMC, Samsung, and Intel were the leading manufacturers, generating almost as much revenue as all the other companies combined. Chip demand continues to outpace supply. For instance, the turnaround time between chip order placement and delivery typically ran a little over three months in 2020 but now exceeds five months, the longest lead time experts say they have witnessed in recent memory.

AI workloads are slowly beginning to feel the negative effects of the global shortage. The situation is especially conspicuous in cloud computing, where users complain about how long their jobs take to ramp up. Many AI models, such as those for NLP and computer vision, need powerful GPUs to function properly, and the effort poured into producing 200mm-wafer chips is undermining the supply of the 300mm-wafer chips, such as GPUs, that these models typically require. The most visibly impacted AI-related industry is the self-driving automobile sector, whose makers have found themselves in a scramble for chips. Self-driving cars run multiple AI models simultaneously, so they need powerful processors to perform optimally. The sector has seen tremendous growth over the last decade, with Tesla, a company specialized in this area, ranking as the most valuable car manufacturer in the world. The shortage of powerful chips means progress slows while companies wait for their orders to be fulfilled, dragging out the self-driving car revolution and potentially delaying the arrival of truly autonomous vehicles.

AI companies have been forced to turn their attention to the other factors needed to produce successful AI solutions. The chip shortage has pushed them to focus on the computational efficiency of their models. Companies today are concentrating on producing AI-specific solutions that integrate easily into user workflows, delivering easy-to-use products that require minimal expertise on the end-user’s part. There is opportunity in this chip crisis, and companies are pursuing novel responses such as developing software workarounds and building their own chip fabs. Herein lies the solution to preventing future supply shortages.
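To make the efficiency point concrete, here is a minimal sketch of one widely used technique, post-training dynamic quantization, using PyTorch’s built-in quantize_dynamic API. The tiny model and layer sizes are purely illustrative assumptions; nothing here is attributed to any company mentioned in this article.

```python
import torch
import torch.nn as nn

# Stand-in network; the technique applies the same way to far larger models.
model = nn.Sequential(
    nn.Linear(512, 256),
    nn.ReLU(),
    nn.Linear(256, 10),
)

# Dynamic quantization stores Linear weights as 8-bit integers instead of
# 32-bit floats, cutting weight memory roughly 4x and speeding up CPU
# inference -- one way to serve a model on scarcer, less powerful hardware.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 512)
print(quantized(x).shape)  # torch.Size([1, 10])
```

Techniques like this trade a small amount of accuracy for a model that runs acceptably on cheaper, more readily available processors, which is exactly the kind of rethinking the shortage is encouraging.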

Conclusion

The global chip shortage was always bound to happen; the COVID-19 pandemic only hastened its arrival. The stage was set by the concentration of manufacturing plants in one region of the world, growing worker shortages, and a surge in demand from IoT devices. The pandemic worsened the situation by further shrinking the workforce while demand for electronics exploded, and the combination produced a drastic imbalance between supply and demand that now forces companies to wait long periods before their orders are fulfilled. But there is a silver lining in all this. The shortage has forced companies to focus on the other factors that go into producing successful AI solutions, and AI stakeholders must now rethink their designs and approaches if they are to keep delivering at the same level. This is also key to preventing a future shortage: companies must develop new AI algorithms that need less processing power and speed to function properly, and build their own chip fabs to ensure in-house manufacturing. Implementing these suggestions requires heavy investment, but that does not seem to be a deterrent, as AI companies are already putting billions of dollars toward them. The global chip shortage has slowed down many industries, most notably AI, which depends on powerful and sophisticated chips, but perhaps this is the first step toward a future of more efficient solutions.

