“People worry that computers will get too smart and take over the world, but the real problem is that they’re too stupid and they’ve already taken over the world.”
― Pedro Domingos, “The Master Algorithm”
Many column inches in business and news media have been devoted to artificial intelligence in 2023, driven by the buzzy introduction of OpenAI’s ChatGPT (version 3.5) and Microsoft’s splashy announcements of its investment in OpenAI and of ChatGPT’s integration into Microsoft products. Follow-up, me-too announcements from tech rivals like Alphabet, Baidu, Meta, and Alibaba have added to the hype.
Artificial intelligence is not a new concept. The idea of machines that can reason, think, and behave in a way that approximates or even precisely mimics human behavior appears in fiction from as far back as the 1880s. The phrase “artificial intelligence” (and the field of research to which it refers) goes back to a small conference of mathematicians and scientists held in the summer of 1956 at Dartmouth College.
By the 1960s, researchers had developed programs that could solve basic problems in geometry and elementary algebra, or find a path out of a maze. In 1964, Prof. Joseph Weizenbaum developed a natural-language program called ELIZA that could arguably be considered the first chatbot. ELIZA could sustain a conversation that mimicked a psychotherapy session, using pattern recognition to generate human-like responses to a user’s inputs, sometimes successfully passing the Turing test.
This pattern recognition remains the foundation of the technology used by chatbots like ChatGPT and Bard, albeit on data sets larger by many orders of magnitude, with far greater computing power, and using algorithms many generations more evolved.
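For a sense of how simple ELIZA’s mechanism was compared with today’s models, here is a minimal sketch of keyword-and-template pattern matching in its style. The rules below are illustrative inventions, not Weizenbaum’s actual DOCTOR script:

```python
# A minimal sketch of ELIZA-style pattern matching: find a keyword
# pattern in the user's input, then echo part of it back inside a
# canned, therapist-like template. These rules are illustrative
# inventions, not Weizenbaum's actual DOCTOR script.
import re

RULES = [
    (re.compile(r"\bi am (.+)", re.IGNORECASE), "How long have you been {0}?"),
    (re.compile(r"\bi feel (.+)", re.IGNORECASE), "Why do you feel {0}?"),
    (re.compile(r"\bmy (\w+)", re.IGNORECASE), "Tell me more about your {0}."),
]

def respond(user_input: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(user_input)
        if match:
            return template.format(*match.groups())
    return "Please go on."  # stock reply when nothing matches

print(respond("I am worried about my exams"))
# -> How long have you been worried about my exams?
# (Note the unswapped pronoun: the real ELIZA also substituted "my" -> "your".)
```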
AI research stagnated in the 1970s and 1980s. Scientists had underestimated the intellectual challenges involved in advancing their field, and due to the lack of adequate computing power and digitized information, it quickly became clear that useful, commercially viable artificial-intelligence applications were not coming anytime soon. Funding for AI research slowed to a trickle, and the field mostly rode out the next few decades in academia, entering public consciousness only through science fiction.
But in recent years, parallel advances in computer science and computing power, along with the exponentially increasing volumes of digitized data available, have enabled AI and machine-learning research efforts to accelerate. Those advances have come just in time.
AI is set to take on increasing importance in the coming decades as much of the world, including the United States, western Europe, and large swaths of Asia, confronts the reality of an aging population. As the Boomer generation ages into retirement, years of declining birth rates have left too few younger workers to replace them. This tightening labor supply is one of the causes of the global inflation that emerged in 2021. And no economic sleight of hand, whether by the Federal Reserve or any other entity, will change that.
As Fundstrat’s Head of Research Tom Lee said: “There’s a shortage of labor, globally, and everyone thinks the solution to fixing the labor problem is to cause a recession. But it doesn’t solve anything […] As soon as the economy starts growing again, you’re going to run out of people again. You can’t just keep jamming a recession to fix a labor shortage. The only way to structurally fix a labor shortage is to replace human endeavors with machine units.”
In other words, the long-term incentives to advance AI applications are here to stay. That’s why the rise of AI and automation is one of the major thematic/strategic components Lee cites most frequently in his research.
State of the art
“The advance of technology is based on making it fit in so that you don’t really even notice it, because it’s part of everyday life.”
― Bill Gates
Today, artificial intelligence applications fall into two categories: predictive artificial intelligence and generative artificial intelligence. As the name suggests, predictive artificial intelligence refers to the use of algorithms to identify patterns that can then be used to forecast trends and outcomes. Early AI research often revolved around predictive approaches: supercomputers such as IBM’s Deep Blue defeated chess grandmasters through brute-force predictive analyses of the potential outcomes of every allowable move. (Deep Blue could analyze approximately 200 million positions per second, unfortunately for the great Garry Kasparov.)
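To make the brute-force idea concrete, here is a minimal sketch of the kind of game-tree search an engine like Deep Blue performs. Chess is far too large to include here, so the sketch uses a toy take-away game; it illustrates the technique, not Deep Blue’s vastly more sophisticated machinery:

```python
# A minimal minimax search -- the brute-force idea behind engines like
# Deep Blue, which explored game trees at enormous depth and speed.
# Toy game: players alternately remove 1-3 stones from a pile; whoever
# takes the last stone wins.

def minimax(stones, maximizing):
    """Score a position by exhaustively searching every line of play."""
    if stones == 0:
        # The previous player took the last stone, so the side to move lost.
        return -1 if maximizing else 1
    moves = [m for m in (1, 2, 3) if m <= stones]
    scores = [minimax(stones - m, not maximizing) for m in moves]
    return max(scores) if maximizing else min(scores)

def best_move(stones):
    """Pick the move whose subtree guarantees the best outcome."""
    moves = [m for m in (1, 2, 3) if m <= stones]
    return max(moves, key=lambda m: minimax(stones - m, False))

print(best_move(10))  # prints 2: leaving 8 stones is a lost position for the opponent
```

Deep Blue’s feat was doing this kind of exhaustive look-ahead, heavily pruned and guided by a hand-tuned evaluation function, across chess’s astronomically larger game tree.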
Since Deep Blue’s historic 1997 victory over Kasparov, predictive AI has transcended the game of chess to seep into our lives, powering biometric (speech, facial, etc.) recognition, autocorrect, smart homes, and navigation. Diverse industrial sectors have leveraged its power to enhance automobile manufacturing, supply-chain management, pharmaceutical research, fraud protection, robotic surgery, and more.
But the latest buzz has been driven by advances in generative AI. It was one thing to use predictive AI to recognize an image of a fire hydrant, and quite another to ask a machine to generate a Van Gogh-inspired image of a fire hydrant in the middle of a pumpkin patch. But that is exactly what OpenAI made possible in January 2021 when it unveiled DALL-E.
Not content to only help those with no oil-painting skills whatsoever in their minor creative endeavors, OpenAI followed that up with ChatGPT, a chatbot that took the abilities of its technological ancestor, ELIZA, to a new level―generating natural-sounding text, copy, and conversational interactions about any topic on the Internet and in any style or format. ChatGPT could write computer code, fabricate essays for unmotivated high-school students, and craft amusing greeting-card verses, all to spec.
The buzz around this chatbot sparked a flurry of announcements. Soon Microsoft (MSFT) announced an expanded partnership with OpenAI, Google (GOOGL) revealed its work on a rival chatbot, Bard, and Chinese tech giants Alibaba and Baidu made similar disclosures.
Although many individuals and companies have found chatbots helpful, even OpenAI’s co-founder and CEO, Sam Altman, is blunt about their current limitations. “It’s a mistake to be relying on ChatGPT for anything important right now. It’s a preview of progress; we have lots of work to do on robustness and truthfulness,” he warned.
What’s needed to advance and expand the reach of AI technology: hardware
Inadequate hardware poses one of the biggest challenges to better, more powerful, and more useful AI. ChatGPT and others like it are enormously resource-intensive, requiring huge cutting-edge server farms to train them and run queries en masse.
ChatGPT 3.5 was trained on approximately 570 GB of data (an array of data sources selected from the internet through 4Q2021), a process that consumed an estimated 1,287 megawatt-hours of power. (For reference, one megawatt-hour can power 100 homes for a day, so that is roughly a day’s electricity for 129,000 homes.)
ChatGPT 3.5 runs on 175 billion parameters, a significant scale considering the limited selection of data on which the chatbot was trained. Even so, serving this somewhat limited model to millions of users around the world consumes enormous processing power and energy. Tom Goldstein, a computer-science professor at the University of Maryland, has conservatively estimated that serving one million ChatGPT users costs $100,000 a day, or roughly ten cents per user.
NVIDIA (NVDA) is often cited as one of the leading companies working to solve this problem. The company started as a designer of graphics processing units (GPUs)―chips specifically designed to process images and video, generally using an optimized parallel structure that can carry out large numbers of computations simultaneously. This structure turned out to be ideal for the types of computations required for artificial intelligence, and NVIDIA now dominates the race to develop the increasingly powerful, energy-efficient processors needed to drive AI advances. AMD (AMD) and Intel (INTC) are pouring resources into their catch-up efforts. But as chips bump against the physical limits of Moore’s Law at an atomic level, an extra edge will be needed: faster, more power-efficient memory chips.
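To see why that parallel structure matters, and why memory speed becomes the next constraint, consider a toy comparison. This is a minimal NumPy sketch in which an optimized CPU math library stands in for the far more parallel GPU; the timings are illustrative and machine-dependent:

```python
# Toy illustration of data parallelism, the property that makes GPUs
# (and vectorized math libraries) well suited to AI workloads.
import time
import numpy as np

n = 1000
a = np.random.rand(n, n)
b = np.random.rand(n, n)

# One multiply-accumulate at a time, in pure Python.
start = time.perf_counter()
partial = [[sum(a[i, k] * b[k, j] for k in range(n)) for j in range(n)]
           for i in range(10)]           # only 10 of 1,000 rows: the full loop is far too slow
t_loop = (time.perf_counter() - start) * (n / 10)   # extrapolate to all rows

# The same arithmetic dispatched to an optimized, parallel BLAS kernel.
start = time.perf_counter()
c = a @ b
t_vec = time.perf_counter() - start

print(f"pure Python (extrapolated): ~{t_loop:.1f}s; vectorized: {t_vec:.4f}s")
```

The vectorized version wins by dispatching the same arithmetic to many execution units at once, the trick GPUs perform at far greater scale. But every one of those units must be fed operands from memory, which is where memory speed comes in.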
Prof. Mark Parsons, director of the Edinburgh Parallel Computing Centre (EPCC) at the University of Edinburgh, has noted that inadequate transfer speeds between processors and memory can negate the advantages of a fast processor. “This isn’t a new problem and [it’s] one that we’ve had in supercomputing for some time, but now [that] AI developers are turning to supercomputers, they are realizing this issue.”
Today the fastest memory chips are SDRAM (synchronous dynamic random-access memory) chips and their derivatives: HBM (high-bandwidth memory, which consists of 3D-stacked SDRAM chips) and GDDR (graphics double data rate) memory. Because faster memory keeps powerful processors supplied with data rather than sitting idle, it will only grow in importance as chip design pushes against the limits of Moore’s Law.
Leaders in this field include:
- Samsung
- SK Hynix
- Micron Technology (MU)
- Rambus (RMBS)
Eventually, the limits of Moore’s Law will be reached, and that means we will need an alternative to the binary-based semiconductors and processors used today. One possibility is quantum computing. Unlike binary processors that work with bits of data that can have only a value of “1” or “0” (on or off), quantum processors exploit quantum mechanics to work with qubits, which can coherently hold “0” and “1” at the same time. A pair of qubits can likewise hold all four two-bit combinations (00, 01, 10, 11) at once, and each additional qubit doubles the number of states a machine can work with simultaneously. The result, in principle, is a processor exponentially more capable of carrying out parallel operations on large datasets―exactly the kinds of operations demanded in training and running AI models.
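In standard textbook notation (generic quantum mechanics, not any vendor’s specific hardware), a single qubit’s state is a weighted blend of both classical values:

$$
|\psi\rangle = \alpha\,|0\rangle + \beta\,|1\rangle, \qquad |\alpha|^2 + |\beta|^2 = 1,
$$

and a register of $n$ qubits carries $2^n$ such amplitudes at once (for two qubits: $\alpha_{00}|00\rangle + \alpha_{01}|01\rangle + \alpha_{10}|10\rangle + \alpha_{11}|11\rangle$). That exponential growth in simultaneously held states is what quantum algorithms aim to exploit.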
Quantum computers are extremely difficult to build and calibrate, and the field is still in its infancy. But one of the leaders in quantum-computing research is one of the oldest names in tech: IBM (IBM).
What’s needed to advance and expand the reach of AI technology: protection
During an early demo, ChatGPT 3.5 was asked to summarize Gap Inc.’s 3Q2022 financial results and compare them with Lululemon’s. The chatbot delivered a cogent analysis that was unfortunately based on made-up numbers for both apparel companies. (The tendency of current AI chatbots to “hallucinate” answers to questions for which they lack adequate data is an issue in its own right.)
It had to make the numbers up: trained on data gathered through 4Q2021, ChatGPT 3.5 had no access to either company’s 2022 financial reports. That raises an obvious question: why couldn’t ChatGPT simply have looked up the 3Q2022 financial results before formulating its answer? A large part of the answer has to do with cybersecurity.
Connecting an AI to the Internet means exposing it to malicious hackers who could tamper with its code or, failing that, steal knowledge of its algorithms. This has ramifications beyond the theft of proprietary technology: to learn how an AI works is to learn how to manipulate it and force it to produce incorrect results.
Consider image recognition. Just as knowledge of how the human brain works can inform the creation of optical illusions, knowledge of how an AI-powered image-recognition system works can be used to make a machine see things that aren’t there. In one widely cited experiment, researchers subtly altered the pixels of a photograph of a panda, fooling the system into classifying it as a gibbon, even though to human eyes the original and altered images both clearly show a panda.
Mislabeling a panda as a gibbon is one thing. But what if the miscategorized image were more consequential, say, a road sign read by a self-driving car?
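For the technically curious, attacks like the panda demonstration rest on what researchers call adversarial perturbations. Here is a toy NumPy sketch of the underlying idea against a simple linear classifier; the weights and input are invented for illustration, and the published attacks apply the same principle, scaled up, to deep neural networks:

```python
# A toy adversarial perturbation against a linear classifier, in NumPy.
import numpy as np

w = np.linspace(-1.0, 1.0, 1000)   # the model's weights, known to the attacker
x = 0.1 * np.sign(w)               # an input the model scores firmly as "panda"

def score(v: np.ndarray) -> float:
    """Positive score -> 'panda'; negative -> some other label."""
    return float(w @ v)

# For a linear model, the gradient of the score with respect to the input
# is simply w, so nudging each feature a small amount *against* sign(w)
# drags the score down as fast as possible -- the core idea behind
# "fast gradient sign" attacks on neural networks.
eps = 0.2                          # small change per feature
x_adv = x - eps * np.sign(w)

print(score(x))      # ~ +50: confidently "panda"
print(score(x_adv))  # ~ -50: flipped, though x_adv barely differs from x
```

The unsettling part is that no single feature changes by much; the attacker wins by spreading many imperceptible nudges across the whole input, all pointed in the direction the stolen model knowledge reveals.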
Businesses aren’t the only ones looking to the latest AI advances for an advantage; so are hackers. Although AI-written malware has yet to be identified in the wild, phishing e-mails crafted with AI assistance have already been found, and cybersecurity researchers report that these messages are more likely to be opened by prospective victims than conventional phishing attempts.
Going forward, effective cybersecurity that can both use artificial intelligence and protect it will be paramount. Three companies to watch in this space are:
- Palo Alto Networks (PANW)
- CrowdStrike (CRWD)
- Fortinet (FTNT)
What’s needed to advance and expand the reach of AI technology: good data
There was another important reason why ChatGPT wasn’t allowed to go online and look up the 3Q2022 financial results of the Gap and Lululemon: the data used by an AI needs to be strictly screened for accuracy, and there is no way to do that on the fly. The training of useful AIs will depend on efficient manipulation of large datasets. This includes compiling data from reliable sources, screening for (and documenting) accuracy and lack of bias, and transforming it into a format that can be used to train AI models.
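Here is a minimal sketch of the screening-and-documenting step described above. The field names, trusted-source list, and length threshold are hypothetical; production pipelines are far more elaborate:

```python
# A minimal sketch of screening raw records and transforming them into
# a training-ready format. All names and thresholds are hypothetical.
import json
from typing import Optional

TRUSTED_SOURCES = {"sec.gov", "curated-filings", "licensed-news"}  # assumption

def clean_record(record: dict) -> Optional[dict]:
    """Keep a record only if its provenance and contents check out."""
    if record.get("source") not in TRUSTED_SOURCES:
        return None                              # provenance screen
    text = (record.get("text") or "").strip()
    if len(text) < 20:
        return None                              # too short to be useful
    # Normalize into the shape the training job expects, retaining the
    # source so every screening decision stays documented and auditable.
    return {"text": text, "source": record["source"]}

def build_training_file(raw_records: list, path: str) -> int:
    """Write accepted records as JSON Lines; return how many were kept."""
    kept = [c for c in (clean_record(r) for r in raw_records) if c is not None]
    with open(path, "w") as f:
        for row in kept:
            f.write(json.dumps(row) + "\n")
    return len(kept)
```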
One of the companies with the expertise to handle this kind of large-scale data curation is a familiar name: IBM (IBM). It’s no coincidence that one of the pioneers of AI research also has a strong presence in two fields, quantum computing and data management, that are going to be critical to advancing the technology. Two other names in this space are:
- Snowflake (SNOW)
- Informatica (INFA)
Conclusion
Despite the recent buzz surrounding artificial intelligence, the field has been around for decades. Yet in many ways it remains a nascent technology. Making artificial intelligence reliable, powerful, and ubiquitous enough to fulfill its promise will clearly be a large-scale endeavor harnessing expertise from a variety of fields.
Although we have cited some companies positioned to play key roles in the collaborative effort to advance the capabilities and adoption of artificial intelligence, that does not necessarily mean now is the right time to include them in your portfolios. Prospective investors should view these names as ideas for further investigation rather than recommendations that have taken their respective risk tolerances, time horizons, and existing portfolios into account.
Your feedback is welcome and appreciated. What do you want to see more of in this column? Let us know. We read everything our members send and make every effort to write back.
Thank you. Access our full Signal From Noise library here.