Nvidia’s Faltering Making You Nervous? These AI Plays Can Fill The Gap.


You know the big semiconductor names: Nvidia, AMD, Intel, Qualcomm, Samsung. These are the chipmakers leading the charge into the age of AI – possibly alongside Chinese companies like Huawei. The bull case for such companies is well known – it’s certainly a discussion that can be found throughout much of Fundstrat research. As our Head of Research Tom Lee has previously noted, Nvidia makes the chips everyone wants and needs. Sales of its Blackwell processors are “off the charts,” Jensen Huang noted on Nov. 19 after the release of the company’s latest earnings. They’re in such demand that GPU-backed debt products are now a tradable asset class. A flourishing derivatives market based on trading Nvidia H100 GPU rental prices has also sprung up.

For investors looking to diversify their chip holdings, however, there are companies beyond these big household names that are also tied to the AI trade.

A good way to think about AI chips might start with a look back at auto racing in the 1960s and 1970s. Back then, American muscle cars like the Ford Mustang and the Plymouth Barracuda were known for sheer horsepower – robust engines with massive torque and acceleration that enabled speed freaks to flex as they raced down the mostly straight streets and highways of the U.S. They exemplified a typically American approach: When brute force fails, add more brute force.

Meanwhile in Italy, the likes of Lamborghini and Ferrari were achieving speed with a different approach – making cars in which slightly less-powerful engines were paired with superior balance and handling. Such cars proved superior at racing around the pulse-pumping hairpin turns of Europe’s narrow mountain roads. 

What does this have to do with AI chips? Glad you asked.

The American muscle-car approach favored by the likes of Ford and Dodge is analogous to the current U.S. focus on AI chips – with ever-more powerful processors analogous to bigger and bigger engines under the hood. Each successive generation of Nvidia processors has offered greater computing power – smaller, faster, and more capable. And those chips are still incredibly important. 

Yet as the Italian carmakers from Emilia-Romagna understood, there is more to speed than just engines with more cylinders. It’s not a perfect analogy, but the point here is this: better engines – GPUs such as those made by Nvidia or its competitors – are obviously crucial for ever-more-capable AIs. However, other chip technologies are critical to getting the most out of those GPUs. 

These three technologies are: 

  • Silicon photonics technology 
  • High-performance memory chips
  • Serializer/Deserializer technology

Silicon Photonics

For some years now, industry observers have wondered when Moore’s Law would run up against the laws of physics. Chip transistors have shrunk to such small sizes that quantum mechanical effects are starting to pose an ever-more-imposing challenge. The insulating barriers and gates in processors are so delicate that electrons can “leak” through, threatening reliability. In addition, chips today rely on electrical signals being transmitted over interconnects – superfine copper wires – both within the processor and to other chips and components on a circuit board. That adds quite a lot of heat to the already considerable amount generated by the chip’s transistors. You can only shave so many atoms off a chip’s pathways before this massive power density (heat per unit area) threatens chip integrity and becomes an insurmountable issue. 

That’s where silicon photonics comes in. Rather than yet another way to build an ever-more powerful internal combustion engine, silicon photonics acts like a turbocharger, making GPU technology more capable and efficient. With silicon photonics, hardware gurus can sidestep many (though not all) of the problematic side effects of transmitting electrical signals by using light instead.

Why is this better? To begin, light can travel up to 50 times faster in transparent waveguides than electrical signals travel through the materials currently used to make chip interconnects, like copper. Not only is this faster, but it generates virtually no heat, removing much of the need for advanced, complex cooling systems (and the electricity such cooling systems require). 

This obviously improves interconnects within processors. But it’s worth recalling that processors do not operate on their own. In a given server, four to eight processors work together, communicating with each other, with multiple memory chips, and with numerous other components. Replacing copper interconnects with photonic versions can bring the same benefits to server performance. Furthermore, silicon photonics has the potential to make communication and data transfers between servers in an AI hyperscaler’s data center faster and less power hungry, all while drastically reducing the need for cooling. That sounds incredible. There must be a catch, right?

Right. In fact, there are four. 

Three are related to manufacturing. 

  • Manufacturing challenge No. 1: The waveguides that carry light must be smooth at the nanometer level, or the light will scatter as it travels within them. Unsurprisingly, this is quite difficult to achieve consistently.
  • Manufacturing challenge No. 2: The waveguides used to carry light signals on a chip must be connected to the much larger glass fibers that carry light to other components, chips, and machines. This size disparity makes the coupling far more difficult, at least if it is to be done with the precision needed to prevent significant signal loss.
  • Manufacturing challenge No. 3: Before light can carry data, it must be generated by a laser. That requires a separate component that must be integrated onto the chip or installed near it. Adding to the challenge, the shape of the beam generated by the laser must precisely match the shape of the waveguide channel on the chip.
  • Challenge No. 4: Finally, although photonics generate far less heat, they are nonetheless far more sensitive to temperature. Optical components are tuned to specific wavelengths, and heat fluctuations can shift the wavelength of light, degrading signal fidelity.

And there’s one more overriding truth: while photonics can replace electrical signals in many aspects of computing, they cannot replace them all. That means that achieving the benefits of photonics requires integrating photonics into traditional silicon – what the industry calls Co-Packaged Optics (CPO).

Nonetheless, many of the largest names in the semiconductor world view these challenges as surmountable and worth tackling to achieve the potential benefits listed above. 

Nvidia is actively working on incorporating photonics into its switches and designs, while Broadcom has become a major supplier of photonic components such as transceivers. Cisco (CSCO) is working to design optical interconnects into networking equipment.

More specialized companies in this field include:

  • Lumentum Holdings Inc. (LITE) – Lumentum is developing photonics-specific lasers that can be integrated into other companies’ silicon photonics chips, including Nvidia’s Spectrum-X solution. It is also a leading supplier of optical transceivers and optical circuit switches.
  • Coherent Corp. (COHR) – Coherent is also a key supplier of photonics lasers, but it is more invested in creating an entire suite of photonics offerings, including its own switches, specialized circuits (ASICs), transceivers, amplifiers, modulators, detectors, and drivers – along with a photonics module and single-chip solution.
  • Marvell (MRVL) – Marvell’s photonics offerings focus more on integrated solutions that data-center operators can plug into their existing hardware, enabling hyperscalers to scale up quickly.

As we discussed previously, Taiwan Semiconductor Manufacturing Company is unquestionably the most advanced and dominant pure-play foundry in the world. So, it is unsurprising that the company is actively researching how to efficiently and effectively integrate photonics into its fabrication capabilities. 

Yet GlobalFoundries (GFS) – U.S.-based, though majority owned by the UAE’s sovereign wealth fund – might have an advantage with its legacy fabrication technology. GF specializes in a mature 45 nm RF SOI (radio-frequency silicon-on-insulator) technology platform. While 45 nm does not make anybody ooh and aah, the platform boasts a battle-tested, highly reliable lithography process that is superior at achieving consistent waveguide smoothness – far more important in this context than achieving the smallest feature sizes. The company’s GF Fotonix platform is already capable of combining high-performance traditional silicon circuitry and advanced photonics features (including lasers) on the same silicon wafer.

Even in the fast-evolving semiconductor industry, silicon photonics is considered a nascent field. A number of non-public photonics startups, among them Ayar Labs, Lightmatter, Celestial AI, and Nubis, bear watching for those interested in this avenue of investment.

High-performance memory

Blazing-fast processors are great. Yet, if chips like Nvidia’s H100 and Blackwell B200 are engines, then high-performance memory can (imperfectly) be thought of as the gas tank and fuel injector of the system. Whether an AI model is in training or inference mode, GPUs are constantly retrieving, processing, and storing vast amounts of data, and that data resides in memory chips.

It doesn’t take an AI researcher – or auto mechanic – to understand that memory chips with low bandwidth can cause data-retrieval and storage bottlenecks that throttle the speed of even the best of processors. You wouldn’t expect your sports car to perform very well if the fuel pump could only dispense a drop of fuel every minute, no matter how great the engine is, right?

High-performance memory consists of two primary types. Dynamic Random Access Memory (DRAM) is the more commonly used memory chip, and it remains in demand for storing larger volumes of data. The state of the art continues to improve on speed and bandwidth: in the past year, the Joint Electron Device Engineering Council (JEDEC), an industry group, has raised memory-chip standards, with the cutting-edge DDR5 (Double Data Rate 5) standard approaching or even exceeding 8,000 MT/s (megatransfers per second).

For AI purposes, however, high-bandwidth memory (HBM) is the main focus. HBM can be compared to automotive fuel injectors: connected to the GPUs and directly responsible for feeding data into them as quickly as possible. In the past year, major HBM manufacturers have pushed bandwidth as high as 2 TB/s (the HBM3E standard, soon to be supplanted by HBM4), while achieving promising advances in input/output (I/O) density and energy efficiency. These advances are partly due to improvements in stacking memory dies on top of each other, improving not just bandwidth but also energy efficiency and physical footprint.
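To put these figures in perspective, peak bandwidth is roughly the transfer rate multiplied by the width of the interface. A back-of-the-envelope sketch (the pin rates and interface widths below are illustrative round numbers, not any vendor’s spec sheet):

```python
# Rough peak-bandwidth arithmetic for memory interfaces (illustrative only):
# peak GB/s ~= transfer rate (GT/s) x interface width (bits) / 8 bits per byte.

def peak_bandwidth_gbs(transfer_rate_gts: float, bus_width_bits: int) -> float:
    """Peak bandwidth in GB/s for a single memory interface."""
    return transfer_rate_gts * bus_width_bits / 8

# A DDR5 channel: 8,000 MT/s (8 GT/s) on a standard 64-bit bus.
print(peak_bandwidth_gbs(8.0, 64))    # 64.0 GB/s per channel

# An HBM3E-class stack: roughly 9.6 GT/s per pin on a 1,024-bit interface.
print(peak_bandwidth_gbs(9.6, 1024))  # 1228.8 GB/s, i.e. over 1 TB/s per stack
```

The roughly 20-fold gap per interface is the point of the fuel-injector analogy: HBM’s very wide, stacked interface, not raw pin speed alone, is what keeps a GPU fed.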

Given the difficulty Nvidia and its competitors have keeping up with demand for their AI chips, it should be unsurprising that high-performance memory chipmakers face similar struggles, with demand significantly outstripping supply. 2025 has seen HBM prices rise by up to 60%, to the benefit of the major memory players. The top three are:

  • SK Hynix (KRX: 000660) – SK Hynix is the dominant player, with an estimated 62% market share in HBM. The company, which has a partnership with TSMC, was the first to ship samples of HBM4 chips (mass-produced HBM4 shipments are targeted for early 2026). With its CHIPS Act incentives, the company is building a packaging facility in Indiana.
  • Samsung (SSNLF) – Samsung is one of the largest integrated device manufacturers (IDMs), capable of designing and fabricating its own range of chips – a key advantage as it competes with SK Hynix and Micron. In HBM, Samsung is working to design bespoke chips for Meta and Microsoft AI. Its CHIPS Act incentives cover the construction of facilities in Texas that will partly be dedicated to HBM products.
  • Micron (MU) – As the only U.S. company in this triumvirate, Micron is perhaps of the most interest to U.S. investors looking for HBM exposure, poised as it is to benefit from government-propelled onshoring incentives. The company, which received $6.165 billion in CHIPS Act incentives, is in the midst of building new HBM packaging facilities in Idaho. (It is also building DRAM fabrication plants in upstate New York and upgrading its existing Virginia facility.)

Although competition from Chinese counterparts remains a challenge, perhaps the largest risk to HBM companies’ revenues comes from their customers – the big GPU makers like Nvidia. A major part of HBM speed – and hence its value and price – is the memory chip’s logic base die, essentially the controller that connects to the GPU and manages the flow of data between the two. Lately, Nvidia and its peers have signaled a growing desire to take over the design of the logic base die. This would remove much of the pricing power that memory-chip companies currently enjoy.

As an aside, though DRAM is not generally seen to be as AI-focused, it has nonetheless benefited from the AI boom. DRAM prices have risen this year due to shortages caused by manufacturers pivoting their production to HBM. 

Ser/Des

We’ve used an automotive analogy for much of this piece, and in this context, serializer/deserializer (Ser/Des) technology could be compared to a vehicle’s transmission – responsible for getting the output of the engine to where it needs to go. Yet it might be better to compare Ser/Des to a highway interchange, where lots of smaller local roads merge onto a superhighway (the serializing part of this formula) and vice versa (deserializing, obviously).

In computing, such data interchanges, linking processors, memory chips, networking hardware, and other servers have also proven to be bottlenecks. Ser/Des is the technology that changes bits and bytes from many smaller parallel streams into a single high-speed transmission pathway, and back again. 
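As a loose software sketch of that idea (real Ser/Des blocks operate on bits in dedicated high-speed circuitry; the byte-level round-robin below is purely illustrative), serializing interleaves several parallel lanes into one stream, and deserializing splits it back out:

```python
# Toy model of serializer/deserializer behavior: interleave bytes from
# several parallel "lanes" into one serial stream, then recover the lanes.
# Actual Ser/Des hardware works bit by bit; this is only an illustration.

def serialize(lanes: list[bytes]) -> bytes:
    """Round-robin interleave equal-length parallel lanes into one stream."""
    return bytes(b for group in zip(*lanes) for b in group)

def deserialize(stream: bytes, num_lanes: int) -> list[bytes]:
    """Split the serial stream back into its original parallel lanes."""
    return [stream[i::num_lanes] for i in range(num_lanes)]

lanes = [b"AAAA", b"BBBB", b"CCCC"]
stream = serialize(lanes)              # b'ABCABCABCABC'
assert deserialize(stream, 3) == lanes  # round trip recovers the lanes
```

The value of the real hardware lies in doing this transformation at tens or hundreds of gigabits per second without corrupting a single bit.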

Ser/Des technology is closely tied to how the photonics and traditional silicon portions of the next generation of processors will communicate, as well as how processors and memory connect. It’s key to keeping data flowing not just quickly, but with a high level of fidelity and accuracy.

It’s useful to use the example of Nexperia to illustrate Ser/Des. Though Nexperia is not a household name, it has garnered attention in the last month as a major supplier of chips to automakers caught in a row between the Netherlands and China. (Nexperia was spun off from NXP Semiconductors in 2017 and is now owned by China’s Wingtech.) The company’s prominence in automotive chips (as well as industrial and Internet-of-Things chips) has made it a leader in Ser/Des technology.

Ser/Des is what enables Nexperia chips to almost-instantly funnel the enormous volumes of data generated by a modern car’s various sensors and cameras to the vehicle’s central computer, and what enables the computer to send a constant stream of commands to computer-assisted components such as braking systems.

Other prominent companies within the Ser/Des space include: 

  • Broadcom (AVGO) – Broadcom is a leader in incorporating Ser/Des technology into Ethernet switches. Such switches are the critical core coordinating and connecting the thousands of GPUs in a hyperscaler data center.
  • Marvell Technology (MRVL) – If Broadcom focuses on connecting servers, Marvell’s specialty lies in accelerating communications between the various chips and components inside a server. 
  • Texas Instruments (TXN) – TI’s Ser/Des technology is relevant to AI applications. As with Nexperia, it makes chips that can quickly aggregate, synchronize, and transmit data from a variety of automotive and industrial sensors and components to a processor. 
  • Analog Devices (ADI) – The company has long focused on technology that converts information from the real, physical world into a digital format. ADI’s offerings include such data converters, particularly as they relate to automation and robotics. 

Conclusion

We are arguably in the midst of what Fundstrat Head of Research Tom Lee describes as an AI supercycle. And yes, it’s likely that processor companies like Nvidia and AMD will be the engine of those advances. We hope, however, that this piece has served as a reminder that an engine alone does not make a car – plenty of other technologies will be integral to the journey, and there are potential opportunities to be found there.

As always, Signal From Noise should not be used as a source of investment recommendations but rather ideas for further investigation. We encourage you to explore our full Signal From Noise library, which includes deep dives on AI rearranging the winners and losers, race to onshore fabrication, AI Merry-Go-Round, space-exploration investments, the military drone industry, the presidential effect on markets, ChatGPT’s challenge to Google Search, and the rising wealth of women. You’ll also find a recent update on AI focusing on sovereign AI and AI agents, the TikTok demographic, and the tech-powered utilities trade.

Disclosures