“Healing is a matter of time, but it is sometimes also a matter of opportunity.” – Hippocrates
Artificial intelligence seems to be expanding everywhere—from hot tech sectors to even boring utilities. As such, some might assert that artificial intelligence is the future of health care.
But that’s not quite right, because AI in health care is already here. In fact, it’s been here for quite some time. At the user level, AI is already improving quality of life for those living with certain ailments and conditions. And as AI improves, it can be argued that health care, and consequently patients, stand to benefit even more.
One example: Though still imperfect, Meta’s (META) AI-powered sunglasses are helping the blind. Their visual capabilities help wearers read street signs and recognize faces and, perhaps most importantly, connect them with sighted volunteers who can immediately provide remote, real-time assistance on an ad-hoc basis.
AI-powered machine vision is also helping to speed and improve the analysis of medical images, albeit on a more sophisticated level. Radiologists are using AI to analyze X-rays, MRIs, electrocardiograms, and CT scans more quickly and accurately. This has improved the assessment of cardiac function, helping to better identify those likely to develop heart ailments in the future. Such technology has also been useful in diagnosing Alzheimer’s disease, vertebral fractures, and some types of cancer. Researchers are also increasingly using AI to triage and diagnose strokes.
In fact, AI-powered systems have repeatedly been shown to be at least as accurate as human radiologists, if not more so. In a 2018 Stanford University test, an AI system outperformed human radiologists in diagnosing pneumonia from chest X-rays. In 2023, researchers at the University of Nottingham found that a commercially available AI system performed as well as radiologists in accurately diagnosing cancers in a test set of mammograms. (South Korea’s Lunit, the maker of the INSIGHT MMG system used in the study, has since claimed that its AI can accurately estimate the odds of a patient developing breast cancer four to six years before it becomes detectable, which to us is mind-blowing.)
To be fair to those physicians, AI has always had a built-in advantage, even in its earlier iterations. Each human physician starts from a basic level of knowledge and requires years of experience to become truly skilled, whereas AI diagnosticians can be replicated easily. And while human physicians eventually retire, taking their experience out of the field of play, their AI counterparts never stop learning and improving.
The LLM Revolution
Still, it’s difficult to deny that the introduction of major LLMs has significantly expanded the opportunities to understand, enhance, and extend human health. Furthermore, DeepSeek’s R1, by providing proof of concept for the AI-modeling techniques of knowledge distillation and mixture of experts (MoE), points the way for healthcare organizations to make those possibilities a reality.
The first step
The path to superior health care lies in a better understanding of the human body and how it works. Last spring, Moderna (MRNA) CEO Stéphane Bancel argued that “the reason we still have people dying of cancer, people suffering from Alzheimer’s, is we do not understand the fundamental biology of those diseases.” Speaking at Semafor’s World Economy Summit, he argued that this was about to change: Within the next three to five years, he claimed, AI technology would enable scientists and doctors to “understand most diseases” – and thus plot a route to quickly diagnosing them, and more importantly, curing them.
Rarely has the phrase “easier said than done” been more appropriate. However, Bancel is not alone in his optimism. The Human Cell Atlas project, arguably one of the most ambitious endeavors in human history, is well on its way to providing a comprehensive and detailed reference of each of the millions of types of cells in the human body, including where they are located, what functions they might serve, and how they interact with every other type of cell.
The project is a broad global institutional collaboration: leading researchers at MIT, Harvard, Japan’s RIKEN center, Israel’s Weizmann Institute of Science, and the UK’s Wellcome Sanger Institute are all part of the coalition. AI is critical to their efforts, helping to integrate vast volumes of data, including genomic sequencing information, and to identify patterns within it. As of November 2024, data from roughly 62 million cells across 18 major biological networks or systems had been collected and categorized. (This is impressive, but it’s important to note that the goal is to analyze 10 billion cells – an effort projected to take another five to 10 years.)
Finding new treatments
It seems clear that a better understanding of the human body will go a long way toward illuminating the many ills that can afflict it. But AI is also speeding the search for medications to treat them.
One way is illustrated by the 2024 Nobel Prize in Chemistry, which honored David Baker, Demis Hassabis, and John M. Jumper for their work in protein research. While working at Google (GOOGL) DeepMind, Drs. Hassabis and Jumper jointly developed a significantly faster, AI-powered method of predicting the three-dimensional shape of a protein from its amino acid sequence. That’s a task that used to take months; their work enables it to be completed in a few hours. A protein’s 3D shape is key to predicting what it can do, and knowing that shape can thus speed the development of new medical treatments.
Given Bancel’s remarks, it should not be surprising that Moderna believes in AI’s potential. In fact, AI was instrumental in the work that put Moderna in the public eye in the first place, enabling the company to develop a highly effective COVID-19 vaccine just 42 days after the virus’s genetic sequencing was completed. Moderna has since partnered with IBM (IBM), using the latter’s MoLFormer generative AI to look for other applications of Moderna’s mRNA technology (among the possibilities: a cancer vaccine) and to optimize clinical trials of promising candidates. Moderna has also integrated OpenAI’s technology into various other aspects of the company’s operations, from drug design and clinical trial operations to manufacturing optimization and marketing.
But Moderna is not alone. Last fall, Novo Nordisk (NVO) partnered with Denmark’s sovereign wealth fund to build Gefion, Denmark’s national AI supercomputer. The machine (unsurprisingly powered by Nvidia and built by France’s Eviden) will doubtless be used to “accelerate groundbreaking scientific discoveries in areas such as drug discovery, disease diagnosis and treatment,” as Cédric Bourrasset, Eviden’s head of quantum computing, put it. The Cleveland Clinic will be among those using the computer to develop clinical and operational innovations. With Danish taxpayers putting up a significant share of the funding, the machine will also be made available to scientists and researchers across a range of other disciplines and fields of research.
Most if not all of the major pharmaceutical companies, including Pfizer, GSK, and Bayer, are in the midst of integrating AI into every aspect of their businesses, from drug development to clinical trials, from streamlining regulatory requirements to optimizing manufacturing, and handling marketing and day-to-day operations.
The DeepSeek revolution
Drug discovery is not the only area set for AI-driven advances. DeepSeek recently showed a way forward for research organizations that do not have Big Pharma-sized budgets to spend on AI expertise and cutting-edge hardware. The Chinese startup’s embrace of open-source development will arguably enable researchers from a broad range of disciplines to begin leveraging R1’s capabilities with minimal ramp-up time. Perhaps more importantly, DeepSeek’s R1 model serves as very public evidence that AI training through knowledge distillation and AI execution using a Mixture of Experts (MoE) can enable the creation of powerful tools that don’t require a lot of money or hardware.
Mixture of Experts is an AI and machine-learning technique in which a gating (or routing) network directs each incoming query, or each piece of one, to a small subset of specialist modules, each with its own narrow expertise, rather than activating the entire model at once. Because only a fraction of the model’s parameters are put to work on any given input, an LLM built this way can run with significantly less computational power.
Conveniently, the fields of medicine and health care are also separated into a broad range of specialties and skills. MoE is thus an excellent way for developers to create AI healthcare models that draw on modules with knowledge from various specialties, such as pathology, radiology, or anesthesiology. They might also draw on modules with other types of expertise, such as patient demographics, treatment modalities, or specific diseases.
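To make the routing idea concrete, here is a minimal, illustrative mixture-of-experts sketch written in PyTorch (our choice of framework, not anything DeepSeek has published). The layer sizes, expert count, and the notion of mapping experts to clinical specialties are purely hypothetical; in a production MoE LLM, the router assigns individual tokens to feed-forward sub-networks rather than whole medical questions to named specialties.

```python
# Illustrative mixture-of-experts layer (hypothetical sizes; PyTorch assumed).
import torch
import torch.nn as nn
import torch.nn.functional as F

class MixtureOfExperts(nn.Module):
    def __init__(self, dim=64, num_experts=4, top_k=2):
        super().__init__()
        # Each "expert" is a small feed-forward network with its own parameters.
        self.experts = nn.ModuleList(
            [nn.Sequential(nn.Linear(dim, 4 * dim), nn.ReLU(), nn.Linear(4 * dim, dim))
             for _ in range(num_experts)]
        )
        # The gating (router) network scores how relevant each expert is to an input.
        self.gate = nn.Linear(dim, num_experts)
        self.top_k = top_k

    def forward(self, x):                                  # x: (batch, dim)
        scores = F.softmax(self.gate(x), dim=-1)           # routing probabilities
        weights, idx = scores.topk(self.top_k, dim=-1)     # keep only the top-k experts
        weights = weights / weights.sum(dim=-1, keepdim=True)
        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            for e in range(len(self.experts)):
                routed = idx[:, slot] == e                 # inputs sent to expert e
                if routed.any():
                    out[routed] += weights[routed, slot:slot + 1] * self.experts[e](x[routed])
        return out

moe = MixtureOfExperts()
print(moe(torch.randn(8, 64)).shape)  # torch.Size([8, 64])
```

The key property is visible in the forward pass: only top_k of the experts do any work for a given input, which is what keeps the computational cost low even as the total number of parameters grows.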
OpenAI has spent, by some estimates, as much as $100 million creating its various ChatGPT models. Most medical research institutes and healthcare organizations do not have anywhere near that kind of money to devote to AI. So when DeepSeek claimed to have developed R1 for roughly $6 million, it was big news for everyone. As David Cox, Vice President of AI models at IBM, told the Financial Times, “Anytime you can [make it less expensive] and it gives you the right performance you want, there is very little reason not to do it.”
Most experts believe that knowledge distillation was key to DeepSeek’s success at cutting costs. It involves creating a new AI model by analyzing the responses to queries submitted to an existing model. In essence, a “student” model learns from a “teacher” model. The result is a model with similar capabilities, but with far less expenditure of time, effort, and money – a bit like the difference between having a teenager learn calculus in class and what Isaac Newton (or Gottfried Leibniz) had to go through to derive calculus from scratch.
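For readers who want to see the mechanics, below is a minimal knowledge-distillation sketch, again in PyTorch and using randomly generated stand-in data. The "teacher" and "student" networks are hypothetical toy classifiers, not DeepSeek’s or any vendor’s actual models; the point is simply that the student is trained to match the teacher’s softened output distribution.

```python
# Illustrative knowledge distillation: a small "student" learns from a larger "teacher".
# Models and data are hypothetical stand-ins (PyTorch assumed).
import torch
import torch.nn as nn
import torch.nn.functional as F

teacher = nn.Sequential(nn.Linear(128, 1024), nn.ReLU(), nn.Linear(1024, 10)).eval()
student = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))

optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
T = 2.0  # temperature: softens the teacher's outputs so more of its "knowledge" transfers

for step in range(100):                       # toy training loop on random "queries"
    x = torch.randn(32, 128)
    with torch.no_grad():
        teacher_logits = teacher(x)           # ask the teacher, record its answers
    student_logits = student(x)
    # Train the student to reproduce the teacher's response distribution.
    loss = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

Note how much smaller the student is than the teacher; that size difference is what makes the resulting models cheap to train and cheap to run.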
This is a technique that healthcare organizations can leverage to develop specialized models for their own purposes. Most such entities have access to a wealth of untapped data – patient case reports, millions of patient interactions, bedside monitoring data, and hospitalization metrics, not to mention access to numerous specialized research databases.
Knowledge distillation can enable organizations to combine this data with existing LLMs to create their own models. Such student models are generally smaller, and if tailored to a very narrow or specific area, can be made smaller still – small enough to run on handheld point-of-care devices used by clinicians. (Google, Meta, and Microsoft already each have smaller, distilled versions of their general LLMs designed to run locally on smartphones.)
To get a sense of the AI-driven possibilities being explored and developed in health care, it is worth taking a look at the efforts of the Mayo Clinic, long ranked as one of the finest hospital systems in the United States. Mayo is a leading center for medical research and offers one of the most prestigious residency training programs in the world, with campuses in Rochester, Minn., Jacksonville, Fla., and Arizona (Phoenix and Scottsdale).
As early as 2022, MIT’s Sloan School of Management identified the Mayo Clinic as arguably the most aggressive adopter of AI in health care. Mayo has used AI to optimize operations such as the scheduling of surgeries (based on staffing and operating-room resources). Its efforts – known by some as an “AI factory” – rest on Google Cloud’s Vertex AI, the hub for roughly 250 AI-related research and deployment projects that can be integrated with real-time patient data. Google is not Mayo’s only AI partner, however. For example:
- Mayo Clinic and Microsoft Research (MSFT) have teamed up to develop an AI model that integrates text and images. Focusing on chest X-rays, the two institutions are working on a tool that would not just generate reports automatically, but also assess tube and line placement and detect meaningful changes from prior images.
- Mayo Clinic and Cerebras are collaborating on a genomic model to advance the field of personalized medicine. The objective would be real-time comparisons between reference human genome data (basically, an idealized version of a human genome) and a patient’s genomic information, making it possible to diagnose diseases more precisely and to identify the treatments most likely to be effective for that specific patient.
- Nvidia (NVDA) has entered into a partnership with Mayo Clinic to develop an AI model to make use of the hospital system’s extensive trove of recently digitized pathology data, which includes 10 million patient records and 20 million slide images. The hope is that AI will be able to help doctors better diagnose and treat complex diseases like cancer.
- A separate partnership with Google involves developing AI-powered algorithms to better target head and neck tumors with radiation while minimizing damage to surrounding, healthy tissue.
Conclusion
It is worth noting, as we conclude, that the use of AI in health care raises a number of ethical issues that warrant careful debate; many would argue they should be resolved before the technology is pushed any further. These include concerns about patient privacy and the possibility of biases being baked into what would surely become an essential technology. Such debates are beyond the scope of this publication, but they could present a novel set of operational risks that investors should consider while doing their own research.
As a reminder, Signal From Noise should be used as a source of ideas for further research rather than as a source of investment recommendations. We encourage you to explore our full Signal From Noise library, which includes deep dives on the rising wealth of women, investments related to natural disasters and an update to our overview of the semiconductor industry. You’ll also find discussions about the TikTok demographic, artificial intelligence, and weight loss-related investments.