
AI in Healthcare: What Do Recent Developments Mean for Labs?

Jan 2, 2024 | Essential, Inside the Lab Industry, Lab Industry Advisor, Legislation

Genialis CEO and co-founder Rafael Rosengarten, PhD, discusses key 2023 AI developments and what they could mean for laboratories.

In many ways, 2023 was the year of artificial intelligence (AI). Though AI technologies are not new, they took the world by storm last year, with products such as ChatGPT being used regularly across a variety of industries. As usual, legislative efforts to regulate such products lag far behind their advancement, with the European Union only recently reaching “a provisional agreement” on its Artificial Intelligence Act and the Biden administration issuing an Executive Order to establish standards for AI in the US.1,2 Both the AI Act and the Executive Order aim to manage the risks of AI while leaving room for innovation.

Rafael Rosengarten, PhD, chief executive officer and co-founder of Genialis and a founding member of the Alliance for Artificial Intelligence in Healthcare (AAIH), discusses the key issues that remain around AI’s potential uses in healthcare, the biggest AI-related trends of 2023, and what those developments mean for laboratories and other healthcare providers going forward.

Q: What, in your opinion, were the biggest developments in AI in healthcare for 2023?

A: Large language models (LLMs), the basis of ChatGPT, and generative AI exploded onto the scene this year. Their emergence in the popular Zeitgeist has had ripple effects throughout healthcare and has everyone talking. The biggest impact so far is impending policy and regulatory change. A year ago, those of us pushing for AI technologies in the healthcare space felt like we were struggling to get attention. Now, it’s the opposite: the emergence of AI in the popular discourse has made policy and regulatory implications both imminent and inevitable. In healthcare, neither LLMs nor generative AI has yet made a resounding impact, but they will in time. Some molecules produced by generative AI are starting to make their way into the clinic. While judging the success or impact of these molecules requires clinical validation, the speed and efficiency of these programs are astounding.

Q: Why were those developments so important for the healthcare industry?

A: Generative AI is exciting to me because we can now generate entirely new molecules with better therapeutic properties, or even create digital replicas of whole patients for the purpose of clinical study. These developments will allow us to address needs that have previously gone unmet.

The policy and regulatory changes are significant because these will set boundaries and either speed or impede innovation. Our existing framework for evaluating whether drugs and diagnostic devices are safe and efficacious, as at the FDA and EMA [European Medicines Agency], is a bar that shouldn’t change depending on whether the drug is discovered by an AI or by a human. The drug or test still must work. The question becomes, do we need additional regulatory safeguards in the healthcare sphere because AI is now so prominent?

One area where we may need to raise the bar is healthcare-related consumer goods and applications that are currently unregulated. These include, for example, at-home genetic testing kits, wearables, and mobile phone apps. People make healthcare decisions based on the results they get from these types of products without necessarily relying on the guidance of physicians or medical consensus. One can imagine any number of “diagnose-yourself” chatbots that people might use instead of going to the doctor, and these do need the watchful eye of consumer protection.

One risk of AI that cannot be ignored is that healthcare data is notoriously full of bias, largely because our healthcare system is biased in who can access it and whose data is collected. These disparities can be propagated by an artificial intelligence agent because the computer only learns from the information it is given. We have an obligation to monitor and address bias in the data feeding artificial intelligence. Ultimately, the solution lies in dealing with bias at the source. How do we improve healthcare equity? How do we eliminate the disparities that drive data bias in the first place? Perhaps this is an area where AI can be part of the solution.
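[Editor’s note: One practical way to monitor the kind of bias Rosengarten describes is to stratify a model’s performance by patient subgroup rather than reporting a single aggregate metric. The Python sketch below is illustrative only: the DataFrame columns (“label”, “score”) and the audit_by_group helper are hypothetical names, not part of any specific product or library.]

```python
# A minimal sketch of a subgroup performance audit, assuming a pandas
# DataFrame with model scores, true labels, and a demographic column.
import pandas as pd
from sklearn.metrics import roc_auc_score

def audit_by_group(df: pd.DataFrame, group_col: str) -> pd.DataFrame:
    """Compute AUROC and sample count for each demographic subgroup."""
    rows = []
    for group, sub in df.groupby(group_col):
        if sub["label"].nunique() < 2:
            continue  # AUROC is undefined when a subgroup has only one class
        rows.append({
            "group": group,
            "n": len(sub),
            "auroc": roc_auc_score(sub["label"], sub["score"]),
        })
    # A large AUROC gap between subgroups, or a tiny n, is a red flag that
    # the training data under-represents that population.
    return pd.DataFrame(rows).sort_values("auroc")

# Example (hypothetical column name): audit_by_group(df, "self_reported_ancestry")
```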

Q: What do those trends mean for medical laboratories specifically?

A: One coming regulatory change specific to laboratories concerns diagnostic tests. The FDA hopes to soon finalize a rule that would allow it to regulate laboratory-developed tests (LDTs) the same way it regulates in vitro diagnostic devices. This issue is not necessarily related to AI; it stems from a long-standing concern over broad commercial interpretation of what constitutes an LDT. In 2024, though, these two issues will inevitably become intertwined, especially since the number of FDA-cleared devices that incorporate AI has increased exponentially. At the end of the day, the FDA may point to our growing reliance on artificial intelligence as part of the diagnostic paradigm as an additional argument for why it needs more oversight of LDTs. AI introduces very specific technical risks, so the FDA does need more purview over it. This is just one of the ways that increased general awareness of, and concern over, the risks AI poses may impact the laboratory.

Q: What are your thoughts on legislative efforts regarding AI regulations so far?

A: So far, legislative efforts have mostly consisted of fact-finding, which is the right place to start.

But I do find it alarming that healthcare is getting bundled with other applications of AI. We’ve written about this at the AAIH.

I would argue that the effort to develop any regulatory framework should include a wide swath of healthcare experts as stakeholders: entrepreneurs, seasoned veterans, and people representing pharma, diagnostics, healthcare systems, payers, providers, patients, and regulators. You need all of these voices to inform the legislative process. My hope is that consideration will be given to the current healthcare regulatory landscape, including the way we already regulate data, privacy, security, patient rights, and so on, and that lawmakers will question whether we even need additional regulation. The current regulatory model may even serve to inform how to regulate AI in other sectors. Unlike other sectors being revolutionized by AI, the healthcare industry is already highly regulated, with significant oversight. We already take great care to ensure patient and privacy protections are in place, and my hope is that this care becomes the standard for AI technology as well.

Q: What worries you the most when it comes to AI in healthcare?

A: The pace of improvement in AI algorithms, especially around generative AI and LLMs after ChatGPT’s ascent, is astounding. These are incredibly powerful, transformative technologies, advanced even further by Big Tech, that can be harnessed for good. Unfortunately, they can also be harnessed for the opposite. Ultimately, I worry that AI could be put to use purely for profit, or by bad actors, and not for the benefit of people. Those of us who were early advocates of AI in healthcare will look pretty foolish if we don’t band together to use these technologies responsibly and for the genuine benefit of people. So, it’s a question of trying to get it right and of figuring out how, as a community, we can implement and enforce standards and the right legal requirements.

Q: What excites you the most about how this technology can be used in the healthcare/laboratory space?

A: AI is very good at finding interactions we don’t know to look for, those we haven’t previously observed. If you train your AI right, it will find biological interactions and associations that are highly predictive of whatever property you want to predict: a patient’s response to a cancer drug, or the ability of a molecule to bind a disease-causing target. It could also mean discovering a new target or designing a new molecule from scratch. We don’t always know how to look for these things. We may not even know how to design the right experiments to find them, but AI algorithms can, and that is something I find especially exciting.
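[Editor’s note: To make the pattern Rosengarten describes concrete, here is a minimal Python sketch: train a model on molecular features against an outcome of interest, then ask which features the model found predictive, without specifying them up front. The data is synthetic and the gene indices are arbitrary; a real pipeline would add cross-validation on held-out cohorts and multiple-testing control before trusting any association.]

```python
# A minimal sketch of data-driven association discovery on synthetic data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_patients, n_genes = 200, 500
X = rng.normal(size=(n_patients, n_genes))      # e.g., a gene-expression matrix
y = (X[:, 42] + 0.5 * X[:, 7] > 0).astype(int)  # responder / non-responder labels

model = RandomForestClassifier(n_estimators=300, random_state=0)
print("CV accuracy:", cross_val_score(model, X, y, cv=5).mean())

# Rank the features the model found predictive; here it should surface
# genes 42 and 7 even though we never told it where to look.
model.fit(X, y)
top = np.argsort(model.feature_importances_)[::-1][:5]
print("Most predictive features (gene indices):", top)
```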

Q: How do you think the key trends related to AI in healthcare will develop going forward into 2024?

A: On the drug discovery side, clearly we will continue to lean into generative AI, designing new molecules that nature hasn’t invented yet, potentially introducing new drug modalities. That absolutely is going to keep growing, which is extremely exciting.

I think we’re going to continue to see the adoption of AI for diagnostic devices. On the diagnostics end, and in a lot of fields, we are getting much better, thanks to AI, at figuring out what is causing disease and how to treat it.

The quality and equity of healthcare delivery will be a continued focus, and one that accelerates in 2024. Unbiased data requirements for AI algorithms, coupled with the desire to provide equitable access and better patient outcomes at a better price, are going to force change, especially when it comes to organizing patient data, a major driver of whether we can deliver improvements.

Q: What advice do you have for laboratory and other healthcare leaders for addressing AI and its use in healthcare?

A: All scientists working in the healthcare field have been trained to be skeptical. Collectively, we should apply a similar lens to any new technology. A healthy level of skepticism is necessary, and as scientists we should embrace the scientific method and aim to disprove hypotheses as a tried-and-true way to advance knowledge. At the same time, we should be optimistic that new technologies can in fact be transformative. Ultimately, I think we are on the cusp of great creative collaboration, a moment to bring together interdisciplinary groups of technologists, life scientists, healthcare providers, patient advocates, and patients themselves to think about what problems could be solved if only we could ask better questions and get answers we didn’t know existed. So, be cautious, but not afraid. Let’s think about how we can solve bigger problems instead of finding them daunting because we haven’t solved them before.

References:

1. European Parliament. “Artificial Intelligence Act: deal on comprehensive rules for trustworthy AI.” https://www.europarl.europa.eu/news/en/press-room/20231206IPR15699/artificial-intelligence-act-deal-on-comprehensive-rules-for-trustworthy-ai

2. The White House. “FACT SHEET: President Biden Issues Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence.” https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/


Rafael Rosengarten, PhD, CEO of Genialis, leads the company’s effort to deliver the next generation of clinical biomarkers to help the right patient get the right treatment at the right time. He spent nearly 20 years in biomedical research prior to Genialis, publishing in the fields of evolution, immunology, bioengineering and genetics. Rafael is also a board member and co-founder of the Alliance for AI in Healthcare (AAIH).

Rafael attended Dartmouth College and then Yale University, where he was an NSF Graduate Research Fellow. He went on to postdoctoral training in Jay Keasling’s synthetic biology group at Lawrence Berkeley National Laboratory, Joint BioEnergy Institute (JBEI), where he co-invented the j5 DNA assembly design automation tool (which has since been commercialized by TeselaGen Biotechnology). This was followed by a National Library of Medicine fellowship in biomedical informatics at Baylor College of Medicine. In his free time, Rafael enjoys cooking, climbing mountains, and exploring the world with his wife and two precocious children.
