Compliance Perspectives: The Risks of Using ChatGPT & How to Manage Them
While ChatGPT has its place, allowing lab staff to use it exposes you to liabilities—here are some ways to manage the risks involved.
ChatGPT, an artificial intelligence (AI)-driven chatbot, has seen rapid adoption: it attracted 1 million users in its first week after launching in November 2022 and, according to a UBS report, surpassed Instagram as the fastest consumer app to reach such a following.1 Now with a user base of over 100 million, ChatGPT is among the largest and most capable chatbots in the world.
While ChatGPT can be an incredibly useful tool, it can also create compliance issues. For compliance managers, the key is to get ahead of the issue, proactively weigh the advantages and disadvantages, and make an informed decision about workplace use of ChatGPT and other AI chatbots. To help you make that decision, we outline the chatbot's potential, its risks, and strategies to manage them:
The Potential of ChatGPT
ChatGPT (GPT stands for generative pre-trained transformer) is a type of AI product known as a large language model (LLM). LLMs are algorithms trained on vast amounts of data to engage in human-like conversations. Produced by OpenAI, ChatGPT is trained on large internet datasets containing information up to the year 2021. It can answer questions, follow instructions, and learn from its mistakes, though its memory is limited and it can repeat errors. While not perfect, these capabilities make it suitable for performing online research, synthesizing dense, lengthy, or technical materials, answering technical questions, generating content and code, and other work uses.
ChatGPT also has potential medical applications. It has passed the United States Medical Licensing Examination (USMLE) required for a medical license and could be deployed to interpret test results, provide evidence-based recommendations to support clinical decision-making, and perform technical lab functions.2
ChatGPT’s Lab Capabilities
· Prepare lab and pathology reports
· Translate medical jargon and technical terms from lab reports into everyday language that patients can comprehend
· Analyze data from wearables, sensors, and other monitoring devices for the purposes of remote patient monitoring
· Generate automated summaries of patient interactions, physician notes, and other information to facilitate medical recordkeeping
The Risks of Using ChatGPT
Despite these potential benefits, ChatGPT and other LLM-based AI programs have flaws that make them highly risky for use in labs and other healthcare settings. Perhaps the greatest risk is inaccuracy. While ChatGPT may be able to pass the USMLE, it lacks the human knowledge, training, and experience required to practice lab science and medicine. Chatbots are also limited to the data they were trained on, which contains errors, gaps, biases, and blind spots; for example, they are not aware of advances and events that occur after their training cutoff. They also cannot differentiate between reliable and unreliable sources. Most alarmingly, they tend to fabricate information outright, inventing fantastical and inaccurate biographies for well-known public figures and making up references to scientific studies that don't exist, while presenting these references and findings as fact.4
A new study published in Clinical Chemistry has questioned the clinical utility of using ChatGPT for laboratory medicine. Researchers asked ChatGPT questions on topics ranging from “basic knowledge to complex interpretation of laboratory data in clinical context.”5 Of the 65 questions, ChatGPT answered
- 33 correctly (50.7 percent),
- 15 (23.1 percent) incompletely or only partially correctly,
- 11 (16.9 percent) incorrectly or misleadingly, and
- 6 (9.3 percent) irrelevantly.
The study also notes fundamental flaws that ChatGPT demonstrated, such as its inability to diagnose alcoholic hepatitis from a panel of liver enzyme results and its misdiagnosis of chronic myeloid leukemia with basophilic blast crisis as B-cell acute lymphoblastic leukemia.
Takeaway: Don’t Use ChatGPT to Perform Clinical Lab Functions
As evidenced by the Clinical Chemistry paper, ChatGPT’s inaccuracies can have catastrophic consequences for labs and their patients, and the authors caution labs against using it for clinical purposes: “While ChatGPT has the potential to improve medical education and provide faster responses to routine clinical laboratory questions, currently, it should not be relied upon without expert human supervision.”
Three ChatGPT Liability Risks
Using ChatGPT can also expose your lab to legal risks.
1. HIPAA Privacy Violations
If employees enter patients' protected health information (PHI) into ChatGPT, that data leaves your lab's control: OpenAI collects and retains information about use of its services, as its privacy policy explains:

“We may automatically collect information about your use of the Services, such as the types of content that you view or engage with, the features you use, and the actions you take, as well as your time zone, country, the dates and times of access, user agent and version, type of computer or mobile device, computer connection, IP address, and the like.”
Strategy: Revise your lab data privacy and security policies to address the risks posed by ChatGPT and other AI-based chatbots. The safest approach is to ban employees from using ChatGPT to carry out functions requiring PHI and monitor their computer usage to ensure they’re not entering PHI into the app.
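For labs that choose to monitor rather than fully block chatbot use, one supporting control is to screen text for obvious PHI patterns before it is sent to an external service. The sketch below is a hypothetical illustration only: the pattern names and formats are assumptions, and real PHI detection requires far more than regular expressions (names, dates, and clinical context), so treat this as a starting point, not a compliance solution.

```python
import re

# Hypothetical patterns for common structured identifiers; real deployments
# would need a much broader, validated rule set.
PHI_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "mrn": re.compile(r"\bMRN[:\s]*\d{6,10}\b", re.IGNORECASE),
    "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def screen_for_phi(text: str) -> list[str]:
    """Return the names of PHI patterns detected in the text."""
    return [name for name, pattern in PHI_PATTERNS.items() if pattern.search(text)]

def safe_to_send(text: str) -> bool:
    """True only if no known PHI pattern was detected."""
    return not screen_for_phi(text)
```

A wrapper around the lab's chatbot gateway could call `safe_to_send` and block or flag any submission that trips a pattern, logging the event for the compliance team.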
2. Copyright & Other Infringement
ChatGPT generates its responses from training data that includes copyrighted works, and its output may reproduce protected material without attribution. In addition, the ownership and legal protectability of AI-generated content remain unsettled, so incorporating ChatGPT output into your own materials may undermine your intellectual property claims or infringe someone else's.
Strategy: Be wary of circulating or republishing ChatGPT data or incorporating it into your own content, especially if you consider that end product proprietary. Rule of thumb: Treat ChatGPT material as the inspiration or starting point for research and not the product itself.
3. Discrimination
Data and algorithms built into ChatGPT incorporate the subtle biases and prejudices of the humans who created them. So, relying on it as a decision-making tool, particularly for employment, can expose your lab to discrimination liability risks.
Example: In 2018, Amazon pulled the plug on an AI-based recruitment program after discovering that the algorithm was skewed against women. The model vetted candidates based on patterns in resumes submitted to the company over a 10-year period. Because most of the candidates in the training set were men, the AI learned to prefer male candidates over female candidates.9 In other cases, it has been reported that LLMs generated code stating that only White and Asian men make for good scientists.10
Strategy: Ensure your employees are aware of the discrimination risks associated with the use of ChatGPT and similar AI products. Based on G2 Intelligence’s research and the information available so far, here are some action steps labs may want to take:
- Caution employees to be sensitive to these risks when taking instructions from or using content these platforms generate
- Conduct a self-audit that rigorously tests your algorithms and AI-based selection tools, watching for tools that look neutral on their face but, as in the Amazon example above, have the effect of discriminating against groups protected by anti-discrimination laws
- Include language addressing algorithm discrimination in your lab’s equal opportunity and nondiscrimination policies
Bottom Line: Decide Between Total & Limited ChatGPT Use Ban
Like most employers, labs must now decide on work-related use of ChatGPT. So far, many large companies, such as Samsung, Apple, Verizon, and JPMorgan Chase, have fully banned the technology. While this is the safest approach, a total ban may mean missing out on some of ChatGPT’s potential benefits. Thus, while it entails some risk, allowing limited use of ChatGPT may be worth considering. The key is to create a clearly written policy that imposes the right restrictions. To create your own policy around AI chatbots such as ChatGPT, download and adapt our template “Compliance Tool: Model ChatGPT Acceptable Use Policy” from the G2 Intelligence website.