
Compliance Perspectives: The Risks of Using ChatGPT & How to Manage Them

Jul 4, 2023 | Lab Compliance Advisor

While ChatGPT has its place, allowing lab staff to use it exposes you to liabilities—here are some ways to manage the risks involved.

ChatGPT, an artificial intelligence (AI)-driven chatbot, has seen rapid adoption, attracting 1 million users within a week of its November 2022 launch and surpassing Instagram to become the fastest-growing consumer app in history, according to a UBS report.1 With a user base of over 100 million, ChatGPT is now the most widely used chatbot in the world.

While ChatGPT can be an incredibly useful tool, it can also create compliance issues. For compliance managers, the key is to get ahead of the issue: proactively weigh the advantages and disadvantages and make an informed decision about work use of ChatGPT and other AI chatbots. To help you make that decision, we outline the chatbot's potential along with the risks of using it.

The Potential of ChatGPT

ChatGPT (GPT stands for generative pre-trained transformer) is a type of AI product known as a large language model (LLM). LLMs are algorithms trained on vast amounts of text to engage in human-like conversation. Produced by OpenAI, ChatGPT is trained on large internet datasets that include information up to the year 2021 and can answer questions, follow instructions, and learn from its mistakes, though it should be noted that it has a limited memory and can repeat errors. While not perfect, these capabilities make it suitable for performing online research; synthesizing dense, lengthy, or technical materials; answering technical questions; generating content and code; and other work uses.

ChatGPT also has potential medical applications. It has passed the United States Medical Licensing Exam (USMLE) required for a medical license and could be deployed to interpret test results and provide evidence-based recommendations to support clinical decision-making, as well as perform technical lab functions.2

ChatGPT’s Lab Capabilities

Some of the other things ChatGPT could do include the following:3
    • Prepare lab and pathology reports
    • Translate medical jargon and technical terms from lab reports into everyday language that patients can comprehend (a brief illustration follows this list)
    • Analyze data from wearables, sensors, and other monitoring devices for the purposes of remote patient monitoring
    • Generate automated summaries of patient interactions, physician notes, and other information to facilitate medical recordkeeping
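
To make the jargon-translation use case concrete, here is a minimal sketch of how such a query could be scripted against OpenAI's API. It assumes the openai Python library's v0.27-era ChatCompletion interface (newer library versions use a different client class) and the gpt-3.5-turbo model; the report text is entirely synthetic, since, as the HIPAA discussion below explains, real PHI should never be entered into the public service.

```python
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder; load from a secure store, not source code

# Entirely synthetic report text: no real patient data should ever be sent
# to the public ChatGPT service (see the HIPAA discussion below).
synthetic_report = (
    "CBC: WBC 11.2 x10^9/L (high), Hgb 10.1 g/dL (low), platelets 410 x10^9/L. "
    "CMP: ALT 62 U/L (high), AST 55 U/L (high)."
)

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system",
         "content": "You translate clinical lab reports into plain, patient-friendly language."},
        {"role": "user",
         "content": f"Explain this lab report for a patient:\n{synthetic_report}"},
    ],
    temperature=0.2,  # lower temperature reduces, but does not eliminate, fabrication
)

print(response["choices"][0]["message"]["content"])
```

Even with a tightly scoped prompt like this, the output is a draft for expert review, not a finished patient communication.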

The Risks of Using ChatGPT

Despite these potential benefits, ChatGPT and other LLM-based AI programs contain flaws that make them highly risky for use in labs and other healthcare settings. Perhaps the greatest risk is inaccuracy. While ChatGPT may be able to pass the USMLE, it lacks the human knowledge, training, and experience required to practice lab science and medicine. The chatbot's output also contains errors, gaps, biases, and blind spots because these models are limited to the data they were trained on; for example, they are not aware of advances and events that occur after they are initially trained. Chatbots also cannot differentiate between reliable and unreliable sources. Most alarmingly, they tend to fabricate data outright, inventing fantastical and inaccurate biographies for well-known public figures and making up references to scientific studies that don't exist, while presenting these references and findings as facts.4

A new study published in Clinical Chemistry has questioned the clinical utility of using ChatGPT for laboratory medicine. Researchers asked ChatGPT questions on topics ranging from “basic knowledge to complex interpretation of laboratory data in clinical context.”5 Of the 65 questions, ChatGPT answered

    • 33 (50.7 percent) correctly,

    • 15 (23.1 percent) incompletely or only partially correctly,

    • 11 (16.9 percent) incorrectly or misleadingly, and

    • 6 (9.3 percent) irrelevantly.

The study cites other fundamental flaws that ChatGPT demonstrated, such as its inability to diagnose alcoholic hepatitis from a panel of liver enzyme results and its erroneous diagnosis of B-cell acute lymphoblastic leukemia when presented with a case of chronic myeloid leukemia with basophilic blast crisis.

Takeaway: Don’t Use ChatGPT to Perform Clinical Lab Functions

As evidenced by the Clinical Chemistry paper, ChatGPT’s inaccuracies can have catastrophic consequences for labs and their patients, and the authors caution labs against using it for clinical purposes: “While ChatGPT has the potential to improve medical education and provide faster responses to routine clinical laboratory questions, currently, it should not be relied upon without expert human supervision.”

Three ChatGPT Liability Risks

Using ChatGPT can also expose your lab to legal risks.

1. HIPAA Privacy Violations

Problem: Some AI chatbot applications involve the collection, use, and disclosure of protected health information (PHI) that HIPAA requires labs to keep private and secure. However, once PHI is entered into ChatGPT, it is no longer private or secure, as it becomes part of the chatbot's dataset. OpenAI's Privacy Policy specifically allows the company to use personal information acquired via the customer's use of the service, including log data, device information, and, most significantly, usage data:6

“We may automatically collect information about your use of the Services, such as the types of content that you view or engage with, the features you use, and the actions you take, as well as your time zone, country, the dates and times of access, user agent and version, type of computer or mobile device, computer connection, IP address, and the like.”

Strategy: Revise your lab data privacy and security policies to address the risks posed by ChatGPT and other AI-based chatbots. The safest approach is to ban employees from using ChatGPT to carry out functions requiring PHI and monitor their computer usage to ensure they’re not entering PHI into the app.
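
Monitoring for PHI in outbound prompts can be partially automated. Below is a minimal sketch of one such screen: a few regular expressions that flag obvious identifiers before a prompt leaves the lab's network. The patterns and the flag_possible_phi helper are our own simplified illustration, not a HIPAA standard, and a real data loss prevention (DLP) deployment would go considerably further.

```python
import re

# Illustrative patterns for a few obvious identifiers; real DLP tooling
# covers far more identifier types and formats than these examples do.
PHI_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "mrn": re.compile(r"\bMRN[:#]?\s*\d{6,10}\b", re.IGNORECASE),
    "dob": re.compile(r"\bDOB[:#]?\s*\d{1,2}/\d{1,2}/\d{2,4}\b", re.IGNORECASE),
}

def flag_possible_phi(text):
    """Return the names of any PHI-like patterns found in the text."""
    return [name for name, pattern in PHI_PATTERNS.items() if pattern.search(text)]

prompt = "Summarize results for patient MRN# 84512937, DOB: 04/12/1961."
hits = flag_possible_phi(prompt)
if hits:
    print(f"Blocked: prompt appears to contain PHI ({', '.join(hits)})")
else:
    print("No obvious PHI found; prompt may be forwarded for review.")
```

A screen like this catches only formatted identifiers; it cannot recognize a patient described in free text, which is why policy and training remain the primary controls.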

2. Copyright & Other Infringement

ChatGPT’s enormous data banks may include information protected by copyright and other intellectual property laws. Inadvertent use of that data thus exposes you to risk of copyright infringement. In recognition of this, ChatGPT’s Terms of Use (see Section 7[b] and [c]) expressly disclaim any warranties about noninfringement, leaving users to bear the risk of liability.7 Integrating ChatGPT data into your own writings or work product might also compromise your ownership over the output.8

Strategy: Be wary of circulating or republishing ChatGPT data or incorporating it into your own content, especially if you consider that end product proprietary. Rule of thumb: Treat ChatGPT material as the inspiration or starting point for research and not the product itself.

3. Discrimination

Data and algorithms built into ChatGPT incorporate the subtle biases and prejudices of the humans who created them. So, relying on it as a decision-making tool, particularly for employment, can expose your lab to discrimination liability risks.

Example: In 2018, Amazon pulled the plug on an AI-based recruitment program after discovering that the algorithm was skewed against women. The model was programmed to vet candidates by observing patterns in resumes submitted to the company over a 10-year period. Most of the candidates in the training set were men. As a result, the AI learned that male candidates were preferred over female candidates.9 In other cases, it has been reported that LLMs generated code stating that only White and Asian men make for good scientists.10

Strategy: Ensure your employees are aware of the discrimination risks associated with the use of ChatGPT and similar AI products. Based on G2 Intelligence’s research and the information available so far, here are some action steps labs may want to take:

    • Caution employees to be sensitive to these risks when taking instructions from or using content these platforms generate

    • Do a self-audit, rigorously testing your algorithms and AI-based selection tools with an eye toward tools that look neutral on their face but have the effect of discriminating against groups protected by anti-discrimination laws, as in the Amazon example above (a simple screening check is sketched after this list)

    • Include language addressing algorithm discrimination in your lab's equal opportunity and nondiscrimination policies
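
One widely used first screen for such a self-audit is the EEOC's "four-fifths rule": if any group's selection rate is less than 80 percent of the highest group's rate, the tool may be discriminating in effect. The sketch below applies that check to invented applicant counts; the numbers and group labels are hypothetical.

```python
# Hypothetical applicant counts for two demographic groups screened by an
# AI-based selection tool; replace with your lab's real audit data.
selections = {
    "group_a": (120, 48),  # (applicants, selected)
    "group_b": (130, 26),
}

rates = {group: selected / applicants
         for group, (applicants, selected) in selections.items()}
top_rate = max(rates.values())

for group, rate in sorted(rates.items()):
    ratio = rate / top_rate
    status = "OK" if ratio >= 0.8 else "POSSIBLE ADVERSE IMPACT"
    print(f"{group}: selection rate {rate:.0%}, {ratio:.2f} of top rate -> {status}")
```

Passing this check does not prove a tool is unbiased, but failing it is a strong signal that the tool needs closer legal and statistical review.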

Bottom Line: Decide Between a Total Ban & Limited Use of ChatGPT

Like most employers, labs must now decide on work-related use of ChatGPT. So far, many large companies, such as Samsung, Apple, Verizon, and JPMorgan Chase, have fully banned the technology. While this is the safest approach, a total ban may mean missing out on some of ChatGPT’s potential benefits. Thus, while it entails some risk, allowing limited use of ChatGPT may be worth considering. The key is to create a clearly written policy that imposes the right restrictions. To create your own policy around AI chatbots such as ChatGPT, download and adapt our template “Compliance Tool: Model ChatGPT Acceptable Use Policy” from the G2 Intelligence website.

References:

    1. https://www.ubs.com/global/en/wealth-management/our-approach/marketnews/article.1585717.html

    2. https://journals.plos.org/digitalhealth/article?id=10.1371/journal.pdig.0000198

    3. https://www.forbes.com/sites/bernardmarr/2023/03/02/revolutionizing-healthcare-the-top-14-uses-of-chatgpt-in-medicine-and-wellness/?sh=380d11fd6e54

    4. https://blogs.library.duke.edu/blog/2023/03/09/chatgpt-and-fake-citations/

    5. https://academic.oup.com/clinchem/advance-article-abstract/doi/10.1093/clinchem/hvad058/7180070?redirectedFrom=fulltext

    6. https://openai.com/policies/privacy-policy

    7. https://openai.com/policies/terms-of-use

    8. https://www.forbes.com/sites/joemckendrick/2022/12/21/who-ultimately-owns-content-generated-by-chatgpt-and-other-ai-platforms/?sh=10ff15295423

    9. https://www.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G

    10. https://www.bloomberg.com/news/newsletters/2022-12-08/chatgpt-open-ai-s-chatbot-is-spitting-out-biased-sexist-results
