
Compliance Tool: Model ChatGPT Acceptable Use Policy

Jul 4, 2023 | Essential, Lab Compliance Advisor, Tool

As generative AI chatbot use increases, labs may want to implement a written acceptable use policy for staff.

Employee use of ChatGPT or other generative artificial intelligence (AI) chatbots can expose your lab to significant clinical and legal risk. For example, AI chatbots’ inability to reliably distinguish between similar diseases may lead to an incorrect diagnosis, and merely entering patient information into such applications can violate the Health Insurance Portability and Accountability Act (HIPAA), exposing labs to fines and other penalties. That’s why you may want to take preventive action by implementing a written acceptable use policy for such products. One option is to impose a complete ban on using AI chatbots for work. This template takes a less extreme, but still strict, approach that allows limited use of ChatGPT and other AI chatbots. Consult legal counsel when adapting this policy for use in your own lab.

******

Policy on Workplace Use of AI Chatbots

1. PURPOSE

Employees may be considering using ChatGPT, Bard, Bing, and/or other generative artificial intelligence (AI) products known as “AI chatbots” for work-related purposes. Although [Lab Name Here] appreciates AI chatbots’ potential to simplify and improve laboratory functions ranging from research and content creation to interpretation of test results and clinical decision-making, it also recognizes the inherent risks, including those regarding testing accuracy, quality control, and organizational compliance with regulatory requirements, clinical standards, and ethical obligations. Having weighed the advantages and disadvantages, our lab has decided to implement clear and specific limitations on employee use of AI chatbots. The purpose of this Policy is to ensure that you understand and adhere to these limitations.

2. SCOPE OF POLICY

This Policy applies to all [Lab Name Here] employees, contractors, temporary employees, interns, volunteers, and third parties with access to AI chatbots, regardless of whether the computers or devices through which that access is obtained are owned by the lab or personally owned.

3. USE POLICY

AI chatbots may not be used to conduct [Lab Name Here] business. Disallowed uses of AI chatbots include, but are not limited to:

    • Interpreting test results

    • Making or supporting treatment decisions

    • Performing other clinical functions

    • Communicating with patients or providers

    • Generating internal communications

    • Creating or maintaining medical records

    • Creating content that is expected to be original and/or proprietary to [Lab Name Here]

This list of impermissible uses is intended to be general and illustrative rather than comprehensive. It is supplemented and clarified by the Policy provisions below.

4. PERMISSIBLE USES

Employees may use AI chatbots for general education and research purposes to the extent those uses are:

    • Work-related,

    • Not deemed impermissible under Section 3, and

    • Carried out in accordance with the restrictions, requirements, and protocols set forth below.

5. BAN ON USE OF PRIVATE & CONFIDENTIAL INFORMATION

Employees must be aware that information cannot be kept fully confidential once it is entered into AI chatbots. Accordingly, use of AI chatbots must comply with the Health Insurance Portability and Accountability Act (HIPAA) and other applicable personal privacy laws, as well as [Lab Name Here] data privacy and security and confidentiality policies. Do NOT use AI chatbots to perform functions that require you to enter, collect, use, or disclose:

    • Protected health information (PHI) (as that term is defined in the [Lab Name Here] Data Privacy Policy) about a patient or any other individual;

    • [Lab Name Here] trade secrets, client information, processes, or other proprietary or business information that you are required to keep confidential;

    • Information about vendors, customers, clients, or other third parties that [Lab Name Here] is contractually required to keep confidential.

6. RISKS OF DISCRIMINATION

Employees must be aware that AI chatbots’ data and algorithms may contain hidden prejudices or biases or be based on stereotypes about people of certain races, sexes, ages, religions, or other classes protected under anti-discrimination laws. Accordingly, employees may not use AI chatbots for purposes of recruiting, hiring, promoting, retaining, or making other employment-related decisions unless and until [Lab Name Here] legal counsel vets and verifies that the applications and tools relying on AI chatbot data are fully compliant with applicable federal and state anti-discrimination laws and will not have the indirect effect of discriminating against the groups or individuals those laws are designed to protect.

7. RISKS OF ERROR & INACCURACY

Employees must be aware that data and material generated by AI chatbots may be inaccurate, misleading, or even fabricated. Accordingly, employees must not rely on AI chatbots’ data to make clinical, research, or business decisions unless and until a competent person verifies that the data is fully accurate.

8. RISKS OF COPYRIGHT INFRINGEMENT

Employees must be aware that content generated by AI chatbots may incorporate data or material protected by copyright and other intellectual property laws and that use of that content may expose [Lab Name Here] to risk of liability for infringement. Accordingly, do not republish, distribute, or incorporate AI chatbot content into work products that are intended to be proprietary to [Lab Name Here] unless and until legal counsel verifies that use of such content is permissible under intellectual property laws.

9. OTHER EMPLOYEE DUTIES

In addition to complying with the above requirements, employees must:

    • Notify a supervisor or manager of their work-related use of AI chatbots, regardless of whether access is via a [Lab Name Here] or personally owned computer or device,

    • When in doubt about whether a proposed AI chatbot use is permissible, get express confirmation of permissibility from a supervisor or manager before engaging in the use,

    • Ensure that content generated by AI chatbots is labeled or footnoted to clearly indicate that it contains AI chatbot information, and

    • Immediately report any Policy violations they are aware of to a supervisor or manager.

10. MONITORING OF USAGE

Employees acknowledge and accept that [Lab Name Here] reserves the right, at its sole discretion and at any time, to monitor and access all employee computer files, emails, and other communications to verify compliance with this Policy, regardless of whether employees access AI chatbots through a [Lab Name Here] or personally owned computer or device, and that they have no reasonable expectation of privacy in such data.

11. CONSEQUENCES OF VIOLATIONS

Failure to comply with this Policy will be treated as serious misconduct that may result in disciplinary action, up to and including termination, in accordance with [Lab Name Here] disciplinary policies and the terms of applicable collective bargaining agreements. Employees who are aware of violations should report the misconduct to the HR department. No employee will be subject to retaliation, discrimination, or adverse treatment of any kind by [Lab Name Here] or any of its personnel or agents in reprisal for reporting Policy violations in good faith.

12. ACKNOWLEDGMENT

By signing this Policy, I acknowledge that I have read, understand, and promise to comply with all its requirements.

Employee Signature: ____________________________

Date: ____________________________
