White House Unveils New “Bill of Rights” for Responsible Use of AI

Nov 16, 2022 | Essential, National Lab Reporter

The US has taken its first steps to establish legal limits over AI use by publishing a voluntary code to ensure responsible corporate use of AI.

Technology moves much faster than lawmaking. The most recent manifestation of this age-old principle is the lack of regulation to curb the growing corporate use of artificial intelligence (AI) for surveillance and other intrusive purposes, not just in health care but in all aspects of the economy. While many countries around the world, particularly in Europe, have sought to establish legal limits over AI use, the US has lagged behind. But now the US has taken its first, albeit tentative, steps to address the problem by publishing a voluntary code to ensure responsible corporate use of AI. Here’s a briefing on the new guidelines and their potential significance.

The Need for Regulation of Corporate AI Use

Algorithms for identifying cancers; security systems based on facial and biometric identification; computers that can translate between languages: just about anything becomes possible when machine learning is applied to analyze massive amounts of data faster and more efficiently than the human brain.

But there’s also a price to pay for such technological advances, including the potential loss of individual privacy rights. Adding to the problem, the data used for AI analysis typically incorporates, and thus perpetuates, the prejudices and blind spots of those who supply it. Facial recognition technology, for example, has been linked to disproportionately high wrongful arrest rates among Black people and members of other marginalized communities. And AI-based medical decision support tools rely on historic utilization and case data that don’t account for the medical experiences of disadvantaged populations with limited access to the health system.

Growth of AI technology use has outstripped society’s capacity to regulate it. But in the past few years, lawmakers have started to catch up. Since 2018, the European Union (EU) has led the global effort to develop legal protections against AI abuses and plans to unveil comprehensive legislation called the EU AI Act in 2024.1 China, too, is working on a framework for regulating AI use. Approximately 60 countries now have official national AI strategies, according to the World Economic Forum. Nongovernmental organizations like the OECD and UNESCO have also proposed international best practices for responsible use of AI.

But widespread AI regulation is a more controversial and politically charged issue in the US. While a few states have passed legislation limiting the collection, use, and disclosure of biometric and facial recognition data, the US Congress has yet to tackle the issue. To date, national impetus and leadership to establish principles for responsible AI use have come from federal agencies, like the U.S. Department of Defense, and some private corporations, including Google.2,3

The New Biden AI Bill of Rights

On Oct. 5, the White House Office of Science and Technology Policy (OSTP) published a document called Blueprint for an AI Bill of Rights “to support [corporations in] the development of policies and practices that protect civil rights and promote democratic values in the building, deployment, and governance of automated systems.”4 The Blueprint sets out five basic principles of protection against AI abuses that members of the public should have:

1. Assurance of Safe and Effective Systems

You should be protected against AI systems that are unsafe or ineffective. Those who design, develop, and deploy systems should act proactively to protect you from harms, including use of inappropriate or irrelevant data and the “compounded harm” of its reuse. There should be independent evaluation and reporting to confirm that systems are safe and effective, with results made public whenever possible.
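
To make this concrete, here is a minimal Python sketch of what a pre-deployment safety evaluation for a hypothetical diagnostic classifier might look like. The metric thresholds, names, and sample data are illustrative assumptions, not requirements drawn from the Blueprint:

    # Illustrative sketch only: a pre-deployment safety check for a
    # hypothetical diagnostic classifier, scored on held-out cases.
    # Thresholds and field names are assumptions, not Blueprint text.

    def evaluate_safety(y_true, y_pred, min_sensitivity=0.95, min_specificity=0.90):
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
        tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
        sensitivity = tp / (tp + fn) if (tp + fn) else 0.0
        specificity = tn / (tn + fp) if (tn + fp) else 0.0
        return {
            "sensitivity": round(sensitivity, 3),
            "specificity": round(specificity, 3),
            "safe_to_deploy": (sensitivity >= min_sensitivity
                               and specificity >= min_specificity),
        }

    # Example: 1 = condition present, 0 = condition absent
    print(evaluate_safety([1, 1, 0, 0, 1], [1, 1, 0, 1, 1]))

The point of the sketch is the reporting: the evaluation runs against pre-registered pass/fail criteria, producing a summary that could be published, consistent with the Blueprint’s call for independent evaluation with public results.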

2. Protection from Algorithmic Discrimination

Systems should be designed equitably so you don’t have to face algorithmic discrimination of any kind. According to the Blueprint, “algorithmic discrimination occurs when automated systems contribute to unjustified different treatment or impacts disfavoring people based on their race, color, ethnicity, sex (including pregnancy, childbirth, and related medical conditions, gender identity, intersex status, and sexual orientation), religion, age, national origin, disability, veteran status, genetic information, or any other classification protected by law.” System designers and users should engage in “proactive equity assessments” to ensure that the data used is representative and doesn’t embed biases for or against particular demographic groups.
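
The Blueprint doesn’t prescribe how to run an equity assessment, but one common screening step is a disparate-impact check such as the widely used four-fifths (80 percent) rule. Here is a minimal, hypothetical Python sketch; the groups, outcomes, and threshold are invented for illustration:

    # Illustrative sketch: disparate-impact screening using the
    # four-fifths rule. Groups and outcomes below are hypothetical.

    from collections import defaultdict

    def selection_rates(records):
        """records: (group, favorable_outcome) pairs -> rate per group."""
        favorable, total = defaultdict(int), defaultdict(int)
        for group, outcome in records:
            total[group] += 1
            favorable[group] += int(outcome)
        return {g: favorable[g] / total[g] for g in total}

    def disparate_impact(records, threshold=0.8):
        rates = selection_rates(records)
        best = max(rates.values())
        # Flag any group whose favorable-outcome rate falls below
        # 80 percent of the best-treated group's rate.
        return {g: {"rate": round(r, 2), "flagged": r < threshold * best}
                for g, r in rates.items()}

    records = [("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0)]
    print(disparate_impact(records))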

3. Data Privacy

You should be protected from abusive data practices via protections built into system design, giving you notice and control over how your data is used. Collection, use, and disclosure of private data should be undertaken with your consent and limited to the minimum necessary to accomplish the stated purpose. There should also be additional limitations on the use of data related to health, work, education, criminal justice, finance, and other “sensitive domains.” You should also be able to find out how your data is being used and ensure that such use conforms with the expectations under which you gave consent.
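
As a hypothetical illustration of the “minimum necessary” idea, a system might whitelist the fields each purpose of use is permitted to see and strip everything else before releasing a record. The purposes, field names, and sample record below are invented for this Python sketch:

    # Illustrative sketch: purpose-based data minimization with a
    # consent gate. Purposes and field lists are hypothetical.

    ALLOWED_FIELDS = {
        "billing":   {"patient_id", "test_code", "payer"},
        "qc_review": {"test_code", "result_value", "instrument_id"},
    }

    def minimize(record, purpose, consented):
        """Release only the fields the stated purpose requires."""
        if not consented:
            raise PermissionError("no patient consent on file for this use")
        allowed = ALLOWED_FIELDS.get(purpose, set())
        return {k: v for k, v in record.items() if k in allowed}

    record = {"patient_id": "123", "test_code": "CBC",
              "result_value": 4.2, "payer": "Acme", "dob": "1970-01-01"}
    print(minimize(record, "billing", consented=True))
    # -> {'patient_id': '123', 'test_code': 'CBC', 'payer': 'Acme'}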

4. Notice and Explanation

“You should know that an automated system is being used and understand how and why it contributes to outcomes that impact you,” the Blueprint states. “Designers, developers, and deployers of automated systems should provide generally accessible plain language documentation including clear descriptions of the overall system functioning and the role automation plays, notice that such systems are in use, the individual or organization responsible for the system, and explanations of outcomes that are clear, timely, and accessible.” Summary reports explaining data uses in plain language should be made public whenever possible.
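
One hypothetical way a deployer might generate such a plain-language notice is from a simple template. The wording, system name, and contact details in this Python sketch are assumptions for illustration only:

    # Illustrative sketch: generating a plain-language notice for an
    # automated decision. All wording and fields are hypothetical.

    NOTICE_TEMPLATE = (
        "An automated system ({system}) was used in this decision. "
        "Role of automation: {role}. Responsible party: {owner}. "
        "Outcome: {outcome}. Questions or appeals: {contact}."
    )

    def build_notice(system, role, owner, outcome, contact):
        return NOTICE_TEMPLATE.format(system=system, role=role,
                                      owner=owner, outcome=outcome,
                                      contact=contact)

    print(build_notice("SlideScreen v2", "pre-screened slides for review",
                       "Acme Lab Services", "flagged for pathologist review",
                       "privacy@acmelab.example"))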

5. Human Alternatives, Consideration, and Fallback

“You should be able to opt out from automated systems in favor of a human alternative, where appropriate” based on “reasonable expectations.” If an automated system fails in a way that may impact you, there should be a way that you can access “timely human consideration” and ensure corrective action is taken. You should also understand how the escalation and correction processes work and get timely and accessible reports of their results, according to the Blueprint.
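
As a sketch of what a human fallback might look like in an automated workflow, the following hypothetical Python routine finalizes only high-confidence results and escalates everything else to a human review queue; the confidence threshold and case data are invented for the example:

    # Illustrative sketch: escalating low-confidence automated results
    # to timely human review. Threshold and queue are assumptions.

    REVIEW_QUEUE = []

    def decide(case_id, model_score, threshold=0.90):
        """Auto-finalize only confident results; otherwise escalate."""
        if model_score >= threshold:
            return {"case": case_id, "status": "auto-finalized"}
        REVIEW_QUEUE.append(case_id)  # a human must act on this case
        return {"case": case_id, "status": "escalated to human review"}

    print(decide("case-001", 0.97))
    print(decide("case-002", 0.62))
    print("pending human review:", REVIEW_QUEUE)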

Significance of the Blueprint

As the OSTP is quick to stress, the Blueprint “is non-binding and does not constitute U.S. government policy” or an official interpretation of any existing laws. In other words, the big tech companies at whom the Blueprint is aimed are under no compulsion to follow it and face no penalties if they choose not to. By contrast, the EU AI Act, which contains far more robust protections than the Blueprint, will become enforceable law when it takes effect in 2024. Still, the Blueprint isn’t devoid of significance. While not a regulatory document, it establishes for the first time a set of broad principles and best practices for corporations to follow in responsibly designing, deploying, and implementing AI systems. In addition to guiding companies that want to follow it voluntarily, the Blueprint provides a framework for future legislation and regulation, should it come to that.

References:

  1. https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai
  2. https://www.defense.gov/News/Releases/Release/Article/2091996/dod-adopts-ethical-principles-for-artificial-intelligence/
  3. https://ai.google/principles/
  4. https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf
