AI GOVERNANCE IN LIGHT OF RECENT CASES AND REGULATORY ACTIVITY


Artificial intelligence technology matured significantly in 2023, prompting a flood of laws and standards attempting to regulate it.

Here's a look at the major AI events of 2023, what may come in 2024, and some practical tips for responding to the challenges and opportunities that lie ahead. 

FTC ACTIVITY IN 2023 AND LEARNING FROM THE RITE AID DECISION 

In May, the Federal Trade Commission put businesses on notice that existing laws, such as Section 5 of the FTC Act, the Fair Credit Reporting Act, and the Equal Credit Opportunity Act, apply to AI systems. The FTC also brought a number of enforcement actions against corporations. Its latest, a December action against Rite Aid, found that Rite Aid failed to consider the heightened risk that consumers of certain races or genders would be falsely tagged as potential shoplifters by its AI facial recognition technology. Rite Aid was ordered to delete the images it collected, as well as any algorithms developed using those images. Rite Aid was also ordered to:

  • Notify consumers when their biometric information is enrolled in a database used in connection with a biometric security or surveillance system and when Rite Aid takes some kind of action against them based on an output generated by such a system; 

  • Investigate and respond in writing to consumer complaints about actions taken against consumers related to an automated biometric security or surveillance system; 

  • Provide clear and conspicuous notice to consumers about the use of facial recognition or other biometric surveillance technology in its stores; 

  • Delete any biometric information it collects within five years; 

  • Implement a data security program to protect and secure personal information it collects, stores, and shares with its vendors; 

  • Obtain independent third-party assessments of its information security program; and 

  • Provide the Commission with an annual certification from its CEO documenting Rite Aid’s adherence to the order’s provisions. 
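Requirements like the order's five-year deletion mandate ultimately have to be operationalized in code. The sketch below is a minimal, illustrative retention check; the record format, field names, and cutoff calculation are assumptions for the example, not taken from the order itself.

```python
from datetime import datetime, timedelta

# Illustrative retention window inspired by the order's five-year
# deletion requirement (approximated here as 5 * 365 days).
RETENTION = timedelta(days=5 * 365)

def records_due_for_deletion(records, now=None):
    """Return IDs of biometric records older than the retention window.

    Each record is assumed to be a dict with 'id' and 'collected_at' keys;
    this schema is hypothetical, for illustration only.
    """
    now = now or datetime.utcnow()
    return [r["id"] for r in records if now - r["collected_at"] > RETENTION]

records = [
    {"id": "a1", "collected_at": datetime(2018, 1, 1)},  # past the window
    {"id": "b2", "collected_at": datetime(2024, 1, 1)},  # recent
]
print(records_due_for_deletion(records, now=datetime(2024, 6, 1)))  # ['a1']
```

In practice a check like this would feed an automated deletion job and an audit log, so compliance with the retention provision can be demonstrated rather than asserted.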

The FTC has also made clear through its actions last year that it will continue to use model deletion as a remedy. 

OTHER KEY AI DEVELOPMENTS IN THE UNITED STATES 

On 30 Oct. 2023, U.S. President Joe Biden issued the "Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence," recognizing the benefits of the government's use of AI while detailing core principles, objectives, and requirements to mitigate risks. Building off the executive order, the Office of Management and Budget followed with its proposed memo "Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence." The OMB memo outlines requirements for government agencies as they procure, develop, and deploy AI. While the executive order and OMB memo apply to the federal government, companies providing services to the government may also be subject to these requirements. 

The AI policy landscape has also continued to evolve across the U.S., with a flurry of state- and city-level action over the past year. Under state omnibus privacy laws, Colorado finalized rulemaking on profiling and automated decision-making, and California proposed rulemaking on automated decision-making technologies. Several other states passed similar laws requiring an opt-out for certain automated decision-making and profiling. Connecticut passed legislation establishing a working group on AI and requirements for government use. Illinois and Texas established task forces to study the use of AI in education and government systems and the potential harm AI could cause to civil rights. 

TRENDS TO EXPECT IN 2024 

2024 will bring more adoption and novel uses of AI tools and systems by government, private entities, and individuals. As a result, more legislation and regulatory scrutiny around the uses of AI is expected. Around the globe, in addition to the new EU AI Act’s risk-based approach, more countries will likely consider and pass AI laws. 

In the U.S., more states will likely require data protection assessments for profiling and automated decision-making, and possibly opt-in consent for profiling, as proposed in some pending bills. Several states are also proposing laws on AI in employment contexts, including notice to employees, restrictions on use in employment decisions and monitoring, and employee rights to request information used for AI development. We can also expect more laws and enforcement activity focused on preventing discriminatory harms in credit scoring, hiring, insurance, health care, and targeted advertising. 

PRACTICAL TIPS FOR AI GOVERNANCE IN 2024 

With so much change coming, AI governance becomes critical. Consider the following practical tips for 2024: 

  1. Develop and update AI policies, processes and frameworks 

    Have a process in place to keep up to date with changes in AI technologies, laws, use cases and risks. This will help ensure you have up-to-date information to keep policies and frameworks current and compliant.   

    Ensure accountability by designating personnel responsible for your AI program, and have an effective process to train individuals on AI policies, guidance, and the use of frameworks such as the NIST AI Risk Management Framework.

    In developing policies and procedures, consider the life cycle of your AI systems and tools, from the data used to train AI models in development, to data inputs and outputs processed in production. Policies and frameworks should address: securing AI systems and data; incident response procedures; data sourcing practices; data minimization and retention; assessing and monitoring systems for data integrity, bias, safety, and discriminatory or disparate impacts to individuals; assessing the likelihood of inaccurate outputs; and societal harms. 

    Review policies and statements about your AI systems and data practices to ensure they align with your existing privacy and security policies. 

  2. Conduct AI inventories and risk assessments, monitor suppliers 

    Conduct an inventory of existing AI systems. Identify and document the various AI systems in use, the content and data they process, the outputs they produce, and any recipients of data or content. Once you have conducted an AI inventory, use this information to conduct an AI risk assessment. 
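The inventory step above can be sketched as a simple record structure that captures, per system, the data it processes, the outputs it produces, and the recipients of those outputs. The field names, risk levels, and example entry below are illustrative assumptions, not drawn from any law or framework.

```python
from dataclasses import dataclass, field

# Hypothetical inventory record for one AI system; fields mirror the
# items the text suggests documenting (data, outputs, recipients).
@dataclass
class AISystemRecord:
    name: str
    purpose: str
    training_data_sources: list = field(default_factory=list)
    data_inputs: list = field(default_factory=list)
    outputs: list = field(default_factory=list)
    data_recipients: list = field(default_factory=list)
    risk_level: str = "unassessed"  # placeholder until a risk assessment is done

# Example entry for a hypothetical system.
inventory = [
    AISystemRecord(
        name="resume-screening-tool",
        purpose="Rank job applicants",
        training_data_sources=["historical hiring data"],
        data_inputs=["applicant resumes"],
        outputs=["suitability scores"],
        data_recipients=["HR team"],
        risk_level="high",
    )
]

# A simple view that flags systems still awaiting a risk assessment.
unassessed = [r.name for r in inventory if r.risk_level == "unassessed"]
print(unassessed)  # []
```

Even a lightweight structure like this makes the follow-on risk assessment concrete: each record becomes a unit of review, and the `risk_level` field tracks which systems still need attention.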

    Don't overlook third-party AI solutions and the use of AI by third-party vendors as part of your assessment. For third-party AI solutions, request their AI policies and administer AI due diligence questionnaires. PRIVATECH’s Supplier Privacy Risk Management Toolkit includes template questionnaires with privacy, data security, and AI sections, as well as template contractual clauses. 

  3. Leverage existing principles and resources  

    As organizations grapple with the new challenges and changing landscapes posed by AI technologies and regulation, many areas of uncertainty remain. Initial AI governance efforts will need to adapt continuously as new technologies, use cases, laws and regulations, and market standards evolve. As a result, AI governance efforts should favor flexible strategies in 2024 and in the years to come. 

FOR MORE ASSISTANCE WITH RESOURCES OR TO ASSESS YOUR AI PRACTICES AND POLICIES, CONTACT US! 
