What should U.S. lawmakers, regulators, and companies think of the proposed E.U. approach to regulating artificial intelligence?
As artificial intelligence continues to be a boon for companies in nearly every sector, executives and regulators alike are struggling to understand both the benefits and the risks AI will bring to companies and customers. In the wake of these pressures, the European Union recently issued a much-anticipated white paper on artificial intelligence that attempts to set forth a “regulatory and investment oriented approach” to “promote the development and deployment of AI”. Given the E.U.’s impact on U.S. and global technology, privacy, and security laws, the white paper is instructive and informative even for U.S. organizations not doing business in the E.U. Indeed, it is no secret that Washington and Silicon Valley are looking seriously into AI regulation, and they will likely lean on this white paper as they develop their own AI strategies and policies.
A clear signal from the paper is that how laws and regulations classify and define AI, and the use cases for it, will significantly impact the applicability of legal regimes to companies and their technology. Notably, one of the fundamental elements of the paper is that the E.U. should have strict legal requirements for “high-risk” uses of AI technology. A high risk is present when there is “a risk of injury, death or significant material or immaterial [in the sense of intangible, as opposed to not significant] damage; that produce effects that cannot reasonably be avoided by individuals or legal entities.” “Material” damage includes circumstances impacting the safety and health of individuals, including loss of life, as well as damage to property. “Immaterial” damage includes occurrences such as loss of privacy, limitations on the right to freedom of expression, loss of human dignity, and discrimination.
For example, “high-risk” uses of AI could include a physician diagnosing and treating an illness, a car deciding what to do in a dangerous situation, or a bank determining who gets a loan and at what interest rate. The regulatory challenge, however, is determining the boundaries of what is and what is not within the high-risk “bucket.” We expect that many of the significant legislative and regulatory debates surrounding the white paper will focus on advocating for or against particular applications of AI being deemed high-risk.
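To make the “bucket” problem concrete, here is a minimal, purely illustrative sketch of how a compliance team might encode a first-pass screen for whether an AI use case risks the kinds of “material” or “immaterial” damage described above. The fields, thresholds, and logic are our own assumptions for illustration; the white paper does not prescribe any such test.

```python
from dataclasses import dataclass

# Hypothetical screen only -- the white paper does not prescribe these fields or this logic.
@dataclass
class AIUseCase:
    description: str
    can_cause_physical_harm: bool       # "material" damage: safety, health, property
    can_cause_intangible_harm: bool     # "immaterial" damage: privacy, expression, discrimination
    harm_avoidable_by_individual: bool  # can the affected person reasonably avoid the effect?

def is_high_risk(use_case: AIUseCase) -> bool:
    """First-pass screen echoing the white paper's language: damage (material or
    immaterial) producing effects individuals cannot reasonably avoid."""
    damage_possible = use_case.can_cause_physical_harm or use_case.can_cause_intangible_harm
    return damage_possible and not use_case.harm_avoidable_by_individual

# Examples drawn from the article's illustrations.
loan_pricing = AIUseCase("bank sets loan eligibility and interest rate",
                         can_cause_physical_harm=False,
                         can_cause_intangible_harm=True,    # discrimination risk
                         harm_avoidable_by_individual=False)
movie_recs = AIUseCase("streaming service recommends movies",
                       can_cause_physical_harm=False,
                       can_cause_intangible_harm=False,
                       harm_avoidable_by_individual=True)

print(is_high_risk(loan_pricing))  # True
print(is_high_risk(movie_recs))    # False
```

Even a toy screen like this shows where the debates will live: every field and threshold is a policy choice about which uses fall inside the bucket.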
Another challenge the white paper raises is the inherent friction between AI innovation, which relies in large part on “black box” or secret algorithms, and AI transparency, meaning insight into how and why an AI tool makes the decisions that it makes. This latter concept is referred to as “explainable AI” or XAI. As companies often spend significant resources developing (and protecting the confidentiality of) AI technologies, many may push hard against revealing their non-XAI code and technology to prying eyes.
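To illustrate the transparency gap in code, the sketch below contrasts an opaque scoring function with an “explainable” variant that returns the factors behind each decision. It is a toy example using invented feature names and weights, not a description of any real lender’s model or of the white paper’s requirements.

```python
# Toy contrast between a "black box" decision and an explainable one.
# Feature names and weights are invented for illustration only.

def black_box_loan_decision(applicant: dict) -> bool:
    # Opaque: returns only the outcome, with no insight into why.
    score = (0.4 * applicant["credit_score"] / 850
             + 0.3 * min(applicant["income"] / 100_000, 1.0)
             - 0.3 * applicant["debt_ratio"])
    return score > 0.5

def explainable_loan_decision(applicant: dict):
    # XAI-style: returns the outcome *and* each factor's contribution,
    # so a regulator or customer can see what drove the result.
    contributions = {
        "credit_score": 0.4 * applicant["credit_score"] / 850,
        "income":       0.3 * min(applicant["income"] / 100_000, 1.0),
        "debt_ratio":  -0.3 * applicant["debt_ratio"],
    }
    approved = sum(contributions.values()) > 0.5
    return approved, contributions

applicant = {"credit_score": 700, "income": 65_000, "debt_ratio": 0.35}
approved, reasons = explainable_loan_decision(applicant)
print(approved)
for factor, weight in sorted(reasons.items(), key=lambda kv: -abs(kv[1])):
    print(f"{factor}: {weight:+.3f}")
```

The tension described above is visible even here: the explainable version is more useful to a regulator or plaintiff precisely because it exposes the weighting a company may regard as proprietary.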
Advocates for greater transparency in AI contend that laws are, at least in part, intended to hold individuals and organizations accountable for their actions when they violate legal requirements. To enforce accountability under a law, it is therefore necessary to understand the root cause of the “problem.” Under current legal frameworks, if a smart car suddenly faced with an obstacle swerves onto a sidewalk and injures a person rather than swerving into oncoming traffic, the traffic ticket and the lawsuit will likely be directed at the driver of the car, not at the algorithm that made the decision to swerve. Accordingly, many regulators and lawmakers see a strong need for AI-specific laws to include transparency provisions, so law enforcement, regulators, and plaintiffs’ attorneys can look to companies to take responsibility for their own products.
As a crucial point to consider, the white paper’s emphasis on regulating “high-risk” AI does not eliminate regulatory scrutiny for other AI use cases. Even some “low-risk” circumstances may be problematic or dangerous. We may not think there is much harm in how Amazon determines which products to recommend or how Netflix determines which movies to recommend, but such decisions can have meaningful consequences. This kind of AI technology also underpins targeted advertising, which can be more concerning to consumers and can lead to selection and discrimination problems.
To illustrate the concerns with AI in the advertising space, consider how targeted advertising can result in users seeing Amazon ads on their favorite news or entertainment site for the same shoes they searched for on Amazon but did not buy (or maybe did buy) the day before. To display ads in this manner, marketing data aggregators build profiles on individuals based on things like what products they buy and what movies they rent. These profiles are then used by retailers (and can be shared with banks, insurance companies, and others) to “screen” customers and potential customers. This, in turn, can result in “digital redlining”: the practice of discriminating (sometimes lawfully, sometimes unlawfully) against customers based on their marketing profiles.
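As a rough illustration of how a marketing profile can feed a screening decision, the sketch below builds a hypothetical profile from purchase history and then gates an offer on an inferred attribute. Every field, inference rule, and threshold here is invented; the point is only that a decision keyed to profile data can reproduce redlining-like outcomes without anyone coding discrimination explicitly.

```python
from dataclasses import dataclass, field

# All fields and rules below are hypothetical, for illustration only.
@dataclass
class MarketingProfile:
    user_id: str
    zip_code: str
    purchases: list = field(default_factory=list)
    inferred_income_band: str = "unknown"

def build_profile(user_id: str, zip_code: str, purchases: list) -> MarketingProfile:
    profile = MarketingProfile(user_id, zip_code, purchases)
    # Crude inference from spending -- exactly the kind of proxy that can encode
    # bias when spend or zip code correlates with protected characteristics.
    total_spend = sum(price for _, price in purchases)
    profile.inferred_income_band = "high" if total_spend > 1_000 else "low"
    return profile

def screen_for_premium_offer(profile: MarketingProfile) -> bool:
    # A retailer (or a bank the profile is shared with) gating an offer on profile
    # attributes; if the "low" band tracks certain neighborhoods, the aggregate
    # outcome looks like digital redlining.
    return profile.inferred_income_band == "high"

profile = build_profile("u123", "60623", [("running shoes", 89.99), ("blender", 45.00)])
print(screen_for_premium_offer(profile))  # False: screened out based on inferred data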
We will continue to monitor the legal and regulatory landscape related to artificial intelligence and machine learning and will provide updates as developments progress. For several years now, we have seen the difficulties caused by differences in how countries and jurisdictions view AI and its impact on individuals, including impacts on privacy and the potential for discrimination. These differing views tend to create substantial burdens for individuals and organizations that live and operate across multiple continents, countries, and states. The potential benefits of artificial intelligence, however, are too important to weigh down with the anchors of significant jurisdictional legal differences. At the end of the day, in light of the global economy, and particularly because technology developments know no borders, it is important for the United States and the E.U. (and other countries) to develop, if not identical, then at least reconcilable regimes for regulating artificial intelligence. We shall see.