1 Legal and enforcement framework
1.1 In broad terms, which legislative and regulatory provisions govern AI in your jurisdiction?
Despite AI’s ubiquity across virtually every technology and healthcare field, there is no comprehensive federal legislation on AI in the United States to date.
The US Congress has nonetheless enacted, and is considering, several pieces of legislation that will regulate certain aspects of AI. The executive branch continues to adopt directives and rulemaking that will affect the use of AI. In February 2020, the Electronic Privacy Information Center petitioned the Federal Trade Commission (FTC) to conduct rulemaking on the use of AI in commerce, in order to define and prevent consumer harms resulting from AI products. We expect other organisations and groups to increasingly press the FTC and other governmental agencies to establish regulations on AI use.
Meanwhile, much of the governing legal framework is through the cross-application of rules and regulations governing traditional disciplines such as product liability, data privacy, intellectual property, discrimination and workplace rights. Self-regulation and standards groups also contribute to the governing framework.
1.2 How is established or ‘background’ law evolving to cover AI in your jurisdiction?
On the torts front, many states have passed autonomous vehicle (AV) legislation to help address liabilities associated with self-driving cars. For example, these laws may identify safety standards for AV testing, impose limits on AV manufacturers’ liability or set insurance requirements.
At the federal level, following Executive Order 13859 and the establishment of the AI Initiative, the federal hub whitehouse.gov/ai was launched. In early 2020, the Office of Management and Budget issued guidance on how to develop regulatory and non-regulatory approaches to AI technology, and on potential ways to reduce barriers to the use of AI so as to promote innovation in the private sector. The guidance sets out a set of principles (described in question 1.8) to consider when formulating regulatory and non-regulatory approaches. It further provides that if existing regulations are sufficient, or if the costs of new regulation would outweigh the benefits, relevant agencies may pursue alternative approaches. Some believe that this AI guidance is, or will become, a de facto set of regulatory principles.
In April 2020, the FTC published further guidance on the commercial use of AI technology, acknowledging that while AI technology has significant positive potential, it also presents risks, such as unfair or discriminatory outcomes or the entrenchment of existing disparities. The FTC urged companies to:
- be transparent with consumers;
- explain how algorithms make decisions;
- ensure that decisions are fair, robust and empirically sound; and
- hold themselves accountable for compliance, ethics, fairness and non-discrimination.
Failure to uphold these principles could lead to liability for companies under the existing regulatory framework, such as:
- the Fair Credit Reporting Act;
- the Equal Credit Opportunity Act;
- Title VII of the Civil Rights Act of 1964;
- the Americans with Disabilities Act;
- the Age Discrimination in Employment Act;
- the Fair Housing Act;
- the Genetic Information Nondiscrimination Act; and
- the FTC’s general authority under the FTC Act to bring enforcement actions against unfair and deceptive trade practices.
1.3 Is there a general duty in your jurisdiction to take reasonable care (like the tort of negligence in the United Kingdom) when using AI?
Yes, negligence is an established tort under US common law and is codified in many state statutes. The primary factors to consider for negligence are:
- whether an action lacks reasonable care;
- the foreseeable likelihood that such action would result in harm;
- the foreseeable severity of the harm; and
- any precautionary burdens to eliminate or reduce the harm.
The four elements required to establish negligence are:
- the existence of a legal duty;
- breach of that legal duty;
- injury suffered by the plaintiff; and
- proof that the defendant’s breach of legal duty caused the injury.
1.4 For robots and other mobile AI, is the general law (eg, in the United Kingdom, the torts of nuisance and ‘escape’ and (statutory) strict liability for animals) applicable by analogy in your jurisdiction?
The United States imposes tort liability for both private nuisance and public nuisance. Another type of strict liability relevant to AI devices is consumer product liability, which concerns a manufacturer’s liability for defective products.
1.5 Do any special regimes apply in specific areas?
Rights and protections for intellectual property are primarily regulated at the federal level, with some state-level statutes around trademarks and trade secrets.
1.6 Do any bilateral or multilateral instruments have relevance in the AI context?
The EU General Data Protection Regulation (GDPR) will likely affect AI companies that are established in the European Union or otherwise fall within its territorial scope. Article 22 of the GDPR states that a “data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her”, unless certain conditions are present. One permitted condition is the express and informed consent of the data subject. This will likely affect how companies approach AI transparency and bias.
1.7 Which bodies are responsible for enforcing the applicable laws and regulations? What powers do they have?
No particular body is currently designated to enforce AI-related policies. Laws applicable to AI may be enforced at the federal level, at the state level and/or through private rights of action, depending on the area of law.
1.8 What is the general regulatory approach to AI in your jurisdiction?
No general regulatory framework on AI currently exists in the United States, but the White House Office of Science and Technology Policy has promulgated 10 principles to consider when formulating regulatory and non-regulatory approaches to the development and use of AI:
- Establish public trust in AI;
- Encourage public participation and public awareness of AI standards and technology;
- Apply high standards of scientific integrity and information quality to AI and AI decisions;
- Use transparent risk assessment and risk management approaches in a cross-disciplinary manner;
- Assess full societal costs, benefits, and other externalities in considering the development and deployment of AI;
- Pursue performance-based and flexible approaches so as to adapt to the rapidly changing nature of AI;
- Evaluate issues of fairness and non-discrimination in AI application;
- Determine appropriate levels of transparency and disclosure to increase public trust;
- Maintain controls to ensure confidentiality, integrity and availability of AI data such that the AI developed is safe and secure; and
- Encourage inter-agency coordination to help ensure the consistency and predictability of AI policies.
2 AI market
2.1 Which AI applications have become most embedded in your jurisdiction?
In the United States, AI exists in many different forms and through different functions and applications. Some examples of AI technology include:
- natural language processing;
- logical AI inferencing;
- machine learning;
- artificial neural networks; and
- machine perception and motion manipulation.
These technologies can perform functions such as automation, predictive analytics, image recognition and classification, speech-to-text and text-to-speech conversion, text analytics and generation, voice-controlled assistance, and language translation.
Beyond specific technologies that are enabled by AI, market applications abound:
- In healthcare, AI allows users to analyse their own health data to identify anomalies, diagnose disorders and prescribe solutions;
- In the automotive realm, AI is facilitating the design and operation of autonomous vehicles (AVs);
- In finance and economics, AI is helping fund managers to deploy assets and harvest dividends and returns;
- In e-commerce, AI is assisting e-tailers in predicting which products consumers will want to buy and suggesting those products to them;
- In cybersecurity, AI is helping to identify and eliminate threats;
- In law, AI is being used to sift terabytes of data in seconds, identify discoverable evidence and conduct due diligence to flag potential liabilities;
- In corporate governance, AI can assist in mitigating and managing risks, compliance and ethics within corporations;
- In video gaming, AI is being used to predict player behaviour, identify anti-social conduct and increase the sale of virtual goods; and
- In the military, AI is being used to identify threats and increase security.
2.2 What AI-based products and services are primarily offered?
Autonomous vehicles and AV systems, AI-enabled connected devices, and software platforms that use AI to provide services are the most common.
2.3 How are AI companies generally structured?
AI companies run the gamut from rapidly growing start-ups to large-cap (even mega-cap) companies such as Nvidia, Alphabet, Salesforce, Amazon and Microsoft. AI companies do not fall squarely within any traditional business model, as they often combine elements of a technology company, a software company and a services delivery model. Unlike software-as-a-service companies, AI companies have continually high computing and data needs, as a single AI model can require significant amounts of training data and computing resources. AI technology can also have demanding support and human oversight requirements, such as humans needed to manually clean and label datasets, or human input needed to augment AI-based systems. As AI companies begin to scale, we expect best practices and evolved business models to emerge.
Companies using AI can develop AI capabilities themselves, license AI from a third party, acquire AI companies or pursue a combination of the above. If developing AI capabilities in-house, companies should take pains early on to establish enforceable IP protection for their AI technology. If acquiring AI from a third party, a software licence will typically be required and the acquisition may also involve the purchase, lease or licensing of equipment, services or data. If exporting products to third countries, AI companies must obtain the authorisations needed to export legally. The US government imposed additional export restrictions in January 2020 through an amendment to the Export Administration Regulations involving the Departments of Commerce, Defense and State. The measures were announced by the Bureau of Industry and Security and applied under the Export Control Reform Act of 2018. These restrictions make it harder for US companies to export AI technology, likely in an effort to keep key technologies out of the hands of geopolitical rivals.
2.4 How are AI companies generally financed?
The United States has a highly evolved venture capital market, based mainly in the San Francisco Bay Area, which pools capital and deploys it into private emerging growth companies in exchange for equity or debt instruments carrying differing levels of governance and economic rights. AI start-ups are typically financed by venture capital firms. Larger established companies that are evolving existing products and services to integrate AI capabilities typically fund these efforts through:
- research, development and innovation budgets;
- corporate venturing groups that secure the right to integrate AI-enabled technologies from private emerging growth companies; or
- corporate development groups that acquire such companies outright.
Because of the proliferation of data, cloud and advanced computing capacities, barriers to entry are lower for companies interested in the AI space.
2.5 To what extent is the state involved in the uptake and development of AI?
Most of the development of AI technology is happening in the commercial sector. However, the US federal government has provided guidance promoting and encouraging the development of AI technology; and certain federal agencies, such as the Department of Defense, are actively developing and promoting the use of AI technology. The Department of Defense has publicly emphasised the importance of fostering innovation in the development and deployment of AI technology.
3 Sectoral perspectives
3.1 How is AI currently treated in the following sectors from a regulatory perspective in your jurisdiction and what specific legal issues are associated with each: (a) Healthcare; (b) Security and defence; (c) Autonomous vehicles; (d) Manufacturing; (e) Agriculture; (f) Professional services; (g) Public sector; and (h) Other?
(a) Healthcare
AI technology can be deployed in healthcare to gather information, process it and generate well-defined outputs, with the primary aim of analysing relationships between prevention or treatment techniques and patient outcomes. Hospitals and other healthcare providers may also deploy AI technology for operational purposes. Increasingly, insurance companies are seeking to leverage AI technologies to process claims and set policy prices. However, the use of AI in healthcare and insurance raises issues of algorithmic bias and other ethical concerns. Currently, there is no regulation specific to the use of AI in healthcare.
(b) Security and defence
The Department of Defense and other national security agencies are developing AI applications for a range of military functions. AI research is underway in the fields of intelligence collection and analysis, logistics, cyber operations, information operations, command and control, and semi-autonomous and autonomous vehicles (AVs). Competition exists at the global level regarding development of the ‘best’ AI technology. Some legal issues raised by military AI development include the risk of vulnerability and manipulation of AI technology, as well as ethical considerations.
(c) Autonomous vehicles
AVs are one of the primary areas where AI is applied, mostly through machine learning and deep learning. These vehicles are typically equipped with sensors, such as cameras, radars and lidar, to help them better understand and navigate their surroundings through the processing of large quantities of environmental input data. Some of the legal issues raised include the question of civil liability: if an AV injures someone in an accident, who should be responsible? The question of criminal culpability also arises: if someone falls asleep or is inebriated at the wheel in an AV, will that still be considered unlawful? Additionally, insurance law comes into question – specifically, whether the parameters of traditional auto insurance need to be altered, with new potential limits and exclusions.
(d) Manufacturing
AI technology can help manufacturers to digitise their factory operations. Some applications include:
- detecting defects throughout the production process;
- deploying predictive maintenance to reduce downtime;
- responding to real-time changes in demand across the supply chain;
- validating whether intricate goods have been produced to specifications;
- reducing the costs of small-batch or single-run goods to enable greater customisation; and
- improving employee satisfaction by relegating mundane tasks to machines.
Concerns regarding the replacement of human workers with AI technology, as well as the attendant ethical and labour implications, are rightly under consideration.
(e) Agriculture
Some of the more common applications of AI technology in agriculture include:
- the use of agricultural robots that can be programmed to handle essential agricultural tasks (eg, harvesting crops) at a higher volume and faster pace than human workers;
- the use of computer vision and deep-learning algorithms to process data captured by drones and/or other software-based technology to monitor crop and soil health; and
- the use of machine learning models to track and predict environmental impacts, such as weather change, on crop yields.
Legal considerations may apply to the operation of AI robots and other AI-based equipment. States such as California are modifying their occupational safety and health regulations (eg, Cal/OSHA) to impose new rules regarding the operation of AI-based machinery.
(f) Professional services
Professional services companies, including law firms, can use AI to help to automate a number of time-consuming or repetitive tasks, such as:
- reviewing and categorising a large portfolio of documents based on given criteria;
- extracting data from documents for analysis;
- identifying documents that are relevant for a request;
- maintaining consistency in document records; and
- conducting research and other tasks in support of compliance efforts.
(g) Public sector
The public sector has also deployed AI technology – for example, chatbots that field incoming calls and questions from constituents, freeing up time and resources for other functions. Other applications include using AI technology to:
- recognise and report objects in photographs and videos;
- translate dynamically between languages;
- monitor social media or public opinion for government-related topics or emergency situations;
- identify fraudulent activity or claims;
- automatically detect code violations;
- anticipate traffic flow and road maintenance needs; and
- measure the impact of public policies.
Just as with the use of AI technology in other sectors, AI technology in the public sector raises concerns regarding vulnerability to cyberattack and manipulation, as well as bias and discrimination in AI deployment and related ethical considerations.
(h) Other
AI technology is also being used in other sectors such as education, marketing, retail and e-commerce, as well as job recruiting and human resources.
4 Data protection and cybersecurity
4.1 What is the applicable data protection regime in your jurisdiction and what specific implications does this have for AI companies and applications?
The United States does not have a comprehensive federal privacy law, instead following a sectoral model. Which privacy laws apply to an AI company will therefore depend on the scope of its operations and the industry vertical in which it operates. For example, at the time of writing, a company will be subject to the California Consumer Privacy Act if it:
- makes more than $25 million in revenue annually;
- annually buys or sells the personal data of 50,000 or more California consumers, households or devices; or
- derives 50% or more of its annual revenue from selling the personal data of California consumers (a sketch of how these disjunctive thresholds compose appears after this list).
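Because the thresholds are disjunctive, meeting any one of them brings a company within the statute’s scope. The following is a minimal illustrative sketch in Python (not legal advice; the parameter names are hypothetical) of how the three tests compose:

```python
# Hypothetical sketch of the CCPA applicability test described above.
# Parameter names are illustrative only; this is not legal advice.

def ccpa_applies(annual_revenue_usd: float,
                 ca_records_bought_or_sold_per_year: int,
                 share_of_revenue_from_selling_ca_data: float) -> bool:
    """Return True if any one of the three disjunctive thresholds is met."""
    return (
        annual_revenue_usd > 25_000_000
        or ca_records_bought_or_sold_per_year >= 50_000
        or share_of_revenue_from_selling_ca_data >= 0.5
    )

# Example: a $10m-revenue company handling data on 60,000 California
# consumers is still covered, via the second threshold.
print(ccpa_applies(10_000_000, 60_000, 0.10))  # True
```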
Other states – such as Nevada, Illinois and Maine – have also enacted their own privacy laws. Similarly, if a company is in the healthcare space and is considered a covered entity under the Health Insurance Portability and Accountability Act, or is in the fintech space and is considered a financial institution under the Gramm-Leach-Bliley Act, then additional privacy regulations will apply. AI companies must determine which regulations are applicable to their business. In some cases, AI companies may be subject to certain restrictions on automated decision making. They may also need to:
- procure necessary consents;
- impose purpose limitations or data minimisation;
- provide notices; and/or
- meet other data privacy requirements.
4.2 What is the applicable cybersecurity regime in your jurisdiction and what specific implications does this have for AI companies and applications?
There is no comprehensive federal cybersecurity regime in the United States; rather, applicable privacy laws for the most part defer to (and require the implementation of) accepted industry standards for cybersecurity.
5 Competition
5.1 What specific challenges or concerns does the development and uptake of AI present from a competition perspective? How are these being addressed?
Businesses may use AI to respond more quickly to changing market conditions, innovate their products, set pricing and take other actions that carry antitrust considerations. While there are clear commercial benefits to being able to respond rapidly to market conditions and reset pricing in real time, these capabilities can be used in an anti-competitive manner, such as by engaging in collusion with competitors or reaching other anti-competitive agreements through AI systems. In some cases, an AI system may develop sufficient learning capability to arrive at an anti-competitive conclusion (eg, that collusion with a competing AI system is the optimal action to take) independent of any human direction or decision. This is still a developing area of law, but we expect regulators to seek to hold companies responsible for the actions of their AI, and to expect companies to build compliance measures into their AI from the outset.
6 Employment
6.1 What specific challenges or concerns does the development and uptake of AI present from an employment perspective? How are these being addressed?
One specific challenge, as autonomous vehicles are developed, is whether employers will expect or require employees to check work email or perform other tasks while ‘driving’. This may in turn raise considerations regarding wages and workers’ compensation. There is no current regulation on this particular concern; such regulation will need to be promulgated as the technology develops.
Another general challenge of using AI technology for employment recruitment and hiring purposes is the issue of discrimination. Concerns include:
- disparate treatment, where there is intentional discrimination against individuals of a protected class; and
- disparate impact, where facially neutral practices disproportionately affect members of a protected class (a common statistical screen for this is sketched below).
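One widely used statistical screen for disparate impact is the Equal Employment Opportunity Commission’s ‘four-fifths rule’: a selection rate for a protected group that is less than 80% of the rate for the most-selected group is conventionally treated as preliminary evidence of adverse impact. A minimal sketch in Python, using hypothetical selection counts:

```python
# Sketch of the EEOC four-fifths (80%) rule for screening a hiring
# tool's outcomes. All counts below are hypothetical.

def adverse_impact_ratio(selected_a: int, applicants_a: int,
                         selected_b: int, applicants_b: int) -> float:
    """Ratio of group A's selection rate to comparator group B's rate."""
    return (selected_a / applicants_a) / (selected_b / applicants_b)

# Example: group A selected at 30%, group B at 50%.
ratio = adverse_impact_ratio(30, 100, 50, 100)
print(ratio)        # 0.6
print(ratio < 0.8)  # True: below four-fifths, warranting closer review
```

A ratio below 0.8 does not itself establish liability, but it flags a practice for closer scrutiny.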
Privacy concerns also exist as they relate to background screening of potential employees. A number of federal laws prohibit workplace discrimination, such as Title VII of the Civil Rights Act of 1964, the Americans with Disabilities Act and others. State anti-discrimination laws are similar to the federal laws, but may offer additional protections against employment-related discrimination.
7 Data manipulation and integrity
7.1 What specific challenges or concerns does the development and uptake of AI present with regard to data manipulation and integrity? How are they being addressed?
Data manipulation and integrity are significant concerns as the use of AI technology proliferates. Failure to maintain the integrity and security of AI systems may lead to a lack of public trust, which would hamper the adoption and development of AI technology. Some specific concerns regarding AI technology include:
- vulnerabilities and blind spots in sensor technology and neural networks;
- manipulation of visual data to trick deep learning systems;
- backdoors and triggers which can be maliciously trained into algorithms by outsourced third parties; and
- misappropriation of AI systems by hackers.
While legal liability can (and likely will) be imposed in the wake of such events, preventive approaches to data manipulation and integrity should also be employed. One of the best-known manipulation techniques is sketched below.
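To illustrate the second concern in the list above – the manipulation of visual data to trick deep learning systems – the following is a minimal sketch of the fast gradient sign method, a well-known adversarial perturbation technique, written in Python with PyTorch; the model, loss function and inputs are assumed to be supplied by the reader:

```python
import torch

def fgsm_perturb(model, loss_fn, x, y, eps: float = 0.03):
    """One-step fast gradient sign attack: shift each pixel by +/- eps in
    the direction that most increases the classifier's loss on (x, y)."""
    x = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x), y)
    loss.backward()
    x_adv = x + eps * x.grad.sign()
    # Keep the perturbed image within the valid pixel range.
    return x_adv.clamp(0.0, 1.0).detach()
```

A perturbation of this magnitude is typically imperceptible to a human observer yet can flip a model’s prediction, which is why adversarial robustness testing belongs among the preventive approaches discussed above.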
8 AI best practice
8.1 There is currently a surfeit of ‘best practice’ guidance on AI at the national and international level. As a practical matter, are there one or more particular AI best practice approaches that are widely adopted in your jurisdiction? If so, what are they?
At a high level, AI best practices should help to address concerns and seek to improve the quality, integrity and accuracy of the AI system. Google, a leader in the AI technology space, has recommended the following best practices:
- Use a human-centred design approach with a focus on the user experience, which should encompass a diversity of users and use cases;
- Identify multiple metrics to assess training and monitoring, where such metrics should be appropriate for the context and goals of your system (see the sketch after this list);
- Where possible, directly examine your raw data to assess accuracy and predictive capabilities;
- Understand the limits of your dataset and model, and communicate these limitations where possible;
- Conduct rigorous, diverse and regular testing of your AI system; and
- Continue to monitor and update the AI system after deployment to take into account real-world performance and user feedback.
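To illustrate the second practice above – evaluating a system against multiple metrics rather than a single headline score – the following minimal Python sketch (using scikit-learn, with hypothetical labels and predictions) reports several complementary classification metrics side by side:

```python
from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score

# Hypothetical ground-truth labels and model predictions.
y_true = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0, 1, 0]

# A single metric can mask failure modes; report several together.
print("accuracy :", accuracy_score(y_true, y_pred))
print("precision:", precision_score(y_true, y_pred))
print("recall   :", recall_score(y_true, y_pred))
print("f1       :", f1_score(y_true, y_pred))
```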
8.2 What are the top seven things that well-crafted AI best practices should address in your jurisdiction?
See question 8.1.
8.3 As AI becomes ubiquitous, what are your top tips to ensure that AI best practice is practical, manageable, proportionate and followed in the organisation?
As with the implementation and deployment of any other internal process, it is necessary to identify and communicate clear objectives, and actionable steps to accomplish them, across all levels of the organisation, including functional groups. It is also helpful to document processes, responsibilities and learnings to the extent possible. Separately, it is important to build a culture that encourages and values the development of AI in a thoughtful, ethical manner, and that understands the potentially dire consequences of failing to do so.
9 Other legal issues
9.1 What risks does the use of AI present from a contractual perspective? How can these be mitigated?
Companies that license or purchase AI technology for incorporation into mission-critical functions should carefully consider the AI vendor’s risk allocation from a contractual perspective. Any AI system failure or equipment malfunction can have catastrophic consequences. The vendor’s representations and warranties and associated liability regarding the AI system’s performance and output should adequately address the potential business impact and damages in the event of a system failure. For any physical AI equipment, the vendor’s representations and warranties and associated liability should cover injuries and other damages caused by such equipment. The vendor’s liability in these instances of system or equipment failure should encompass third-party claims, as such situations will affect downstream users and may cause significant reputational damage to the company.
9.2 What risks does the use of AI present from a liability perspective? How can these be mitigated?
See question 9.1. Another mitigation mechanism to consider is insurance. The vendor and the customer can limit their liability exposure by shifting some risk to an insurer. This can be covered through commercial general liability insurance, cyber insurance, errors and omissions coverage, business interruption coverage, and other types of insurance applicable to the ways in which AI systems can fail or cause damages.
9.3 What risks does the use of AI present with regard to potential bias and discrimination? How can these be mitigated?
Because AI is trained largely on real-world datasets, it can absorb and reproduce existing biases. For example, a criminal justice algorithm used in law enforcement may incorrectly identify black individuals as ‘high risk’ at a higher rate than white individuals. Algorithms may also integrate gender expectations and stereotypes. For example, a hiring algorithm may favour applicants based on action verbs used more commonly by men than by women, leading to a disparate impact. And the underrepresentation of certain groups in training data may result in higher error rates for those groups when the technology is deployed.
To mitigate these potential biases and discrimination, it is important to be cognisant of the presence of bias and to develop criteria for measuring bias in AI technology. Doing so will require a multi-disciplinary approach, encompassing technical solutions as well as perspectives from ethicists, social scientists and others. One simple technical criterion – comparing error rates across groups – is sketched below.
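As one hedged example of such a measurement criterion, the following minimal Python sketch (with hypothetical data) compares false positive rates across two demographic groups, since unequal error rates are a common operationalisation of bias:

```python
# Sketch: compare per-group false positive rates for a binary classifier.
# The groups, labels and predictions below are hypothetical.

def false_positive_rate(y_true, y_pred):
    """Share of true negatives that the model incorrectly flags positive."""
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    return fp / (fp + tn)

group_a = ([0, 0, 1, 0, 1, 0], [1, 0, 1, 1, 1, 0])  # (y_true, y_pred)
group_b = ([0, 0, 1, 0, 1, 0], [0, 0, 1, 0, 1, 0])

fpr_a = false_positive_rate(*group_a)  # 2 of 4 negatives flagged -> 0.5
fpr_b = false_positive_rate(*group_b)  # 0 of 4 negatives flagged -> 0.0
print(fpr_a, fpr_b)  # a large gap suggests the model merits closer review
```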
10 Innovation
10.1 How is innovation in the AI space protected in your jurisdiction?
Innovation in AI, as with other technologies, is largely protected through IP law. Intellectual property in the United States is primarily protected through patents, trademarks, copyrights and trade secrets. Certain forms of AI are patentable, and the US Patent and Trademark Office expressly recognises AI-related inventions under its Class 706 classification. Copyright protection is also available for certain elements of AI, such as the source code and visual elements of an AI computer program. However, datasets and algorithms, which are both key elements of AI technology, may not qualify as registrable intellectual property under copyright or patent law; consequently, trade secret protection can also be useful for AI. Trade secrets are protected at the federal level in the United States, as well as at the state level through state trade secret statutes. Trade secret protection can offer advantages over patents or copyrights, because it can last indefinitely and there is no application or registration process. In exchange, compared to other forms of IP protection, the trade secret owner must make significant and diligent efforts to establish and maintain secrecy.
10.2 How is innovation in the AI space incentivised in your jurisdiction?
Because of the numerous potential applications and commercial uses of AI across a myriad of industries, there are significant profit and market incentives for AI companies. Additionally, at the national level, the United States is interested in encouraging and fostering AI innovation to remain globally competitive. As AI technology continues to develop, we will likely see policies and regulations emerge to foster and reward innovation, such as consumer tax credits for the adoption of AI technology. We may also see:
- sector-specific policy guidelines or frameworks to encourage AI innovation;
- the granting of exemptions or allowance of pilot programmes that provide safe harbours for specific AI applications; and
- voluntary consensus standards for AI applications.
11 Talent acquisition
11.1 What is the applicable employment regime in your jurisdiction and what specific implications does this have for AI companies?
Talent acquisition and retention is a growing area of application for AI technologies, which seek to automate recruiting and hiring, employee onboarding and the management of human capital. Facial recognition applications of AI, among others, could have significant impacts on an individual’s ability to obtain and retain employment. The risk of discrimination attracts attention, as do the ethical concerns implicated by the potential disparate treatment or disparate impact of AI on certain pools of talent. Employers using AI tools must also ensure that they do not violate privacy rights codified in password privacy laws, salary history bans and biometric privacy laws.
In the United States, employment is governed on a state-by-state basis, with an overlay of federal laws prohibiting discrimination, harassment and retaliation based on gender, race, creed or membership of another protected class. While no federal statute on AI in the workplace has yet been adopted, the US Congress is considering a law that would impose a moratorium on the government’s use of facial recognition technology until a commission recommends appropriate guidelines and limitations for its use.
11.2 How can AI companies attract specialist talent from overseas where necessary?
The employment of non-US-born computer scientists, programmers and engineers to create AI technologies and form AI companies is a subject of great controversy in the United States. While US immigration laws were historically applied by the executive branch to recruit and retain specialist talent from overseas, this has become increasingly contested in recent years, with the 45th president, Donald J Trump, issuing directives aimed at curbing the flow of immigrants, even those with special skills. Moreover, the US Congress passed legislation (signed into law by Trump) reinforcing the powers of the Committee on Foreign Investment in the United States, seeking to ensure that AI technologies are not exported from the United States to third countries, particularly China, and placing significant controls on investment in companies developing AI technologies.
12 Trends and predictions
12.1 How would you describe the current AI landscape and prevailing trends in your jurisdiction? Are any new developments anticipated in the next 12 months, including any proposed legislative reforms?
The legal and regulatory framework is working to catch up with the pace of AI technology. Governments at the federal and state levels are exploring potential regulatory and non-regulatory approaches to overseeing AI without stifling innovation. We may start to see more regulations, such as those already enacted at the state level for autonomous vehicles, with policymakers seeking the right balance between protecting consumers and the general public and stimulating innovation in AI.
13 Tips and traps
13.1 What are your top tips for AI companies seeking to enter your jurisdiction and what potential sticking points would you highlight?
For any AI company, the AI technology or product should solve a problem with sufficient market potential in order to be successful. AI development can be resource-intensive, so the company must assess whether it has enough runway and market potential to sustain itself. Building AI will involve continual testing and improvement of the company’s AI technology. Separately, it is also important to be able to execute on multiple fronts as a business beyond the development itself. For example, companies will need to engage in PR/external communications to raise awareness of their products; they will also need to develop revenue-generating channels with potential customers and partners.
As the use of AI technologies becomes ubiquitous, companies that can solve specialised problems unique to specific industry verticals will differentiate themselves from the pack. Research indicates that vertically specialised AI companies have a higher potential for fundraising and exit, even if their exit valuations are lower than those of horizontal AI companies that ultimately succeed in exiting.