Everyone is closely watching developments in the artificial intelligence (AI) space in terms of advancements, regulations, and investment. As this post goes to print, SoftBank’s Vision Fund has just announced its agreement to invest $500 million in OpenAI’s latest funding round, valuing the developer of ChatGPT at $150 billion on a pre-money basis, according to industry reports. SoftBank joins Thrive Capital, which is investing another $1 billion in the round, with participation from Tiger Global, Coatue, and Microsoft. Meanwhile, California’s Governor Gavin Newsom has just vetoed the controversial proposed legislation that purported to render artificial intelligence technologies “safe” for consumers.
Taking a step back, what is the next frontier for AI?
Market Perspective
There have been many predictions about the future of AI as we look for ways to capitalize on this game-changing technology, promote broader adoption, and implement regulations that will ensure it is used in an ethical and legal manner. Hot off the press, PitchBook has just released an Emerging Tech Future Report, providing an updated outlook for Generative AI (GenAI).
When ChatGPT was first introduced, there was a limited understanding of the ways in which GenAI could be used and just how transformative it could be. Since that time, we have seen interest in AI explode, and investors have followed, putting money into this sector at a time when other startups have struggled to secure funding.
Data from PitchBook’s reporting shows that spending on GenAI software is on a significant upward trajectory. In 2023, an estimated $7.5 billion was spent, a figure that had already reached an estimated $17 billion as of August 20, 2024, and is projected to jump to $32.4 billion next year. In terms of GenAI venture capital activity, there were 581 deals valued at $9.3 billion in 2022, compared to 877 deals valued at $26 billion in 2023. As of August 20, 2024, there had already been 508 deals valued at $23.9 billion. This indicates that developments and investment in this sector are showing no signs of slowing down.
While there has been a great deal of investment as well as effort toward AI transformation in the past year, PitchBook analysts point to several blockers to adoption that still exist, including “high compute costs, data availability, data security, and overall system complexity.” Nowhere is this more evident than in the enterprise software context.
In this business lawyer’s experience, which is informed by advising dozens of companies with AI-based business models, the technology has such broad applications that it is necessarily not “plug and play” ready to replace non-digital processes. The sales force cannot “wash, rinse, and repeat” a use case for AI tools with such broad applicability. When an AI business first approaches a potential customer, it needs to demonstrate the “use case” for the technology. Often, the customer does not even appreciate the potential and has not identified any specific business process for AI transformation. So, there is a “discovery” process by which the AI business and potential customer interact to identify the use cases for transformation.
It follows that AI-based businesses typically suffer from a long sales cycle between first contact and first contract signing, with extended product adaptations to specific customer use cases, lengthy proofs of concept, and then pilots, before the customer ultimately commits to the solution and installs it throughout the enterprise to replace pre-AI processes with AI-enabled technology.
In this “post-zero interest rate policy” (ZIRP) environment, we see businesses struggling to raise scaling capital to fuel sales after lengthy product development cycles (not to mention pivots), sometimes after initially raising capital at pre-ZIRP valuations. So, while there seems to be endless funding available and significant progress for startups at the foundation model level (like OpenAI, Anthropic, Hugging Face, and Mistral), application-level startups have more difficulty securing funding and face more pressure to show they can be commercially viable.
We also see that some industries are impacted by AI differently than others. Take, for example, the case of “crypto” or distributed ledger technology on the blockchain. In PitchBook’s 2023 report, AI was anticipated to have a “potentially transformative” impact on the crypto space, particularly as we watched the advancements in GenAI. Fast forward to 2024, and AI is definitely impacting the crypto space, but not always to the degree expected. While AI is playing a role in smart contract development and auditing, adoption has been slower than anticipated: developers are accustomed to older tools, and any change can be difficult. The continued need for human oversight when writing and debugging smart contract code has also slowed adoption.
In the fintech space, PitchBook predicted in 2023 a slower adoption of GenAI-powered products, mostly due to the regulatory requirements governing this sector and the “meticulous research & development (R&D) processes.” The expectation was that GenAI would be used more to enhance operational efficiencies, assist with customer support, or streamline document reviews. A year later, there has been sustained interest in GenAI from fintech companies, which have been “quicker to deploy GenAI-based products.” As expected, incumbent financial institutions and banks have proceeded with caution.
Legal and Regulatory Perspectives
As the world has watched executives flee legacy non-profit OpenAI amid rumors that it will become a fully for-profit enterprise, unshackled by limitations on returns to investors, there is increasing concern in public discourse about who will provide guardrails to protect the public against what is increasingly scaremongered as some kind of “leviathan” technology.
In California, safety experts and ethicists had worked with state legislators to bring forward Senate Bill 1047, which would have required developers of large language models (LLMs) to take “reasonable care” to ensure that their technology did not pose an “unreasonable risk of causing or materially enabling a critical harm.” The legislation would have required developers to ensure that their AI could be disabled by a human if it started behaving dangerously. Because California is home to most of the top companies in the world of AI, the law would have had broad implications for how AI is regulated across the United States and, indeed, around the world.
California Governor Gavin Newsom ultimately vetoed the legislation, citing its application to only the biggest and most expensive AI models and its failure to consider whether they were deployed in high-risk situations. Venture capitalists in Silicon Valley had feared that it would create unfair hurdles for new startups trying to compete.
While proposals to regulate AI nationally have made little progress in Washington, there continues to be fear and recrimination about the technology, and technologists will not be allowed to live in a regulatory sandbox forever. Watch this space.
The interest and excitement surrounding GenAI are certainly apparent, and investment continues to increase. However, as with any rapidly developing technology, there are some roadblocks to broader adoption that must be overcome. Broader adoption will come with time, as we have only scratched the surface of what this technology can really do and how we can use it. I look forward to seeing what happens in GenAI one year from now, as we are sure to see many exciting developments in 2025.