Innovative Technology Insights

Key Takeaways From TEDAI 2024

Foley & Lardner LLP recently sponsored TEDAI 2024 in San Francisco, drawing industry leaders and change makers to share meaningful new ideas in the rapidly developing area of artificial intelligence (AI). Four Foley partners, Natasha Allen, Monica Chmielewski, Christopher Swift, and Louis Lehot, participated in panel discussions throughout the event.

Much of the conference discussion centered on regulation, as government officials across the United States are still struggling to differentiate generative AI from the broader technology landscape. There has been a rush to implement legislation at the local, state, and federal levels, but the real challenge in the AI space is that, absent concrete applications, it is difficult to measure the benefits and risks of the technology. Regulating at the model level is inherently abstract and could have widespread side effects.

Below we examine the TEDAI panel discussions featuring Foley partners and key takeaways from each.

What Will an AI-Driven Business Landscape Look Like in 2030?

This panel discussed the future of AI and what an AI-driven business landscape will look like by 2030. The discussion explored the transformative potential of AI on industries, workplaces, and economies, offering insights into how businesses can adapt and thrive in this rapidly evolving environment and providing a glimpse into the AI-powered world of tomorrow.

Panelists:

  • Natasha Allen, Partner, Foley & Lardner
  • Umesh Sachdev, CEO & Co-Founder, Uniphore
  • Navin Chaddha, Managing Partner, Mayfield
  • Dan Priest, U.S. Chief AI Officer, PwC
  • John Chambers, Founder & CEO, JC2 Ventures (Moderator)

Key Takeaways:

Panelists emphasized the urgency and transformative potential of AI, suggesting that frequent strategic adaptation is critical. Companies should revisit their strategies every 18 months (or less) to keep pace with advancements in generative AI, which is accelerating more rapidly than traditional tech cycles. Unlike past shifts, AI is considered a “100x force,” and it is a disruption that every CEO should prioritize, as machines can now understand and respond to human language thanks to tremendous advancements in GPUs and memory.

As companies leverage AI, panelists highlighted a new focus on the “4 As” that guide AI-powered growth. These include Automated Tasks, Accelerated Productivity, Accelerated Creativity, and Augmented Human Capabilities. They also pointed to a distinct diligence checklist for those companies engaging in M&A activity within the AI space that goes beyond traditional M&A diligence practices. Acquiring companies should acknowledge the unique challenges in this area such as regulatory uncertainty and must have a focus on risk tolerance as the legal and regulatory landscape continues to evolve.

Panelists further noted that CEOs face shareholder pressure to demonstrate AI strategies and are eager to push investment, yet many still struggle to fully grasp the technology. At the same time, the landscape for AI investment has shifted, with investors becoming more selective and moving away from the broad enthusiasm of 2022 and 2023, when investment flowed to nearly any company claiming a focus on AI.

Natasha Allen reminded attendees that it is not always the sexy ideas that win; sometimes it is the mundane ones that help make the world more connected.

How Are We Already Delivering on the Promise of Generative AI in Health Care?

This insightful panel explored whether the industry is already delivering on the promise of generative AI in health care, sharing concrete adoption trends, success stories, and challenges faced, while discussing what’s next in AI-driven health care innovation.

Panelists:

  • Monica Chmielewski, Partner, Foley & Lardner
  • Fawad Butt, CEO, Stealth Startup
  • Khan Siddiqui, Founder & CEO, HOPPR
  • Aaliya Yaqub, Chief Medical Officer, Thrive Global
  • Missy Krasner, GenAI Investor & Digital Health Board Member (Moderator)

Key Takeaways:

The panel pointed to several “shovel-ready” use cases for AI in health care, particularly in areas where there are workforce shortages such as medical imaging, with AI supporting radiologists in tasks ranging from dictation to preliminary detection before being reviewed by doctors. There can also be improvements in patient-centered care through high-accuracy transcription of patient-physician interactions.

Panelists discussed clinical AI trends as well, which are not currently generative in nature because the risk/reward ratio tends to favor caution. Asked where patients might first begin interacting with generative AI, panelists pointed to chatbots that coach personalized behavior change, tailoring the experience beyond what is currently feasible given logistical and expertise constraints and offering an immediacy that is hard to replicate with human interaction. There are also possibilities for generative AI in non-patient-facing administration, such as coding, claims adjudication, and other major cost centers.

Monica Chmielewski pointed to the lack of regulation as one of the key issues facing the use of AI in health care today, noting that her team asks clients, “What is your risk tolerance level?” given the current patchwork of laws. She raised the question of how to deploy a model that complies with all state and federal regulations when many of those regulations have yet to be written.

The panel brought up the ethical considerations of using today’s generative AI models, which lack transparency in their data sources, for patient care. Health care also involves language that is highly discrete and precise, so it is important to build models that take this into account. When training a model, it is essential to account for different practice backgrounds (geography, education, etc.) to avoid errors. This raises the question of how to input that data to train the underlying system while also navigating patient data privacy concerns. Building foundation models specifically for health care, whether small or large, will be necessary to truly drive adoption within the industry.

Monica Chmielewski also noted that for any AI to be adopted and incorporated into a health care system, audits, assessments, and classification of AI uses as high, medium, or low risk are essential. Physicians will use AI as a supplement, but they are almost universally unwilling to give up their final professional decision-making authority over patient care.

The panel highlighted one health care sector that is an exception to the rule of slower AI adoption: the pharmaceutical industry. Pharmaceutical companies are quicker to adopt because they know regulators draw the line at the point of patient contact. This means staying away from AI at that stage and instead using it for the vast amount of internal work, including documentation, where generative AI excels.

Defense Technology and Warfare in the Age of AI

This panel discussion centered on the impact of generative AI on defense and critical infrastructure, with experts exploring how AI is reshaping national security, cybersecurity, and key infrastructure. Panelists shared their insights on emerging risks, strategic opportunities, and what we need to know to best prepare for the evolving AI landscape in these crucial sectors.

Panelists:

  • Dr. Christopher Swift, Partner, Foley & Lardner
  • Paul Scharre, EVP, Center for a New American Security
  • Daniel Riedel, Partner/Founder, Genlab Ventures Studio
  • Fatema Hamdani, CEO & Co-Founder, Kraus Hamdani Aerospace
  • Reed Albergotti, Technology Editor of Semafor (Moderator)

Key Takeaways:

According to panelists, the prevailing view in Silicon Valley is to stay away from warfare. However, whether we like it or not, war is at our doorstep, and in warfare many other considerations go out the window, as “combatants don’t care about their carbon footprint.” They noted there is a great deal of fear surrounding this topic, particularly around autonomy, as pressures in the military environment tend to push toward greater autonomy.

There is the big question of how to incorporate AI into conventional warfare models in a hybrid warfare world made up of information warfare, psychological warfare, asymmetrical warfare, and conventional warfare, all of which are utilized by peer powers and can be augmented and accelerated by AI systems.

The panel considered another important question: how do we acculturate AI? They noted that it is typically the culture of a military unit that defines its behavior, not the law, giving the example that United States Marines do not commit war crimes because of the culture of their leadership and organization. So how do we train an AI to have that same approach?

While most people think about AI through the lens of edge weaponry, panelists commented that one of the biggest uses of AI is actually information gathering and analysis, which can help decrease the so-called fog of war and reduce mistakes. They also said people tend to separate war from law and from business, but the three are interrelated and shape one another. Putting technology in its own vertical is a fallacy.

In considering whether there is too much control of this technology in the private sector, panelists said the government is asking the private sector for guidance on how to handle AI and other related parts of the ecosystem, like semiconductors. There is a need to shift away from the command-and-control reactive model toward a more creative model, like the one emerging on the front lines of Ukraine, with public-private partnerships made up of small companies, large companies, and government actors.

Industry Experts in Conversation With Hackathon Winners

An interactive session with this year’s TEDAI Hackathon winners and industry experts exploring the future of their winning ideas, the challenges and opportunities ahead, and what’s next for scaling their innovations. The hackathon challenge centered on utilizing AI for societal change.

Panelists:

  • Louis Lehot, Partner, Foley & Lardner
  • Matt White, Executive Director, PyTorch Foundation and GM of AI, Linux Foundation
  • Arun Gupta, VP and General Manager, Intel (Moderator)