In our inaugural episode of the Innovative Technology Insights podcast, Mark Dredze, Associate Professor of Computer Science at Johns Hopkins University and Bloomberg research scientist, joins Natasha Allen for an in-depth discussion on the new age of Artificial Intelligence: What are the trends in AI today? How will these developments intersect with regulatory frameworks? And what can companies do to embrace AI as it transforms the business landscape?
Go Deeper:
- AI Regulation: Where Do China, the EU, and the U.S. Stand Today?
- Federal Circuit Rules Inventorship Must Be Natural Human Beings
The episode transcript below has been edited for clarity.
Natasha Allen
Hi everyone and welcome. My name is Natasha Allen and I am a partner in Foley’s Silicon Valley office and the Co-Chair of the AI area of focus within our Innovative Technology sector. On today’s episode we will be discussing the new age of AI. Joining me to provide expertise on the topic is Mark Dredze. Mark is an Associate Professor of Computer Science at Johns Hopkins University and a Bloomberg research scientist. He received his PhD from the University of Pennsylvania in 2009 and also holds appointments in biomedical informatics and data science.
Mark, thank you for joining me today, hope you’re doing well.
Mark Dredze
It’s my pleasure to be here.
Natasha Allen
We’ve called this episode the new age of AI. What are some of the changes happening and how will they impact us in the future?
Mark Dredze
We’ve really entered a new age of AI. For many years, people within AI, and specifically within the field of natural language processing, have been trying to build models that can replicate what language is and how language functions. These models have a lot of utility. They are used in speech recognition systems, which are now everywhere. They power machine translation systems, which are becoming more ubiquitous.
This has been a long-running goal of the natural language processing and AI community for decades. Recently we’ve seen major advances in the underlying technology. The first of these advances is deep learning: building very large neural networks for problem solving.
With deep learning, language models became much better at the sorts of things we were building them for. Because language encodes knowledge about the world, these systems were not just able to finish a sentence or a paragraph. They really seemed to be learning about our world, with capabilities far beyond what we were expecting.
For example, you can now turn to these models and ask how many states there are in the U.S., and they know that the answer is 50. You can ask a model to take an article and rewrite it as a blog post or as a headline. You can ask them to explain inferences too. If Sally had five apples and she gave one to John, will she have more or fewer apples? That’s a very simple question; a kindergartner can answer it. Language models can not only say that Sally has fewer apples, they can explain why she has fewer apples, showing that they really do seem to understand something about what it means to give something to someone else.
Some new models can even explain jokes. If you ask them why a joke is funny, they can walk through and break down the logic of the humor.
These are just some examples of what language models are now able to do – unlike any previous AI technology. That has really opened the door to a huge range of applications.
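As a concrete illustration of the kind of prompting described above, here is a minimal sketch that poses the Sally-and-the-apples question to a small, publicly available instruction-tuned model through the Hugging Face transformers library. The model choice (google/flan-t5-small) and the prompt wording are illustrative assumptions, not anything referenced in the episode, and a model this small will produce far weaker explanations than the large systems being discussed.

```python
# Minimal sketch: asking a small instruction-tuned language model a reasoning question.
# Assumes `pip install transformers torch`; the model choice is illustrative only.
from transformers import pipeline

generator = pipeline("text2text-generation", model="google/flan-t5-small")

prompt = (
    "Sally had five apples and she gave one to John. "
    "Does she have more or fewer apples than before? Explain why."
)

# The pipeline returns a list of dicts with a "generated_text" field.
result = generator(prompt, max_new_tokens=64)
print(result[0]["generated_text"])
```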
Natasha Allen
That sounds amazing. What was the impetus behind this? Is it just more information being added to these various AI technologies? What are the causes of this huge advancement?
Mark Dredze
I don’t think our goal was to build models that were capable of doing the things that we now see them doing. First of all, deep learning has really transformed AI and many of the subfields of AI. That has been an unfolding process over the last decade.
Deep learning is the idea that we can build neural networks. These are networks that have some similarity to how we conceptualize brain structure. They surpass the types of algorithms that became popular in the field of AI in the 1990s and 2000s. So that’s one thread.
Another is data accessibility. Before the internet, if you said, “I want a million words of English text,” you couldn’t just get a million words of English text. Where would you get that? You would have to start scanning books, right? Today a high school student can download all of Wikipedia. You can download gigabytes and gigabytes of data from the internet and have as much text as you need in many different languages.
Another recent development is the computing revolution. Over the past 40 to 50 years, there has been a steady increase in available computing power. It has reached a point where the amount of computing available to individuals and companies alike is far beyond what we imagined even a couple of years ago. Today’s models literally use billions, up to a trillion, parameters. This has really unlocked the door to a new level of capability in artificial intelligence.
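For a sense of what “billions of parameters” means in practice, the sketch below loads a small, freely downloadable model and counts its parameters; the same count applied to today’s largest models would report billions rather than millions. The choice of GPT-2 and of the Hugging Face transformers library is an illustrative assumption, not something mentioned in the conversation.

```python
# Minimal sketch: counting the parameters of a publicly downloadable language model.
# GPT-2 (~124 million parameters) stands in for the billion-parameter models discussed above.
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("gpt2")
num_params = sum(p.numel() for p in model.parameters())
print(f"gpt2 has {num_params:,} parameters")  # roughly 124 million
```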
Natasha Allen
From the perspective of a developer, there are legal regulations to consider. I also saw something about the use of AI in employment decisions: how it may be discriminatory and how to guard against that. Are AI developers considering these issues?
Mark Dredze
It depends on where the developers are. Big companies are well aware of what regulation looks like and how it impacts their business. AI is such a transformational technology that government absolutely must have a role in representing society to decide what should be in and out of bounds.
Let’s look at something that I think people are perhaps more familiar with: self-driving cars. This is a really revolutionary area of AI technology, one that pushes things to the limit. Perhaps we won’t even own cars in the future. We may just press a button and a car in our neighborhood will come pick us up. The elderly would then be able to give up their licenses while still living in their homes.
That’s a major reason why seniors retire to different communities: transportation. One of the big reasons people don’t make it to the doctor is simply because they can’t get there. Having widespread, accessible self-driving cars could solve many problems. But while self-driving vehicles are a really transformational technology, progress is sometimes slow moving.
The idea of a self-driving car has been around for quite a while now. Yet regulators are still struggling to keep up. If you are driving a Tesla with self-driving features enabled and you get into an accident, who is liable? Are you liable because you should have been paying attention? Is the automotive company liable because it built a faulty technology? These issues are still being worked out.
I use that example because it is a source of transformation that is widespread but actually pretty slow moving when it comes to AI, and still regulators are struggling to keep up. Then think about how AI is going to transform smart speakers, how you author documents, or how you create images and take pictures. These applications are so ubiquitous, and there is so little understanding of what the concerns are and what should be regulated.
Many big companies are very concerned because they need guidance on AI. They need to know what will be in and out of bounds. Yet I don’t think the regulators even know – they’re still struggling to comprehend the issues.
When you look at small companies in the startup space, the risk profile is completely different. If you build a startup around a high-risk topic and you do something really terrible, you’re going to be sued and your startup fails and closes. That’s completely different from big companies, which are on the hook for hundreds of billions of dollars in damages. Small companies and startups are able to move very quickly to deploy these technologies for innovative use cases. Meanwhile, bigger companies are investing heavily in the research.
Overall, I think there are still big questions about how the technology will impact practice: what companies can and can’t do. Today’s companies are asking regulators for guidance, while regulators are asking what the core issues are that they should be aware of.
Natasha Allen
I want to touch on something you had spoken about before – connecting AI to the public health space. You’re in a very interesting position because of your associate professorship at Johns Hopkins University. What do you think are some key opportunities for the future of AI in the public health space?
Mark Dredze
COVID has really exposed gaps in public health infrastructure. It was a transformational, earth-shaking event in medicine and public health, and it exposed a lot of cracks in the system. There are a lot of areas we’re going to have to rebuild and rethink from the ground up, and I think AI will be a major part of that conversation.
For many years we have been developing forecasting models of epidemics and pandemics. If there’s an outbreak in Seattle, how will it spread to the rest of the country? Many factors traditionally go into these models, including how infectious the illness is, how long infections last, and what the outbreak patterns look like. What we have not done is incorporate into those models the decisions that people make. When the mask mandate went into effect, it didn’t mean that everyone put on a mask. There was a wide range of compliance and adherence to that mandate, depending on where you lived and the political climate.
Recommendations about travel or social distancing were also not universally adhered to. That ended up being one of the big factors that influenced the course of the pandemic. I think a lot of people thought getting a vaccine was the hard part. It turns out there are a lot of groups that remain unvaccinated. The next generation of pandemic models has to capture that. We have to model the behavioral and cultural issues: how people actually behave in a pandemic. That is a space where AI will have a lot to say.
One of the strengths of AI is bringing in data from lots of places, oftentimes non-traditional data. For example, cell phone movement data has offered insights into social distancing behavior. Many of us have gotten exposure alerts on our phones. AI really excels at bringing large datasets into the public conversation. That is going to be really critical for the behavioral elements of forecast models.
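To make the behavioral point concrete, here is a toy compartmental (SIR) epidemic model in which a single compliance parameter scales the transmission rate. The structure and every number in it are illustrative assumptions for the sake of the sketch; real forecasting models of the kind described here are far richer and would estimate behavioral inputs from data rather than fix them by hand.

```python
# Toy SIR epidemic model with a behavioral knob: mask/distancing compliance
# scales the effective transmission rate. All parameter values are illustrative.

def simulate_sir(population=1_000_000, initial_infected=100,
                 beta=0.30, gamma=0.10, compliance=0.5, efficacy=0.6,
                 days=180):
    s = population - initial_infected
    i = float(initial_infected)
    r = 0.0
    # Compliance with masking/distancing reduces the transmission rate.
    beta_eff = beta * (1.0 - efficacy * compliance)
    history = []
    for _ in range(days):
        new_infections = beta_eff * s * i / population
        new_recoveries = gamma * i
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        history.append(i)
    return max(history)  # peak number of simultaneous infections

# Higher compliance flattens the peak.
for c in (0.0, 0.5, 0.9):
    print(f"compliance={c:.1f} -> peak infections ~ {simulate_sir(compliance=c):,.0f}")
```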
Natasha Allen
What do you think are some of the biggest challenges facing AI companies today?
Mark Dredze
The biggest challenge is really hiring people. Interest in AI has increased far faster than we can possibly train people in this field. Just look at the academic setting: say Howard University is hiring new computer science professors. AI is the number one area they’re hiring in, and they’re hiring everyone they can.
Universities are making massive investments, bringing in dozens of faculty, all under the banner of AI. That will in turn produce new PhDs and master’s students. But it is a very, very slow-moving process. One of the biggest problems is the small pool of senior talent. At Johns Hopkins, we’re churning out people with training in machine learning, AI, and data science as fast as we can, but they are very junior.
If you are a big company thinking of investing in AI, the first step is to bring in senior people. And senior people don’t grow on trees. The only way to produce senior people is time. I think one of the challenges a lot of companies have is just getting the talent. Even the biggest companies struggle to bring in top talent because they’re competing with startups. Everyone is offering crazy salaries to get AI talent. And if you’re not a traditional player in this space, it’s very hard to compete.
Let’s say you’re a big retailer that realizes AI is critical to the future of the company. You look at Amazon, which uses AI for deciding what people are going to buy, what this holiday season’s big products are going to be, how to stock products in which warehouse around the country, and how to arrange shipments. All of these things are being decided by artificial intelligence.
You would realize that AI is the future but may not know where to go for talent. That’s really a bottleneck right now. We need more people going into this field, we need more people trained in this field, and that is really a struggle for companies. AI is not a technology you can just download off the internet and have it work. You really need people to figure out what to use and how to use it for business-critical purposes.
The opportunities in retail look completely different from those in other sectors, like homebuilding. These are completely different businesses. You don’t just want someone with a master’s degree in computer science – you need someone who really understands your business. And you need that deep integration to figure out how to make the best use of the technology at hand.
Much of AI technology is surprisingly available off the shelf. It is really amazing how much is there. I’ll give you an example. About a year or two ago, OpenAI created a model called GPT-3. That was a major investment. It was probably millions of dollars in computing, not to mention the salaries of the people involved.
Very recently, Facebook built essentially the same system and then posted it on their website for download. It’s really astonishing that companies are spending millions of dollars investing in core technologies and then giving them away.
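The openly released system referred to here is most likely Meta/Facebook’s OPT model family; that identification, the tiny variant chosen below, and the use of the Hugging Face transformers library to load it are assumptions on my part, included only to show how readily such released weights can be downloaded and run.

```python
# Minimal sketch: downloading and running a small, openly released language model.
# facebook/opt-125m is the smallest member of Meta's OPT family; larger variants
# exist but need far more memory. Assumes `pip install transformers torch`.
from transformers import pipeline

generator = pipeline("text-generation", model="facebook/opt-125m")
output = generator("Artificial intelligence will change retail by", max_new_tokens=40)
print(output[0]["generated_text"])
```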
In the computing space, for example, you don’t have to build up a data center. You go to Azure or Google Cloud or Amazon Web Services, and you can get all this computing power on demand. The software coming out of the open-source movement is also amazingly good.
On the other hand, even though the technology is accessible, it’s really difficult to use unless you’re in the space. You need someone with a PhD in computer science focused on AI in order to use any of it. And that’s really the challenge. That’s where you come back to the labor issue. It’s really about building up the resources, in terms of human capital, to make use of the technology that’s out there.
Natasha Allen
Are you finding that more people are enrolling in these [computer science and machine learning] courses? Or is there a lag in the enrollment? What are you seeing?
Mark Dredze
Enrollments across computer science are off the charts. We struggle at Johns Hopkins; we keep wondering when we are going to hit the peak. Historically, there was a huge increase in interest in computer science in the 1990s with the dot-com boom. Then with the bust there was a huge drop-off in CS interest. That peak looks quaint compared to the demand now.
This is why computer science departments all across the country are doubling in size, in many cases, just to keep up with demand. There really is an understanding among students that this is a very exciting area with lots of opportunities. We are seeing huge growth in enrollment and many new programs starting.
At Hopkins specifically, in the past twelve years we have started a new master’s program in data science and launched a certificate program in human language technology. That’s not even considering how much our existing programs have adapted to include courses in AI. The number of courses we offer in AI has dramatically increased over the past couple of years. We still struggle because we don’t offer enough; the demand is really intense. The first time I taught my machine learning class, about 10 years ago, I had 40 students enroll. This fall I might have 200 students.
Natasha Allen
Oh, wow.
Mark Dredze
I hope not. And the course is now offered twice a year. Of course, I’m not the only person teaching these classes. There are now many related classes you can take instead of mine, and still we have not met the demand.
Natasha Allen
My final question is to sum everything up. Say you are a company that’s trying to make itself relevant and more efficient with AI, but you feel like you’re behind. What do you think you can do to catch up to your competitors?
Mark Dredze
The companies that don’t yet see the value of AI are really in danger. When the dot-com bubble came around, companies looked at Amazon and said, “Amazon just sells books, and we’re major retailers. We know retail; Amazon doesn’t.”
What Amazon has demonstrated is that the retail industry isn’t about retail. It’s about data and understanding data – and understanding the customers through data. Amazon is successful because they redefined what retail could be.
I think many industries run that risk right now: the risk of assuming that what it means to be a player in their industry isn’t going to change because of AI. If you don’t see how AI fits into your business, I really think you need to look harder, because you run the risk of not existing.
Tesla is another example. Tesla is not a car company. It’s a data and software company redefining what it means to build cars. That is the profound level of transformation that AI represents.
If you’ve concluded that AI is critical to your business, where do you go? How do you start? I think a number of the traditional sources actually offer good solutions. A lot of the traditional consulting companies out there are investing heavily in AI because they need to help their customers. The existing relationships that companies have with outside advisors are a good place to start.
Bringing people into the company, creating roles in the company, whether that’s in the CTO office or product office, is also important. It’s very daunting for a company to figure out how to hire an AI expert when they don’t know anything about AI.
You really need someone who understands your business and how this technology is going to revolutionize your business. It’s not enough to have a generic expert from the outside. You really want someone inside that understands your company, understands your business, understands your industry, and can figure out how to make the best use of these transformative technologies for your business needs.
Natasha Allen
Thank you so much, Mark. This was super informative. I appreciate you taking the time to talk with me about the new age of AI. Thank you, everyone, for joining us, and we hope to see you soon. Until next time.
Foley & Lardner’s Innovative Technology Insights podcast focuses on the wide-ranging innovations shaping today’s business, regulatory, and scientific landscape. With guest speakers who work in a diverse set of fields, from artificial intelligence to genomics, our discussions examine not only the legal implications of these changes but also the impact they will have on our daily lives.