Felicia Ellsworth: Welcome to “In the Public Interest,” a podcast from WilmerHale. I’m Felicia Ellsworth, a partner in the firm’s Litigation/Controversy Department.
Michael Dawson: And I’m Michael Dawson. I’m thrilled to join Felicia as the new co-host of “In the Public Interest.” Felicia and I are partners at WilmerHale, an international law firm that works at the intersection of government, technology, and business.
On today’s episode, we’re delving into the complex and fast-evolving world of artificial intelligence with WilmerHale Partner Ariel Soiffer.
Ariel is based in our Boston office, where he specializes in technology-related transactions and regularly advises clients on technology-related matters, including issues related to machine learning and artificial intelligence. He has over 20 years of experience in AI and AI-related fields, including pre-law work in data analytics. Given his unique expertise, Ariel has been recognized as a Next Generation Lawyer for Technology Transactions and Licensing, as a Leading Lawyer in Massachusetts for Technology, and as a Lawdragon Leading Global Cyber Lawyer.
We invited Ariel onto the podcast to discuss the challenges and opportunities that artificial intelligence is already presenting for transactional lawyers, and what he sees on the horizon. We hope you enjoy.
Ariel, thank you so much for joining us on today’s episode. I’m sure it’s going to be both a fun and practical conversation for our audience, especially for the in-house counsel who might be listening in. After all, they’re currently navigating many of these issues, so I think it’s a timely conversation indeed. One of the many accolades I mentioned in the introduction was the Lawdragon recognition, and I just have to ask: did you tell your kids that you are a Lawdragon Leading Global Cyber Lawyer?
Ariel Soiffer: Michael, thank you for chatting with me today. I made sure to tell my kids, since my son was born in the Year of the Dragon but, apart from that, I didn’t get much of a reaction from them. They kind of gave me the typical “huh,” and then, let’s move on, which, you know, is maybe the best you can expect from your children.
Michael Dawson: I think that’s par for the course. All right, let’s start by talking about what AI is. There’s been a lot of discussion, so I think most of the audience has a general idea of what we’re talking about. But in terms of what an in-house counsel needs to know about AI, what’s a good working definition?
Ariel Soiffer: So most people would say AI has been around for a long time, dating back perhaps to the 1950s. But now, with the huge amounts of data we have available, really powerful computers, and distributed computing as well, AI can do a really good job of writing a letter or an essay, and that’s something new that wasn’t true 5-10 years ago, let alone in the 1950s. The good news about AI is that, for most of us, we don’t need to understand the technical details of how it works. The way I like to explain it is this: AI is a tool that synthesizes large amounts of information, or inputs, and then provides an output.
It’s important to remember that AI is a statistical tool, so the output isn’t always going to be accurate. AI systems will always have some degree of error. They’re supposed to learn and recognize patterns so, theoretically, they should improve over time. Sometimes, though, they learn patterns that are a little bit funny. One of my favorite examples was an AI that could distinguish images of dogs from images of wolves, and people found out that the way it worked was by looking for snow in the background: if there’s snow in the background, it’s probably a wolf. Now, that correlation is probably accurate, but it’s not what you’re using the AI program to do, and so we need to know that we’re not supposed to completely trust the output of an AI.
Michael Dawson: Tell us about general intelligence. What is that and why are people talking about it?
Ariel Soiffer: So folks generally refer to three categories of artificial intelligence. One is narrow intelligence, the next is general intelligence, and the third is super intelligence.
So narrow intelligence is when an artificial intelligence can do a single task. At one point, it was thought that computers would never be able to beat humans at chess. Once computers did beat humans at chess, folks thought, well, they’ll never be able to beat humans at Go. And now, computers can beat humans at Go. There are some interesting things that have happened even within the domain of narrow intelligence. General McMaster has noted, for example, that when AIs run military simulations, they’re often much more willing to sacrifice troops than a human commander would be, and that’s the sort of thing I think we need to think about and be concerned about.
The next level up is artificial general intelligence, which we think of as a system that can simulate the level of human intelligence, resolve many different kinds of problems, and deal with many different kinds of inputs. Folks will usually invoke the Turing test: can we tell the difference between a human and a machine based on queries and responses alone, without being able to see who is on the other side of the curtain? If something can pass that Turing test, then we would say it has probably achieved artificial general intelligence. One author I like has asked, should we fear the AI that knows enough to fail the Turing test? Because if it knows enough to fail the Turing test intentionally, it’s tricking us into thinking that it’s not actually as smart as we are.
Michael Dawson: That’s fascinating. Now, what about super intelligence? What is that?
Ariel Soiffer: Artificial super intelligence is when an intelligence is so far beyond our own that we can no longer even conceive of what it is doing. Ray Kurzweil predicts that this could happen around 2045, whereas artificial general intelligence, he thinks, might happen as early as 2029, which is just a few short years away. If artificial super intelligence happens in 2045, which is still within the lifetime of many of the folks listening to this podcast, we could end up in a situation where we can’t even understand what the machines are thinking.
Michael Dawson: Where are we now on the spectrum from narrow to general to super intelligence?
Ariel Soiffer: Most folks would say we’re at the transition point between artificial narrow intelligence and artificial general intelligence. There are certainly some systems that are getting pretty close. One example I like: you’ve probably all seen a website ask you to find the bikes or the motorbikes in particular images. Those tests were designed so that computers couldn’t solve them, but advanced AI systems can now often solve them better than humans can, especially the ones with the squiggly letters that are really hard for humans to read. ChatGPT and other AI systems are actually better at solving those than humans are, which of course suggests that maybe they are already passing the Turing test.
Michael Dawson: So let’s take these concepts and apply them in the context of transactional law practice. If you were to use artificial intelligence in a narrow sense, what would be an example of the types of things that AI could do now, or in the foreseeable future, in your own practice as a transactional lawyer?
Ariel Soiffer: One of the things I did as an experiment a while ago, when I first started playing around with ChatGPT, was use it to write a trademark infringement letter, and it was not bad. It was about what a good first-year associate would have written at the time as an initial draft. And so I said, okay, this is a decent starting point. There are some other things we’ve used current AI systems for. One of them is looking at contracts and doing diligence, especially for M&A or finance purposes: understanding whether a particular counterparty has assignment clauses that are permissive or most favored nation clauses that are problematic. Whatever it is we’re looking for, we’ve found that the AI systems are actually pretty good.
Michael Dawson: And general intelligence, if we get there, what would general intelligence enable in the transactional work that you do now?
Ariel Soiffer: So if we get to artificial general intelligence, it should be able to suggest clauses that maybe we weren’t thinking of, take a look at a contract and say, are you sure you want to agree to that, point out problems, suggest solutions. Hopefully, at least for some period of time, human lawyers will still need to be involved to provide some thoughts on that, but we think that an AGI, an artificial general intelligence, should be able to help with many of those tasks.
Michael Dawson: Excellent. What are some of the risks of AI that you think in-house counsel should be worried about?
Ariel Soiffer: The bad news about AI is that there are a lot of risks associated with its use. There’s a wide range of them, and it’s important to be thinking about them as you’re considering using or deploying an AI system. Some of those risks include confidentiality, unfair or deceptive trade practices, and cybersecurity.
Michael Dawson: Let’s take each of them in turn and talk about the mitigation of those risks, whether it’s something that a company needs to do or where we need some legislative action. Tell us a little bit about that.
Ariel Soiffer: On confidentiality, there was an example where a software engineer at a major international corporation disclosed some proprietary source code to an AI system, and that proprietary source code appears to have shown up later in the system’s responses to other queries and prompts. There are two problems with this. One is that you lose confidentiality. The other is that you potentially lose trade secret status. That’s the risk unless you have signed up for the enterprise versions of these systems, where the provider has agreed to give you appropriate confidentiality protections, and even those could expire depending on the particular terms, so it’s important to think about that.
Michael Dawson: When I heard you describe that example, the unintentional disclosure of a company’s trade secrets, that sounds like a job for the company in terms of its own internal controls. Is that right? Or do we need more?
Ariel Soiffer: Yeah, I think that’s right. For the most part, one of the things that many of our clients and many other folks are doing is setting up AI policies that govern how you can use an AI, what you’re allowed to submit, and which AI systems are allowed, and then entering into contracts, often on the AI provider’s form because of leverage, that protect the confidentiality of what they input into that AI system. By limiting the scope, of course, you’re potentially limiting some of the benefit, since you’re not using every AI system out there. On the other hand, you’re ensuring that you have the confidentiality protections of the tools you’ve entered into agreements with.
Michael Dawson: And for unfair and deceptive practices, what’s the appropriate societal response to try to mitigate those risks?
Ariel Soiffer: In terms of unfair or deceptive trade practices, to give one example, folks sometimes use a chatbot, and that chatbot might be powered by AI. If you don’t disclose that the chatbot is powered by AI, then folks might be deceived; they might not understand that they’re speaking to a computer. That’s why so much of the time now you see something that looks like a little robot when you’re chatting with a chatbot. With Air Canada, their chatbot, when it was queried, invented a bereavement policy and decided that it would provide bereavement discounts to a customer. That customer sued when Air Canada didn’t honor the bereavement discount its chatbot had offered, and won, because the court said, look, your AI system is something that you have put out there, and you need to honor what it provides to your customers. That’s an example of unfair or deceptive trade practices.
From the perspective of the company, these AI systems are learning systems, but they are systems that make mistakes. So first we need to think about, well, how do we scan for those mistakes? Do we perhaps need a second AI system that checks the first one? But then there’s a second question that you raised, Michael, which is what do we do about it as a society? One option is to say, look, if you’re the one who put the AI system out there, then you’re the one who should be responsible for it, which is effectively what happened with Air Canada. You could also imagine other approaches that say, for example, this is new and innovative, we want to encourage it, and we don’t want to impose liability for early deployments of AI systems. That would be more like the Section 230 approach that was used in the early days of the internet and still applies widely to user-generated content.
Michael Dawson: That’s the provision that gives social media platforms immunity from liability for the content posted by their users.
Ariel Soiffer: Exactly, and not just social media companies, any company at all. I also mentioned cybersecurity, and the important thing to note there is that there are many different ways it can manifest. You can use an AI system to protect a company, to scan for intrusions and risks. But you can also use an AI system to generate those risks, to generate malware, to generate problematic source code. A lot of the common AI systems have made it a lot harder to do that, but you can still develop your own AI system that scans for ways to conduct cybersecurity intrusions.
Michael Dawson: Yeah, how should we mitigate those?
Ariel Soiffer: That’s one of the hardest ones to answer, because I think the answer probably is you need to deploy some cybersecurity systems that use AI and understand that they might not be perfect but, at the same time, they might be better than any other alternative. Some of this reminds me of William Gibson’s novel Neuromancer, which anticipated the use of AI in cybersecurity applications. And while we’re not there yet in terms of fully autonomous AI-based cybersecurity systems, having AI systems that can analyze a company’s systems and flag unusual behavior will really make a big difference, and I think a lot of companies are starting to think about using these kinds of systems or are already using them.
Michael Dawson: Terrific response. Let me ask you another potentially impossible question. The Gartner Hype Cycle, roughly, is that new technology comes onto the scene, expectations shoot through the roof, but then reality sets in and people realize it’s going to be harder and take longer to get the full benefits of the new technology. Where are we on the Gartner Hype Cycle?
Ariel Soiffer: I’m a huge fan of the Gartner Hype Cycle and have been for, I think, something like 20 years. My thinking is we’re actually still going up the initial hype curve around AI and generative AI. I don’t think we’re yet at the top of that wave, and that’s why we’re seeing so much excitement about AI right now. I think we should all expect that, at some point, there will be a crest to that hype cycle and, at that stage, it will become more challenging to invest in these AI companies, and the companies themselves will have to deal with the sudden pulling of the rug out from underneath them. There might be some consolidation in the industry at that stage. That’s something that, if you’re an AI provider, you need to be thinking about and planning for. On the other hand, right now it’s a good time to be raising money and growing your business, but you need to be a little bit conservative, to not overspend, to not say, well, things will always be this good.
Michael Dawson: Another big focus of corporations today is within the realm of corporate social responsibility, both in terms of internal policies, how they treat their own employees and customers, but also in terms of their global impact. Are you seeing issues arising with how AI is being used in corporations, for example, in the areas of advertising or personnel management?
Ariel Soiffer: In statistics, folks will generally say garbage in, garbage out. I would add bias in, bias out. That’s also true of AI. AI is only as good as the data that it’s trained on. You can’t just point to the AI and say, oh well, it wasn’t me, it was the AI that did it, so I shouldn’t be responsible. I didn’t know about that bias. What we recommend doing is testing the outputs. If the result is unexpected, or if the result is problematic, then we want to continuously feed that back and try to address the situation.
One great example of this bias issue is Amazon, which, in 2014, used AI to screen resumes. The tool was trained on data from Amazon employees, who were more likely to be male, so the system was more likely to suggest male candidates. The good news is that Amazon found this out when they were reviewing the outputs of the tool and, when they did, they stopped using it. The most important lesson I think folks should take from this is: if you find a mistake, don’t bury it, fix it. That’s exactly the tack they took there, and I think it’s something all of us can learn from. When we find problems, our job is to diagnose them, fix them, and make the system better, not perpetuate the problem. Testing and feedback are really important in that, and AI systems will respond if you tell them, hey, you can’t do this, and that’s really important.
Michael Dawson: Excellent. President Biden’s executive order on AI has directed each of the federal agencies to appoint a chief AI officer, a CAIO. I’ve seen some corporations appoint CAIOs. Does every corporation need a CAIO?
Ariel Soiffer: I think the right answer is going to depend company to company. In some cases, it will be more like a cross-functional team where maybe there is somebody who’s the chief security officer, the chief information officer, the chief legal officer, and each one of them has some of the functions that a CAIO might have. In other cases, we might decide, hey, we need to centralize this in one individual, but it’s hard to find folks that have the right levels of expertise in all of the different domains that might be relevant. I think most companies will probably be better served with a cross-functional group than one individual, but that won’t be universal.
Michael Dawson: Shifting back to your legal practice, one of the major areas of practice at WilmerHale is in the life sciences sector, and I wanted to ask you, how are you seeing AI impact the practice of law in that sector, specifically with respect to the intellectual property rights issues that it may raise?
Ariel Soiffer: Sure. One of the AI applications that I’m really proud of was with a company called ZebiAI, which applied artificial intelligence to massive data sets of interactions between compounds and proteins and used that information to predict drug candidates. That’s one of the many applications of AI helping with drug discovery. In that case, we took ZebiAI from incorporation through sale and helped them along the way with numerous agreements involving AI applied to drug discovery.
The good news about AI use in these applications is that the human involvement is still pretty high. Human inventorship is currently a requirement for patentability. So when I use an AI system that says I should develop this drug, and I end up with drug 1-2-3-4-5, I still have to go and test it. I have to make sure that it works, make sure that it’s not toxic, make sure that it’s fit for purpose. At some point along that path, I will say, okay, this drug is ready to be patented, because of all of the human work that’s still involved.
I think it’s a fair question, by the way, as to whether this will remain the case. I should mention that human authorship is also required for something to be protected by copyright. At one point, photographs could not be protected by copyright; then Congress passed a law, and the Supreme Court decided, hey, we really should allow photographs to be protected, there is sufficient human authorship there. I wonder whether that will also become true for AI systems, whether we will come to look at them and say these are more like tools that help folks come up with an invention or a work, but they’re not the final output, so maybe we should be a little more flexible on the human inventorship or human authorship requirements. That’s just my speculation, not anything that is current law in the US or otherwise.
Michael Dawson: And that flexibility, is that going to require congressional action, or would that occur at the administrative level and the Patent Office, or would that be a question of interpretation for the courts? Who needs to be flexible?
Ariel Soiffer: It would at least require congressional intervention. It might also require a constitutional amendment, which might mean that it would never happen, and it would almost certainly require constitutional interpretation by the courts as well.
Michael Dawson: I want to ask you about the international legal landscape for a moment. As happened with data privacy and GDPR [General Data Protection Regulation], the EU has beaten the US to the punch here. They’ve gone ahead and promulgated a law in March of 2024 called the AI Act, the EU Artificial Intelligence Act. It doesn’t take effect for a few years, but it’s one of the first standalone pieces of legislation around AI. What do you make of it?
Ariel Soiffer: The EU AI Act takes a risk-based approach to the use of AI. Some practices are categorically banned as an unacceptable risk. Other practices have varying degrees of limitations or obligations. It’s a thoughtful way to approach this: don’t categorically ban the use of AI, but recognize that some sorts of practices should never be allowed, while other sorts of practices need to be triaged, with different levels of governance associated with each of them.
The important thing to note about the EU AI Act is that it regulates any company that markets or provides AI technology in the EU. This is very similar to GDPR, as you were alluding to, Michael. So even if an AI provider has no presence in the EU, but its AI system is available in the EU, that AI provider is subject to the EU AI Act once the EU AI Act goes into effect.
The EU AI Act is the one that’s gotten the most attention because it deals with regulating AI, but I actually think one of the more interesting AI-related laws was in Japan. Japan took a very permissive approach, allowing AI systems to use copyrighted works as training inputs. At one recent discussion on AI, I was asked whether an AI system that was developed and trained in Japan could be used in the US. That’s an open question, but there are important policy implications to think about if we allow AI development to happen primarily overseas. We need to balance and consider copyright holders, actual or potential content licensors, and how we protect them and their content. Finding the balance between the content, the copyright holders, and the AI providers will be challenging, but it will also be important for keeping our competitiveness with the rest of the world.
Michael Dawson: So what I’m hearing from you is that, even though this legislative activity is occurring today primarily in these overseas locations, it still has a big impact on US businesses. If you’re a U.S. company marketing or distributing your product into the EU, you could be covered by the AI Act in the EU. Or if you’re importing products that have been developed in the Japanese market, you’re going to need to be aware of the copyright issues that may ensue.
Ariel Soiffer: Absolutely. I think that’s going to continue to be true. The EU is going to continue to pass laws that purport to have application all over the world and, even without laws like that, as advisers to American companies, we need to think about issues and developments everywhere in the world and how they might change what makes the most sense.
Michael Dawson: The EU is moving forward, Japan is moving forward. To date, the US Congress is not. Some states are taking steps towards adopting legislation in the area. If you have a uniform set of standards in other major markets, but you have 50 or more diverse standards in the United States, it would seem to me that potentially poses a risk to the competitiveness of the US AI industry. How do you feel about that issue?
Ariel Soiffer: Yeah, Michael, that’s a great question. We don’t really know where things will fall out in the US, whether we will end up with 50 different legal regimes. We’ve seen something similar in privacy, where many different legal regimes have been passed at the state level, and it’s certainly possible we’ll end up there for AI as well. There are benefits to having a unified legal regime for AI, as you mentioned, just as the EU does, and I think many folks hope that Congress will end up passing laws that deal with AI comprehensively. It hasn’t done so yet for privacy, so given the complexities and trade-offs involved, there is reason to believe comprehensive AI legislation may be difficult to pass at the federal level. Some of the biggest questions are going to be hard to answer at the state level, including the rights of content licensors versus the rights of AI providers to offer systems that generate output works based on publicly available or readily accessible content. Those kinds of questions are almost impossible to answer at the state level.
Michael Dawson: Interesting. Staying on the topic of the future, but applying it to your own practice, what are your predictions? How will your practice be different as a result of AI in three years or five years down the road?
Ariel Soiffer: Yogi Berra once said predictions are hard, especially about the future. I think that’s true, so I will give that as a sort of disclaimer here. On the other hand, President Lincoln said the best way to predict the future is to create it. In general, most lawyers tend to be conservative, and we’re not usually the early adopters of new technologies like AI. We’re worried about being that lawyer who made the filing with AI-generated cases and invented citations. We don’t want to be that lawyer. On the other hand, we don’t want to be left behind by being less efficient than our more productive peers, so we need to think about how we manage that risk and how we keep up with the technology. My prediction is that, in the next five years, we’ll probably see lawyers really starting to use AI for at least the simple tasks.
Michael Dawson: Law firms, as they consider these issues, what factors should they weigh in deciding how fast to go in adopting AI?
Ariel Soiffer: I think the biggest one is how much can you be sure that confidentiality is being preserved. Confidentiality issues are important for every company. But for law firms, we also have heightened obligations, including attorney-client privilege. We will need to have confidence that our, well, confidences are being maintained. That’s going to be a technical question and it’s also going to be a contractual question. I think if I were an AI provider looking to provide legal services to law firms, I would say, okay, how am I going to solve for this? How am I going to give my legal clients comfort that we will, as a technical matter, protect their confidentiality and we will, as a legal matter, enter into contracts that assure them that we are protecting their confidentiality?
Michael Dawson: I love that answer. So much of the debate is around can we be more efficient? Can we be faster? But remembering that at its core, the attorney-client relationship is premised on the preservation of confidences and that that’s got to be an absolutely necessary precondition to moving forward with AI. I think that’s really well said. Are there other reasons to be optimistic about the use of AI in the legal profession?
Ariel Soiffer: There are both costs and benefits to AI, just like everything. Once upon a time, human beings didn’t have the plow and had to work the land by hand. AI could be like that; it could be something that liberates us from a lot of the routine tasks. I read someone who said, I don’t want AI to write my poetry and do my art; I want AI to wash my laundry and do my dishes. If we can get AI systems to focus on those routine tasks and help all of us do the more value-added tasks, whether that means poetry for folks who are able to do that, unlike me, or writing and negotiating complex contracts, which is for folks like me, then I think AI might actually make it so we can deliver a lot more value to our clients, deliver it more efficiently, and overall be happier with what we’re doing.
Michael Dawson: I think that’s a great place for us to draw the conversation to a close. We could continue talking for hours, but this has been great. Ariel, I really appreciate your making the time. It’s a fascinating, dynamic area, and I’m grateful to you for your expertise and your time.
Ariel Soiffer: Michael, great to catch up. Look forward to chatting with you again.
Felicia Ellsworth: Thank you everyone listening for tuning into this episode of “In the Public Interest.” We hope you will join us for our next episode. If you enjoyed this podcast, please take a moment to share it with a friend and subscribe, rate, and review us wherever you listen to your podcasts.
Michael Dawson: For our WilmerHale alumni in the audience, thank you for listening. We are really proud of our extended community, including alumni in government, the non-profit space, academia, other firms, and leadership positions in corporations around the world. If you haven't already, please join our recently launched Alumni Center at alumni.wilmerhale.com so we can stay better connected. Our show today was produced by Schuyler Atkins and Shanelle Doher, with sound engineering and editing by Bryan Benenati and additional support from Emily Freeman, Peter Turansick, Ambi Boodrham and the rest of the WilmerHale podcast team, all under the leadership of executive producers Sydney Warren and Jake Brownell. Thank you for listening.
Felicia Ellsworth: See you next time on “In the Public Interest.”