
Today, I'm talking with Sean Fitzpatrick, the CEO of LexisNexis, one of the most important companies in the entire legal system. For years — including when I was in law school — LexisNexis was basically the library. It's where you went to look up case law, do legal research, and find the laws and precedents you would need to be an effective lawyer for your clients. There isn't a lawyer today who hasn't used it — it's fundamental infrastructure for the legal profession, just like email or a word processor.
But enterprise companies with huge databases of proprietary information in 2025 can't resist the siren call of AI, and LexisNexis is no different. You'll hear it: when I asked Sean to describe LexisNexis to me, the first word he said wasn't "law" or "data," it was "AI." The goal is for the LexisNexis AI tool, called Protégé, to go beyond simple research and help lawyers draft the actual legal writing they submit to the court in support of their arguments.
That's a big deal, because so far AI has created just as much chaos and slop in the courts as anywhere else. There is a consistent drumbeat of stories about lawyers getting caught and sanctioned for relying on AI tools that cite hallucinated case law that doesn't exist, and there have even been two court rulings retracted because the judges appeared to use AI tools that hallucinated the names of the plaintiffs, cited facts, and quoted cases that didn't exist. Sean thinks it's only a matter of time before an attorney somewhere loses their license because of sloppy use of AI.
So the big promise LexisNexis is making about Protégé is simply accuracy — that everything it produces will be based on the real law, and much more trustworthy than a general-purpose AI tool. You'll hear Sean explain how LexisNexis built their AI tools and teams so that they can make that promise — LexisNexis has hired many more lawyers to review AI work than he expected, for example.
But I also wanted to know what Sean thinks tools like Protégé will do to the profession of law itself, to the job of being a lawyer. If AI is doing all the legal research and writing you'd normally have junior associates doing, how will those junior associates learn the craft? How will we develop new senior people without a pipeline of junior people in the weeds of the work? And if I'm submitting AI legal writing to a judge using AI to read it, aren't we getting close to automating a little too much of the judicial system? These are big questions, and they're coming real fast for the legal industry.
I also pressed Sean pretty hard on how judges, particularly conservative judges, are using computers and technology in service of a judicial theory called originalism, which states that laws can only mean what they meant at the time they were enacted. We've run stories at The Verge about judges letting automated linguistics systems try and understand the originalist intent of various statutes to reach their preferred outcomes, and AI is only accelerating that trend — especially in an era where literally every part of the Constitution appears to be up for grabs before an incredibly partisan Supreme Court.
So I asked Sean to demo Protégé doing some legal research for me, on questions that appear to be settled but are newly up for grabs in the Trump administration, like birthright citizenship. To his credit, he was game — but you can see how taking the company from one that provides simple research tools to one that provides actual legal reasoning with AI will have big implications across the board.
This one is weedsy, but it's important.
Okay: LexisNexis CEO Sean Fitzpatrick. Here we go.
This interview has been lightly edited for length and clarity.
Sean Fitzpatrick, you're the CEO of LexisNexis. Welcome to Decoder.
Thank you. Great to be here.
Thank you for joining me. This is my first interview back from parental leave. Apologies to the audience if I'm rusty, but apologies to you if I'm just totally loopy.
Congratulations!
I'm very excited to talk to you. I'm very much a failed lawyer, my wife is a lawyer, there's a lot of lawyers on The Verge team. The legal profession in America is at a moment of absolute change, a lot of chaos, and an enormous amount of uncertainty. And LexisNexis, if the audience knows, tends to sit at the heart of what lawyers do all day. Most lawyers are using LexisNexis every minute of every day. What that product is, what it can do, and how it helps lawyers do their job connects to a lot of themes that we're seeing both in the legal profession and in technology and AI generally.
So, start at the start. What is LexisNexis? How would you explain it to the layperson?
LexisNexis is an AI-powered provider of information, analytics, and drafting solutions for lawyers that work in law firms, corporations, and government entities.
That's a new conception of LexisNexis. When I was in law school in the early 2000s, it was just the thing I searched to find case law.
Yes, we've transformed over time. We were just that research provider. Over time, we've acquired and integrated more businesses. In 2020, when we launched our Lexis+ product, we integrated all those things together, becoming an integrated ecosystem of solutions. Then, in 2023, we launched Lexis+ AI, and that's when we really became an AI-powered provider of information, analytics, decision tools, and drafting solutions. AI capabilities have really allowed us to do more things than what we've traditionally done in the past.
That jump from being the gold standard database of legal opinions, reasonings, case notes, and all that to "we're going to do the work for you or help you do the work" is a big one. That's a cultural jump. Obviously, there were some acquisitions along the way that helped you to make that jump, which you can talk about. What drove you to make that jump, to say, "Actually, the lawyers need help drafting the motions, the proposed opinions they might give to a judge"? What made you say, "Okay, we've got to step into actually doing the work"?
I think it's been a natural evolution. As technology has evolved, it's opened up new avenues of things we can do. We tend to take the latest technology, introduce it to our customers, and spend time talking to them about how they think that technology can be best applied in the legal environment. Then, we translate the ideas they came up with into products that resolve or address those opportunities.
Let me ask you a pretty philosophical question. It's one that I struggle with all the time, and one that I talk to our audience about all the time. Our audience is pretty technically focused. They're used to computers, and computers are, until recently, pretty deterministic: you put in some inputs, you get some outputs. Most people who encounter the legal system think it's pretty deterministic in the same way. You put in some inputs and you get some predictable outputs.
And what I'm always saying is, "That's not how it works at all." You show up to court, the judge is in a bad mood, you have no idea what's going to happen. You're a big company with an antitrust appeal, you show up to the three-judge appellate review board, and you have no idea what's going to happen. Literally anything could happen at any time. The judicial system is fundamentally not deterministic. Even though it's structured like a computer, trying to think about it like one can get you in all kinds of trouble. Maybe the best example of this is people on Facebook putting the words "no copyright intended" on the bottom of movies. They think they can issue these magic words and the legal system is solved, and they just can't.
AI is that problem in a nutshell. We're going to take a computer, make it better at natural language. We're going to make the computer fundamentally not deterministic — you can't really predict what an AI is going to do — and then we're going to apply that to the fundamentally non-deterministic, human nature of the court system. Somewhere in there is a big philosophical problem about applying computers to the justice system. How do you think about that?
First of all, you have these massive investments happening with the foundational models. Each of these hyperscalers — Microsoft, Amazon, Google — is putting in close to $100 billion. So these models just continue to get better and better over time. That's at the foundational model level. We don't really operate at that level. We build applications that utilize these foundational models. And at that level, we see prices are dropping. We used to pay $20 for 1 million tokens two years ago, and today we might pay 10 cents for 1 million tokens. That allows us to do things at speed and at scale that we've never been able to do before.
And there are a lot of things about the law that make these models attractive. Most of the law is language-based, and these models are really great with language problems. The law is precedent-based, and so —
Well, that's up for grabs. We'll come back to it.
I'll grant you that. You look at the activities that lawyers do: they draft documents, they do research, they summarize things. The models are all really good at these types of things. So, you have this perfect storm, with this technology and the things lawyers do coming together.
Yet, when people try to use these consumer-grade models, there are all kinds of problems with them. Like you said, it's not deterministic. You can't just put information into a computer and get an answer out. If that were the case, we wouldn't need a court system. These models are just not built for the legal system. You can't go into court and say, "I found this on the internet." You have to have authoritative content.
The cut-off date for GPT-4o was 2023, I believe. You need to have information that's constantly updated. Your audience probably doesn't know this, but there's the citator, which traditionally has said, "This is good law" or "It's not good law, it's been overturned." Now, it'll tell you if it's the law at all or if some system just made it up. These systems are probabilistic. They want to put together an answer that's probably right. Well, that's not the standard we have in legal. You can't go in with something that's probably right. So, you have this whole list of issues that these models don't address.
What we've tried to do is address those with a courtroom-grade solution. Our system is backed by 160 billion documents and records. Our curated collection is our grounding data. So you can't go into court and say, "I found this on the internet," but you can refer to a specific case. We also have what we call a citator agent that'll check that case to make sure that it wasn't fabricated by the system and is actually still good law. You can also look at the case law summary so you know what the case is about. You can look at the headnotes so you can see the particular points of law that were addressed in that case and see if it's still a valid case.
Privacy is another issue. There's a special relationship that exists between attorneys and their clients in that attorney-client privilege, so there are some privacy requirements that you need in order to maintain that. If you're using one of these just consumer-grade models, you don't have the level of privacy and security that you need. Transparency is another issue. You put a question in, you get an answer back. Well, based on what? What was the logic that the system used? We open up the black box so you can see the logic that's being applied. We give the attorneys the ability to go in and actually change that. If this model is getting something wrong, the attorney has the opportunity to change it so that they get the outcome that they're driving for. But, as you said, the law is not deterministic. There are lots of different factors that go into this, but you need to have a system that's legally driven, that's purpose-built for legal situations in order to really operate in a courtroom-type environment.
There's two things I really want to push on. Again, I was not a good lawyer. I don't want to ever pretend on the show, to you, or to anyone else that I was any good at this. But you learn a particular way of thinking in law school, which is a pretty rigorous, structured way of approaching a problem, going to find the relevant cases and precedents, and then trying to fashion some solution based on that. That feels like we're just moving words around, but it's actually a way of thinking.
Before AI showed up, we would mash together using a word processor and thinking a certain way. Now, we're pulling them apart. We're saying that the computer can move the words around and generate some thinking. So, that's one thing I want to push on. I'm very curious about that because it feels like the lawyering part of being a lawyer is being subsumed into a system, and that might change how we lawyer.
The other part is if anyone is going to look at the work being done. We're already seeing lawyers get sanctioned for filing briefs with hallucinated case citations in them. There was just a case where, I believe, a court had to rescind an opinion because it had a hallucinated case citation in it. This is bad. This is just straightforwardly a threat to the system and how we might think about lawyers, judges, and courts. It's not clear to me that anyone's going to use the tools as rigorously as you want.
So on the one hand there's "We've made the thinking easier." On the other hand, it's, "Oh, boy, everyone's going to get really lazy." They're both in your answer. They're both saying, "We're making it easier to look at this stuff. We're making it faster to do the research." I'm just wondering where you think the thinking comes in.
I don't think that these models replace the lawyers. I think they help the lawyer and augment what the lawyer does. So, if you think about an activity that a lawyer might do — let's say they were preparing for a deposition. They need to come up with a list of questions that they're going to ask the individual that's being deposed. You can take the facts around that particular case, load them into a vault, and point the system to that vault and say, "Based on the facts of this particular matter, develop a list of deposition questions." That's something that a lawyer would've done on their own. In the past, they may have referred to a list of questions that they had previously or something —
Actually, can I just grab that example? Maybe a lawyer would've done that, but more often a lawyer would've told a bunch of junior associates to sit in the basement and do that. That was how those junior associates learned how to do their job. That's what I mean. We're farming out the thinking, and some people might never actually do that thinking. That might change the profession down the line in really substantive ways.
Right. It is an apprentice system. So, if you start to take some of the layers out of the bottom, how does everyone skip the bottom layer and still make it to the second with the same capabilities and skills? That's a real challenge. I think the systems are allowing lawyers to not have the associate do that work. Now they can say, "Generate me 300 or 700 questions." It doesn't take that long to go through 700 questions, and the models never get tired. From our experience, they'll go through that list of questions and say, "First question? Yep, that's a good question. I would've thought of that."
The system made it a little bit faster, but it didn't really help them. Second question, same thing. Third question, same thing. Fourth question doesn't even make any sense, scratch it off the list. With the fifth question they'll say, "Oh, that's interesting, I wouldn't have thought to ask that but that's probably important, so I'm going to add that to my list." So, there's an efficiency component to it, but I think there's also a better outcome component.
In terms of the apprenticeship piece, I think people are struggling right now to figure out how that's going to impact the apprenticeship model. Someone was describing to me that they had worked on a situation where they were looking at securitized assets. When they were an associate, they did this project for a company that had 50 states' worth of coverage, and so they became the expert in the firm on asset securitization in all 50 states. For four or five years, anytime somebody had a question, they came to that individual. It was a great way to make a career. Now, the system can pull all that information together for you. So his question was: "How is that ever going to happen now in this new world?"
I think firms are going to struggle with that, but I also think they're going to figure it out. We tend to get some of the smartest and brightest people going into the legal profession, and so far, they seem to have figured out every challenge that's faced the industry. I think they'll figure this one out as well.
What are some solutions you've seen as people try to figure this out?
I don't know that folks have come up with a lot of solutions around the apprenticeship model. What we're for sure seeing is that people are embracing AI. It's here, it's in the courtroom, it's in the law firm. Two-thirds of attorneys are using AI in their work, according to our surveys, and our survey's probably a little outdated. I'd say the number's probably higher. I don't know about you, but I use AI every day. It's now in my personal and work lives. I think the legal profession is perfectly suited for it, so it's only going to expand.
When you see the lawyers getting sanctioned and the courts having to rescind opinions, is there a solution that involves using LexisNexis so it won't happen to you? Or do you think that's a symptom of something else? Everyone's just using AI, I get it. Probably the biggest split for our audience right now is between the data that says everyone's using this stuff all the time and the hostility our audience expresses about the tools, their quality, and the fact that a lot of that usage is driven by big companies just putting it in front of them. There's something happening where, to justify these enormous investments, the tools are showing up whether the consumers are asking for them or not, and then we're pointing out that everyone's using the tools.
What I hear from our audience is, "Well, I can't turn off the AI overview. Of course I'm using the tool because it's just in front of me all the time. I can't make Microsoft Office stop telling me that it's going to help me. It's just in front of me all the time." So, when you see the errors being made in the legal system today — the lawyers getting sanctioned, the lazy AI use, the lack of apprenticeship that's going to impact the entire next generation of lawyers and how rigorous they are — how do you make your product address that? Or are you just not thinking about that right now?
No, we're definitely thinking about it, and we've incorporated things into our product. These things always make the headlines when they happen, but I think it's a small percentage of attorneys that are causing these problems. Just taking something and bringing it into court has never been the standard. You've always had the responsibility as a lawyer to check the material and make sure that it's valid before going into court. And some individuals aren't doing that. We certainly saw that in the [Tim] Burke case where some attorneys submitted a document to the court that I think had eight citations in it and seven of them were just completely —
But that was inevitable. The day ChatGPT showed up, half of the legal pundits I know were like, "This is inevitable. This outcome will happen," and then it happened. There wasn't even a stutter step. It just happened immediately. That's what I'm trying to push on. Is the solution just that LexisNexis has a tool that's better and you should pay for it, or is the profession going to have to build some new guardrails as we take the rigor away from the younger associates?
Well, you can never stop an attorney from taking it into court and not doing the proper work. That's going to continue to happen. I think somebody's going to lose their license over this at some point. We're seeing the sanctions start to ratchet up. So, a couple attorneys got fined $5,000 apiece, and then some attorneys in federal court down in Alabama got referred to the state bar association for disciplinary action. I think the stakes are increasing and increasing. What we do with our system is provide a link to a citation if we have it, so you can click on it and see it in our system.
And there are no fabricated cases within our system. We have a collection mechanism that ensures that every case in there is valid. It's Shepardized and has headnotes and different tools that lawyers can use. So, we make it really easy for you to use our system to check and make sure that the citations you're bringing into court are not only valid and still good law but are also in the right format. Format is important. We check for all these things and make it really easy for the lawyer to do the work they need to do. They need to make sure that case is on point, that the case is still valid.
One of the many reasons I was a horrible lawyer was because of that moment when you get your first law firm job and you realize your boss just has a library of their favorite motions on file. They're just going to pull from the card catalog, change some names and dates, and file the motion. The judge will recognize the motion and the attorney, and this is all just a weird formality to get through the next stage of the process. Maybe we'll never get to the substantive part of the case because we're just going to settle it, but we need to file this banked motion anyway. This truly was demoralizing. I was like, "I'm just doing paperwork. There's nothing about this that is real."
I'm probably describing what every first-year associate goes through until the check hits, and it just didn't work for me. How close are you to having Lexis AI just do that thing, have it recognize the moment and say, "We have the banked motion and we're just going to file it to a system"?
Well, we can connect into a document management system (DMS) that has an attorney's prior motions. We have our vault capabilities, so they can load their motions up. They can still use the motions they've already developed. And that's a perfectly fine way to do things because —
Well, I'm saying, from scratch.
Right. We have the ability to do it from scratch too, but a lot of attorneys don't want to do it from scratch because they've reviewed every single word in that motion and they know that it's good. If they do it from scratch, then they have to review every single word. But if they want to do it from scratch, we can do that for them today, and if they want, we can use their prior work product as the grounding content to create a new motion, or we can use our authoritative material. They can choose the source and the grounding content.
I guess I'm asking what level of automation is there. So, you're an attorney. You've got a document management system, you've got a new client, and you need to file some standardized motion that you always file for whatever thing you need to do, like a continuance. At what point does Lexis [AI] say, "I'm watching this case. I'm going to file this for you. I'm just going to hit the buttons for you. Don't worry about it," in the way that a great legal assistant might do?
We're always going to give the attorney the opportunity. We don't want to just be doing things on their behalf unsupervised, so we're going to give them the opportunity. We could get to the point where we say, "It looks like you need a continuance. Here's a draft of a continuance," and with one push it will automatically file it. We're not at that point today, but if you need a continuance, we could draft it for you. Our vision is that every attorney is going to have their own personalized AI assistant, and it's going to understand their practice area and their jurisdiction, along with having access to their prior work product.
The systems are only as good as the content behind them, so it's going to have access to our 160 billion documents and records, and it's going to be able to automate tasks that they do today. If you think about all the different types of attorneys and all the different tasks that they perform, there are probably 10,000 tasks that could be automated.
So, we're working with our customers to understand what the most important tasks are, and we're working with them to automate those tasks today. We have the largest and most robust backlog of projects that we've ever had in our company's history because there are so many things that can still be automated, and we're working with our customers to do that. If they tell us, "What we really want is for you to automatically file this," or for us to provide them with an alert that says, "Hey, this deadline is coming up and you need to file this. Here's a draft. Do you want to file it?" I'm sure we can develop it.
We're not at that point today, but we are in the drafting phase. That vision is not a five-year vision or a three-year vision; that's available today. That's Protégé. That's what Protégé does today. There are tasks that it can do, but we haven't finished that massive backlog yet.
If you look at the sweep of other CEOs who've been on Decoder, they're going to tell you, "You just integrate our computer vision system and we'll use [electronic case files] for you to file this motion." They'll all be very happy to sell you that product, I'm sure.
The reason I'm asking it this way is because when I get the consumer AI CEOs on the show, they love to tell me that they're going to write my emails for me with AI, and then the next sentence they say is, "Then, we'll sort your inbox with AI." At some point, the robots are just writing emails to each other and I'm reading summaries. Something very important has been lost in that chain. One of the funniest outcomes of AI is my iPhone suddenly just summarizing emails and generating emails for other iPhones to summarize, and I have no idea what's going on.
That's bad in the legal context. We're automating document generation to make the case for our clients. On the other side, the judges and clerks might be using these same tools to ingest the cases, summarize them, understand the arguments, and write the opinions that are the outcomes. Culturally, I think it's important for you to have a point of view on where that should stop because otherwise we are just going to have a fully automated justice system of LLMs talking to each other. Maybe there'll be some guardrails that other people don't have, but we've taken an enormous amount of humans out of the loop.
I think you have to have the human in the loop. It's an important part of the process. I could see the bots going back and forth on things like if someone says, "Hey, can you meet at nine o'clock?" and your system opens up the calendar, says you're available to meet this person on your high-priority list, and sets up the meeting. When you're talking about substantive legal matters, the stakes are too high. You're talking about a disabled veteran getting or not getting their benefits. You're talking about a victim of a natural disaster getting or not getting insurance proceeds. You're talking about a single mother getting or not getting welfare benefits. These are all legal matters, and they really have a huge impact on people's lives. The stakes are way too high for bots to be going back and forth and sharing information.
Do you think that clerks and judges should be using AI the same way lawyers should be? That's where I would draw the line. I think the clerks should be made to read and interpret everything as humans, and the judges should be made to write everything as humans, but it doesn't seem like that line has been formalized anywhere.
I don't think a judge should write every line. I think that they could use AI. It's great when you put concepts in; it puts the words around that concept and structures them in an orderly way. I think that there is a component of the work that could be done by AI, but it shouldn't be a bot talking to a bot. I don't think it should be fully outsourced to AI. You've got a responsibility as a judge, as a law clerk, as a lawyer to review that document and make sure it's actually saying what you intend it to say. I think most attorneys are using it that way. It will create a great draft, maybe at 80 percent, which allows you to do 20 percent of the work. But that 20 percent is the deep, analytical thought work, the things you actually went to law school to do as opposed to what you were describing earlier. It's going to allow lawyers to do more of that type of work.
I'm curious to see how different jurisdictions and circuits approach the question of what the judges and clerks should be doing. I sense that that pressure is going to express itself in different ways across the field.
Judges are becoming forensic auditors. They're reviewing this information looking for fake cases. We don't want them doing that. That should not be their job. I think things do need to change in some of these areas.
Using AI to catch AI is another theme that comes up on Decoder all the time.
I have utterly forgotten to ask you the Decoder questions. So let me do that, and then I want to zoom out a little bit farther. These are my own questions. You can tell, I'm a little rusty.
I'm looking at the LexisNexis leadership structure, and it's very complicated. There's a CEO who's not you, Mike Walsh, but then you're the CEO of the US and the UK. There's a bunch of other VPs everywhere. You've got a parent company called RELX. Explain how LexisNexis is structured and how your part fits into it.
RELX is the parent company, and it's publicly traded. It has four divisions. Legal and Professional is one of those divisions, and its CEO is Mike Walsh. I report to Mike. I'm the CEO of our North America, UK, and Ireland businesses. The way that we're organized, it's a matrix. We go to market based on customer segments. So, we have a large law business, a small law business, a corporate legal business, a federal government business, a state and local government business, a Canadian business, and a UK business.
Then, we have functional groups that support that. So, we have product management, and they're responsible for our product development roadmap and the product strategy. We have an engineering team, and they take the direction from product management but actually build the products. We have functional groups that support that: finance, HR, legal, and global operations, which does things like collect content for us. Once you get used to it, it's not that complicated of a structure. It's really well and seamlessly integrated together, which allows us to operate really quickly. We can get things done quickly and efficiently. And I would say that the whole process is customer driven.
I'm really interested in the structure, particularly the fact that you have the UK, Ireland, and North America. I'm fascinated by corporate structures, and one of the things that strikes me is that you are not in control of the taxonomy of your product, right? These countries' governments are in control of the taxonomy of their legal systems.
The English legal system and the American legal system have commonalities but wildly different structures. The Canadian legal system and the US legal system have wildly different structures. Canada actually has more in common with the UK given their shared history. How do you think about that? Are those different teams? Do they have different database structures? How does all that work?
We do have different teams and different database structures, but we're actually trying to consolidate to the extent that we can because when we have similar things, we shouldn't have them marked up differently in different databases. Getting them marked up in a consistent way will allow us to do what we call "extreme reuse," which is to basically use that same technology we develop in multiple jurisdictions with limited changes to that system. What that allows us to do is really focus on that core system and roll it out quickly, so that everyone across the world gets the benefits of all those changes. But you have civil law in some jurisdictions and common law in others, and the laws are structured in different ways. So, you do have things that make that more challenging, but that's the general idea behind what we're trying to do.
Can you apply the same AI systems to these different legal systems in the same way, or are you actively localizing them differently?
I would say that we actively localize them, but we try to minimize the amount of work that we do because a lot of it can be done in a similar way.
Generally, there's a lot of concern about American legal precedents traveling across the ocean, particularly in the UK. You can see the American culture war gets exported and shows up in a lot of different ways. Do you think your tool will make that better or worse? If you're not pulling them apart and are actually trying to minimize the differences, you might see repeat arguments or repeat structures just based on the way the AI works.
Each one is based on the content of the individual jurisdiction. So, we don't mix the content, but we do try to utilize the same technology. For example, there's search relevance technology to find the case that's most closely associated with the matter that someone is working on. We can take that and build it for the US market or the UK market, and then we can move it to another market and it will work pretty well. Then, we need to do some modifications to make it work really well for that particular jurisdiction. We get 80 percent of the DNA transferred over in that core model.
I was recently talking to Mike Krieger, who is the chief product officer of Anthropic — just a totally different conversation on a different thing — but he said this thing to me, which has stuck in my mind. He said, "I recognize Claude, I can see Claude's writing." He said, "That's my boy," which is cute. Does your AI have a personality? Can I recognize its writing in all these different jurisdictions?
We use a multi-model approach, and so it's probably a little less clear which particular model drove something. Of course, with agentic AI, things have really changed. I think that was probably true a year and a half ago, but now with agentic AI, when someone puts in a query… let's say they wanted to draft a document. Maybe a client is sending in a request and she's interested in a premises liability issue around the duty to inform a trespasser about a dangerous condition on a piece of land. The query will go into a planning agent, which will then allocate that query out to other agents.
It needs to do some deep research, so maybe it uses OpenAI o3 because it's really good at deep research. At the end, it needs to draft a document, so maybe it uses Claude 3 Opus, which is really good at drafting. We're model agnostic, and we'll use whatever model is best at a particular task. So, the result you get back was potentially produced by multiple different models, which probably makes it a little bit harder to tell whether it was drafted by OpenAI.
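The flow Sean describes here (a query goes into a planning agent, which fans tasks out to whichever model suits each one) can be sketched roughly as follows. The routing table, model names, and two-task split are illustrative assumptions for the sake of the sketch, not LexisNexis's actual configuration:

```python
# Minimal sketch of a "planner agent" routing tasks to different models.
# The ROUTING table and task kinds are hypothetical examples.

from dataclasses import dataclass

# Illustrative task -> model routing table (an assumption, not a real config).
ROUTING = {
    "deep_research": "openai-o3",
    "drafting": "claude-3-opus",
}

@dataclass
class Task:
    kind: str
    prompt: str

def plan(query: str) -> list[Task]:
    """Toy planner: break a drafting request into research + drafting tasks."""
    return [
        Task("deep_research", f"Find controlling authority for: {query}"),
        Task("drafting", f"Draft a memo on: {query}"),
    ]

def run(task: Task) -> str:
    """Stand-in for a real model call; records which model handled the task."""
    model = ROUTING[task.kind]
    return f"[{model}] {task.prompt}"

def answer(query: str) -> list[str]:
    """Execute the plan and collect each sub-agent's (stubbed) output."""
    return [run(t) for t in plan(query)]

if __name__ == "__main__":
    for step in answer("duty to warn a trespasser of a dangerous condition"):
        print(step)
```

The point of the pattern is that the planner, not the user, decides which model sees which sub-task, so a single response may combine outputs from several providers.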
Is that reflected in your structure? You describe engineering, product, and your localization, but you've got to build that agentic orchestration layer and decide which models are best for each purpose. You could design an engineering organization around that problem specifically. Is that how you've done it, or is that done differently?
We have an engineering team that focuses on the planner agent and the assignment of the tasks to different agents.
Is that where the bulk of your investment is or is it paying the token fees?
I haven't actually broken it out that way, so I couldn't tell you. The token fees are certainly an important part of the investment. Engineering is a huge portion of the investment. The attorneys that we hire to review the output and tell us if it's good or not are a massive piece of the investment. So, it's spread out over many different things, but we've certainly spent a lot of money on that particular issue.
Tell me about those attorneys. You hire attorneys to basically do document review of the AI? Are they very senior attorneys? Are they moonlighting from big firms? Are they a bunch of junior associates in a basement?
It's based on the task. What we try to do is get attorneys who have experience in a particular matter. So, if we're looking at documents related to a mergers and acquisitions transaction, we want those to be looked at by someone who has some experience in mergers and acquisitions. They can tell us that the document looks great, or tell us if it's missing particular things. Then, we can go back and say, "Why did we miss those particular things, and what changes do we need to make to how we're training and directing these models to correct that situation going forward?"
What's the biggest thing you've learned from that process?
The biggest thing I've learned is how important it is to have attorneys doing that work. I expected to hire a lot of technical people and data scientists to do this work. I didn't really expect to hire an army of attorneys. But I think one of the secret sauce components of our solution is that our outputs are attorney reviewed. That's how we keep getting more relevant results.
Where were you best to start with, and where were you worst?
We weren't really good at anything to begin with, and I think we're building things out. Sometimes it's a practice area, sometimes it's a task. If you look at all the different tasks attorneys do that we were talking about earlier, in many cases the task's output is some sort of document. So, we're really focused right now on how to improve our document drafting.
Is all this revenue positive yet? Are you making money on all this investment or do you see that on the horizon?
Our growth rate has definitely accelerated as a result of this. The main thing that we're focused on is the customer outcome. What we're seeing is that the customers are getting happier and happier with the solution, so I would say that it's been very successful in that regard. It's the fastest-growing product that we've ever had.
Growing fast but losing money with every query is bad, right?
We're not there. We're not losing money with every query.
Are you breaking even or are you making money?
Our profit is growing.
Specifically on AI tools, or overall?
Most of our investment is in AI tools.
Let me take the last bit here and zoom out even more broadly. I mentioned that I would bring up precedent again in this conversation.
I think if you're paying attention to the legal system of America right now, you know that it's pretty much in a state of pure upheaval. You've got district court judges calling out the Supreme Court, which is not a thing that usually happens. You have a Supreme Court that is overturning precedents in a way that makes me feel like I learned nothing in law school.
Chevron deference is out the door. Humphrey's Executor, the precedent that keeps the president from firing FTC commissioners, is, I'm guessing, out the door. Roe v. Wade is out the door. Just these foundational precedents of American law, out the door.
A lot of that is based on what conservative judges would call originalism. I have a lot of feelings about originalism, but a big trend inside of originalism is using AI, or what they call "corpus linguistics," to determine what people meant in the past. Then, you take the AI and you say, "Well, it did the job for me. This is the answer." Are you worried that your tools will be used for that kind of effort? Because it really puts a lot of pressure on the AI tool to understand a lot of things.
I'm not that worried. I don't think the Supreme Court is asking LexisNexis what we think it should do.
But certainly courts up and down the chain are.
They're asking legal questions, they're getting answers back, and then they're interpreting those answers. We are providing them with the raw content that they need to make the determinations, but we're not practicing law. We're not making those decisions for them.
I'm going to spring this on you, but here it is: John Bush is a Trump-appointed judge. He cited the emergence of corpus linguistics in the legal field, and he said, "To do originalism, I must undertake the highly laborious and time-consuming process of sifting through all this. But what if AI were employed to do all the review of the hits and compile statistics on word meaning and usage? If the AI could be trusted, that would make the job much easier."
That is him saying, "I can outsource originalist thinking to an AI." This is a trend. I see this particularly with the originalist judges: the job they think they're meant to do is determine what a word meant in the past. And AI is great at saying, "Statistically, this is what that word meant in the past," so we're going to outsource some legal reasoning.
This is, I think, very odd. My thoughts about originalism and stare decisis in America in 2025 aside, saying, "I will use an AI to reach into the past and determine this meaning" seems strange to me. I'm wondering how you feel about your tool being used in that way.
I definitely understand your point there. I think about the analogy of a brick. You can use a brick to build a hospital and take care of sick children, or you could take a brick and throw it through a window. One use is really great and another is pretty negative, but in either case, it's a brick. I think about our tool as being neither good nor bad. It could be used for good, and it could be used for any type of activity that attorneys [need]. I wouldn't want to say originalism is a bad thing. I think the tool could be used for many different things, including originalism. I think it could be helpful for those who want to take that path and find a new way of looking at something.
We have all the data. They can search it, they can use the tool to find things it wasn't possible to find in the past. So, I could see them using our tool in that way. I guess it's up to the attorneys to determine how they're going to use the product. We're not building it because we're trying to change the law. We're building it because we're trying to help attorneys do the tasks that they want to do.
But I look at the sweep of the tech industry — not the legal industry, but the tech industry — over the past 15 to 20 years, and boy, have I heard that answer many, many times. The social media companies all said, "Well, you can use it for good or evil. We're neutral platforms." It turns out maybe they should have thought of some of those harms earlier.
Look at the AI companies today. Who knows if training on copyrighted work is allowed — actually, we know the answer. You can't just opt out of copyright law. Now, we're going to do the lawsuits and we'll see what happens. Who knows if OpenAI doing Sora, which is TikTok for deepfakes, is okay — actually, we know. We know the answer is that you should have some guardrails.
So, I'm posing you the same question. We see a particularly originalist judiciary hell-bent on using originalism to change precedent at alarming rates. I would say it's alarming for me because I paid for a law degree that I now think is useless, but that's just why it's alarming to me personally. It's alarming more broadly because a lot of people have had their rights taken away as well.
Every day this is happening. And one of the ways they're going to do that is to defer to an AI decision engine. They're going to say, "We asked the AI, 'What did "all people" mean when the 14th Amendment was drafted?'" and this will be how we get to a birthright citizenship case. I'm just connecting this to the conversation we had at the beginning. We're going to give our reasoning to a computer in a way that it's not necessarily accountable for, and we're going to trust the computer. The methods of thinking and that rigor might go away.
So I've heard the answer that the tool is neutral from tech companies for years, and I've seen the outcomes. I'm asking you: you're building a tech product for lawyers, and they're already using it in this specific way. I'm wondering if you've thought about the guardrails.
We operate under responsible AI principles, and that includes a number of things. One, we always try to consider the real-world implications of any product we develop. We want to make sure that there's transparency in terms of how our product works. We open up the black box so people can see the logic that we're using, and they can actually go in and change it if they want. So, we want to make sure that there's transparency and there's control.
We always incorporate human oversight into product development. Privacy and security is another one of our core tenets in responsible AI creation. Another thing we've incorporated is the prevention of bias introduction. So, those are the RELX principles for AI development, and we adhere to those. We want to create products that do good things for the world.
If you asked Lexis AI if the 14th Amendment guarantees birthright citizenship to all people born in the United States, would it make the argument that it doesn't?
I've never asked it that question. I can't tell you.
Do you have your phone on you? There's a mobile app.
I could pop up here and ask it, I suppose. Let me pop into Protégé here. "Does the 14th Amendment guarantee birthright citizenship, or are there exceptions?" Let's see.
It's generating a response, so we can come back to it in a minute.
I'm very curious to see what it says, because up until recently there's only been one answer to that question. Now, the Trump administration is saying, "Nope, actually, that's not what 'subject to the jurisdiction thereof' means." In order to win at the Supreme Court, they will have to construct an originalist argument to that question, and I am confident that the way they're going to do that is by feeding a bunch of data into an AI model and saying, "This is what was actually meant at the time of the 14th Amendment's drafting." That's a thing that AI will be used for that is very destructive.
I'm not an attorney, so I'm just going to read the answer here:
"The 14th Amendment of the United States Constitution guarantees birthright citizenship to all persons born or naturalized in the United States, and subject to its jurisdiction. The phrase 'subject to its jurisdiction' has been interpreted to include nearly all individuals born on US soil with a few narrow exceptions. These exceptions include foreign diplomats, children of foreign diplomats, children of enemy forces in hostile occupation, children born on foreign public ships, and, historically, children of members of Native American tribes who owed allegiance to their tribe rather than the United States."
It goes on.
You should send that to [Chief Justice] John Roberts right now. Can Protégé do that? Because that's the answer.
The question is, are a bunch of conservative influencers going to say Protégé is woke now? This is the culture war that you're in.
It does recognize that "recent cases have affirmed this interpretation rejecting attempts to expand the exceptions of birthright citizenship," so it does also recognize that there have been efforts to interpret it differently. The answer goes on quite a bit.
The reason I ask that question very specifically is because Reconstruction is up for grabs in a very real way in this case. Do you think you have a responsibility as the tool maker? That's really the question for so many AI companies. You're the tool maker. Do you have a responsibility to not deepfake real people? Do you have a responsibility to not show people fake ideas? I think you were very clear that you have a responsibility to not hallucinate, but here you have —
We don't want to introduce or perpetuate any bias that might exist either. And to do that, we rely on the law, as opposed to a consumer-grade model that probably just uses news articles, which might have a very different interpretation of things depending on the articles. Biases are much more likely to be introduced from news articles than from black-letter law, for example.
The reason I'm curious about that is because there's a spectrum. I don't think there's any place for telling people what they can do with Microsoft Word running locally on their laptop. Do what you've got to do. Telling people what they can do with a consumer-grade AI tool built into Facebook? I think Facebook has a lot of responsibility there, especially because the opportunity to distribute that content far and wide is at their fingertips.
That's a big spectrum, and here in the middle there are these AI companies. Do you have the obligation to say, "Well, if you want to go make the argument that birthright citizenship doesn't protect everyone in the United States, you've got to do that on your own. Our robot's not going to help you"? Do you feel any of that pressure?
We try not to get into politics or any of that debate.
I do not think that's politics.
We're trying to develop a system that does not have bias introduced into it, that will give you the facts, and attorneys can do the work that attorneys do to make those important decisions. Our job is to give them the information that they need: the precedents, the facts, all the information that they need to then develop their argument, whatever that might be. But we really don't get into any of the politics of birthright citizenship being guaranteed or not.
Well, at some point you do. This is — again, to bring us back to where we started — I first encountered LexisNexis as a database of cases and some case notes. There were some law professors who were very proud that their case notes were in LexisNexis when I was in law school. Now we're drafting a little bit, going to go do the research. Now we have an agentic AI that's making the arguments. Maybe one day we will automate all the way to filing. You're taking on more of the burden. You are making the arguments. The company is making the arguments. Where is the line? Because there are lots of lawyers who wouldn't take that case, who wouldn't make that argument. Is there a line for you?
I would say our approach is to arm the attorneys with the best possible information, and help them with the drafting of those documents. We're really just being led by our customers and what they're asking us to do. We certainly are not trying to interpret the law. We're not trying to shape the legal system. We're not lawyers. We're not trying to do the work of lawyers. We're trying to help lawyers do the work they do in a more efficient way and, hopefully, help them drive better outcomes.
But it's always their prerogative to interpret the information that we provide, which is what lawyers do. That's what they're great at. The reason we have cases is because there are people on both sides. The two individuals are going to make opposite arguments, and we want to support both of those attorneys as best we can.
I get it when you're the database of cases. I get it when you're the word processor. I get it when you're the specialized word processor or the case management platform. The thing that I'm pushing on repeatedly here is: if the AI system is actually doing the work, do you feel like you have different guardrails?
I think our responsibility is to develop AI in a responsible way.
Give me an example of something you wouldn't let your AI do — an argument that you wouldn't let your AI make, or a motion that you wouldn't let your AI draft.
I don't know that we would want to necessarily restrict the AI in that way. We're referring back to the information that we have, which is our authoritative collection of documents and materials that helps lawyers understand what the facts are, what the precedent is, and what the background is, so they can do the real, deep legal work and make those trade-off decisions, those judgment decisions — the important things that, again, attorneys went to law school to do.
I think these questions are going to come up over and over again. We should have you back to answer them as you learn more. As you look out over the horizon — the next two or three years — what's the next set of capabilities you see for LexisNexis, and what pressures do you think might change how you make some of those decisions?
It's hard to say exactly what the main thing that changes the path going forward will be, because if I look back two years, I would've never guessed we'd be doing what we're doing today; the technology didn't exist, or it was too expensive to implement. That's totally changed over the last two years, and I think over the next two years, it's going to change again. So, it's really hard to say where we're going to go.
Our vision remains the same, which is that we want to help attorneys. We want to provide them with a personalized, AI-powered product that understands their practice area and their jurisdiction. It has access to our authoritative set of materials and their prior work product. It understands their preferences, it understands their style, it understands what they're trying to do, and it can automate tasks that they do today manually.
We will continue to take that latest available technology, show it to our customers, and have them help us understand how we can use that technology to serve them in more modern and relevant ways. Thatâs really whatâs going to guide our roadmap in the future.
Sean, this was great. Let me know when you develop a system that can actually navigate an electronic case filing website, because some of the smartest people I know can't do that. But this was great. We've got to have you back soon. Thank you so much.
Thank you so much. I really enjoyed our time today. Take care.
Questions or comments about this episode? Hit us up at decoder@theverge.com. We really do read every email!

