Will AI Replace Lawyers? A Personal Injury Lawyer’s Perspective
4/17/2025 | Written by Elliot Bourne

The bottom line: AI is a powerful tool for lawyers, not a replacement for human expertise and judgment.
Will AI replace lawyers?
It’s a question on many minds as artificial intelligence (AI) makes rapid advances. As a personal injury lawyer, I’ve been asked this question a lot.
My conclusion: AI tools like large language models (LLMs) can significantly enhance a lawyer’s work, but they will not replace lawyers – any more than a wrench could replace a plumber. In fact, lawyers who understand and use these AI tools properly will be more efficient and gain an advantage over those who don’t. But without expert oversight, AI’s mistakes can be costly, especially in a technical field like law. The technology, while very useful, also has a number of flaws and limitations.
Most LLMs have a knowledge cutoff date:
For example, at the time of this post, GPT-4o only knows information up to about June 2024. It won’t be aware of laws or cases decided after that point. No AI model automatically stays current with legal developments unless it is explicitly connected to a legal database; statutes amended and opinions issued after the cutoff simply do not exist as far as the model is concerned.
No true understanding:
An LLM like ChatGPT doesn’t actually reason like a lawyer – it’s basically a sophisticated predictor of text. It generates sentences that sound plausible based on patterns in its training data. The system is not capable of making informed judgments, only of generating natural-sounding sentences on a topic. It doesn’t truly grasp legal principles or check statutes for compliance. In fact, accuracy isn’t guaranteed at all – if the AI doesn’t have a clear rule in its data, it may just guess or fill in the gaps with something that sounds authoritative. The AI is essentially a black box, and we do not know how it reaches its answers.
AI can hallucinate:
LLMs like ChatGPT will often confidently tell you false information. The technical term for this is AI “hallucination.” An ironic trait of AI models is that they often sound most confident precisely when they are wrong. The AI will present information in a fluent, matter-of-fact tone with no hint of uncertainty, even if it’s fabricated. For example, there have been notorious incidents in the legal world where lawyers using ChatGPT got burned because the AI made up fake case citations and presented them very convincingly. (In one widely reported case, an AI provided fictitious court decisions to an attorney, who, not realizing they were fake, included them in a brief – leading to embarrassment and sanctions when the court discovered the truth.) The lesson is that AI doesn’t warn you when it’s unsure or when it’s outputting nonsense. It will cheerfully state incorrect info as if it were gospel.
LLMs forget information the longer you chat with them:
The limit of an LLM’s working memory is called its “context window” – it can only retain a certain amount of text at one time. You might have noticed that if you chat with an LLM for a long time, it seems to get amnesia. This is due to token limits – a kind of short-term memory restriction on how much text the AI can “see” at once. If you’re working through a detailed legal issue over many messages, the model may no longer “remember” the foundational facts or context you provided early on. This isn’t just a technical footnote – in legal work, where continuity and nuance matter, this kind of drift can lead to subtle but serious mistakes. In a strange way, a long discussion with an LLM can feel like talking to a PhD graduate who only has a five-minute memory.
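For the technically curious, here is a toy sketch of the mechanics in Python. Everything in it is a simplification for illustration: the names and numbers are made up, and one word stands in for one token, whereas real chat systems use actual tokenizers and far larger limits. The effect is the same, though – once the conversation exceeds the window, the oldest material is dropped or compressed and the model simply no longer sees it.

```python
# Simplified illustration of why long chats seem to "forget" earlier facts.
# Real systems count tokens with a model-specific tokenizer; here one word
# stands in for one token purely to show the truncation logic.

CONTEXT_LIMIT = 30  # pretend the model can only "see" 30 tokens at a time

def count_tokens(text: str) -> int:
    """Rough stand-in for a real tokenizer: one word ~= one token."""
    return len(text.split())

def build_prompt(chat_history: list[str], new_message: str) -> list[str]:
    """Keep the newest messages that fit the limit; older ones silently fall out."""
    messages = chat_history + [new_message]
    kept: list[str] = []
    budget = CONTEXT_LIMIT
    for message in reversed(messages):      # walk from newest to oldest
        cost = count_tokens(message)
        if cost > budget:
            break                           # this message and everything older is dropped
        kept.insert(0, message)
        budget -= cost
    return kept

history = [
    "Client was rear-ended on I-85 on March 3.",             # key fact, stated early
    "The at-fault driver's insurer is Acme Mutual.",
    "Medical bills so far total $42,300.",
    "Client also missed six weeks of work as a contractor.",
]
prompt = build_prompt(history, "Draft a settlement demand summarizing the facts above.")
print(prompt)  # the crash date and insurer no longer fit the window and are gone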
A Warning to Non-Lawyers: AI Is Not Your Attorney
As an experiment, I asked ChatGPT (GPT-4o) to write a “time-limited demand” for a hypothetical client who has a car accident claim. Georgia has a specific law, OCGA § 9-11-67.1, that governs pre-suit settlement offers in motor vehicle injury cases (i.e., before a lawsuit is answered in court). This statute was enacted in 2013 and later amended in 2024. Under this law, a time-limited demand letter must include several material terms and conditions. If it doesn’t, the offer may not count as a valid “Holt demand,” and the insurer won’t face bad faith penalties for rejecting or ignoring it.
ChatGPT wrote the following condition in its demand letter: “This demand must be accepted within 30 days of your receipt of this letter.” To be clear, that is a mistake. This is how personal injury lawyers used to write demand letters before the law was amended in 2024. The current statute requires the demand to state a specific acceptance date (you can no longer simply say “accept within 30 days of receipt”).
ChatGPT likely wrote it in an outdated way because its training data contains a ton of older demand letters. Imagine a non-lawyer accident victim who, instead of hiring an attorney, asks an AI chatbot to write a demand letter or give legal advice. The AI might output a very convincing answer or document – convincing, but wrong. The person could send that demand letter to the insurance company thinking they’ve done everything correctly, when in reality they haven’t. The insurer could quietly disregard the letter, and the person would lose the leverage they might have had with a proper demand. Worse, the person might not realize the mistake until it’s too late. By the time they figure it out (perhaps after a claim is denied or lowballed), valuable time could have been lost, or legal rights might have been compromised.
The Indispensable Role of Human Lawyers
Legal strategies in personal injury cases often involve strategic judgment calls — for instance, deciding what terms to put in a demand letter, when to be flexible, or how to respond to an insurer. These decisions require experience, knowledge of local legal culture, and sometimes a creative touch. An AI doesn’t truly understand consequences or nuance.
If a law changes or a new court decision comes out, a human attorney can learn of it and adjust immediately. We stay current through continuing education, legal news, and networks. An AI won’t incorporate new changes until it’s updated (which might be infrequent). And even then, there is no guarantee that the AI is not hallucinating.
In technical fields like law, AI is not a free-standing solution. It’s a tool that must be wielded by someone who knows what they’re doing. Just as a fancy wrench won’t fix a leaky pipe without a plumber’s skill, an AI won’t resolve legal issues without a lawyer’s guidance. The lawyer’s role is to ensure that all the i’s are dotted and t’s are crossed – something an AI, however helpful, cannot guarantee on its own. Of course, “using AI” still means carefully supervising AI. The final work product that goes out the door must be reviewed by the attorney.
AI as an Aid, Not a Replacement
AI can greatly assist lawyers by automating parts of writing, research, and brainstorming. A lawyer armed with a powerful AI tool can operate more efficiently (just as a plumber uses power tools to work faster than with bare hands). In the near future, we can expect most successful law practices to incorporate some form of AI assistance to serve clients better and faster.
However, AI will not replace the core role of lawyers. There’s an old saying: “One machine can do the work of fifty ordinary men. No machine can do the work of one extraordinary man.” In law, the “extraordinary” skill is the ability to interpret nuance, exercise judgment, and be accountable for the advice given. In the end, trusting an AI alone with your legal matter is like giving a wrench the responsibility to fix your plumbing. The wrench is a fantastic tool in skilled hands, but it won’t do the job by itself. It takes a human lawyer’s perspective and expertise to use that tool correctly, avoid its pitfalls, and ensure that justice is served and clients’ rights are fully protected.
AI isn’t coming for lawyers’ jobs – it’s coming for their busywork. The lawyers who know how to harness AI (and avoid its traps) will have more time to focus on what truly matters: advocating, advising, and applying legal judgment. So if you’re wondering whether to choose an AI or a lawyer for your legal needs, the answer is clear: choose a lawyer who uses AI wisely. You’ll get efficiency and expertise – and that can make all the difference.
Where Is AI Going in the Future?
Given that hallucinations are a known issue, researchers and developers have been actively working on ways to reduce their occurrence. A variety of strategies have shown promise in cutting down how often models stray from the facts. However – and this is key – none of these strategies can completely eliminate hallucinations, because they don’t fundamentally change the nature of how the LLM operates.
One powerful approach is to connect the LLM to external knowledge sources. Instead of relying solely on its internal memory, the model is given access to a database or search engine, retrieves relevant information in real time, and then bases its answer on that information. This technique is called retrieval-augmented generation (RAG); without getting too technical, RAG is just a way to feed reliable, up-to-date context to an LLM to reduce the rate of hallucination. That matters because hallucinated output is especially dangerous in law, where fictitious cases or statutes could mislead attorneys. In the past year, the major legal research platforms LexisNexis and Thomson Reuters have rolled out generative AI assistants (Lexis+ AI and Westlaw’s AI-Assisted Research) built on RAG, with bold promises of reliable, “hallucination-free” results.
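To make RAG a little more concrete before looking at how those promises held up, here is a minimal sketch of the idea in Python. Everything in it is hypothetical: the “library” is a two-entry stand-in for a real legal research database, its snippets are loose paraphrases rather than actual statutory language, and the final call to the language model is left out. The point is only to show the shape of the technique: retrieve first, then ask the model to answer from what was retrieved.

```python
# A toy sketch of retrieval-augmented generation (RAG): before asking the
# model anything, look up relevant passages and paste them into the prompt,
# so the answer is grounded in retrieved text rather than memory alone.
# The "library," its snippets, and the keyword scoring are illustrative
# stand-ins for a real legal research database.

library = {
    "Pre-suit demand statute (paraphrased)": (
        "A time-limited demand in a motor vehicle case must state a specific "
        "date by which the offer must be accepted."
    ),
    "Bad-faith exposure note (paraphrased)": (
        "An insurer that unreasonably rejects a valid time-limited demand may "
        "face liability beyond the policy limits."
    ),
}

def retrieve(question: str, top_k: int = 2) -> list[str]:
    """Score each source by how many words it shares with the question; keep the best."""
    q_words = set(question.lower().split())
    scored = sorted(
        library.items(),
        key=lambda item: len(q_words & set(item[1].lower().split())),
        reverse=True,
    )
    return [f"{title}: {text}" for title, text in scored[:top_k]]

def build_prompt(question: str) -> str:
    """Instruct the model to answer only from the retrieved sources."""
    sources = "\n".join(retrieve(question))
    return (
        "Answer using ONLY the sources below. If they do not answer the "
        "question, say so.\n\n"
        f"Sources:\n{sources}\n\nQuestion: {question}"
    )

print(build_prompt("What must a time-limited demand include to be valid?"))
# A real system would now send this prompt to the LLM. The model can still
# misread or misquote the sources, which is why RAG reduces hallucination
# but cannot eliminate it.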
A Stanford University study in 2024 put Lexis+ AI and Westlaw’s AI-Assisted Research through their paces to see whether RAG had truly solved the hallucination problem as claimed. It had not: the researchers found that, while these tools hallucinated less often than general-purpose chatbots, they still returned incorrect or unsupported answers in a meaningful share of queries.
So RAG is not a silver bullet. First, it depends on the retrieval being successful – the system might fail to find the right info (or any info) for a query, especially if the query is vague or the knowledge isn’t easily searchable. In those cases, the model may fall back to its old behavior (and hallucinate). Second, the model could misinterpret or misapply the retrieved information. Just because it has a source doesn’t guarantee it will use it correctly; it might blend retrieved text with its own generated text incorrectly, or cite a source for a claim that the source doesn’t actually support. Third, if the retrieval source itself is unreliable or contains errors, the model can end up parroting those (garbage in, garbage out).
This doesn’t mean LLMs aren’t useful – far from it. It does mean that both developers and users need to be aware and vigilant about this limitation. In applications where factual accuracy is critical (like medical advice, legal documents, scientific research, news), relying on an LLM alone is risky. These systems shine as assistants – brainstorming, summarizing, rephrasing, drafting – but require oversight when it comes to factual assertions. As a lawyer, it may be useful to think of an LLM as a new associate, fresh out of law school, who is overconfident and needs supervision.