ChatGPT and Legal Marketing – Where do We go From Here?

Written by Allen Watson: Founder & CEO of Blue Seven Content (Updated Jan 2023)

ChatGPT and legal marketing – AI is about to completely upend the legal marketing field.

Okay, not really. But that’s what a bunch of people are about to tell you. Perhaps you’ve already heard that your law firm practice area pages and blog posts no longer need to be written by a human. Maybe someone has raved about how much money you’ll be able to save by not having to pay for content anymore. Since November, all people can talk about is ChatGPT.

Let me be clear – ChatGPT is far more advanced than any other AI that’s come out, at least publicly. In fact, it can create content that’s better than some of the drivel I’ve seen on law firm websites. But I don’t think it’s a legal marketing killer, and I think law firms and legal marketing agencies need to do their research before declaring victory over human writers. 

  1. What is ChatGPT?
  2. Responses to ChatGPT
  3. How Could ChatGPT Disrupt Legal Marketing?
  4. What I Found When Using ChatGPT (Legal Content Writer Explorations)
  5. The Issues With ChatGPT for Legal Content Writing
    1. Plagiarism is a problem
    2. Incorrect information
    3. It cannot reliably cite sources
    4. Very surface-level content
    5. No current information to pull from
    6. Where does new information come from if everyone stops posting new content?
    7. Possible legal or legislative issues
    8. None of the ChatGPT legal marketing issues are insurmountable
  6. Microsoft and Google – The Battle Brewing
  7. Embrace Technological Advances Instead of Dismissing Them

What is ChatGPT?

If you’ve been anywhere on social media recently, you’ve seen people raving (or ranting) about ChatGPT. 

But what the hell is it?

ChatGPT was created by OpenAI, which is a research lab focused on advancing artificial intelligence technologies. The organization was founded in 2015 by various individuals, including Elon Musk. However, Musk resigned from the board of OpenAI in 2018.

ChatGPT was released in beta to the public on November 30, 2022, and amassed more than a million users less than a week after its launch. It is built on GPT-3.5, a large language model created by OpenAI that was trained on a massive amount of text data from various sources.

ChatGPT is revolutionary, but we're not sure it can handle good legal content writing.
You need to understand ChatGPT and how it affects your field.

The current way to use ChatGPT is sort of like a chatbot, where a user will input a question or prompt into a search bar and watch as ChatGPT responds with what it believes to be the appropriate information for the prompt or question. Perhaps the best part of ChatGPT is that you can get it to respond in pretty much any form you want. You can have it craft a five-paragraph essay, or you can command it to give the answer or response as a poem.

Want to dig further? Tell ChatGPT to craft a response to a question or prompt in iambic pentameter or in the speaking style of William Shatner. It can do it.

I asked it to write me a love story between Luke Skywalker and Yoda. It did it, and it convinced me that was the true story behind the whole saga.

This AI system responds really well to the prompts you input. You can get very specific and creative. I strongly suggest you go try it out. It’s honestly great for entertainment, and you’ll also see the potential for this tech to disrupt everything.

Responses to ChatGPT

To say the response to ChatGPT has been resounding and immediate is an understatement. Educators have proclaimed that the essay is dead because there will be no way to know what’s student-written and what’s generated by ChatGPT. Teachers say there is no way they’ll be able to assign take-home tests.

Some have questioned whether ChatGPT will make lawyers obsolete, as it may be able to create arguments and draft legal documents. Imagine a courtroom where all you do is wait for AI to tell you the outcome of the case because it’s already read every possible law and court case.

How Could ChatGPT Disrupt Legal Marketing?

The Washington Post has said that Google (and other search engines) face a major threat because of ChatGPT. The argument is that ChatGPT could spell disaster for Google by providing better answers to the queries we typically ask Google.

Google crawls and indexes billions of web pages. It then ranks this content in order of the most relevant answers (most of the time). When you perform a search, you get a list of links to click through, typically beginning with ads related to your search and then moving on to the organic links related to your search. This, my friends, is where SEO wizards have made their bones.

When individuals type a question into ChatGPT, they are presented with a single answer based on the AI’s search and synthesis of information already online. The idea is that instead of clicking through the most relevant links to find the information you need, ChatGPT handles the hard part for you and gives you THE answer. The definitive answer.

Of course, there have been significant discussions about what comes next for the internet. Web 3.0 is typically seen as the next phase, even though there is little consensus about what this means or what it looks like. We’ve discussed the metaverse as being the key component in a Web 3.0 world, and ChatGPT and other AI technologies could aid that shift.

Legal marketing SEO agencies make a living off of helping law firms rank toward the top of search engines for specific queries. The industry, quite frankly, isn’t ready to handle a world where SEO isn’t a thing. 

What I Found When Using ChatGPT (Legal Content Writer Explorations)

All I can do is approach ChatGPT from the angle of a content writer who understands and uses SEO but focuses on providing the content readers need and want to see.

I’ve been creating legal marketing content for years. I’ve written thousands of law firm practice area pages and blog posts, and I’ve supervised writers who have written tens of thousands. So, it was only natural for me to begin by prompting ChatGPT with topics that frequently crop up when crafting a page.

I asked, “What types of compensation are available for a car accident in California?” and it gave me a solid answer, one that you’d typically see on a law firm’s website.

I asked, “Is there a cap on damages available for a successful personal injury claim in Michigan?” and ChatGPT gave me a convincing answer.

I asked, “What are the most common injuries caused by a moped accident?” and the AI provided an indisputable list of injuries.

Finally, I asked, “What are the four elements of negligence for a personal injury claim?” and the AI gave me exactly what you’d expect to see on a law firm’s website.

Each one of these responses came back with data organized in a way that we would typically see on a law firm web page. There was a brief explanation, a bullet list or a numbered list of some sort, and often a little conclusion to wrap it up. I could certainly envision a legal content writer crafting a law firm practice area page or blog post, inputting their H2s into the ChatGPT prompt, and then copying and pasting the answers onto their page.

After these basic queries, which would essentially be sections of a longer page for a law firm, I decided to get more specific with the requests. I asked ChatGPT to write a 500-word law firm practice area page targeting those who need a Chicago car accident attorney.

You know what?

The page wasn’t bad. It was surface-level, but it certainly provided enough information to maybe convince someone that they’d need an attorney if they’ve been injured in a crash.

But it was certainly not the type of page that I would create. I do see the value of using ChatGPT and other types of AI tools for coming up with ideas for a page. This is a tool, not a replacement. At least not yet.

Blue Seven Content founder Allen Watson discusses ChatGPT with Conrad Saam and John Reed.

The Issues With ChatGPT for Legal Content Writing

Just because I said the responses given by ChatGPT were convincing and organized does not mean they were without issues. In fact, none of the responses to my prompts would pass muster at Blue Seven Content, and they certainly wouldn’t fly on a law firm’s website.

Plagiarism is a problem

The most glaring issue that cropped up was plagiarism. This is the biggest sin when it comes to writing website content, no matter the industry. If a law firm content writer plagiarizes content, whether from their own prior work or from other sources, it is going to hurt the web page. Google’s algorithms know how to spot copied content, and they can penalize a page or even an entire website for it.

  • The prompt on car accident compensation in California came back as 33% plagiarized.
  • The query about moped injuries came back as 23% plagiarized.
  • My question about the four elements of negligence came back 19% plagiarized.
  • A prompt asking how burn injuries are classified was returned as 17% plagiarized.

Not once did I ask it a “typical” legal question and get a response that was less than 15% plagiarized. This challenge is not insurmountable if you have the ability to detect plagiarism and have a competent editor (even then, all you’re doing is wordplay without originality). Right now, ChatGPT is not capable of original thought. It has to provide answers using information already available.

Also, remember that 500-word practice area page I told ChatGPT to write? Well, it came back 34% plagiarized. The sources it drew from ranged from other law firm websites to the Daily Mail. If you’re a veteran legal content writer, you already know to avoid citing competing law firms and sources that lack credibility.

Jan 2023 Update – I wanted to know how ChatGPT has evolved, if at all, since its release. I asked it to craft law firm pages from fairly simple prompts, and I received answers that were less than 10% plagiarized, which was fairly impressive. However, when I then asked the AI to write a page that required a slightly more technical response – still fairly basic for a law firm website – the result was more than 20% plagiarized.

Bottom line so far – ChatGPT simply cannot help but provide plagiarized answers for anything more than a VERY basic prompt.
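The percentages above came from a plagiarism checker, but even a crude, hand-rolled similarity score illustrates the idea behind catching recycled text. Here is a minimal sketch in Python using the standard library’s difflib; this is a toy heuristic (real plagiarism detectors search indexed corpora, not single passages), and the sample sentences are invented for illustration:

```python
from difflib import SequenceMatcher

def overlap_ratio(generated: str, source: str) -> float:
    """Crude similarity score between two passages, from 0.0 to 1.0."""
    return SequenceMatcher(None, generated.lower(), source.lower()).ratio()

# Invented example: AI-generated text vs. a passage already on the web.
generated = "The four elements of negligence are duty, breach, causation, and damages."
source = "The four elements of negligence are duty, breach of duty, causation, and damages."

score = overlap_ratio(generated, source)
print(f"Overlap: {score:.0%}")  # high overlap would warrant an editor's attention
```

An editor could run a check like this against the sources a draft cites as a first-pass screen, but it’s no substitute for a dedicated tool or a human read.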

Incorrect information

Incorrect information is the last thing a law firm needs on its website. One of the biggest problems with ChatGPT is the lack of sourcing, and the fact that you have to know the material cold in order to catch incorrect responses.

I asked ChatGPT, “Is there a cap on damages available for a successful personal injury claim in Michigan?”

If you know anything about these caps, then you know they typically apply to non-economic damages for medical malpractice claims, which is the case in Michigan. However, ChatGPT responded that there was a cap for ALL non-economic damages in Michigan.

ChatGPT presents incorrect information as if it’s fact and in a pretty convincing way. With this tech, you can’t see that there may be other answers the same way you can when you perform a Google search. Nor does it provide room for nuance of the law or the geographic area of the law you are searching. 

The AI tech behind ChatGPT isn’t at a level where it can detect incorrect information, or at least where it can analyze and synthesize information correctly. Somewhere, the AI read that Michigan had a non-economic damage cap, and it had no clue that the information was incorrect. Even Sam Altman, CEO of OpenAI, has publicly cautioned that ChatGPT is limited and that it’s a mistake to rely on it for anything important right now.

I asked ChatGPT, “What are the exceptions to California’s medical malpractice statute of limitations?” The response I got was lacking in substance. 

The AI response failed to properly explain the exception for minors who sustain injuries due to a medical error. It didn’t highlight that there is a difference depending on the age of the minor when the injury occurred. ChatGPT failed to mention the exceptions to California’s medical malpractice statute of limitations for foreign objects left behind in a person’s body after a procedure. 

These are just a few of the mistakes I found during a cursory review. I can only imagine the issues that would arise for slightly more complex queries. 

It cannot reliably cite sources

I initially thought ChatGPT wasn’t able to cite sources, but it can. When you write your prompt, you can tell the AI to use and cite reputable sources and it will do so. However, I caution anyone doing this, because we don’t currently know how ChatGPT decides what is “reputable.” Conrad Saam, my friend and president of Mockingbird Marketing, has said that the program has given him Wikipedia as a “reputable” source. While Wikipedia is generally accurate, there’s a snowball’s chance in hell I’ll be citing it on a law firm practice area page, FAQ page, or blog post.

We also don’t want to pull information from John Doe’s hobby blog. Don’t get me wrong, we’ll use those sources as a starting point, but we have to verify the information and cite using trusted sources. 

I’m still of the opinion that, no matter what citations ChatGPT provides, there needs to be a human fact-checker. This is particularly true for those of us who write content that demands a certain degree of accuracy. This, in my opinion, would be the most time-consuming part of preparing a page for publishing. If you are going to cite data or statistics, you need to be able to source the information through a hyperlink on the web page. Anyone relying on ChatGPT to craft legal content will need an editor to go back and (1) visit the source provided by the AI, (2) verify the information, and (3) hyperlink the external sources into the content.
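The verification step itself needs a human, but part of the triage can be automated. Below is a minimal sketch of a pre-publish citation check in Python; the allowlist of trusted domains, the draft snippet, and the function name are all hypothetical, invented for illustration:

```python
import re

# Hypothetical allowlist -- each firm would maintain its own trusted domains.
TRUSTED_DOMAINS = {"cdc.gov", "nhtsa.gov", "courts.ca.gov", "legislature.mi.gov"}

def flag_questionable_citations(draft_html: str) -> list[str]:
    """Return cited URLs whose domain is not on the trusted allowlist."""
    urls = re.findall(r'href="(https?://[^"]+)"', draft_html)
    flagged = []
    for url in urls:
        domain = url.split("/")[2].removeprefix("www.")
        if domain not in TRUSTED_DOMAINS:
            flagged.append(url)
    return flagged

draft = (
    '<a href="https://www.cdc.gov/injury">CDC</a> '
    '<a href="https://en.wikipedia.org/wiki/Negligence">Wikipedia</a>'
)
print(flag_questionable_citations(draft))  # flags the Wikipedia link for human review
```

A check like this only narrows the pile; an editor still has to open every source and confirm it actually supports the claim.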

All of this is beginning to sound like work writers already do when they create a new law firm website page from scratch, and it’s likely to take nearly as long. If not longer. Content writers often loathe having to go in and adjust or correct other people’s work. It’s typically easier to simply make a new page. 

Very surface-level content

The information returned through ChatGPT is fairly surface level, at least for the purposes of law firm website content. Even if we can get past the plagiarism issue with good editing, the pages ChatGPT provides are equivalent to what I’d expect from someone who has never written this type of content before. It’s fluffy and lacks nuanced research.

No current information to pull from

Right now, ChatGPT relies on information only up to a certain point in 2021. The AI does not use current data or any real-time information. This will be a problem if you want to use current data and statistics or any new laws on your law firm’s website. Additionally, if you need to craft a blog post about current changes or updates to your particular field of law, ChatGPT will have no way to do this.

Ramping up ChatGPT and other artificial intelligence programs to allow for real-time updates will be a massive undertaking. This requires enormous computing power, something that will take some time to build. 

I recently read “The Metaverse: And How It Will Revolutionize Everything” by Matthew Ball, and one possible solution to this problem could be on our desks and in our pockets – our devices. Almost everyone has a computing device (or four or five of them), and the reality is that they sit dormant much of the time.

If a larger system had the ability to tap into these devices for their computing power, this could allow for the systems needed to control a real-time AI program (as well as potentially power a metaverse immersive environment). It’s essentially crowd-sourcing computer power. 

This comes with a whole slew of privacy and legal questions that many of us are certainly not ready to think about, which highlights some of the issues that AI developers will have to overcome. 

Where does new information come from if everyone stops posting new content?

Maybe this just reflects the limits of my own understanding of ChatGPT’s capabilities and AI in general, but if this type of technology is used to create new content, where will the AI draw from and learn from in the future?

I envision a future where, if this type of artificial intelligence becomes common, we see AI copying other AI responses. Somewhere, AI systems need to intake new information from human sources in order to stay relevant. 

Possible legal or legislative issues

There will inevitably be legal issues that arise. The courts and lawmakers will step in to address these issues, but that could take a while. For example, will anyone face liability if ChatGPT or another AI gives incorrect information that then causes harm to others? Imagine a WebMD controlled by AI. Will people listen to the advice given by the AI, or will they find a way to verify what they’ve been told?

What if it’s determined that anything written with AI must be labeled as being “machine-generated,” much like the requirement on most platforms that certain posts have to be labeled as ads? Will your legal clients trust you if they see your website is created by AI?

None of the ChatGPT legal marketing issues are insurmountable

ChatGPT is currently in beta form, and we’re all the test subjects. The more prompts we put into the system, the more it will learn. Developers will continue to tweak the code to determine what works best, and the AI will learn as it goes.

The system will get better at understanding why incorrect information is, in fact, incorrect. It will learn that it needs to take existing information and craft it in a way that doesn’t plagiarize others. Coders can help the AI recognize what an authoritative source looks like, and they can show it how to use anchor text to hyperlink. Hell, the AI can probably teach itself how to do that.

Using AI for content writing - embrace the change, but be wary of the outcomes.
Will AI be a tool to use when crafting legal content, or is it going to take over?

Microsoft and Google – The Battle Brewing

Microsoft recently announced they were investing $10 billion into OpenAI, and there is strong speculation they’ll integrate ChatGPT into their Office tools. This is the third, but largest, round of investment the tech giant has made into the AI company. Microsoft has clearly seen the value of artificial intelligence, and they’re always working to reinvent the company and stay ahead of the curve.

Google is nervous. Even though the company likely has the tools necessary to compete with OpenAI, they’re behind, and being behind in this industry is dangerous. Google has called in the big dogs, founders Larry Page and Sergey Brin, to help guide them through this credible threat to the company’s main source of revenue (search engine results and ads).

Until we see how the battles between Google, Microsoft, and other major companies end, we’ll have to keep adjusting strategies. As a legal content writer or SEO company for legal marketing, this is something you’ll need to keep an eye on over the next few months and into 2024.

Embrace Technological Advances Instead of Dismissing Them

It may seem like I’m against AI. I’m not. In fact, I want to embrace it. ChatGPT and legal marketing aren’t avoidable.

There’s never been a time when rejecting new technologies has worked out for anyone in the long run. Horse and carriage operators vehemently hated the concept of a motorized vehicle, and many people doubted whether cars would actually become mainstream. For years, people doubted that computers could ever revolutionize the way individuals went about their daily lives. Even the benefits of the internet weren’t fully understood for quite a while. In fact, many scoffed at the idea of online shopping and “social media.”

Here we are, looking at what could represent another major shift in the way we approach “knowledge.” We have a choice – both as a society and as individuals. We can reject the technology and deny its ability to shape our lives, or we can embrace this type of AI and figure out how to make it work best for us.

No matter what choice we make, the end result will be the same. There is no putting the genie back in the bottle. ChatGPT is already far more advanced than any other type of AI chat we’ve seen, and it’s still in a rudimentary form. For those of us in the legal marketing sphere, the idea of ChatGPT can be terrifying if we don’t understand what it means for us.

Maybe ChatGPT or another AI program will eventually address the shortcomings I mentioned above. Why would any legal marketer want to be behind on the trend because they wanted to “protect” their industry? Protectionism only delays the inevitable. 

We don’t need protection from tech – we need to work with it. We have to embrace the inevitability of change. We can use this to be better.