Your AI Website Content And Copyright Laws – Don’t Assume Too Much, Too Soon

AI website content has serious implications for copyright law, both in terms of whether AI-generated material can receive copyright protection and whether the AI-generated content you use on your website violates existing copyrights.

The astronomical rise of artificial intelligence technologies over the last few years has not only disrupted many industries but has also threatened the very existence of some. There have been serious conversations in business and government about how best to regulate AI, but there’s no consensus on what that would even look like.

At Blue Seven, we’re deeply invested in the trajectory of AI laws. We are a company of trained, professional writer-researchers, but we are well aware of the impact of AI and large language models (LLMs) like ChatGPT on our industry. We (company founders Allen Watson and Victoria Lozano, Esq.) have kept a close eye on developments, particularly the ones that directly impact website content writing.

We’re at the one-year mark since the release of ChatGPT to the world, and it’s certainly been a year. This is a good time to review the still-evolving issues surrounding AI website content and copyright laws in the US.

The field of law surrounding AI is in its infancy, and copyright issues are at the forefront of discussion.

Before diving into the debate over whether or not AI-generated content violates copyright laws, we have to understand what can be copyrighted in the first place.

The US Constitution authorizes federal legislators to “secur[e] for limited Times to Authors . . . the exclusive Right to their…Writings.” As with every facet of federal law, a regulatory agency gets to interpret what the law (or Constitution) actually means (or what they think it means). Throughout history, regulatory agencies have been given significant leeway when it comes to these interpretations.

The Copyright Act was born out of the aforementioned language from the Constitution, and this Act allows for copyright protection to “original works of authorship.”

One of the main issues of note as we dive into this subject is the Act’s failure to define what it means to be an “author,” something that never needed much clarification throughout history. History, however, didn’t have to contend with artificial intelligence.

We have to define what an “author” is so we can examine how copyright laws apply to new AI technology.

As with all legislation, the Copyright Act’s trajectory is determined by legal precedent, of which there is plenty. The Copyright Office only recognizes a copyright for works “created by a human being.” We can look to various court cases to narrow down what the Act, through the Office, considers a “human” author.

  • Courts have denied protection to non-human authors, holding that a monkey cannot receive copyright protections for photos it took because it lacked standing to sue (non-humans cannot bring a legal action in court).
  • Courts have explicitly said that some human creativity is needed for a copyright when they decided on whether or not to issue a copyright to celestial beings (seriously).
  • Courts have denied a copyright for a living garden because a garden does not have a human author (this could probably be argued otherwise, but, alas, the Courts have spoken for now).

More recently, Dr. Stephen Thaler was denied an application to register a piece of AI artwork with the Copyright Office. The piece of art was authored “autonomously” by an AI technology called the Creativity Machine. Dr. Thaler argued, unsuccessfully, that the piece did not need “human authorship” as required through existing copyright laws. A federal district court disagreed, stating clearly that “human authorship is an essential part of a valid copyright claim.”

This decision will almost certainly be appealed.

Is There Any Hope For AI and Copyrights?

All hope is not lost for those who want to obtain copyrights for material that gets created using AI.

Works generated with AI programs could still receive copyright protections, but whether they do will depend entirely on the level and type of human involvement in the creative process. One major preemptive blow for those seeking copyrights in AI-generated content came in the form of a copyright cancellation proceeding and subsequent copyright registration guidance. Both indicate that the Copyright Office is unlikely to find human authorship in content an AI program generates from text prompts.

Before the release of ChatGPT, the major discussions around AI and copyright protections centered on artwork. In October 2022, the Copyright Office initiated cancellation proceedings against a copyright registration held by Kris Kashtanova.

Kashtanova had registered a copyright in 2022 for a graphic novel containing illustrations created by the AI tool Midjourney through text prompts. The Office said that Kashtanova failed to disclose that the images were made by AI. Kashtanova responded by arguing that the images were made through a “creative, iterative process,” but the Office disagreed. Guidance issued by the Office in March 2023 (four months after the release of ChatGPT) says that when AI “determines the expressive elements of its output, the generated material is not the product of human authorship.”

Counterarguments have certainly been made. Many believe that AI-created works should be eligible for copyright protection because AI has been used to create works that subsequently received copyright protection. Funnily enough, we can examine a case from 1884, Burrow-Giles Lithographic Co. v. Sarony, in which the Supreme Court held that photographs could receive copyright protections in situations where the creator makes decisions about the creative elements in the shot itself (lighting, arrangement, composition, etc.). The argument could be made (and has been) that new AI tools are basically the same when a human sets the parameters for the output.

Of course, it’s much more complicated and nuanced than that. In fact, that argument doesn’t hold much weight upon closer examination; the analogy between photography and new AI is a weak one. For example, in the Kashtanova case, the Copyright Office said that Midjourney (the technology used to create the graphic novel) is not a tool that Kashtanova controlled or guided to get the desired image because it “generates images in an unpredictable way.” Whereas a photographer claiming copyright protection can point to the distinct elements under their control, someone using generative AI will struggle to do the same.

The Copyright Office offered a counter-analogy: using AI to create a piece is similar to commissioning an artist. The person who commissions the artist can’t claim a copyright for the piece if they only give general directions for its completion. The Office, again in March 2023, determined that a user does not exercise the ultimate creative control over generative AI outputs required for a copyright.

Even though it seems like the Copyright Office is dead set against granting copyrights to AI-generated content, the issue certainly isn’t settled (are any laws ever settled?). The Office knows this and has left the door open to copyrights for works that contain AI-generated material, but, again, it’s complicated. A copyright likely wouldn’t be available for the entire work – only for the human-generated portion of the piece.

The Copyright Office only allows copyright protection for a person’s own contributions to works that combine AI- and human-generated content. It says that a creator must “identify and disclaim AI-generated parts of the work if they apply for a copyright.”

Having said all of that, it’s important to understand that regulatory agencies, including the Copyright Office, cannot implement regulations that are considered unconstitutional. How do regulations created by regulatory agencies pass Constitutional muster?

Why, the courts, of course. That’s for discussion later in this article.

If we work on the assumption that some AI-generated works will be eligible for copyright protection, exactly who would own the copyright?

  • Would it be the person who tells the AI technology what to do?
  • Would it be the company or entity that created or leases the AI technology?

We could even go so far as to ask whether investors in AI technology could ultimately hold copyrights for works created by the AI. For example, Microsoft is OpenAI’s largest financial backer, and it even hired OpenAI’s CEO, Sam Altman, to run its new AI division the day after he was fired by OpenAI’s board of directors. That lasted only a day, until Altman was rehired as CEO of OpenAI after nearly every OpenAI employee threatened to leave for Microsoft with him.

Alas, this is an interesting story of internal politics for another time, but it does illustrate just how intertwined AI technology is with investors and other major companies, perhaps ones we don’t want obtaining all of the copyrights available.

The issue of “who” is liable for copyright violations arising due to generative AI has yet to be decided.

Chapter Two of the Copyright Act says that ownership initially falls to the author(s) of the work in question. Since we don’t have much judicial or regulatory direction about AI-created works yet, there’s not a clear rule about who an “author or authors” are for a piece of work (here we are again, debating authorship).

We would consider a photographer the author of their photographs, not the maker of the camera the photographer used. Drawing an analogy, it would seem this opens the door to copyrights for people who input the parameters for a piece of work into AI technology, not to the creators of said technology.

This particular view would treat the person who inputs the parameters for the work as the author and initial copyright owner of the piece. However, the argument loses weight if we consider the AI creator’s claim to some form of authorship, given the coding involved and the training the AI has undergone to help it create the piece.

Companies (for-profit and non-profit) could try to claim authorship and, therefore, copyright protections for a piece, and they could do so via user agreements (the fine print we all ignore). If you don’t think this could happen, think again. OpenAI, the creator of ChatGPT, previously did not appear to give users any copyright protections for output based on their inputs. However, a later iteration of OpenAI’s Terms of Use says that “OpenAI hereby assigns to you all its right, title and interest in and to Output.”

Regardless of whether OpenAI says users do or do not have the rights to their work, the Courts will be the ultimate decision-makers on the questions of copyrights.

Perhaps of more concern for our main industry at Blue Seven (law firm content marketing) is the question of copyright infringement by generative AI, especially LLMs like ChatGPT. If you’ve been in tune with legal marketing this year, you’ll know there’s an entire industry that’s sprung into existence offering rapid, scalable content for law firms (and every other industry).

Many websites quickly pivoted. Why, CEOs and CFOs reasoned, should they pay human writers to do something that can now be done for free? That’s a topic for another conversation, but suffice it to say, the quality of AI-generated legal content has been less than stellar.

The issue here is that many people jumped to AI and are still jumping ship before knowing whether or not the content produced by AI would violate copyright laws. The debate over whether or not generative AI content infringes on copyrights is raging in public and in the courts. While we understand that website owners with a non-legal background wouldn’t necessarily know much about the potential copyright issues, we do fully expect law firms and legal marketers to anticipate these issues and proceed with caution.

Some have proceeded with caution. Others have bounded forward like an F5 tornado through a barn.

Do generative AI programs infringe on copyrights by making copies of existing content to train their LLMs or by creating outputs that closely resemble existing content? That’s the question we don’t have an answer to right now. But we can look at where the winds could take us.

Does the AI Training Process Infringe on the Copyrights of Other Works?

This is a question that, though it needs answering, may not affect a website owner but could certainly affect AI companies pioneering new technologies.

Every complex artificial intelligence model uses specific coding that directs it (the model) to learn. But how does it learn?

AI models like ChatGPT, an LLM, are revolutionary – great for what they are, at least. ChatGPT works as well as it does because OpenAI has trained the LLM on, well, everything. OpenAI has released many GPT models over the years, but GPT-3.5 powered the version of ChatGPT that enthralled the world. GPT-4 was released soon after, and it now has access to the web, whereas the previous version’s knowledge base ended in 2021.

OpenAI has never denied using the works of others to train its LLM. They’ve explicitly said their model learns from many sources, including copyrighted content. OpenAI says it created copies of works it has access to in order to use them (the copies) to train their models.

Is the act of creating these copies to use an infringement of the copyright holders’ rights? The answer to that depends on who you ask.

AI companies argue that the process of training their models constitutes fair use and, thus, does not infringe on others’ copyrights. We’ve introduced the term fair use, which is defined in 17 U.S.C. § 107 through four factors:

  1. The purpose and character of the content’s use, including whether the use is for commercial purposes or non-profit, educational purposes;
  2. The nature of the copyrighted material;
  3. The amount and substantiality of the portion used in relation to the copyrighted work as a whole;
  4. The effect of the use of the copyrighted material on the potential market for or the value of the work.

Did you know that OpenAI is a non-profit organization? Well, kind of. It created a for-profit entity that is still majority controlled by the larger non-profit. It’s complicated, and probably deliberately so, allowing OpenAI to argue that it isn’t using the information for commercial purposes and that its use of the content qualifies as fair use.

But it seems the direction the conversation is heading, at least politically and in the courts, can be seen in how the US Patent and Trademark Office describes the AI training process: it “will almost by definition involve the reproduction of entire works or substantial portions thereof.” Given the way government regulators are framing this conversation, we can see they are erring on the side of copyright holders.

Do AI Outputs Infringe on the Copyrights of Other Works?

When we examine the fourth point in determining fair use mentioned above, we see where specific issues for website content come in. The major concern, and one we should all be aware of if we operate a business website, is that AI models allow for the production of content that’s similar to other works and competes with them.

There have indeed been multiple lawsuits filed by well-known individuals in the entertainment industry against AI companies and entities. These lawsuits dispute any claims of fair use by the AI companies, arguing that the products of these models can undermine the market (and value) of the original works.

In September 2023, a district court ruled that a jury trial would be necessary to determine whether an AI company copying case summaries from Westlaw constitutes fair use. Westlaw is a legal research platform, so this case will directly affect a company in our particular realm. The court already conceded that the AI company’s use of the content was “undoubtedly commercial.” The jury would be needed, however, to handle four factors:

  • Resolve factual disputes about whether the use was transformative (as opposed to commercial);
  • Determine to what extent the nature of Westlaw’s work favored fair use;
  • Determine whether the AI company copied more content than they needed from Westlaw to train their models;
  • And determine whether the AI program could constitute a market substitute for the plaintiff.

The output of AI that resembles existing works could constitute an infringement of copyright. If we look at case law, a copyright owner could make a case that AI outputs infringe on their copyrights if the AI model (1) had access to their content and (2) created “substantially similar” outputs.

Showing element one here won’t be the issue as these cases go through the court system. These companies have been fairly open about how they’ve trained their models. It’s element two, showing that the outputs are “substantially similar,” that presents the biggest legal hurdle.

Defining “substantially similar” is tough, and the definition varies across US court systems. In general, courts have approached the question by examining the overall concept and feel, or the overall look and feel, of the piece. Courts have also examined whether an ordinary person would “fail to differentiate between the two works” (a comparison between the original and the AI-generated output trained on it).

Other cases have examined both the “qualitative and quantitative significance” of the copied content compared to the content as a whole. It’s likely that the courts will have to make comparisons like this in court so that a judge and/or jury can make a determination.

Two types of AI outputs raise concern. The first involves AI programs creating works featuring fictional characters. Imagine Luke Skywalker showing up in a new book about Marco Polo’s adventures. AI could certainly do this, and it would be relatively easy to see it as copyright infringement.

The second area of concern focuses on prompts requesting that the AI output mimic the style of another author or artist. For example, you can attempt to have ChatGPT craft a criminal defense practice area page in the voice and style of Stephen King. While this would certainly make for an entertaining read, publishing it could constitute infringement, but that is admittedly a gray area right now.

AI companies are preemptively blaming their models’ users for any potential copyright infringement that occurs as a result of their given outputs. As the Copyright Office weighs new regulations for generative AI, they recently published a request for public comments on the new potential regulations (a standard procedure for all regulatory bodies weighing changes to existing regulations).

The public comments and replies thus far give us an understanding of how the AI companies are going to battle this in court. Notably and predictably, Microsoft, OpenAI, and Google all have something to say about this issue.

Microsoft (again, OpenAI’s largest backer with a 49% stake in the company) says that “users must take responsibility for using the tools responsibly and as designed.” The company says that AI developers have taken steps to mitigate the risk of AI tool misuse and copyright infringement.

Google also lays the blame on users of the technology, acknowledging that generative AI can replicate content from its training data but arguing that this occurs through “prompt engineering.” Google’s public comments go on to say that the user who produces infringing output should be the party held responsible, not the company behind the technology.

OpenAI flatly says that infringement related to outputs from the technology “starts with the user.” It argues that there would be no possible infringement without the user’s inputs (never mind the fact that OpenAI has copied nearly every piece of information available, much of it copyright protected, to train its programs).

The Lawyers Tackling Complex AI Litigation

The beauty and the beast of the dawning age of AI are the legal nuances that have yet to be fleshed out. As we have already reviewed, the concepts of authorship and copyright are coming under increasing scrutiny. However, there is also the question of how AI is trained to “learn” to better answer questions or provide more specific (not necessarily accurate) information.

At an elementary level, AI learns by gathering data – also known as other people’s work – into an algorithm. The machine learning system can then distribute that information descriptively, predictively, or prescriptively. But whose work is being used, where is the work going, and how is it being referenced?

We all remember the days of citing sources. Well, AI is far from doing that. Instead, these models benefit and learn from people who have worked their entire lives to create beautiful, funny, and charismatic (all the adjectives) words and images, which are now filtered through an algorithm without any mention of where the information came from.

From authors to comedians to legal institutes, creators are filing suits against various AI companies on the legal theory that their work is being used to train AI without proper recognition or compensation. AI is exploiting the work of artists and scholars. Lawyers like Matthew Butterick and Joseph Saveri are standing up against prominent AI companies like OpenAI and Meta. They are shaping the future of AI by standing up for the humans whose work AI needs in order to learn and adapt.

Like anything in the legal world, all of this will take time. When filing a lawsuit, you must present a legal complaint that outlines, in short, plain language, the facts, their application to the elements of each claim, and the relief sought. Currently, AI companies are being sued for negligence, copyright infringement, unlawful competition, unjust enrichment, and DMCA violations. In fact, a new lawsuit was just filed by Julian Sancton, alleging that he and thousands of other writers did not consent to, nor were compensated for, the use of their intellectual property in the training of the AI.

As recognized in a Harvard law journal, even though the lawsuits are piling up, there is no clear end in sight. The courts are going to have a hard time distinguishing between AI-generated material, authorship, what counts as public sourcing, and copyright infringement, among other legal theories such as templates versus AI-generated outlines.

Eventually, once these lawsuits work their way through trial courts and appeals, some could land at the feet of the Supreme Court via a petition for a writ of certiorari. If (when) that happens, the Supreme Court can deny the petition, in which case the opinion delivered by the highest court below will stand.

Or, the Supreme Court will become the final arbiter.

How Will the Government Regulate AI?

The idea of the US government, specifically our elected officials, crafting regulations for artificial intelligence is almost laughable. This new technology is far more advanced than “internet things” that came before it, and we’re still living with Internet laws from the 1990s. Alas, the efforts are being made.

Time magazine has reported that Congressional inquiries into AI are still in their infancy as legislators proceed with caution. Senate Majority Leader Chuck Schumer held a closed-door meeting with the nation’s main tech leaders in September. The goal, according to legislators, is to pass bipartisan AI legislation sometime in 2024, but with the technology evolving so rapidly, this seems like a tall order.

However, Congressional urgency on this subject is clear. Senate Intelligence Committee Chairman Mark Warner, D-Va., told reporters after the closed-door meeting, “We don’t want to do what we did with social media, which is let techies figure it out, and we’ll fix it later.”

A side note, and one that should be of concern: the average age of a Representative is currently 58.4 years, and the average age of a Senator is 64.3 years. These are folks who, as smart as they may be, aren’t likely fresh on the ‘newfangled’ tech.

Hopping over to the executive branch, the Biden administration has crafted a Blueprint for an AI Bill of Rights. The White House sees the potential challenges posed to democracy with the expansion of unregulated AI into our lives and admits the outcomes could be harmful, though not inevitable.

The Blueprint highlights the need to continue technological progress but in a way that protects civil rights and democratic values. Amongst the prongs of protection outlined in the piece is a stress on data privacy, which ties directly into our copyright discussion. Specifically, the Blueprint says that “[D]esigners, developers, and deployers of automated systems should seek your permission and respect your decisions regarding collection, use, access, transfer, and deletion of your data in appropriate ways and to the greatest extent possible…”

The Blueprint goes on to call for enhanced protections and restrictions for data and inferences drawn from sensitive domains, saying this information should only be used for necessary functions (though “necessary functions” isn’t defined, giving leeway to the tech companies to craft the definition).

The AI lobby in the Capitol is already strong, and it’s growing. Every major tech company in the US has an interest in the outcome of any regulations, legislation, executive order, and judicial decisions related to artificial intelligence.

OpenAI says it “urges the Copyright Office to proceed cautiously in calling for new legislative solutions that might prove in hindsight to be premature or misguided as the technology rapidly evolves.”

These companies and their mega lobbying power will certainly influence the outcomes of any governmental regulations or new AI legislation. This is where we take a “wait and see” approach and hope for humanity to win out.

What the American Bar Association Says

As we wrap this up, we want to examine what the American Bar Association says about AI, as this moves closer to our territory as legal content providers. The latest update we posted was in May 2023, when the ABA adopted a three-pronged AI resolution to ensure accuracy, transparency, and accountability. In August 2023, the ABA created a task force to study the impact of AI on the legal profession.

The task force’s objectives are to explore:

  • Risks of AI (bias, cybersecurity, privacy, and uses of AI such as spreading disinformation and undermining intellectual property protections) and how to mitigate them
  • Emerging issues related to generative AI tech
  • Using AI to increase access to justice
  • AI governance (including law and regulations, industry standards, and best practices)
  • AI in legal education

The ABA is responsible for maintaining the model code of ethics for lawyers. As a profession (Victoria writing here), we uphold an oath to protect the law. But we also swear to tell on each other. There is no overseer of how lawyers conduct their work; instead, we rely on the ABA to set the tone, along with our state bar codes of ethics.

With the introduction of this task force, we may see an increase in lawyers submitting complaints to the state bar if they have actual knowledge that an attorney is abusing AI to conduct their work. The ABA task force reinforces the importance of using AI responsibly, if at all. 

Be Smart About Using AI Content on Your Law Firm Website

The high-stakes conversations surrounding AI and copyright infringement are playing out amongst giants: companies, regulators, elected officials, and academics debating the next steps for this revolutionary technology.

We’re just on the sidelines, trying to stay out of the line of fire. We don’t yet know what AI regulation will look like. Nobody does. What we can almost certainly guarantee is that AI laws will remain in flux and likely be a few years behind wherever the technology is, especially given the rapid advancement we’re seeing right now.

As you make the decision about how to craft a content strategy for your law firm, construction company, or any other industry, we urge you to proceed with caution. A signature on a piece of new legislation, a change at the regulatory level, or a judicial decision could change everything.

Regardless of where the new era takes us, you shouldn’t take shortcuts that sacrifice the quality of your content, and you certainly shouldn’t make any moves that violate the copyrights of others. Create your content plan wisely with a team that genuinely cares about writing quality and integrity.

Written by Allen Watson and Victoria Lozano, Esq. – Founders of Blue Seven Content

Law Firms and Search Generative Experience (SGE)

Law firms are beginning to explore what the search generative experience (SGE) means for them. Surely, most law firm marketing directors or partners have spoken to their marketing agencies, and there may be some internal panic. 

At Blue Seven Content, we only generate written content for law firm websites, so SGE has the potential to significantly affect our business. In fact, if SGE and ChatGPT play out how many in the industry think, we won’t have a business at all. 

But I don’t think it’s as bad as people think. So far, as I’ve delved into SGE responses for law firms and law-related queries, I’ve been pleasantly surprised by how it’s working. 

Law firms and search generative experience, how will Google's search revolution change the legal marketing industry?

The Law Firm Short-Tail Keywords

When thinking about where to begin my search generative experience journey, I figured the best place to start would be where it all started for me – typical keywords for crafting a law firm practice area page.

You know what I’m talking about:

  • Los Angeles personal injury lawyer
  • Car accident lawyer in Denver
  • Palmdale slip and fall attorney

When it comes to the SGE results, it doesn’t currently seem like Google is trying to make waves. I typed in “medical malpractice attorney Charleston SC,” after I geocoded my location to Charleston. First, I got the usual SERP results, but there was also a “generate AI response” option for me to press:

Typing medical malpractice attorney charleston sc gives me the option to click an SGE button.

When I clicked the AI button, it pulled a list of medical malpractice attorneys in the area, and it appeared to reward reviews from various sources (uh oh, are the directories coming back?). However, what doesn’t seem to play a role in this generative response (yet) are the PPC or organic results you’d usually find on the SERP. The results appear as a 4- or 5-pack for each search:

The SGE gave me a directory of sorts.

At the bottom of the SGE response, there were a few prompts for related follow-up questions, presumably what people typically ask around the same time they are looking for a medical malpractice lawyer:

  • How long do you have to sue for medical malpractice in South Carolina?
  • What is the statute for medical malpractice in South Carolina?
  • What are the limits for malpractice in SC?

These types of responses are the norm for SGE when you type in the usual keywords that would bring you to a law firm practice area page. It does not yet give you an automatic generative response – you have to choose to click it. 

We should really pay attention to the follow-up queries at the bottom of these responses. These are the types of long-tail keywords we already write answers for, and they give us an idea of what Google (and readers) want to see.

Short-tail queries like these are harder for SGE to even craft a coherent response for. What is Google going to do – describe what a car accident or family law attorney is? No, I think these queries will remain the domain of the traditional SERP results.

However, the long-tail keyword queries are a different story. 

The Law Firm Long-Tail Keywords

I’ve predicted that Google would keep legal queries YMYL, but that may not actually be the case. Of course, this is all still experimental, so I may be proven right. I could just as easily be proven wrong. 

So, I decided to delve into general queries such as “steps to take after a slip and fall accident” or “when should I call a lawyer after a car accident.”

I’ve found that these types of searches generate an automatic SGE response. For these queries, we’re getting a response you could expect to find on ChatGPT, except Google can draw from, well, Google. This AI can access the internet.

When you type in these types of searches, the SGE does give you a response, and it does show a 3-pack (4-pack if you scroll right) of pages where it draws its answer from. Usually, these are law firms, but there are other sources, depending on your question. 

My immediate questions, which smarter and more experienced people than me are already tackling, are:

  • What makes a page “good” for SGE to draw from?
  • How do we best optimize for SGE?

I geocoded myself to Charleston, SC, again and typed “steps for a medical malpractice case in Charleston.” I got the SGE answer straight away, above the fold:

Ask a long-tail keyword geared towards a legal question, and you get an automatic SGE response.

You can see a small photo of, supposedly, where the information used to generate the response comes from. Again, I want to know what makes these the “best” pages to use for an SGE response. 

Again, we get the same follow-up prompts at the bottom that we got when we looked up “medical malpractice attorney Charleston SC.”

Below the SGE response, we go right into what we're used to seeing on the SERP – but not sponsored ads. It goes straight into the organic search results (my content writer's heart sings when ads aren't first), though I also know that many searchers won't go beyond the SGE response.

Something funny happened when I typed, “when should you call a lawyer after a construction accident.” I got the sponsored results first, and THEN I got the SGE response in the middle of the page, finally followed by the organic results:

I’m sure these results will be replicated the more I play with SGE queries. Again, Google is experimenting with all of this, and they will try to figure out what works best for the average user AND for them. Google is not going to throw away revenue, so having the sponsored results show up first shouldn’t surprise anyone. 

Do We Already Know How to Do This?

As I explored law firms and the search generative experience, I was curious how this would work when I entered the keywords that Blue Seven Content usually ranks well for anyway. First, I typed in “law firm practice area pages”:

We already ranked second in organic for this keyword (on most days), and we show up in the SGE as well. Look what happens when I expand the SGE result:

When I expand it out, Blue Seven ranks number one in the SGE response. Now, the results don’t show the meta description that we have for that page, but that’s not surprising. Google has a way of looking at your meta and ignoring it anyway, so there’s that. 

I did the same with “law firm FAQ pages” because we’re frequently number one with that search. Here’s the result:

Here, we show up number one in organic SERP and number one and two in the SGE response:

We Still Have ChatGPT to Deal With

As I’ve noted multiple times before, ChatGPT is a “threat” to us legal content writers. Not legal marketing agencies, though. Legal marketing agencies that handle all of a law firm’s online marketing will always be around, and they’ll adapt. No, it’s the content writers who have to worry.

But do we?

Okay, maybe some legal content writers have to worry. The ones who can’t produce content better than ChatGPT are certainly on the chopping block. But that was always going to be the case. What I think will happen, as I’ve said before, is that ChatGPT has had its sugar rush. It’s given the industry a high (or a bad trip, depending on what your role is). 

But as I’ve toyed around with Google’s SGE, I’ve seen that good content matters. Google is meeting AI in a way that (1) provides simple answers that users are looking for and (2) seeks to maintain the main revenue driver for the platform – ads. 

For now, SGE results are generally pulling answers from well-ranking organic content that already answers, or closely answers, the search query. Could SGE end up pulling content that someone generated with ChatGPT and published? Yes, of course. But not if that content isn’t better than what’s already out there. 

Currently, ChatGPT has many flaws. Phantom court cases and rulings. Massive plagiarism. Predictable writing that reeks of AI. Zero human touch. 

And, of course, there’s the issue of what happens to content online when ChatGPT gains access to the internet (it’ll happen eventually) and begins learning new stuff based on content people have generated using ChatGPT. It’s a self-feeding loop with little new input from actual humans. 

Content degradation is waiting to happen.  

Was there content degradation with human legal content writers consistently regurgitating each other? Of course there was. This is why I’ve said I’m grateful to ChatGPT for snapping us (at least Blue Seven) out of any comfort zone we may have fallen into. 

We have to constantly improve. We have to be better content creators, thinkers, researchers, and writers. Writers have to be better than the silver bullet LLMs that many (lazy) marketers think will be their golden ticket. 

With my initial research into SGE responses to legal queries, I'm positive that quality, human-written content will reign supreme. Humans can and should use the tools at their disposal, much like SEOs use Ahrefs, Semrush, and Clearscope, and much like editors use Copyscape, Grammarly, or Hemingway. These technological advancements didn't kill the SEO or the editor, and those who are good at their craft don't completely rely on the tools. They are tools used to build the larger product – a good piece of writing.

Law Firms and Search Generative Experience (SGE) – My Take for Now

I think SGE will seek to answer basic queries with assistance from results that already rank. Perhaps this will shift to paid results eventually, but Google is drawing from organic results for now. Ranking in SGE will be more competitive because it draws from only 3 or 4 organic sources, and only then do the rest of the SERP results appear.

Who knows what this will look like in six months or a year, but I don’t think it’s the death of the legal content writer. I think it’s the beginning of a new search experience, and we have to adapt. What we’re adapting to is still up in the air. How will law firms respond to search generative experience? Stand by, we’ll be back for more.

Written by Allen Watson – Founder and CEO of Blue Seven Content

ChatGPT and Legal Marketing – Where do We go From Here?


ChatGPT and legal marketing – AI is about to completely upend the legal marketing field.

Okay, not really. But that’s what a bunch of people are about to tell you. Perhaps you’ve already heard that your law firm practice area pages and blog posts no longer need to be written by a human. Maybe someone has raved about how much money you’ll be able to save by not having to pay for content anymore. Since November, all people can talk about is ChatGPT.

Let me be clear – ChatGPT is far more advanced than any other AI that’s come out, at least publicly. In fact, it can create content that’s better than some of the drivel I’ve seen on law firm websites. But I don’t think it’s a legal marketing killer, and I think law firms and legal marketing agencies need to do their research before declaring victory over human writers. 

  1. What is ChatGPT?
  2. Responses to ChatGPT
  3. How Could ChatGPT Disrupt Legal Marketing?
  4. What I Found When Using ChatGPT (Legal Content Writer Explorations)
  5. The Issues With ChatGPT for Legal Content Writing
    1. Plagiarism is a problem
    2. Incorrect information
    3. It can cite sources (with caveats)
    4. Very surface-level content
    5. No current information to pull from
    6. Where does new information come from if everyone stops posting new content?
    7. Possible legal or legislative issues
    8. None of the ChatGPT legal marketing issues are insurmountable
  6. Microsoft and Google – The Battle Brewing
  7. Embrace Technological Advances Instead of Dismissing Them

What is ChatGPT?

If you’ve been anywhere on social media recently, you’ve seen people raving (or ranting) about ChatGPT. 

But what the hell is it?

ChatGPT was created by OpenAI, which is a research lab focused on advancing artificial intelligence technologies. The organization was founded in 2015 by various individuals, including Elon Musk. However, Musk resigned from the board of OpenAI in 2018.

ChatGPT was released in beta version to the public on November 30, 2022, and amassed more than a million users less than a week after its launch. ChatGPT is built on a large language model created by OpenAI, called GPT-3.5. This system was trained on a massive amount of text data from various sources.

ChatGPT is revolutionary, but we're not sure it can handle good legal content writing.
You need to understand ChatGPT and how it affects your field.

The current way to use ChatGPT is sort of like a chatbot: a user inputs a question or prompt into a text box and watches as ChatGPT responds with what it believes to be the appropriate information. Perhaps the best part of ChatGPT is that you can get it to respond in pretty much any form you want. You can have it craft a five-paragraph essay, or you can command it to give the answer or response as a poem.

Want to dig further? Tell ChatGPT to craft a response to a question or prompt in iambic pentameter or in the speaking style of William Shatner. It can do it.

I asked it to write me a love story between Luke Skywalker and Yoda. It did it, and it convinced me that was the true story behind the whole saga.

This AI system responds really well to the prompts you input. You can get very specific and creative. I do strongly suggest you go try it out. It's honestly great for entertainment. You'll also see the potential for this tech to disrupt everything.

Responses to ChatGPT

To say the response to ChatGPT has been resounding and immediate is an understatement. Educators have proclaimed that the essay is dead because there will be no way to know what’s student-written and what’s generated by ChatGPT. Teachers say there is no way they’ll be able to assign take-home tests.

Some have questioned whether ChatGPT will make lawyers obsolete, as it may be able to create arguments and draft legal documents. Imagine a courtroom where all you do is wait for AI to tell you the outcome of the case because it’s already read every possible law and court case.

The Washington Post has said that Google (and other search engines) face a major threat because of ChatGPT. The argument is that ChatGPT could spell disaster for Google by providing better answers to the queries that we typically ask Google.

Google crawls and indexes billions of web pages. It then ranks this content in order of the most relevant answers (most of the time). When you perform a search, you get a list of links to click through, typically beginning with ads related to your search and then moving on to the organic links related to your search. This, my friends, is where SEO wizards have made their bones.

When individuals type a question into ChatGPT, they are presented with a single answer based on the AI's search and synthesis of the information already online. The idea is that now, instead of you having to click through the most relevant links to find the information you need, ChatGPT will handle the hard part for you and give you THE answer. The definitive answer.

Of course, there have been significant discussions about what comes next for the internet. Web 3.0 is typically seen as the next phase, even though there is little consensus about what this means or what it looks like. We’ve discussed the metaverse as being the key component in a Web 3.0 world, and ChatGPT and other AI technologies could aid that shift.

Legal marketing SEO agencies make a living off of helping law firms rank toward the top of search engines for specific queries. The industry, quite frankly, isn’t ready to handle a world where SEO isn’t a thing. 

All I can do is approach ChatGPT from the angle of a content writer who understands and uses SEO but focuses on providing the content readers need and want to see.

What I Found When Using ChatGPT (Legal Content Writer Explorations)

I’ve been creating legal marketing content for years. I’ve written thousands of law firm practice area pages and blog posts, and I’ve supervised writers who have written tens of thousands. So, it was only natural for me to begin by prompting ChatGPT with topics that frequently crop up when crafting a page.

I asked, “What types of compensation are available for a car accident in California?” and it gave me a solid answer, one that you’d typically see on a law firm’s website.

I asked, “Is there a cap on damages available for a successful personal injury claim in Michigan?” and ChatGPT gave me a convincing answer.

I asked, “What are the most common injuries caused by a moped accident?” and the AI provided an indisputable list of injuries.

Finally, I asked, “What are the four elements of negligence for a personal injury claim?” and the AI gave me exactly what you’d expect to see on a law firm’s website.

Each one of these responses came back with data organized in a way that we would typically see on a law firm web page. There was a brief explanation, a bullet list or numbered list of some sort, and often a little conclusion to wrap it up. I could certainly envision a legal content writer crafting a law firm practice area page or blog post, inputting their H2s into the ChatGPT prompt, and then copying and pasting the answers onto their page.

After these basic queries, which would essentially be sections of a longer page for a law firm, I decided to get more specific with the requests. I asked ChatGPT to write a 500-word law firm practice area page targeting those who need a Chicago car accident attorney.

You know what?

The page wasn’t bad. It was surface-level, but it certainly provided enough information to maybe convince someone that they’d need an attorney if they’ve been injured in a crash.

But it was certainly not the type of page that I would create. I do see the value of using ChatGPT and other types of AI tools for coming up with ideas for a page. This is a tool, not a replacement. At least not yet.

Blue Seven Content founder Allen Watson discusses ChatGPT with Conrad Saam and John Reed.

The Issues With ChatGPT for Legal Content Writing

Just because I said the responses given by ChatGPT were convincing and organized does not mean that they were without issues. In fact, everything that I put into the prompt would never pass muster at Blue Seven Content, and it certainly wouldn’t fly on a law firm’s website.

Plagiarism is a problem

The most glaring issue that cropped up was plagiarism. This is the biggest sin when it comes to writing website content, no matter the industry. If a law firm content writer plagiarizes content from either themselves or from other sources, this is going to hurt the web page. Google’s algorithms know how to spot copied content, and they can penalize a page or even an entire website for it.

  • The prompt on car accident compensation in California came back as 33% plagiarized.
  • The query about moped injuries came back as 23% plagiarized.
  • My question about the four elements of negligence came back 19% plagiarized.
  • A prompt asking how burn injuries are classified was returned as 17% plagiarized.

Not once did I ask it a “typical” legal question and get a response that was less than 15% plagiarized. This challenge is not insurmountable if you have the ability to detect plagiarism and have a competent editor (even then, all you’re doing is wordplay without originality). Right now, ChatGPT is not capable of original thought. It has to provide answers using information already available.

Also, remember that 500-word practice area page I told ChatGPT to write? Well, it came back 34% plagiarized. Sources it drew from ranged from other law firm websites to the Daily Mail. If you’re a veteran legal content writer, you already know to avoid citing competitive law firms and sources that lack credibility. 

Jan 2023 Update – I wanted to know how ChatGPT has evolved, if at all, since its release. I asked it to craft law firm pages from fairly simple prompts and received answers that were less than 10% plagiarized – fairly impressive. However, when I asked the AI to write a page requiring a slightly more technical (but still fairly basic) response for a law firm website, plagiarism exceeded 20%.

Bottom line so far – ChatGPT simply cannot help but provide plagiarized answers for anything more than a VERY basic prompt.

Incorrect information

Incorrect information is the last thing a law firm needs on its website. One of the biggest problems with ChatGPT is the lack of sourcing, and the fact that you have to know the material 100% in order to detect incorrect responses.

I asked ChatGPT, “Is there a cap on damages available for a successful personal injury claim in Michigan?”

If you know anything about these caps, then you know they typically apply to non-economic damages for medical malpractice claims, which is the case in Michigan. However, ChatGPT responded that there was a cap for ALL non-economic damages in Michigan.

ChatGPT presents incorrect information as if it’s fact, and in a pretty convincing way. With this tech, you can’t see that there may be other answers the way you can when you perform a Google search. Nor does it leave room for the nuances of the law or the jurisdiction you’re researching.

The AI tech behind ChatGPT isn’t at a level where it can detect incorrect information, or at least where it can analyze and synthesize information correctly. Somewhere, the AI read that Michigan had a non-economic damage cap, and it had no clue that the information was incorrect. Sam Altman, CEO of OpenAI, has himself cautioned on Twitter against relying on ChatGPT for anything important.

I asked ChatGPT, “What are the exceptions to California’s medical malpractice statute of limitations?” The response I got was lacking in substance. 

The AI response failed to properly explain the exception for minors who sustain injuries due to a medical error. It didn’t highlight that there is a difference depending on the age of the minor when the injury occurred. ChatGPT failed to mention the exceptions to California’s medical malpractice statute of limitations for foreign objects left behind in a person’s body after a procedure. 

These are just a few of the mistakes I found during a cursory review. I can only imagine the issues that would arise for slightly more complex queries. 

It can cite sources (with caveats)

I initially thought ChatGPT wasn’t able to cite sources, but it can. When you write your prompt, you can tell the AI to use and cite reputable sources and it will do so. However, I caution anyone doing this, because we don’t currently know how ChatGPT decides what is “reputable.” Conrad Saam, my friend and president of Mockingbird Marketing, has said that the program has given him Wikipedia as a “reputable” source. While Wikipedia is generally accurate, there’s a snowball’s chance in hell I’ll be citing it on a law firm practice area page, FAQ page, or blog post.

We also don’t want to pull information from John Doe’s hobby blog. Don’t get me wrong, we’ll use those sources as a starting point, but we have to verify the information and cite using trusted sources. 

I’m still of the opinion that, no matter what citations ChatGPT provides, there needs to be a human fact-checker. This is particularly true for those of us who write content that demands a certain degree of accuracy. This, in my opinion, would be the most time-consuming part of preparing a page for publishing. If you are going to cite data or statistics, then you need to be able to source the information through a hyperlink on the web page. Anyone relying on ChatGPT to craft legal content will have to have an editor go back and (1) go to the source provided by the AI, (2) verify the information, and (3) hyperlink the external sources into the content.
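For the curious, the hyperlink part of that three-step check can even be roughed out in code. Below is a toy Python sketch – not a tool Blue Seven actually uses, and the HTML sample is invented for illustration – that flags sentences citing a number or percentage without any hyperlinked text in them. It's a crude first pass; a human editor would still have to verify every source.

```python
import re
from html.parser import HTMLParser


class LinkTextExtractor(HTMLParser):
    """Collects visible text and notes which chunks sit inside <a href> tags."""

    def __init__(self):
        super().__init__()
        self.in_link = False
        self.chunks = []  # list of (text, was_inside_link)

    def handle_starttag(self, tag, attrs):
        if tag == "a" and any(k == "href" for k, _ in attrs):
            self.in_link = True

    def handle_endtag(self, tag):
        if tag == "a":
            self.in_link = False

    def handle_data(self, data):
        self.chunks.append((data, self.in_link))


def unsourced_stat_sentences(html):
    """Flag sentences that cite a number/percentage but contain no linked text."""
    parser = LinkTextExtractor()
    parser.feed(html)

    text = ""
    linked_spans = []
    for chunk, in_link in parser.chunks:
        if in_link:
            linked_spans.append(chunk.strip())
        text += chunk

    flagged = []
    for sentence in re.split(r"(?<=[.!?])\s+", text):
        has_stat = re.search(r"\d+(?:,\d{3})*(?:\.\d+)?%?", sentence)
        has_link = any(span and span in sentence for span in linked_spans)
        if has_stat and not has_link:
            flagged.append(sentence.strip())
    return flagged
```

Run it on a draft and it returns only the sentences that cite a statistic with no anchor text in them – a starting list for the editor, not a verdict.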

All of this is beginning to sound like the work writers already do when they create a new law firm website page from scratch, and it’s likely to take nearly as long, if not longer. Content writers often loathe having to go in and adjust or correct other people’s work. It’s typically easier to simply make a new page.

Very surface-level content

The information returned through ChatGPT is fairly surface level, at least for the purposes of law firm website content. Even if we can get past the plagiarism issue with good editing, the pages ChatGPT provides are equivalent to what I’d expect from someone who has never written this type of content before. It’s fluffy and lacks nuanced research.

No current information to pull from

Right now, ChatGPT relies on information only up to a certain point in 2021. The AI does not use current data or any real-time information. This will be a problem if you want to use current data and statistics or any new laws on your law firm’s website. Additionally, if you need to craft a blog post about current changes or updates to your particular field of law, ChatGPT will have no way to do this.

Ramping up ChatGPT and other artificial intelligence programs to allow for real-time updates will be a massive undertaking. This requires enormous computing power, something that will take some time to build. 

I recently read “The Metaverse: And How It Will Revolutionize Everything” by Matthew Ball, and one possible solution to this problem could be on our tables and in our pockets – our devices. Almost everyone has a computing device (or four or five of them), and the reality is that they remain dormant much of the time. 

If a larger system had the ability to tap into these devices for their computing power, this could allow for the systems needed to control a real-time AI program (as well as potentially power a metaverse immersive environment). It’s essentially crowd-sourcing computer power. 

This comes with a whole slew of privacy and legal questions that many of us are certainly not ready to think about, which highlights some of the issues that AI developers will have to overcome. 

Where does new information come from if everyone stops posting new content?

Maybe this just reflects the limits of my understanding of ChatGPT’s capabilities and AI in general, but if this type of technology is used to create new content, where will the AI draw from and learn from in the future?

I envision a future where, if this type of artificial intelligence becomes common, we see AI copying other AI responses. Somewhere, AI systems need to intake new information from human sources in order to stay relevant. 

Possible legal or legislative issues

There will inevitably be legal issues that arise. The courts and lawmakers will step in to address these issues, but that could take a while. For example, will anyone face liability if ChatGPT or another AI gives incorrect information that then causes harm to others? Imagine a WebMD controlled by AI. Will people listen to the advice given by the AI, or will they find a way to verify what they’ve been told?

What if it’s determined that anything written with AI must be labeled as being “machine-generated,” much like the requirement on most platforms that certain posts have to be labeled as ads? Will your legal clients trust you if they see your website is created by AI?

None of the ChatGPT legal marketing issues are insurmountable

ChatGPT is currently in beta form, and we’re all the test subjects. The more prompts we put into the system, the more it will learn. Developers will continue to tweak the code to determine what works best, and the AI will learn as it goes.

The system will get better at understanding why incorrect information is, in fact, incorrect. It will learn that it needs to take existing information and craft it in a way that doesn’t plagiarize others. Coders can help the AI recognize what an authoritative source looks like, and they can show it how to use anchor text to hyperlink. Hell, the AI can probably teach itself how to do that.

Using AI for content writing - embrace the change, but be wary of the outcomes.
Will AI be a tool to use when crafting legal content, or is this going to take over?

Microsoft and Google – The Battle Brewing

Microsoft recently announced they were investing $10 billion into OpenAI, and there is strong speculation they’ll integrate ChatGPT into their Office tools. This is the third, but largest, round of investment the tech giant has made into the AI company. Microsoft has clearly seen the value of artificial intelligence, and they’re always working to reinvent the company and stay ahead of the curve.

As of February 2023, Microsoft is beginning to use ChatGPT through its search engine Bing and its browser Edge. This is still in limited testing, but it appears users will be able to conduct a search with half of the results page incorporating the chatbot. This could be a huge push for a search engine that has long been eating Google’s dust in the search world. It could be a paradigm shift for the world of search.

Google is nervous. Google called in the big dogs, founders Larry Page and Sergey Brin, to help guide them through this credible threat to the company’s main source of revenue (search engine results and ads). Unfortunately for Google, their first foray into the competition with Bard AI was a flop. The search engine giant’s demo of the AI and their search engine resulted in an inaccurate response, and this response led to Google losing more than $100 billion in valuation in one day.

Until we see how the battles between Google, Microsoft, and other major companies end, we’ll have to keep adjusting strategies. As a legal content writer or SEO company for legal marketing, this is something you’ll need to keep an eye on over the next few months and into 2024.

Legal marketing companies and law firms may actually need to start focusing on Bing much more than they’ve done in the past. Let’s be honest – Google has driven SEO over the last two decades. That supremacy is threatened right now.

Embrace Technological Advances Instead of Dismissing Them

It may seem like I’m against AI. I’m not. In fact, I want to embrace it. ChatGPT and legal marketing aren’t avoidable.

There’s never been a time when rejecting new technologies has worked out for anyone in the long run. Horse and carriage operators vehemently hated the concept of a motorized vehicle, and many people doubted whether cars would actually become mainstream. For years, people doubted that computers could ever revolutionize the way individuals went about their daily lives. Even the benefits of the internet weren’t fully understood for quite a while. In fact, many scoffed at the idea of online shopping and “social media.”

Here we are, looking at what could represent another major shift in the way we approach “knowledge.” We have a choice – both as a society and as individuals. We can reject the technology and deny its ability to shape our lives, or we can embrace this type of AI and figure out how to make it work best for us.

No matter what choice we make, the end result will be the same. There is no putting the genie back in the bottle. ChatGPT is already far more advanced than any other type of AI chat we’ve seen, and it’s still in a rudimentary form. For those of us in the legal marketing sphere, ChatGPT can be terrifying if we don’t understand what it means for us.

Maybe ChatGPT or another AI program will eventually address the shortcomings I mentioned above. Why would any legal marketer want to be behind on the trend because they wanted to “protect” their industry? Protectionism only delays the inevitable. 

We don’t need protection from tech – we need to work with it. We have to embrace the inevitability of change. We can use it to be better.