If you felt a disturbance in the SEO force recently, you were not imagining it. Adobe, the software giant behind everything from PDFs to enormous Photoshop files, announced its intention to acquire Semrush in a 1.9 billion dollar all-cash deal, according to Adobe’s newsroom. Marketers everywhere collectively paused mid-keyword research and said, “Wait, Adobe is buying Semrush? Is that even legal?”
Well, yes. Regulators still have to approve it, but it probably will be. It is also a sign of something bigger: a shift in how brands are discovered, how marketers measure success, and how search itself is evolving because of AI.
This article breaks down what is actually happening, why Adobe wants Semrush, how it affects Adobe’s expanding empire, and what it means for you and the future of digital marketing.
Adobe beams up Semrush in a $1.9 billion acquisition. SEO continues to prove its value as AI search changes things.
What We Know So Far
On November 19th, Adobe announced its plan to acquire Semrush for roughly 12 dollars per share according to The Verge. That represents about a 77 percent premium over Semrush’s November 18th closing price. Semrush’s stock immediately rocketed because that is what happens when a software giant shows up with a suitcase full of cash.
The deal is expected to close in the first half of 2026, pending regulatory approvals and a shareholder vote. Semrush’s founders and key shareholders, representing over 75 percent of voting power, already agreed to support the deal according to Reuters. That makes the vote largely a formality.
If approved, Semrush will join Adobe’s Digital Experience segment and become part of Adobe Experience Cloud.
This is not a random acquisition. It is a strategic move.
Why Did Adobe Buy Semrush?
The short answer is visibility, data, and the future of search.
The longer answer is that Adobe wants to own the full customer journey. It’s not surprising to see more big companies work hard to get people and businesses into their exclusive networks. That’s where the money is.
In 2025, that journey often doesn’t begin on a homepage or even a Google search. It begins with AI. People are asking questions through AI assistants like ChatGPT, Gemini, Perplexity, and Alexa. They rely on agentic systems to make decisions for them. Agentic AI is in its infancy, and as it grows, legal marketing will shift into another phase of online credibility.
Adobe has been positioning itself as a customer experience powerhouse, but it did not have a dedicated search visibility intelligence platform, especially one tuned for how AI affects discovery.
Semrush gives Adobe competitive search intelligence, backlink and authority data, keyword research capabilities, SERP insights, and developing tools for generative engine optimization. Adobe explains this vision directly in its announcement. Adobe specifically wants Semrush to help brands understand how they appear across search engines and AI-generated surfaces.
Which brings us to the next part.
An Adaptation to AI and Agentic Search
Adobe cited that traffic from generative AI surfaces to retail websites has increased more than 1200 percent year over year according to the Adobe press release. Consumers are no longer typing long-tail queries. They are asking AI assistants for recommendations.
Instead of “best running shoes for flat feet,” people are saying:
“Pick a shoe I can run a 5K in this weekend. Make sure it does not destroy my arches. Also, size 11.”
This is agentic AI. Search is becoming automated decision assistance, woven into your everyday conversations. Your AI assistant is here; you just have to learn to use it.
The GEO debate
Now let us address GEO, or Generative Engine Optimization. The term is controversial, especially if you peruse the writing and SEO communities on LinkedIn. Some marketers argue that GEO is a distinct discipline. Others say calling GEO new is like calling a hot dog a taco. You can try, but you are going to upset people.
Here is the real answer. AI-generated answers are becoming part of the discovery funnel. Whether it is a brand recommendation, a product summary, or your competitor being name-dropped by an LLM, AI placements matter.
Semrush has been building tools to measure that. Adobe wants it. And this acquisition means:
GEO and SEO intelligence in one place (I still argue they’re very similar)
Adobe reporting backed by Semrush visibility data
Unified tracking across search engines and AI answer surfaces
The future of visibility is not just SERPs. It is SERPs, feeds, AI assistants, and background recommendation systems. It’s your social media presence and your video presence. “SEO is everything” is what we used to say, and we meant it. Online and offline marketing efforts are all SEO in their own way.
What This Means for Digital Marketers
SEO, content marketing, PPC, and social teams are all feeling pressure to adapt to AI-driven discovery. Adobe buying Semrush reshapes how visibility is measured and how people think about search. I wouldn’t even be surprised if we see some more acquisitions coming in the SEO space, but we’ll see.
This is what it means for your day-to-day work.
1. SEO and GEO will start to merge, whether I like the term or not
Teams will not maintain two separate mindsets forever. It is far more likely that:
SEO becomes structured optimization for search engines
GEO becomes optimization for AI answer models
Together, they will form a broader idea of visibility optimization. I hope we just call it something like “Intent Optimization” to blend the two. Adobe buying Semrush is the first major step toward that unified visibility stack.
2. New KPIs are coming for your screens
Get ready for metrics like:
Share of AI answers
LLM recommendation visibility
Assistant-driven conversion paths
Multi-surface visibility scores
“Where does my brand appear when someone talks to ChatGPT? Copilot?”
Adobe already owns analytics, personalization, audience data, creative tools, and journey orchestration. Adding Semrush lets them extend measurement to AI-generated surfaces, too.
3. Pricing, bundles, and lock-in are coming
This acquisition will absolutely bring:
New Adobe bundles
Higher pricing tiers (we’ll have some holdovers for a bit)
Pressure for enterprise teams to standardize on Adobe
More long-term vendor lock-in
Semrush will likely remain available as a standalone tool at first. But Adobe did not spend nearly 2 billion dollars just to leave it alone. Expect deeper integration over the next two to three years.
4. Independent SEO tools will respond
Ahrefs (my current one to watch), Moz, Sistrix, Similarweb, and others will watch closely. Expect them to double down on:
Transparency
Technical depth
Independence from enterprise ecosystems
Flexible pricing (and hopefully more AI insight access at lower tiers)
More emphasis on raw data access
This is not the end of independent SEO tools, but it is the beginning of a more divided market. In fact, I personally think Ahrefs has some room to run here.
What Agencies Should Be Thinking About Right Now
1. Prepare clients for reporting changes
Reports focusing only on rankings and backlinks are going to feel outdated fast. Start introducing GEO concepts now so clients are ready.
2. Evaluate which clients are already in Adobe ecosystems
If a client uses Adobe Analytics, Adobe Target, Marketo, or Experience Manager, adopting Semrush as their visibility engine becomes very likely.
3. Start building AI-conscious content frameworks
This is where Blue Seven shines. We write the hell out of some legal pages. Our legal SEO writers are some of the best in the business. They can write content that focuses on clarity, usefulness, and structure.
Content briefs will need:
Direct answers
Clear brand authority signals
Structured formatting
Data-backed claims
Schema where relevant
Language that AI models recognize as authoritative (which, right now, can still be gamed with “best”-style language)
However, it’s pretty evident that LLMs tend to cite brands that communicate value clearly, consistently, and confidently.
What In-House Marketing Leaders Should Expect
Leadership may become very interested in AI visibility. In fact, if you’re a law firm marketing director or an FCMO for law firms, you already spend a lot of time talking about this with leadership. Reporting will expand. Tool budgets may shift toward integrated stacks. KPIs will change. CMOs will ask questions like:
“Where do we appear in ChatGPT?”
“How does AI discovery affect our pipeline?”
This is an opportunity to reset how your organization models visibility and measures success.
So Is This Good or Bad for Marketers?
Like most big acquisitions, the answer is yes. It is both. It depends. Yeah, something along those lines.
Good
Unified data
Better visibility tracking
Stronger AI search intelligence
Cross-channel measurement
Less tool chaos for enterprise teams
Bad
Prices will rise, if not right away, over time
Vendor lock-in will increase
Semrush may lose some scrappy independence
Smaller competitors may be squeezed (unless they merge)
Marketers must adapt to AI-driven search faster than expected
Adobe is signaling that the future of brand visibility will live at the intersection of SEO, GEO, content, analytics, and AI assistants. Which is something we’ve been talking about with our idea of an SEO Ecosystem.
Final Take – A New Era of Brand Visibility
Adobe did not buy Semrush for old-school SEO. The company bought it because the idea of search is shifting, as most of you reading this already know. Everything about how brands get discovered and remembered is changing.
For brands, visibility is no longer limited to page one. It includes:
AI citations
Assistant recommendations
Conversational visibility
AI-driven shopping and decision flows (where applicable)
For marketers, it means speaking both SEO and GEO fluently, whether we love the term GEO or not. If you hate it, be ready to explain calmly why you don’t like the term. GEO is what people are hearing, and when they come to you for advice, they don’t want to be treated like idiots.
If you want help navigating the shift or building content that performs in an AI-first world, Blue Seven has your back.
OpenAI has instituted some new rules for ChatGPT regarding legal and medical advice. On October 29, 2025, it was announced that the world’s most well-known LLM will no longer offer tailored legal, medical, or financial advice.
If you’re anything like us and have been around search for a while, you probably thought, “Yeah, YMYL,” harkening back to Google’s treatment of topics dealing with a person’s money or life (Your Money, Your Life). We’ll walk through what changed, what searchers will notice, how this shapes law-firm operations, and how all of it connects to the ongoing YMYL standards that guide Google’s understanding of trustworthy content.
What the OpenAI Policy Changes Are and What They Actually Mean
OpenAI reorganized and clarified its usage policies for models like ChatGPT. The most important section for law firms is the explicit guidance around legal and health topics. The company states that AI systems can provide general information but cannot deliver tailored advice that substitutes for a licensed professional.
For the legal world, this is not new, but it is now spelled out in a way that removes ambiguity for users, developers, and businesses that wrap services around AI models.
Here are the practical takeaways:
AI can educate but cannot instruct someone on what they personally should do in a legal dispute, estate matter, custody issue, criminal charge, or any situation where outcomes have real legal consequences.
Any platform using OpenAI’s models for legal services must include licensed attorney review for personalized guidance.
OpenAI will continue to allow legal summaries, explanations of law, definitions, and general knowledge about legal processes.
Models will be more likely to add disclaimers or redirect users to attorneys for anything that requires professional judgment (we sense a strong opportunity for ads coming).
For law firms and legal marketers, this is not a barrier. It is clarity, and clarity creates opportunity. Firms that expect transparency in AI tools can build workflows that comply with these rules while still benefiting from AI’s efficiency.
OpenAI’s health and legal advice changes through ChatGPT are a good thing for lawyers and law firms.
What Searchers Will Experience When Asking Legal or Health Questions
People using AI for legal questions will notice a few shifts in how answers appear and how far the model is willing to go. These include:
More explicit disclaimers
Users will see more prefaces that clarify the model is not a lawyer or medical provider. This is now part of the expected pattern and signals higher safety standards.
Steering toward education instead of direction
Questions like “What should I do if my landlord violated the lease?” will produce responses that outline general steps, legal principles, or typical processes rather than a prescriptive action plan.
Encouragement to involve a professional
Searchers may see referrals to attorneys, legal aid clinics, or government resources for individualized guidance. Here come the ads we sensed.
More accuracy checks in responses
The model is more conservative with confidence, reducing subtle errors or speculative content in areas where the stakes are high.
Less room for loopholes
Attempts to bypass limitations by rephrasing questions tend to lead back to general guidance. This creates a more consistent safety layer for users.
Overall, the user experience becomes more aligned with how legal content is supposed to work (and what we’ve done for years with SEO content): helpful, broad, clear, and pointing toward qualified professionals for anything case-specific.
How These Updates Influence Law Firm Operations and the Role of Attorneys
Law firms using AI internally will need to demonstrate attorney involvement in any client-facing output. This does not mean AI becomes less useful. In fact, the updates formalize how firms should use AI:
AI drafts, the attorney reviews.
AI summarizes, the attorney validates.
AI assists with communication, the attorney signs off.
This supports ethical rules that already require lawyer review of work performed by nonlawyers. AI tools simply become part of the remote support staff. Firms that embrace this approach gain speed and reduce administrative overhead without sacrificing compliance.
On the client side, law firms may see more questions about AI-generated information. Clients often bring AI summaries to consultations. These summaries improve attorney efficiency because they shorten the path toward understanding the client’s concerns. Well, that and they may also be a pain in the ass sometimes.
OpenAI’s clarifications help attorneys position themselves as vital decision-makers rather than just providers of information. In other words, this should help bolster YOU as the experienced legal mind someone should seek.
What These Changes Mean for Law Firm Marketing Strategies
This is where the biggest practical shift will happen. As AI becomes both more cautious and more central to how people consume information online, attorney-led marketing becomes even more valuable.
Educational content will outperform generic content
Since AI will avoid giving case-specific instructions, searchers will still look to law firm websites for local, detailed, jurisdiction-specific explanations.
Google rewards this with higher rankings because it meets YMYL content standards.
Authority signals matter more
Because AI is increasingly careful, users place more trust in:
Named authors who are licensed attorneys
Clear citations to statutes and government resources
Local knowledge
Practical explanations grounded in real experience
This raises the bar for law-firm blogs, practice area pages, and FAQ content. In fact, many of our clients have taken advantage of Focal Points, which is a step above basic SEO. If you’re interested, go check out Focal Points.
Marketing language needs to match user expectations
Searchers may arrive with partial understanding shaped by AI summaries. That means law firm content must:
Validate what the searcher already knows
Clarify what applies specifically in your location
Provide safe, accurate, and actionable next steps within ethical rules
Emphasize the importance of human review
Firms that explain responsible AI use will win trust
A growing number of clients want to know whether their attorney leverages AI. Firms that publicly outline how AI assists with efficiency, but not judgment, gain a competitive advantage. My vet’s office uses AI to record the conversations and fill out the pet’s charts. They explained how they use AI to me and made it clear how it helps them treat the animals with higher quality care. I’m fine with it, especially because they explained it to me.
This is powerful content for newsletters, blog posts, and “About the Firm” pages. Embrace AI by acknowledging clients’ doubts and explaining why and how you use it.
AI will reshape the SEO moonscape
Because “general information” is easier for AI to generate, the pages that rank will be those with:
Location-specific depth
Attorney involvement
Real examples
Unique frameworks or insights
Clear statements of professional oversight
In other words, AI pushes law firms toward higher-quality publishing. That is good for SEO and good for clients. It should also send signals that using AI only to generate content is a bad idea.
How All of This Connects to Google’s YMYL Standards
Google’s YMYL framework has governed legal SEO for years, even before AI became mainstream. Legal content is part of the Your Money Your Life category because it affects a person’s rights, safety, finances, and future.
Under YMYL:
Content must demonstrate clear expertise
Authors must be credible
Claims must be accurate and sourced
The site must be trustworthy
Any advice must be safe
OpenAI’s new rules mirror that philosophy. Both systems now prioritize safety and credibility in high-impact domains.
The overlap between OpenAI’s policy and Google’s YMYL standards means that:
Law firms that emphasize attorney review will gain more visibility
Content grounded in state law (like Illinois statute citations) will outrank vague summaries
Well-structured legal explanations will become even more valuable
Trust signals, such as attorney bios and experience, matter more than ever
Lawyers who understand this dynamic will be able to connect with searchers in a way AI cannot. The result is stronger authority, better rankings, and more qualified leads.
The Bottom Line (for now?)
OpenAI’s policy updates did not remove legal information from AI tools. They clarified the boundaries that always existed and aligned them with ethical and safety standards in law. Searchers will still use AI to educate themselves, but they will rely on law firms to interpret laws, apply judgment, and make decisions.
Firms that adapt early (hopefully you already have been) will gain visibility, trust, and clients in a moonscape where AI shapes expectations but cannot replace licensed professionals.
Written by Blue Seven Content Co-Founders – Allen Watson and Victoria Lozano, Esq.
You can also check out The Possible Podcast featuring Reid Hoffman (co-founder of LinkedIn) and Aria Finger. Here, they discuss AI regulations, rules, and laws.
Lawyers, Bluesky is a social media platform you shouldn’t ignore.
Bluesky Social isn’t a fad anymore. It’s skyrocketed in popularity, propelled by the 2024 election. Your clients are exploring it, your colleagues are curious about it, and the legal profession, as a whole, is about to take notice. Bluesky is for lawyers, so get ready.
Bluesky Social launched as a decentralized social media platform designed to give users more control over their data and content moderation. It’s not the same kind of “free-for-all” as Twitter (now X), and it’s a platform built to provide stability (they say) and a new approach to social media. Bluesky allows users to enjoy an experience that combines connectivity and customization, a feature that will resonate with lawyers and their clients.
According to an article in The Verge, Bluesky’s approach is built on the AT Protocol, a new social networking technology focused on decentralization and user-controlled content moderation. By allowing users more control over who sees and engages with their posts, Bluesky creates a cleaner, more personalized environment that’s very different from the increasingly polarized landscape of X.
Claire Wardle, Cornell University professor and expert in misinformation, says that Bluesky has a “quite different” value system than the big socials, which is evidenced in a March 2024 blog from Bluesky itself when they said, “The first generation of social media platforms connected the world, but ended up consolidating power in the hands of a few corporations and their leaders.”
The goal of Bluesky, if you read the undertones, is to NOT be the bad guy.
Keep in mind, as you explore Bluesky Social for your law firm or as an individual attorney yourself, that this is a young platform. It’s going to have some bumps in the road, as NBC News points out. The company doesn’t have a big team (as of late 2024 and into early 2025), so moderating explosive growth has been challenging.
How Bluesky Differs from X (Twitter)
We can lament the state of Twitter (or X) in 2025, but it’s been on a spiral toward the depths of Hades for a minute now. The chaos following Elon Musk’s (our new co-president?) acquisition and rebranding has left a noticeable void in social spaces.* Many professionals, including lawyers (and us marketers), have been frustrated by X’s unfiltered content, lack of stability, and often combative environment.
Bluesky fills that gap with a focus on community-centered engagement. Basically, build “your people.” Quite different from the crush of weird algorithms that overwhelm us and always seem to be out of our control.
* Apologies for any offense to the Department of Government Efficiency (DOGE). We acquiesce to your authority.
Benefits of Bluesky Over X for Legal Professionals
Bluesky’s features could, if used correctly, speak directly to the needs of lawyers and law firms:
Controlled Content Moderation: Lawyers can engage in conversations with other professionals without the risk of unsolicited trolling (we’re all for solicited trolling, I guess) or inappropriate commentary that often plagues other platforms.
A More Curated Network: Bluesky’s user-driven curation tools create an environment where lawyers can connect with relevant audiences, including potential clients, thought leaders, and industry influencers.
Professional Atmosphere: Bluesky’s interface and moderation tools provide a cleaner, more professional space that allows attorneys to share insights and expertise without wading through irrelevant noise.
The Future of Bluesky for Lawyers – Don’t Miss Your Early Mover Advantage
Bluesky is still a young platform, and that means you have the chance to carve out your digital territory before it becomes overcrowded. Establishing your firm’s brand early could be a smart long-term strategy, allowing you to reserve the best handles and build an audience without the clutter of usernames with numbers or underscores (my first username online was “@bigcat_38” and it was for AOL Instant Messenger. Don’t knock it, you probably had a horrible handle at some point, too).
This is your chance to stand out in a professional space, maybe before your local competitors.
As USA Today recently highlighted, Bluesky’s user base has grown rapidly, but it’s still in its infancy compared to other social platforms. We think early adopters are uniquely positioned to shape the platform’s culture and establish their voice. Don’t miss this early mover advantage. It’s rare to get this opportunity, and guess what – there are some parasitic movers out there sucking up Bluesky usernames so they can sell them back to big brands later. Don’t let someone else snag your name or your law firm’s name before you can.
Why Lawyers and Law Firms Need to Pay Attention to Bluesky Social
1. Early Adoption: Build Your Personal Brand and Your Firm’s Brand
Joining Bluesky now allows lawyers to shape their own brand in a space that’s still evolving. Unlike the more crowded platforms, Bluesky offers room for growth and a chance to connect without the saturation of traditional social media.
2. Connecting with Clients and Colleagues in an Emerging Space
Bluesky’s community-focused design makes it an ideal platform for connecting with potential clients who value thoughtful content. It’s also a unique space to network with other legal professionals and industry influencers who are navigating the evolving landscape of legal marketing and technology.
3. Bluesky as a Space for Thought Leadership in Law
On Bluesky, your content isn’t just one more post in a sea of noise. The platform is primed for lawyers to share thought leadership, whether that’s offering legal insights, sharing opinions on current issues, or discussing broader industry trends. Unlike traditional social media, where engagement can be hit or miss, Bluesky’s audience is seeking meaningful content.
Calling all of our favorite lawyers! You can get on Bluesky in a few minutes.
How Do You Get Started on Bluesky as an Attorney?
If you’re a lawyer (or anyone) or a law firm (or any entity) ready to get rolling on Bluesky, it’s pretty easy. We’ve linked better instructions from Wired, but here’s our snap summary:
Set Up Your Account and Optimize Your Profile
Getting started on Bluesky is simple. Choose a professional photo, write a concise bio that reflects your expertise, and make sure to link back to your law firm’s website (hold that thought – it doesn’t seem like we have the ability to link in the profile bios yet).
Identify Key Topics and Content Themes
Successful Bluesky content will be well-researched and relevant to your audience. For lawyers, this could include Q&As, recent legal case summaries, or explanations of complex legal issues in layman’s terms.
We’re just getting started on that platform as well, so maybe we can all learn together?
Who Runs Your Social Media? Talk to Them.
If you have a team managing your social presence, make sure they understand Bluesky’s unique potential. This is a new platform with distinct expectations, and it requires a tailored approach that prioritizes professionalism and meaningful engagement.
It’s much better if YOU post on your own. In your voice. Trust us, authenticity should be the word of the decade when it comes to building a brand. Don’t worry about making it perfect. Just be you.
Experiment with Bluesky’s Unique Content Formats and Engagement Tools
Bluesky’s platform design allows for more nuanced interactions through “Blues” and other engagement tools. Take time to test these features and find ways to effectively connect with your audience. We don’t even know what everything does yet.
The Role of Content in Your Law Firm’s Bluesky Strategy
A consistent, well-crafted content strategy is key to standing out on ANY social media platform, so we can expect the same tactic to work on Bluesky. Skilled content writers, especially those experienced in legal content, help convey your firm’s message in a way that’s compelling and compliant. High-quality content engages your audience, builds your firm’s credibility, and makes sure your posts reflect your brand accurately. But, can that longer-form content translate well to smaller social posts?
Yes.
At Blue Seven, we’re writers, not social media managers. While we understand the importance of a tailored social presence, our focus remains on crafting quality content that speaks to your audience. As Bluesky’s platform grows, we may expand our services to the social sphere, but for now, our priority is creating content that elevates your online presence.
We will use this opportunity to stress the importance of a consistent branding message across your platforms. We also know that good content can be useful not just on your website or newsletter but also across your socials. Why reinvent the wheel? You can use a great, human-written blog or practice area page and chunk that into smaller bits for LinkedIn, Instagram, Facebook, X (Twitter), Bluesky, and more.
So, if you want to talk about content, we can do that today.
Crafted by Allen Watson – CEO and Co-Founder of Blue Seven Content
In early March 2024, Google began rolling out the latest updates in its ongoing mission to provide searchers with the most useful information and not the self-serving clickbait created primarily to achieve higher search result rankings.
The updates focus on two main areas. Changes are being made to the core ranking systems to better distinguish and filter out unoriginal content. Spam policies are being improved in order to keep the lowest quality content out of Search.
For human content creators, the update is somewhat encouraging. It means human input and oversight are still highly valued contributions to content creation. And though Google does not have a problem with AI-generated content per se, it does have a problem with people trying to manipulate its systems with crappy content.
Should you keep calm and stay the course if your content is genuinely helpful? Or panic and switch gears?
Google Search March 5, 2024 Updates
Generative AI has provided some new opportunities for spammers and some new challenges for Google to respond to. In 2022, Google began refining its ranking systems to better identify helpful content and provided guidance about the kind of content it was looking to reward.
1. Filtering Out Low Quality, Unoriginal Content from Search Results
Several times a year, Google makes fairly major changes to its algorithm and other systems used to identify and rank content. These changes are called core updates. Google has advised the March core update will involve several core systems and will be more complex than previous updates.
At this point, Google will only say that the innovative signals and approaches that are being used to enhance the core systems mark an evolution in how the search engine will identify helpful content going forward.
A new FAQ page has been put together to help explain the changes. It doesn’t contain a lot of new information, but it does indicate Google will be focusing on content originality and trying to make sure users are left with the feeling they’ve had a satisfying experience.
Because of the complexities involved, the rollout of the March core update is anticipated to take about a month. Google says to expect fluctuations in ranking during the integration process, but no action is required by those who have been producing helpful, reliable, people-first content.
2. Identifying and Penalizing Websites Hosting Spammy Content
Google’s spam policies detail some of the types of content and practices that can result in lower rankings or in having content completely removed from Search. Google fights spam with both automated systems and manual human review and has indicated it will be using manual review to target sites violating the new policies.
The March update added three new spam policies in response to growing trends Google was observing.
Expired domain abuse
Scaled content abuse
Site reputation abuse
Expired Domain Abuse
Expired domain abuse is the practice of purchasing expired domains and leveraging their optimization advantage to promote content that may have nothing to do with the content produced by the previous owner. The practice can mislead users into thinking the new content is affiliated with the expired site.
Scaled Content Abuse
Scaled content abuse occurs when large amounts of content are created with the intention of manipulating search results rather than providing useful information to searchers. Such content may be created by AI, but it need not be. Of particular interest to Google is content that lures a searcher to click by promising an answer to a popular question and then fails to deliver the relevant information.
Site Reputation Abuse
Site reputation abuse involves the publication of third-party content with little or no oversight by the hosting website, allowing the third-party content to piggyback on the host site’s better ranking signals. The third-party content may have little to do with the host site’s main purpose and offer minimal value to users.
Google provides a few non-exclusive examples of what it will consider site reputation abuse as well as some examples of the type of third-party content that will not be considered spam. The more valuable the content is to site users and the more involvement by the hosting site, the less likely third-party content will be found to violate spam policies.
Possibly because the practice of hosting third-party content is fairly widespread, Google will not begin enforcing the new policy until May 5, 2024, to give site owners an opportunity to make any necessary changes in order to comply.
Google’s Stand on AI-Generated Content
Back in February of 2023, Google offered guidance about AI-generated content. The search engine clarified that it is looking for quality content no matter how the content is created. Original, people-first content demonstrating experience, expertise, authoritativeness, and trustworthiness (EEAT) will be rewarded by Google’s ranking system, whether written by people or machines.
However, Google is well aware that machine-made content without human oversight can be unreliable and unoriginal. Updates continue to emphasize the need for significant human involvement to ensure AI-generated content is written to Google’s standards and does not violate any of the spam policies.
What the Google Updates Mean for Low-Quality or Spammy Content
So, what is a website owner to do? Regarding the core update, acclaimed SEO expert Lily Ray recommends not overreacting to any early fluctuations in traffic as Google begins implementing the changes. Try to wait until everything has been rolled out because Google may need to course-correct mid-way through in order to initiate the changes it wants to make.
For sites that may be or already have been penalized for violating the new spam policies, the best course of action is to own the violation, develop a plan to get back on Google's good side, and then apply for reconsideration. Second chances aren't guaranteed, so it's best to be thoughtful and sincere in the reconsideration request.
What the Google Updates Mean for High-Quality, Useful Content
The good news is that owners with helpful, reliable, people-first content on their sites should not have to do much of anything to make sure they are in compliance with Google’s updates. If the content on your site has always been high quality and useful, then you aren’t likely at risk of lower rankings or a spam violation.
At Blue Seven Content, we aren’t worried about the content we produce for our clients. All of our content is generated by humans who are intimately familiar with producing the kind of content Google wants to reward. Because Blue Seven focuses on content quality, originality, and engagement, our approach never has to vary much, no matter what changes Google decides to make.
The information contained in legal documents submitted to a court is expected to be accurate. The legal positions advocated are expected to be supported by existing law. This is not something that can be guaranteed when legal documents are generated by artificial intelligence, as has been demonstrated on several recent occasions.
While there is no general rule regulating how attorneys can use AI to generate legal documents, courts are starting to specifically require documents have human input prior to submission. A federal circuit court of appeals is considering adopting a rule requiring lawyers to disclose the use of AI and certify they have checked the accuracy of any AI-generated material filed with the court.
Some lawyers have made mistakes with AI – can we prevent these mistakes in the future?
Fifth Circuit Court of Appeals Proposed Amendment to Certificate of Compliance Requires Disclosing Use of AI
That generative AI could not be trusted to produce accurate legal information was first discovered last year when two New York lawyers used ChatGPT for legal research and submitted a legal brief that included fictitious case citations generated by the AI tool.
Some other courts around the US have responded by banning the use of generative AI or requiring attorneys to disclose whether or not AI was used to draft documents. The highest court to address the issue of using AI to create documents is the Fifth Circuit Court of Appeals.
The proposed change would require attorneys to certify either that they have used no generative AI to draft any document or, if generative AI was used, that all citations and legal analysis have been reviewed for accuracy and approved by a human.
A material misrepresentation in the certification of compliance could result in the court striking the document and imposing sanctions on the person signing it.
The written comment period ended on January 4, 2024, and a final decision on the adoption of the proposed rule is currently pending.
What Can Go Wrong With AI-Generated Legal Documents?
Simply put, AI-generated legal documents may contain false information that the technology just makes up. Why does this happen? It has to do with how AI models learn and process information and their tendency to predict outcomes based on patterns rather than factual accuracy.
The propensity of generative AI to fabricate information is known as hallucinating. And it's currently a problem without a solution.
One of the latest instances of reliance on the unverified accuracy of AI-generated case law citations occurred in December 2023, when the attorney representing Michael Cohen (Donald Trump's former personal attorney) submitted phony case citations in a motion to the court.
Cohen had used Google Bard to do legal research and claimed to be unaware that the case citations he provided to his lawyer could be inaccurate. Cohen’s lawyer apparently accepted the sources provided by his now-disbarred client without bothering to check their validity.
How Disclosure Requirements Might Affect Attorney Use of AI
The recent incidents of attorneys trusting generative AI to their detriment suggest some lawyers may be using the technology without fully understanding its capabilities and limitations in a law practice. The negative publicity and the court requirements to disclose the use of AI and then vouch for it may deter some law firms from experimenting with content automation in their practices.
AI is not going anywhere and will continue to develop and become more specialized for various uses – including the practice of law. Lawyers need to become familiar with how AI can best be used to increase the efficiencies of providing legal services while also understanding that AI is not a replacement for professional judgment and must always be reviewed for accuracy.
Legal Community Response
The legal community, at least a portion of it, has voiced opposition to the Fifth Circuit's proposal.
Lawyers have expressed concerns, highlighting that existing professional conduct rules sufficiently cover obligations for accuracy in documents. The rise of AI in the legal profession has led to various court orders, some educating about AI, others outright prohibiting it, and most requiring disclosure of AI use.
Bar associations are also examining AI’s ethical implications, with the American Bar Association and state bars like California and Florida actively working on guidelines. The overall sentiment is that AI will significantly transform the legal industry, but there’s caution about explicit rules on AI use in legal practice.
Considerations for Law Firms that Want to Use AI to Generate Legal Documents
A recent article on “Artificial Intelligence (AI) and the Practice of Law” by US District Judge Xavier Rodriguez offers some basic information about AI and how it can be used to benefit a law practice while pointing out some potential issues law firms need to be aware of if they choose to use the technology.
The article acknowledges that AI can be of great assistance to lawyers by performing time-consuming functions such as legal research, document review, and client communications. It may be useful to help with the creation of forms and legal documents. But, any AI output must always be reviewed for accuracy and edited using professional judgment.
Revealing privileged information is an ethical issue that law firms using AI need to keep in mind. Client confidentiality may be compromised if protected client information is used as a prompt because of how AI may further use the private information. Before submitting client information to an AI platform, a law firm needs to be sure that the information will remain secure.
As for legal documents filed with a court, Rule 11 of the Federal Rules of Civil Procedure already requires attorneys to certify to the court that a reasonable inquiry was made into the information contained in a submitted document, and based on the inquiry, it is believed the information is accurate and the legal contentions are supported by existing law.
Under the law as it presently exists, it would be rather dubious to argue a reasonable inquiry was conducted when documents that contain fake case citations are submitted to a court.
What Attorneys Need to Know About Using AI to Generate Legal Documents
The Fifth Circuit’s proposed amendment to its document certification rule doesn’t impose any additional obligations on attorneys who are already expected to present accurate information to a court – including the law cited in support of a legal position.
Should the Fifth Circuit adopt the proposed amendment and other courts around the country follow suit, lawyers will merely be put on notice that AI is not a stand-in for professional judgment or ethical responsibilities, and attorneys will be held accountable for misuse of the technology.
AI website content has serious implications concerning copyright laws, whether we are talking about AI-generated material receiving copyright protection or about whether the AI-generated content you use for your website violates existing copyright laws.
The astronomical rise of artificial intelligence technologies over the last few years has not only disrupted many industries but has also threatened the very existence of some. There have been serious conversations in business and government about how best to regulate AI, but there’s no consensus on what that would even look like.
At Blue Seven, we’re deeply invested in the trajectory of AI laws. We are a company of trained, professional writer-researchers, but we are well aware of the impact of AI and large language models (LLMs) like OpenAI’s ChatGPT and Google’s Gemini on our industry. We (company founders Allen Watson and Victoria Lozano, Esq.) have kept a close eye on developments, particularly the ones that directly impact website content writing.
We’re at the one-year mark since the release of ChatGPT to the world, and it’s certainly been a year. This is a good time to review the still-evolving issues surrounding AI website content and copyright laws in the US.
The field of law surrounding AI is in its infancy, and copyright issues are at the forefront of discussion.
Could AI Written Content on Your Website Violate Copyright Laws?
Before diving into the debate over whether or not AI-generated content violates copyright laws, we have to understand what can be copyrighted in the first place.
The US Constitution authorizes federal legislators to "secur[e] for limited Times to Authors . . . the exclusive Right to their . . . Writings." As with every facet of federal law, a regulatory agency gets to interpret what the law (or Constitution) actually means (or what they think it means). Throughout history, regulatory agencies have been given significant leeway when it comes to these interpretations.
The Copyright Act was born out of the aforementioned language from the Constitution, and this Act allows for copyright protection to “original works of authorship.”
One of the main issues of note as we dive into this subject is the failure to define what it means to be an “author,” which is something that didn’t need much clarity throughout history. However, history didn’t have to contend with artificial intelligence.
We have to define what an “author” is so we can examine how copyright laws apply to new AI technology.
As with all legislation, the Copyright Act's trajectory is determined by legal precedent, of which there is plenty. The Copyright Office only recognizes a copyright for works "created by a human being." We can look to various court cases to narrow down how the Act, through the Office, defines "human" authorship of copyrighted works.
Courts have denied protection to non-human authors, holding that a monkey cannot receive copyright protections for photos it took because it lacked standing to sue (non-humans cannot bring a legal action in court).
Courts have explicitly said that some human creativity is needed for a copyright when they decided on whether or not to issue a copyright to celestial beings (seriously).
Courts have denied a copyright for a living garden because a garden does not have a human author (this could probably be argued otherwise, but, alas, the Courts have spoken for now).
More recently, Dr. Stephen Thaler was denied an application to register a piece of AI artwork with the Copyright Office. The piece of art was authored “autonomously” by an AI technology called the Creativity Machine. Dr. Thaler argued, unsuccessfully, that the piece did not need “human authorship” as required through existing copyright laws. A federal district court disagreed, stating clearly that “human authorship is an essential part of a valid copyright claim.”
This decision will almost certainly be appealed.
Interestingly, the UK's highest court also heard a case related to Dr. Thaler and his AI. There, the court ruled that artificial intelligence cannot be listed as an inventor on a patent application, a ruling that will have implications for future AI court cases in the UK. The court found that, under existing patent law, the inventor of an object must be a "natural person."
Is There Any Hope For AI and Copyrights?
All hope is not lost for those who want to obtain copyrights for material that gets created using AI.
Generative AI programs could still receive copyright protections, but whether or not they do will depend entirely on the level and type of human involvement in the creative process for the piece. One major preemptive blow for those seeking AI-generated content copyright came in the form of a copyright proceeding and copyright registration guidance. Both of these indicate that the Copyright Office is unlikely to grant human authorship for any AI program generating content via text prompts.
Before the release of ChatGPT, the major discussions around AI and copyright protections centered on artwork. In October 2022, the Copyright Office canceled copyright proceedings for Kris Kashtanova.
Kashtanova filed a copyright protection in 2022 for a graphic novel containing illustrations created by AI tech Midjourney through a text prompt. The Office said that Kashtanova failed to disclose that the images were made by AI. Kashtanova responded by arguing that the images were made through a “creative, iterative process,” but the Office disagreed. Guidance issued by the Office in March 2023 (4 months after the release of ChatGPT) says that when AI “determines the expressive elements of its output, the generated material is not the product of human authorship.”
Counterarguments have certainly been made. Many believe that AI-created works should be eligible for copyright protection because AI has been used to create works that subsequently received copyright protection. Funnily enough, we can examine a case from 1884, Burrow-Giles Lithographic Co. v. Sarony, in which the Supreme Court held that photographs could receive copyright protections in situations where the creator makes decisions about the creative elements in the shot itself (lighting, arrangement, composition, etc.). The argument could be made (and has been) that new AI tools are basically the same when a human sets the parameters for the output.
Of course, it’s much more complicated and nuanced than that. In fact, that argument doesn’t hold much weight upon closer examination. The analogy between photography and new AI is a weak thread. For example, in the Kashtanova case, the Copyright Office says that Midjourney (the technology used to create the graphic novel) is not a tool that Kashtanova controls or guides to get the desired image because it “generates images in an unpredictable way.” Whereas the photographer claiming copyright protection can distinctly point to the elements under their control, someone using generative AI will struggle with that.
The Copyright Office offered a counter-analogy by saying that using AI to create a piece is similar to a person commissioning an artist. The person who commissions the artist can't claim a copyright for the piece if they only give general directions for its completion. The Office, again in March 2023, determined that a user does not have enough ultimate creative control over generative AI outputs to reach the level of authorship required for a copyright.
Even though it seems like the Copyright Office is deadset against granting copyrights to AI-generated content, the issue certainly isn’t settled (are any laws ever settled?). The Office knows this and has left the door open to the idea of copyrights for works that contain AI, but, again, it’s complicated. A copyright likely wouldn’t be available for all of a piece of work containing AI-generated content – only for the human-generated portion of the piece.
The Copyright Office only allows copyright protection for a person's own contributions to works that combine AI and human-generated content. It says a creator must "identify and disclaim AI-generated parts of the work" when applying for a copyright.
Having said all of that, it’s important to understand that regulatory agencies, including the Copyright Office, cannot implement regulations that are considered unconstitutional. How do regulations created by regulatory agencies pass Constitutional muster?
Why, the courts, of course. That’s for discussion later in this article.
Who (or What) Owns the Copyright to Generative AI Outputs?
If we work on the assumption that some AI-generated works will be eligible for copyright protection, exactly who would own the copyright?
Would it be the person who tells the AI technology what to do?
Would it be the company or entity that created or leases the AI technology?
We could even go so far as to ask whether investors in AI technology could ultimately hold copyrights for works created by the AI. For example, Microsoft is OpenAI’s largest financial backer, and they even hired OpenAI’s CEO, Sam Altman, to run their new AI division the day after he was fired by OpenAI’s board of directors. That was, until a day later when Altman was hired back as CEO of the company after nearly every OpenAI employee threatened to leave and go to Microsoft with him.
Alas, this is an interesting story of internal politics for another time, but it does illustrate just how intertwined AI technology is with investors and other major companies, perhaps ones we don’t want obtaining all of the copyrights available.
The issue of “who” is liable for copyright violations arising due to generative AI has yet to be decided.
Chapter Two of the Copyright Act says that ownership initially falls to the author(s) of the work in question. Since we don’t have much judicial or regulatory direction about AI-created works yet, there’s not a clear rule about who an “author or authors” are for a piece of work (here we are again, debating authorship).
We would consider a photographer the author of their photographs, not the maker of the camera the photographer used. Drawing an analogy, it would seem this opens the door to copyrights for people who input the parameters for a piece of work into AI technology, not to the creators of said technology.
This particular view would equate the person who inputs the parameters for the work to an author and the initial copyright owner for the piece. However, this argument loses weight if we consider the AI creator's claim to some form of authorship, given the coding involved and the training the AI has undergone to help it create the piece.
Companies (for-profit and non-profit) could try to claim authorship and, therefore, copyright protections for a piece, and they could do so via user agreements (the fine print we all ignore). If you don't think this could happen, think again. OpenAI, the creator of ChatGPT, previously did not appear to give users any copyright protections for output based on their inputs. However, OpenAI's current Terms of Use say, "OpenAI hereby assigns to you all its right, title and interest in and to Output."
Regardless of whether OpenAI says users do or do not have the rights to their work, the Courts will be the ultimate decision-makers on the questions of copyrights.
Copyright Infringement – Does Your Website Content Already Violate Copyright Law?
Perhaps of more concern for our main industry at Blue Seven (law firm content marketing) is the question of copyright infringement by generative AI, especially LLMs like ChatGPT. If you’ve been in tune with legal marketing this year, you’ll know there’s an entire industry that’s sprung into existence offering rapid, scalable content for law firms (and every other industry).
Many websites quickly pivoted. Why, CEOs and CFOs reasoned, should they pay human writers to do something that can now be done for free? That's a topic for another conversation, but suffice it to say, the quality of AI-generated legal content has been less than stellar.
The issue here is that many people jumped to AI, and are still doing so, before knowing whether the content produced by AI would violate copyright laws. The debate over whether generative AI content infringes on copyrights is raging in public and in the courts. While we understand that website owners with a non-legal background wouldn't necessarily know much about the potential copyright issues, we do fully expect law firms and legal marketers to anticipate these issues and proceed with caution.
Some have proceeded with caution. Others have bounded forward like an F5 tornado through a barn.
Do generative AI programs infringe on copyrights by making copies of existing content to train their LLMs or by creating outputs that closely resemble existing content? That’s the question we don’t have an answer to right now. But we can look at where the winds could take us.
Does the AI Training Process Infringe on the Copyrights of Other Works?
This is a question that, though it needs answering, may not affect a website owner but could certainly affect AI companies pioneering new technologies.
Every complex artificial intelligence model uses specific coding that directs it (the model) to learn. But how do they learn?
LLMs like ChatGPT are revolutionary, at least for what they are. ChatGPT is good because OpenAI trained the underlying model on, well, everything. OpenAI has released several models over the years, but ChatGPT is the one that enthralled the world. GPT-4 was released soon after ChatGPT's debut, and it has access to the web, whereas the previous version's knowledge base ended in 2021.
OpenAI has never denied using the works of others to train its LLM. They’ve explicitly said their model learns from many sources, including copyrighted content. OpenAI says it created copies of works it has access to in order to use them (the copies) to train their models.
Is the act of creating these copies to use an infringement of the copyright holders’ rights? The answer to that depends on who you ask.
AI companies argue that the process of training their models constitutes valid fair use and, thus, does not infringe on others' copyrights. Fair use is defined in the context of 17 U.S.C. § 107, which outlines four factors for determining whether a use is fair:
The purpose and character of the content’s use, including whether the use is for commercial purposes or non-profit, educational purposes;
The nature of the copyrighted material;
The amount and substantiality of the portion used in relation to the copyrighted work as a whole;
The effect of the use of the copyrighted material on the potential market for or the value of the work.
Did you know that OpenAI is a non-profit organization? Well, kind of. It transitioned to a for-profit company but is still majority controlled by the larger non-profit. It's complicated, probably deliberately so, in order to claim that OpenAI isn't using the information for commercial purposes and can rely on fair use for the content it consumes.
In December 2023, Google launched Gemini, its response to ChatGPT. Google says that Gemini will come in three sizes and will underpin its Bard program as well as SGE responses. According to the company, Gemini Ultra "is the first model to outperform human experts on MMLU (massive multitask language understanding)." The MMLU benchmark combines 57 different subjects, ranging from medicine and ethics to history, physics, biology, and more. With the release of Gemini into an already rapidly growing field, you can be sure the courts, regulators, and legislators will move into high gear to create a framework for regulation.
It seems the direction the conversation is heading, at least politically and in the courts, can be seen in how the US Patent and Trademark Office describes the AI training process: it "will almost by definition involve the reproduction of entire works or substantial portions thereof." The way government regulators are framing this conversation, we can see they are erring on the side of copyright holders.
Do AI Outputs Infringe on the Copyrights of Other Works?
When we examine the fourth point in determining fair use mentioned above, we see where specific issues for website content come in. The major concern, and one we should all be aware of if we operate a business website, is that AI models allow for the production of content that’s similar to other works and competes with them.
There have indeed been multiple lawsuits filed by well-known individuals in the entertainment industry against AI companies and entities. These lawsuits dispute any claims of fair use by the AI companies, arguing that the products of these models can undermine the market (and value) of the original works.
In September 2023, a district court ruled that a jury trial would be necessary to determine whether an AI company copying case summaries from Westlaw constitutes fair use. Westlaw is a legal research platform, so this case will directly affect a company in our particular realm. The court already conceded that the AI company’s use of the content was “undoubtedly commercial.” The jury would be needed, however, to handle four factors:
Resolve factual disputes about whether the use was transformative (as opposed to commercial);
Determine to what extent the nature of Westlaw’s work favored fair use;
Determine whether the AI company copied more content than they needed from Westlaw to train their models;
And determine whether the AI program could constitute a market substitute for the plaintiff.
The output of AI that resembles existing works could constitute an infringement of copyright. If we look at case law, a copyright owner could make a case that AI outputs infringe on their copyrights if the AI model (1) had access to their content and (2) created “substantially similar” outputs.
Showing element one here won’t be the issue as these cases go through the court system. These companies have been fairly open about how they’ve trained their models. It’s element two, showing that the outputs are “substantially similar,” that presents the biggest legal hurdle.
Defining "substantially similar" is tough, and the definition varies across US court systems. In general, courts have assessed substantial similarity by examining the total concept and feel, or the overall look and feel, of the works. Additionally, courts have examined whether an ordinary person would "fail to differentiate between the two works" (a comparison between the original and the AI-generated output trained on it).
Other cases have examined both the “qualitative and quantitative significance” of the copied content compared to the content as a whole. It’s likely that the courts will have to make comparisons like this in court so that a judge and/or jury can make a determination.
Two types of AI outputs raise concern. The first involves AI programs creating works featuring fictional characters. Imagine Luke Skywalker showing up in a new book about Marco Polo's adventures. AI could certainly do this, and it would be relatively easy to see it as copyright infringement.
The second area of concern focuses on prompts requesting that the AI output mimic the style of another author or artist. For example, you can attempt to have ChatGPT craft a criminal defense practice area page in the voice and style of Stephen King. While this would certainly make for an entertaining read, publishing it could constitute infringement, but that is admittedly a gray area right now.
This is a New Era of Digital and Copyright Law
AI companies are preemptively blaming their models’ users for any potential copyright infringement that occurs as a result of their given outputs. As the Copyright Office weighs new regulations for generative AI, they recently published a request for public comments on the new potential regulations (a standard procedure for all regulatory bodies weighing changes to existing regulations).
The public comments and replies thus far give us an understanding of how the AI companies are going to battle this in court. Notably and predictably, Microsoft, OpenAI, and Google all have something to say about this issue.
Microsoft (again, OpenAI’s largest backer with a 49% stake in the company) says that “users must take responsibility for using the tools responsibly and as designed.” The company says that AI developers have taken steps to mitigate the risk of AI tool misuse and copyright infringement.
Google quickly lays the blame on users of the technology by reiterating that generative AI can replicate content from its training but saying that this occurs through “prompt engineering.” Google’s public comments go on to say that the user who produces infringing output should be the party held responsible, not the company behind the technology.
OpenAI flatly says that infringement related to outputs from the technology "starts with the user." It says there would be no possible infringement on copyrights if not for user inputs (never mind the fact that OpenAI has copied nearly every piece of information, much of it copyright protected, to train its programs).
The Lawyers Tackling Complex AI Litigation
The beauty and beast of the dawning of the age of AI are the legal nuances that have yet to be fleshed out. As we have already reviewed, the concepts of authorship and copyright are coming under increasing scrutiny. However, there is also the element of how AI is trained to "learn" how to better answer questions or provide more specific, though not necessarily accurate, information.
In an elementary understanding, AI learns by gathering data, also known as other people's work, into an algorithm. The machine learning system can distribute the information descriptively, predictively, or prescriptively. But whose work is being used, where is the work going, and how is it being referenced?
We all remember the days of citing sources. Well, AI is far from doing that. Instead, these models are benefiting and learning from people who have worked their entire lives to create beautiful, funny, and charismatic (all the adjectives) words and images that are now being filtered through an algorithm without any mention of where this information is coming from.
From authors to comedians to legal institutes, creators are filing suits against various AI programs on the legal theory that their work is being used to train AI without proper recognition or compensation. AI is exploiting the work of artists and scholars. Lawyers like Matthew Butterick and Joseph Saveri are standing up against prominent AI companies like OpenAI and Meta. They are shaping the future world of AI by standing up for the humans who are providing the work AI needs in order to learn and adapt.
Like anything in the legal world, all this will take time. When filing a lawsuit, you must present a legal complaint that outlines, in short, plain language, the facts, their application to the elements of each offense, and the relief sought. Currently, AI programs are being sued for negligence, copyright infringement, unlawful competition, unjust enrichment, and DMCA violations. In fact, a new lawsuit was just filed by Julian Sancton, alleging that he and thousands of other writers neither consented to nor were compensated for the use of their intellectual property in the training of the AI.
As a Harvard law journal has recognized, even though the lawsuits are piling up, there is no clear end in sight. The courts will have a hard time drawing lines around AI-generated material, authorship, what counts as public sourcing, and copyright infringement, among other legal theories, such as templates versus AI-generated outlines.
Eventually, once these lawsuits make their way through the trial courts and the appeals process, some cases could land at the feet of the Supreme Court via a petition for a writ of certiorari. If (when) that happens, the Supreme Court can deny the petition, leaving the opinion of the highest court below to prevail.
Or, the Supreme Court will become the final arbiter.
How Will the Government Regulate AI?
The idea of the US government, specifically our elected officials, crafting regulations for artificial intelligence is almost laughable. This new technology is far more advanced than the internet technologies that came before it, yet we’re still living with internet laws written in the 1990s. Alas, efforts are being made.
Time has reported that Congressional inquiries into AI are still in their infancy as legislators proceed with caution. Senate Majority Leader Chuck Schumer held a closed-door meeting with the nation’s main tech leaders in September. The goal, according to legislators, is to pass bipartisan AI legislation sometime in 2024, but with the technology evolving so rapidly, that seems like a tall order.
However, Congressional urgency on this subject is clear. Senate Intelligence Committee Chairman Mark Warner (D-Va.) told reporters after the closed-door meeting, “We don’t want to do what we did with social media, which is let techies figure it out, and we’ll fix it later.”
A side note, and one that should be of concern, is that the average age of a Representative is currently 58.4 years, and the average age of a Senator is 64.3 years. These are folks who, as smart as they may be, aren’t likely fresh on the ‘newfangled’ tech.
Hopping over to the executive branch, the Biden administration has crafted a Blueprint for an AI Bill of Rights. The White House sees the potential challenges posed to democracy by the expansion of unregulated AI into our lives and admits the outcomes could be harmful, though they are not inevitable.
The Blueprint highlights the need to continue technological progress but in a way that protects civil rights and democratic values. Amongst the prongs of protection outlined in the piece is a stress on data privacy, which ties directly into our copyright discussion. Specifically, the Blueprint says that “[D]esigners, developers, and deployers of automated systems should seek your permission and respect your decisions regarding collection, use, access, transfer, and deletion of your data in appropriate ways and to the greatest extent possible…”
The Blueprint goes on to call for enhanced protections and restrictions for data and inferences drawn from sensitive domains, saying this information should only be used for necessary functions (though “necessary functions” isn’t defined, giving leeway to the tech companies to craft the definition).
The AI lobby in the Capitol is already strong, and it’s growing. Every major tech company in the US has an interest in the outcome of any regulations, legislation, executive order, and judicial decisions related to artificial intelligence.
OpenAI says it “urges the Copyright Office to proceed cautiously in calling for new legislative solutions that might prove in hindsight to be premature or misguided as the technology rapidly evolves.”
These companies and their mega lobbying power will certainly influence the outcomes of any governmental regulations or new AI legislation. This is where we take a “wait and see” approach and hope for humanity to win out.
What the American Bar Association Says
As we wrap this up, we want to examine what the American Bar Association says about AI as it moves closer into our territory as legal content providers. The latest update we posted was in May 2023, when the ABA provided a three-pronged AI resolution to ensure accuracy, transparency, and accountability. In August 2023, the ABA created a task force to study the impact of AI on the legal profession.
The task force’s objectives are to explore:
Risks of AI (bias, cybersecurity, privacy, and uses of AI such as spreading disinformation and undermining intellectual property protections) and how to mitigate them
Emerging issues related to generative AI tech
Using AI to increase access to justice
AI governance (including law and regulations, industry standards, and best practices)
AI in legal education
The ABA is responsible for maintaining this code of ethics for lawyers. As a profession (Victoria writing here), we uphold an oath to protect the law. But we also swear to tell on each other. There is no overseer of how lawyers conduct their work. Instead, we rely on the ABA to set the tone as well as our state bar code of ethics.
With the introduction of this task force, we may see an increase in lawyers submitting complaints to the state bar if they have actual knowledge that an attorney is abusing AI to conduct their work. The ABA task force reinforces the importance of using AI responsibly, if at all.
Be Smart About Using AI Content on Your Law Firm Website
The high-stakes conversations surrounding AI and copyright infringement are playing out amongst giants: companies, regulators, elected officials, and academics debating the next steps for this revolutionary technology.
We’re just on the sidelines, trying to stay out of the line of fire. We don’t yet know what AI regulation will look like. Nobody does. What we can almost certainly guarantee is that AI laws will remain in flux and likely run a few years behind wherever the technology is, especially with the rapid advancement we’re seeing right now.
As you make the decision about how to craft a content strategy for your law firm, construction company, or any other industry, we urge you to proceed with caution. A signature on a piece of new legislation, a change at the regulatory level, or a judicial decision could change everything.
Regardless of where the new era takes us, you shouldn’t take shortcuts that sacrifice the quality of your content, and you certainly shouldn’t make any moves that violate the copyrights of others. Create your content plan wisely with a team that genuinely cares about writing quality and integrity.
Written by Allen Watson and Victoria Lozano, Esq. – Founders of Blue Seven Content
The American Bar Association has responded to ChatGPT.
The increasing use of and fascination with ChatGPT are reminiscent of Wikipedia. I remember being told in school not to use Wikipedia as a source because the author cannot be trusted. We see the legal industry viewing the use of ChatGPT much the same way. Even though you input a question or demand, the answer is curated by an untrustworthy and anonymous author: an algorithm.
The ABA’s Take on AI
The American Bar Association has responded to the legal community’s concerns about the use of AI by adopting Resolution 604 at its 2023 midyear meeting. In summary, the resolution addresses how leaders should approach accountability, transparency, and traceability in artificial intelligence. The resolution has three major components:
1. Adopt Guidelines
Using AI as a tool to better accomplish tasks is one thing. Relying on it completely is another. The first prong is to ensure guidelines are created so that a human remains the authority over the AI product. For example, there may be copyright and trademark issues at hand if a software engineer uses AI to help create code for a page. Depending on how the AI was applied, the code may not be eligible for copyright protection, undermining the engineer’s ability to monetize the idea.
2. Be Accountable and Take Reasonable Steps to Mitigate Harm
The second prong of the resolution calls for entities and organizations to take accountability for their misuse of AI unless they can show that steps were taken to mitigate the harm. This will sound familiar to any employment attorney who has defended a discrimination charge by pointing to the policies and procedures the alleged victim could have used. In effect, it asks companies and businesses to be more proactive about how their employees and leaders use AI tools.
We know that algorithms can carry biases. We know that there are sets of data and codes that also carry biases. If a company is aware that this kind of activity is occurring, then it is on them to take accountability. For example, Amazon was using an AI-generated program for hiring. The algorithm for that program perpetuated a bias toward women’s applications. This resolution definitively places the blame on the companies and entities that use such programs to help with things like their hiring process.
3. Document Key Decisions Regarding Use of AI
This final prong of the resolution speaks to a developer’s use of AI when curating intellectual property, data sets, designs, coding, and so on. Using AI is inevitable, but it should be used thoughtfully, with diligent notes kept on its use. Illinois has been one of the thought leaders in implementing strict laws regarding the use of AI. For example, for any biometric data collected, a series of notices must be provided to the user about where their data will be stored, how it will be protected, and when it will be deleted. The ABA is asking for a similar standard for the use of AI.
The Reason for the Resolution
The purpose of this American Bar Association ChatGPT/AI resolution goes beyond using the tool to write an email. Instead, the ABA is considering the production of self-driving cars, medical developments in surgeries and medical devices, and autonomous systems. So how does this relate back to legal marketing? It is all about the audience.
How Resolution 604 Relates to Legal Marketing
Attorneys do not want the headache of being the target of a lawsuit or being the reason for one. Using ChatGPT to curate your law firm’s pages means inputting your firm’s data into a database and relying on an unreliable author to curate your media and content.
Importantly, that goes against the first prong of the resolution and easily bleeds into the second. How is your firm mitigating the harm of AI if it lets the marketing department use AI to curate its marketing content? It begs the question: how else is the firm using AI? What guidelines is it implementing in one department but not the others?
The legal industry is heavily reliant on reputation. If you are reputable, then you are referable. If your site does not reflect your reputation, how can you keep relying on those referrals?
It’s not a new message. We have been writing about this since the initial launch of ChatGPT. The use of AI is a delicate dance. You want to use it as a tool but not rely on it like a religious text. However, as the legal industry continues to explore the uses and implications of AI, we are starting to see more and more legal barriers.
We also have to consider how Google is reacting to the AI trend. We know that ChatGPT is limited in its scope, and certain industries are trying hard not to let details like their pricing structure fall into the ChatGPT data hole for fear of trademark issues or breach of trade secrets. Many companies are legally asking their employees not to use ChatGPT so that OpenAI does not get information like that stored in its database.
Like the old Wikipedia, ChatGPT is an unreliable author (and regurgitator of information). It can provide great outlines, but its content is repetitive, lacks accuracy, and can draw on material from biased sources, which ultimately leaves your reputation in question. The ABA’s resolution lays the groundwork for how the legal industry needs to frame the use of AI: as a tool, not as the “end all, be all.”
Blue Seven Content is Attorney Driven, Human Curated
At Blue Seven Content, we are attorney-driven and human-curated. Our writers are well equipped and experienced enough to research and include reliable sources that will build your authority on a subject and increase your site’s credibility. When a reader who needs you views your website, their instinct to trust will help turn that viewer into a client. If you have any questions about ChatGPT or your law firm’s content, reach out to us for a chat today. We are powered by attorneys who understand American Bar Association resolutions and trends with AI and ChatGPT. We’re here to help.
Written by Victoria Lozano – Attorney, Co-Founder & Consultant
The water was clear and blue, which was fitting. Blue Seven Content was born in a beach town, so traveling to another beach was the perfect way to begin the next journey for the company. This year, we decided to attend the Legal Marketing Association’s 2023 conference in Hollywood, FL.
Blue Seven is a growing company, but we (founders Victoria and Allen) made a decision soon after we started the company to grow slowly. After all, you can’t have a company that does only content if your content is less-than-stellar. Growing too fast leads to the inevitable issue of quality control. But recently, we’ve felt ready for the next steps towards growth, even if that just meant laying down the tracks for us to roll on later.
In other words, we expected to go to Florida to make good connections and possibly gain some new business.
We actually didn’t have high expectations going in. We’d been told numerous times that this conference probably wasn’t up our alley, that we’d be wasting money. To be clear, it does cost a chunk of dollars to even attend these events as a vendor, and that’s before factoring in travel, lodging, meals, and entertainment.
But we went for it, and we brought five of our team members along.
I won’t go into every detail of the trip, because the truth is that every single day was a new experience that felt like a good dream. Aside from the warm weather (and the usual Florida showers), there was a sense of accomplishment. When we started this company, we never thought we’d attend a major conference. It never crossed our minds. We’re just a “content” company, after all.
But we went in with an enthusiasm that was undeniable. We all felt it. We had some of our writers with us along with our English to Spanish translator. Our booth was perfectly placed for traffic, and we had great neighbors (shout out to the On the Map Marketing team!) We also met some great people and companies (special thanks to our new friends at Legal Growth Marketing). We’re also glad to finally get to meet Wayne Pollock in person! Those are just a few of the great connections we made.
The event was wonderful. We had a snafu with power at our booth on day one, but that got resolved quickly after some conversations with the LMA and third-party staff. My one word of caution is to read ALL of the fine print. This year, power was supplied by a third party, and it cost $757 for a single outlet. We aren’t the only ones who missed that detail, and it’s something the LMA must address moving forward, especially if it wants to attract more vendors like us.
Our booth was great, not because it was the best one there, but because it was our booth. We made it happen.
The Blue Seven Content booth at LMA23
The best part of this event is that there was no pressure to capitalize. This wasn’t a “do or die” event for us. Actually, we’re in such a good spot that no new business was a perfectly acceptable outcome.
What actually happened was that we fully expect a decent chunk of new business to come from the event. We had wonderful, genuine, and in-depth conversations with so many firms and agencies. The conversations touched on their pain points as well as how our content process differs.
Honestly, I think we’ve surprised some people. Nobody expected a company that focuses ONLY on written content to become a viable presence in this industry. But we’re here, and even with the rise of AI and ChatGPT, we’ve doubled in size (not counting any new business that’s coming).
Victoria and Allen could never have done this alone. The team members who were there and those who couldn’t make it this time are the real winners. We’ve gone from a two-person operation in 2020 to a 19-person operation (writers, editors, translators). Our team is special, and they understand the goals of Blue Seven. They are professional writers with various educational backgrounds (educators, lawyers, admin professionals). As Gen Z would say, the vibe is immaculate.
A few members who attended manning the booth!
When we had downtime, which wasn’t often, we made it to the pool, beach, or a restaurant. We spent time on our beautiful balcony (we all shared a giant Airbnb overlooking the water). We spring breaked like it was 1999 (with various new aches none of us had years ago and a considerable amount of Advil from walking and standing for 14 hours a day).
Before I wrap this up, I do want to touch on a subject that was heavy on our minds. Florida has become a negative place for those in the LGBTQ+ community, at least from a governmental and policy standpoint. That bothered us. The laws being passed in Florida are the opposite of how we approach things as a company. We almost made the decision not to attend.
BUT…I’m glad we did. Current LMA President Roy Sexton explained it best – there are people in the LGBTQ+ community who work and live in Florida, and they deserve our support. Boycotting the state from these events hurts the ones who can’t get away. In other words, let’s work on changes from within. So, we showed up and showed our support.
We offered some great giveaways at the event, including free lodging for a weekend in Surfside Beach, SC as well as two content packages! We haven’t announced the winners yet, but that’s coming soon.
That’s all, folks. We just wanted to provide a little update on what we’ve been up to, and we have some more posts related to the event that will delve deeper than an update, including conversations we had with attendees about ChatGPT.
The immaculate view from our Airbnb in Hollywood, FL
Law firm marketing is essential, and we’ve made no secret that Blue Seven Content is focused exclusively on providing the best legal content writers in the business. We do this by seeking out writers who already have proven writing skills, then providing them with our legal content writing guides and an interactive training session. Blue Seven also holds monthly writer meetings. We don’t know of a single legal marketing agency out there that does any of this.
But we also know that business partnerships are essential. Because we provide only the written content that law firms need, we’ve partnered with webLegal so law firms have even more options available to them. Formerly known as WebsLaw, webLegal provides a range of services that help ensure a law firm’s brand shines online.
Our Law Firm Marketing Partnership is Ongoing
webLegal has been around for quite a while. They have had ongoing relationships with law firms throughout the United States for the better part of a decade. Blue Seven Content founders, Allen Watson and Victoria Lozano, both started their legal writing careers at webLegal.
As Allen and Victoria moved forward on their own paths, it was always inevitable that they’d end up working with webLegal in one way or another. The partnership is stronger than ever, as both companies work hand-in-hand with one another with dozens of clients each month.
What Services Come With This Partnership?
Blue Seven provides great legal content for law firms throughout the US. On any given day, our writers are hard at work on the written content law firms need.
webLegal can bring you the whole package when it comes to helping law firms land prospective leads that turn into quality clients. webLegal focuses on:
Creative development
Law firm digital strategy
Organic digital marketing
Local SEO
Paid digital solutions
PPC for law firms
Google Ads for law firms
Law firm video production
When a law firm needs to ramp up its digital marketing, a call to webLegal and Blue Seven Content will completely change the ballgame.
Contact Our Teams to Get The Law Firm Marketing Results You Need
If you are wondering what your next steps should be to get your law firm’s online presence rolling in the right direction, Blue Seven Content and webLegal are ready to help. You can reach out to either of our teams to get started.
You can contact Blue Seven for a free consultation by clicking here or calling us at 843-580-3158. When you contact Blue Seven, you will be connected directly with the company’s founders.
Whether you only need written content for your existing webpage or are looking to start or revamp your entire law firm website, you can count on the Blue Seven Content and webLegal partnership for help.
Written by Allen Watson – CEO and Co-Founder of Blue Seven Content