Google's NEW(?) statement on AI-generated content

After months of vague Q&A responses and tweets, it looks like Google has finally settled on a position regarding AI-generated content…

Just yesterday (Feb 8) they posted a statement to their Search Central Blog with some guidance.

Here’s what I think is the answer many were waiting to hear, taken from the FAQ:

“Is AI content against Google Search’s guidelines?”

“Appropriate use of AI or automation is not against our guidelines. This means that it is not used to generate content primarily to manipulate search rankings, which is against our spam policies.”

One interesting (perhaps humorous) aspect of all this has been Google constantly implying that their position has been clear and consistent this whole time, when obviously that wasn’t the case.

Anyway - the post also contains some interesting information about potential citations or bylines for AI. While Google “strongly encourages adding accurate authorship information to content where readers might expect it”, they don’t recommend author bylines for AI.

I recommend reading the full FAQ section on the post, because I think it contains some useful little tidbits and raises some interesting questions…

Like: what are the “reasonable expectations” for disclosing the role of AI in writing a piece of content? What kind of involvement counts, and how much do people want to know?

There’s been a NEW new update on Google’s policy/guidelines:

On February 8, Google added a post to their Search Central blog, outlining their position on AI-generated content.

To quote the key takeaways from our own extensive blog post about this ongoing saga:

Here are some of the key points regarding Google’s official stance on AI content:

  • The use of AI or automation is not against guidelines if that use is “appropriate”
  • Inappropriate use is that which violates spam policies or is intended primarily to manipulate rankings
  • AI content will be subject to the same standards as any other content
  • Google will continue to rely on systems like SpamBrain to detect spam, whether created by humans or AI

Of course, Google tends to play a double game with these sorts of announcements. It tries to convey useful, satisfying information about how its algorithm works without actually telling us how it works, while also nudging content creators toward whatever directly benefits Google. For example, even if Google had no way to detect spam, it would still have a strong incentive to say “you shouldn’t post spam” to protect the value of its own product.

So this announcement still leaves a lot of questions open. Like: what factors within our control convey “primary intent,” and what can Google actually pick up on?

It’s good to hear that the use of AI or automation is not against guidelines, provided it is used appropriately and does not violate spam policies or aim primarily at manipulating rankings. That should give content creators more confidence that they can use AI-generated content without running afoul of Google’s policies.

Still, given Google’s double game with these announcements, much remains unclear about how AI-generated content will actually be evaluated. Content creators should stay vigilant: make sure your use of AI is appropriate, follows Google’s guidelines, and produces high-quality content that provides real value to readers.