Chances are, you’ve already used A.I.-powered tools like ChatGPT or Bard, perhaps even creating content with their help. Google has generally taken the stance that content produced by artificial intelligence (AI) is acceptable as long as it serves people and doesn’t manipulate search results.
However, there are indications that this stance is already starting to shift.
Insights from Google’s Merchant Center Policy
Google recently introduced a revision to its Merchant Center policy, which reads as follows:
Automated Content: We don’t allow reviews that are primarily generated by an automated program or artificial intelligence application. If you have identified such content, it should be marked as spam in your feed using the <is_spam> attribute. — Google’s Merchant Center Policy
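As a rough sketch, an entry in a product review feed flagged this way might look like the following. The surrounding element names follow Google’s product review feed schema, but the identifiers and review text here are hypothetical, purely for illustration:

```xml
<review>
  <!-- Hypothetical ID and content, for illustration only -->
  <review_id>12345</review_id>
  <content>Great product, works exactly as described.</content>
  <!-- Per the policy above: mark reviews identified as
       AI-generated or otherwise automated as spam -->
  <is_spam>true</is_spam>
</review>
```

In other words, the burden is on the merchant to detect automated reviews and flag them in the feed they submit, rather than publish them as if they were genuine.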
In simpler terms, content originating from A.I. for the purpose of reviews is now classified as spam. The rationale behind this shift is clear – Google intends for reviews to be authentic and informative. Genuine reviews provide potential customers with valuable insights, helping them determine whether to engage with a specific company or purchase a product.
An A.I.-produced review lacks the nuance and context that a human-authored review provides.
An Expanding Perspective: A.I.'s Limitations
While A.I. holds remarkable potential, it is not without its limitations. It’s important to remember that its output relies on the input it receives.
The input data typically encompasses a wide range of information from the internet, and it’s widely recognized that a substantial portion of online content is inaccurate. Consequently, A.I. can produce flawed outputs based on this misinformation.
Moreover, improvement over time is not guaranteed. An illustrative example is a Stanford study that observed ChatGPT’s accuracy on simple math questions decline from 98% to just 2% over time.
Inaccuracy also presents itself in domains like healthcare. When 284 medical questions were posed to ChatGPT by physicians across 17 specialties, it demonstrated an accuracy rate of 92%. While this might constitute a strong grade in an academic setting, the potential consequences of incorrect medical advice are significantly graver.
The Anticipated Course of Action for Google
While search engines generally don’t take issue with A.I.-generated content, they will likely seek to regulate its applications, particularly in relation to search results.
For instance, it’s unlikely that Google would favor A.I.-generated content when it comes to matters of finance, healthcare, or other areas where misinformation could have detrimental consequences.
Similar to the stance on A.I.-crafted reviews, Google will likely aim to limit the use of A.I.-generated content that might prove harmful to individuals.
Conversely, less critical topics might receive more leniency. Google might not be overly concerned if A.I.-generated content about “how to tie a tie” contains minor inaccuracies. The worst outcome in such a scenario might be a slightly crooked tie.
That’s where link building becomes a powerful tool in bridging this gap. By acquiring links from reputable websites, your content gains a level of validation that automated content might struggle to achieve.
If you’re creating content using A.I., it’s advisable to subject it to human review. This ensures accuracy and enhances the value of the content for others.
A discernible trend is emerging, with companies increasingly leveraging A.I. technology for content creation. In the long run, however, algorithms are likely to favor human-authored content precisely because it will become rarer.
Human input remains integral to A.I.’s effectiveness, with human-authored content being the optimal form of input. Consequently, algorithms are expected to prioritize content generated by humans. Even if A.I. is utilized in the initial writing process, human oversight and modification remain essential.