At the Google Search Central Live Tokyo 2023 event, Google’s Gary Illyes and other industry leaders addressed a wide range of questions surrounding artificial intelligence (AI) at Google. The event, presented by Japanese search marketing expert Kenichi Suzuki, provided new insights into Google’s stance on AI-generated content. Suzuki later published a detailed summary of the event on his blog in Japanese.
Does Google Distinguish AI-Generated Content?
An intriguing question raised during the session was whether Google distinguishes between AI-generated and human-authored content. Gary Illyes confirmed that Google does not categorize or label content based on its source.
This implies that Google’s search algorithms treat AI-generated content in the same manner as human-authored content. When analyzing and indexing content for search results, Google’s primary concerns are the content’s value to the user, its relevance to the search query, and its overall quality. These factors, rather than the method by which the content was produced, form the core of Google’s evaluation process.
Should Content Publishers Label AI-Generated Content?
In an attempt to counter misinformation and fake news, the European Union (EU) has proposed that social media companies voluntarily label AI-generated content. Google currently encourages (but does not mandate) publishers to tag AI-generated images with IPTC metadata, in anticipation that AI image-generation companies will soon add this metadata automatically.
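For publishers who want to act on this recommendation today, the IPTC’s Digital Source Type vocabulary defines the term “trainedAlgorithmicMedia” for fully AI-generated images. Below is a minimal sketch of one way to embed that property, assuming the widely used ExifTool command-line utility is installed; the image file name is a placeholder.

```python
import subprocess

# IPTC NewsCodes term identifying media created entirely by a trained AI model.
AI_SOURCE_TYPE = (
    "http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"
)

def tag_ai_generated(image_path: str) -> None:
    """Embed the IPTC Digital Source Type property marking an image as
    AI-generated. Assumes the exiftool CLI is installed and on the PATH."""
    subprocess.run(
        [
            "exiftool",
            # XMP-iptcExt is ExifTool's name for the IPTC Extension schema.
            f"-XMP-iptcExt:DigitalSourceType={AI_SOURCE_TYPE}",
            "-overwrite_original",  # write in place instead of keeping a backup copy
            image_path,
        ],
        check=True,
    )

if __name__ == "__main__":
    tag_ai_generated("hero-image.jpg")  # hypothetical file name
```

Because the property is written into the image file’s embedded XMP metadata, the label travels with the image and stays machine-readable even when the file is republished elsewhere.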
When it comes to text-based content, however, there is no explicit requirement to label it as AI-generated. Suzuki reiterated that, according to Google, publishers are not obligated to label AI content. Google entrusts publishers with the discretion to determine if labeling enhances the user experience.
Human Oversight for AI Content
Google strongly advises against publishing AI-generated content without prior review by a human editor. This recommendation extends to translated content as well, underscoring the importance of human intervention in maintaining content quality.
AI might not always grasp the subtle connotations of certain words, the cultural context, or the emotional nuances that can significantly affect how an audience receives content. Having a human editor review AI-generated content therefore helps rectify inaccuracies, correct inappropriate word choices, and preserve the overall sense and flow of the text.
Google’s Ranking of Natural Content
Google’s algorithm is designed to prioritize and highlight content that is not just written by humans but also crafted with human readers in mind. This preference stems from Google’s primary aim of providing the most useful, relevant, and high-quality content to its users.
The machine learning (ML) algorithms employed by Google are built to approximate human understanding. They can detect and appreciate the natural flow, context, and richness of human-written content. Consequently, such content often resonates better with the algorithms and is more likely to rank well in search results.
This aspect emphasizes the importance of maintaining a human touch in content creation, even in an era increasingly dominated by artificial intelligence. While AI can undoubtedly assist in the content generation process, it is the human-like aspect of the content that often holds the key to better visibility and higher rankings in Google’s search engine results.
AI Content and the E-E-A-T Principle
Experience, Expertise, Authoritativeness, and Trustworthiness (E-E-A-T) is a principle highlighted in Google’s search quality rater guidelines. It suggests that a content author should demonstrate substantial experience with the topic at hand. An AI, at present, cannot claim such expertise or experience.
This raises the question of how AI-generated content can satisfy E-E-A-T requirements, particularly for topics that demand substantial first-hand experience. Google representatives said that internal discussions on this matter are ongoing, and the company will release a policy once a decision has been reached.
The Evolving Landscape of AI Policies
The accelerated advancement and widespread integration of AI into various sectors have led us into an era of significant transition. This change, however, brings with it a degree of uncertainty, particularly concerning the trustworthiness and reliability of AI. Given the nascent stage of AI technology, there are still unknowns and potential pitfalls that fuel apprehensions.
Some mainstream media companies were quick to embrace the potential of AI in content generation, seeing it as a revolutionary tool for enhancing productivity and streamlining workflows. However, the reality of AI’s limitations, coupled with the challenge of ensuring consistent quality and a distinctly human touch in AI-generated content, has prompted a reevaluation of these early adoption strategies.
These organizations are now pumping the brakes on their AI endeavors, opting instead to reassess their approach. They are investing time and resources into understanding the nuanced implications of AI-generated content, and determining how to best leverage AI without compromising content quality and integrity.
The goal is to strike a balance in which AI serves as a valuable ally in content creation, but not at the expense of human ingenuity and judgment. This pause is indicative of the broader industry-wide recalibration underway in the face of the ever-evolving AI landscape.
Prioritize Quality Content
Generative AI technologies such as ChatGPT and Bard were not explicitly trained to generate content. Consequently, Google continues to recommend that publishers prioritize content quality, irrespective of its source. The landscape of AI policies is still evolving, and stakeholders must remain adaptive and vigilant to navigate the transition successfully.