
Information and content you can trust.

At Google, we aim to balance delivering information with protecting users and society. We take this responsibility seriously. Our goal is to provide access to trustworthy information and content by protecting users from harm, delivering reliable information, and partnering with experts and organizations to create a safer internet.

Protecting you from harm.

We keep you and society at large safe with advanced protections that not only prevent, but also detect and respond to harmful and illegal content.


Preventing abuse

To keep people safe from abusive content, we use protections powered by machine learning. Gmail automatically blocks nearly 10 million spam emails from inboxes every minute, and Search has tools to prevent Autocomplete from suggesting potentially harmful queries. Automatic detection helps YouTube remove harmful content efficiently, effectively, and at scale — in Q2 of 2023, 93% of policy-violative videos removed from YouTube were first detected automatically. We also implement safety guardrails in our generative AI tools to minimize the risk of their being used to create harmful content.

In addition, each of our products is governed by a set of policies that outlines acceptable and unacceptable content and behaviors. Our policies are continuously honed and updated to address emerging risks. When it comes to our work in AI, we also rely on our AI Principles to guide product development and help us test and evaluate every AI application before it launches.


Detecting harmful content

As the tactics of bad actors evolve, we must work even harder to detect harmful content that reaches our products. AI is helping us scale abuse detection across our platforms. AI-powered classifiers quickly flag potentially harmful content for removal or escalation to a human reviewer. In 2022, automated enforcement helped us detect and block over 51.2 million ads containing hate speech, violence, or harmful health claims. Additionally, Large Language Models, a breakthrough type of AI, show promise in dramatically reducing the time it takes to detect and evaluate harmful material, especially for new and emerging risks.

We also work with outside organizations that flag content they think may be harmful. Both Google and YouTube take feedback from hundreds of Priority Flaggers, organizations around the world with cultural and subject matter expertise who escalate content to us for review.


Responding appropriately

We rely on both people and AI-driven technology to evaluate potential policy violations and respond appropriately to flagged content. When a piece of content violates our policies, we can restrict, remove, or demonetize it, or take account-level actions to reduce future abuse.

In 2022, Google Maps blocked or removed over 300 million pieces of fake content, 115 million policy-violating reviews, and 20 million attempts to create fake Business Profiles. In Q2 of 2023, YouTube removed over 14 million channels and 7 million videos for violating our Community Guidelines.

To evaluate context and nuance while reducing the risk of over-removal, we rely on roughly 20,000 expertly trained reviewers who work in a variety of roles to enforce our policies, moderate content, and evaluate flagged content across Google’s products and services.

If a creator or publisher feels we’ve made a wrong call, they have the ability to appeal our decisions.

Delivering reliable information.

We build confidence in the information and content on our platforms by delivering reliable information and best-in-class tools that put you in control of evaluating content.

How we organize information

Intelligent algorithms

Our constantly updated algorithms are at the heart of everything we do, from products like Google Maps to Search results. These algorithms use advanced Large Language Models and signals like keywords and the freshness of websites and content so that you can find the most relevant, useful results. For example, YouTube prominently surfaces high-quality content from authoritative sources in its search results, recommendations, and info panels to help people find timely, accurate, and helpful news and information.

Tools to help you evaluate content

We created a number of features to help you understand and evaluate the content that our algorithms and generative AI tools have surfaced, giving you more context around what you’re seeing online.

Managing Content Responsibly on YouTube

YouTube is committed to fostering a responsible platform that the viewers, creators, and advertisers who make up our community can rely on.
Learn more about our approach.

Partnering to create a safer internet.

We proactively collaborate with experts and organizations, sharing our knowledge, resources, and technologies.


Exchanging knowledge to keep you safe

We partner with experts from civil society, academia, and governments to tackle global issues like misinformation, ad safety, election integrity, AI in content moderation, and combating online child exploitation. We also publish research findings and release datasets to academics to further progress in this field.

At YouTube, we regularly consult with our independent Youth and Families Advisory Committee on product and policy updates, including our Youth Principles, as well as a series of product updates centered on teen mental health and wellbeing.


Working with experts to fight illegal content

We also work with partners to uncover and share clear-cut abusive content signals so that such content can be removed from the wider ecosystem. We share millions of CSAM hashes with the US National Center for Missing and Exploited Children every year. We also participate in Project Lantern, a program that enables technology companies to share signals in a secure and responsible way. Additionally, YouTube co-founded the Global Internet Forum to Counter Terrorism (GIFCT), which brings together the technology industry, government, civil society, and academia to counter terrorist and violent extremist activity online.


Supporting organizations dedicated to safety

We support organizations around the world dedicated to online safety and media literacy through robust programs offering training and materials, such as Be Internet Awesome, YouTube’s Hit Pause, and the Google News Lab. Additionally, Google and YouTube announced a $13.2 million grant to the International Fact-Checking Network (IFCN) to support its network of 135 fact-checking organizations. Altogether, our collaborations have equipped over 550,000 journalists with digital verification skills, and we’ve trained another 2.6 million people online.

Actively sharing safety technology

We share Application Programming Interfaces (APIs) that help other organizations protect their platforms and users from harmful content.
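One publicly documented example of such an API is the Perspective API from Jigsaw, a unit of Google, which lets other platforms score comments for attributes like toxicity. The sketch below is an illustration rather than an official sample: it only builds the JSON request body a caller would POST to the `comments:analyze` endpoint, and the API key shown is a placeholder.

```python
import json

# Endpoint shape follows the public Perspective API documentation;
# "YOUR_API_KEY" is a placeholder to be replaced with a real key.
ANALYZE_URL = (
    "https://commentanalyzer.googleapis.com/v1alpha1/"
    "comments:analyze?key=YOUR_API_KEY"
)

def build_analyze_request(text: str) -> dict:
    """Build the JSON body for a Perspective comments:analyze call."""
    return {
        "comment": {"text": text},
        "languages": ["en"],                      # hint the comment language
        "requestedAttributes": {"TOXICITY": {}},  # request a toxicity score
    }

if __name__ == "__main__":
    body = build_analyze_request("What a thoughtful reply!")
    # POSTing this JSON to ANALYZE_URL returns per-attribute scores
    # that a platform can use in its own moderation pipeline.
    print(json.dumps(body, indent=2))
```

A platform would typically compare the returned toxicity score against its own threshold before deciding whether to hold a comment for human review.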



Taking on content responsibility in Dublin

Our Google Safety Engineering Center for Content Responsibility in Dublin is a regional hub for Google experts working to tackle the spread of illegal and harmful content, and a place where we can share this work with policymakers, researchers, and regulators. Our network of Google Safety Engineering Centers gives our teams the space, inspiration, and support to develop next-generation solutions that help improve safety online.

A helpful, safer internet experience — by design.

Never has our work to provide trustworthy information and content mattered more. As content moderation challenges evolve, we’ll continue to invest in developing and improving policies, products, and processes that provide you with peace of mind and build a safer online experience for all.

Explore how Google helps keep everyone safe online.