Google Clamps Down on Company Research into ‘Sensitive Topics’

According to internal communications and interviews obtained by Reuters, Google keeps a tight grip on the research carried out by its employees.

A new review process for “sensitive topics” has, in at least three cases, “asked the authors not to cast their technology in a negative light.” News of Google’s new policy came after Reuters obtained internal Google communications and spoke with researchers involved in the work. The review policy lists at least 13 “sensitive topics,” including bias, China, COVID-19 and Israel.

This summer, Google executives intervened in the late stages of a research project on content recommendation technology. A senior Google executive who reviewed the study urged its authors to “take great care to strike a positive tone,” according to internal Google correspondence. Subsequent discussion between the researchers and reviewers indicated that the authors “were updated to remove all references to Google products,” Reuters reported.

Reuters reviewed an early draft of the study, which mentioned Google-owned YouTube. The draft raised “concerns” that content recommendation and personalization technologies can promote “disinformation, discriminatory or otherwise unfair outcomes” and “insufficient diversity of content,” as well as lead to “political polarization.”

The final published version instead said that these systems can promote “accurate information, fairness and diversity of content,” Reuters reported. Nor did the published version credit Google researchers.

Google has been quick to part ways with controversial employees, as recently as last week. James Damore, a software engineer at Google, was fired in 2017 for writing a memo that raised questions about Google’s gender diversity efforts. In it, he suggested “that at least some of the differences between men and women in technology are due to biological differences,” as Damore himself put it. “[A]nd yes, I said that bias against women is also a factor,” he added.

Under Google’s new review policy, researchers must seek approval from the company’s legal, policy and public relations teams before initiating projects on topics such as “facial and sentiment analysis and categorization of race, gender or political affiliation.”

According to Reuters, scientific studies have argued that “facial analysis software and other AI can perpetuate prejudice or undermine privacy.” These are two of the “sensitive topics” Google would rather shy away from when weighing how best to deploy its technology.

The report also indicated that four employees believe Google has begun to interfere with “critical studies of potential technological harms.” Jeff Dean, senior vice president of Google Research, countered that “more than 1,000 projects become published articles each year,” including “more than 200 publications that focused on responsible AI development in the last year alone.”

Conservatives are under attack. Contact Google at (650) 253-000, or write to 1600 Amphitheatre Parkway, Mountain View, CA 94043, and demand transparency on the platform: companies need to design open systems so that they can be held accountable, while giving weight to privacy concerns. If you’ve been censored, contact us via the Media Research Center contact form and help us hold Big Tech accountable.
