Hi all, Gerrit here. Last week, a battle erupted at Google, spilling out onto social media and finding its way into dozens of news articles and blog posts. Timnit Gebru, a prominent artificial intelligence researcher best known for showing that facial recognition algorithms are better at identifying white faces than black and brown ones, said on Twitter that she’d been fired from the company after a dispute over an academic paper she co-authored.
Gebru, one of the few high-profile black women in her field, was co-head of Google’s Ethical AI team—a group of researchers who study the downsides of AI and how to make sure the tech is implemented without hurting people. The exact reasons behind Gebru’s ouster are being publicly contested right now. Google’s head of AI, Jeff Dean, told employees in an email he later posted online that Gebru’s paper left out key information and that she didn’t follow the company’s process for vetting work to be published externally. Gebru and her supporters—who include hundreds of current and former employees of Google parent Alphabet Inc.—say the real reason for her dismissal was her outspoken criticism of Google’s progress on improving conditions for people of color at the company.
Gebru is not the first employee to depart after calling Google out. Other internal activists have quit or been let go after getting into public fights with the company, and in August Gebru spoke out about being criticized by co-workers and managers for her activism.
But what about the paper at the center of the conflict? Gebru and her co-authors, several of whom are also Google employees, argue that there need to be stronger safeguards for AI and more contemplation of the potential risks of unleashing the tech on the world before companies and scientists charge ahead.
Specifically, Gebru’s paper warns that large language models, the algorithms that scoop up billions of sentences from the internet and use them to “learn” how to speak and write like humans, need more guardrails. You may have interacted with these algorithms already—examples of how eerily good they are at doing everything from imitating the speech patterns of specific people to writing poems and computer code have skittered across the web in recent months. Large language models are the latest darlings of the AI community, and Google was behind much of the research that allowed them to exist in the first place.
Gebru and her co-authors point out that by training on massive collections of English text from the web, the algorithms likely take on whatever societal racism, sexism and other biases are reflected in the writing. They acknowledge efforts to root out those biases, but add that because even their creators don’t fully understand how these algorithms work, it’s impossible to guarantee the biases have been scrubbed out. The paper also says the environmental cost of running huge, electricity-hungry data centers to train the algorithms should be more seriously considered.
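To make that concrete, here’s a minimal sketch of the kind of probing researchers use to surface those learned associations. It assumes the open-source Hugging Face transformers library and the publicly released bert-base-uncased model, and it illustrates the general technique, not the method from Gebru’s paper.

    # A minimal sketch, assuming the Hugging Face "transformers" library
    # and the public "bert-base-uncased" checkpoint (not Gebru's method).
    from transformers import pipeline

    # BERT was pretrained by filling in blanked-out words in web and book text.
    unmasker = pipeline("fill-mask", model="bert-base-uncased")

    # Asking the model to fill in the blank exposes associations it absorbed
    # from its training data, including stereotyped ones.
    for template in (
        "The doctor said [MASK] would be back soon.",
        "The nurse said [MASK] would be back soon.",
    ):
        print(template)
        for guess in unmasker(template, top_k=3):
            print(f"  {guess['token_str']}: {guess['score']:.3f}")

Comparing the model’s top guesses for near-identical sentences is one simple way to see whether it leans on gendered or racial stereotypes it picked up from the web.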
By questioning the tech, Gebru and her co-authors are suggesting one of Google’s most exciting inventions is flawed and might need to be reined in. The implications wouldn’t only matter to Google’s research division. Its own large language model, BERT, is now used to parse almost every Google search done in English. When the company first started using BERT in its search engine last year, an executive called it a “huge step forward” for search results.
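For a rough sense of what “parsing” a query with BERT can mean, the sketch below runs an example query through the same public BERT checkpoint. Google’s production search stack is proprietary, so this only illustrates the general idea: every word is turned into a vector that depends on the whole query.

    # A minimal sketch, assuming the Hugging Face "transformers" library;
    # Google's production use of BERT in search is proprietary.
    import torch
    from transformers import AutoModel, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    model = AutoModel.from_pretrained("bert-base-uncased")

    # An example query where a small word ("to") carries the meaning:
    # traveling to the U.S., not from it.
    query = "brazil traveler to usa need a visa"
    inputs = tokenizer(query, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)

    # Each token gets a 768-dimensional vector shaped by the whole query,
    # so words like "to" inform the result instead of being discarded.
    print(outputs.last_hidden_state.shape)  # e.g. torch.Size([1, num_tokens, 768])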
Google is extremely self-conscious about its image. Even though it dropped its “don’t be evil” motto years ago, the sentiment still matters to many employees, who generally see Google as providing the world with helpful and accessible tools that make billions of people’s lives better. AI is also a core part of the company’s identity, and its scientists and salespeople alike see the technology as essential to its future.
It’s also indisputable that Google and Alphabet’s AI work has led to big benefits for society, from language translation to breakthroughs in medical science. But while belief in the promise of technology is core to Google’s identity, Gebru’s research paper is much less optimistic, arguing that there are problems big enough to warrant slowing down or reconsidering the way AI is developed. Even if the paper isn’t what ultimately got Gebru fired, it’s easy to see why that perspective might cause some consternation within the company’s leadership. The resulting episode provides a window into the still-roiling debate at Google, and in society as a whole, over the promise and perils of technology. It’s not likely to calm down any time soon. —Gerrit De Vynck