The University of Washington computer science department has denounced online comments by a retired professor amid a debate over AI ethics, Timnit Gebru’s controversial exit from Google, so-called “cancel culture,” and more.
A heated back-and-forth involving longtime AI researcher Pedro Domingos and the response from the UW demonstrates the complexity of public discourse on controversial topics. It also highlights unanswered questions related to the societal implications of artificial intelligence, and is the latest example of the backlash that can occur when politics collides with academia and the tech industry.
Domingos, who joined the UW faculty in 1999 and is the author of The Master Algorithm, sparked the initial discussion on Twitter after he questioned why the Neural Information Processing Systems (NeurIPS) conference was using ethics reviews for submitted papers.
“It’s alarming that NeurIPS papers are being rejected based on ‘ethics reviews,’” he tweeted last week. “How do we guard against ideological biases in such reviews? Since when are scientific conferences in the business of policing the perceived ethics of technical papers?”
His opinion drew responses from other prominent AI researchers and from people involved with NeurIPS.
“The problem here is that folks like him lack the humility to admit that they do not have skills in qualitative work and dismiss it all as a ‘slippery slope,’” tweeted Rumman Chowdhury, founder of Parity and former global lead for Responsible AI at Accenture Applied Intelligence. “Qualitative methods have rigor. Ethical assessment can be generalizable and sustainable.”
Hi Pedro, I helped create the NeurIPS ethical review process. Looks like there’s a healthy discussion going on here already, but let me know if I can answer any specific questions. Up front, I should say that the ethical reviewers gave feedback; they did not accept/reject papers.
— raia hadsell (@RaiaHadsell) December 8, 2020
The discourse on Twitter then shifted to last year’s decision to rename the conference. The previous name, NIPS, had drawn concerns because the acronym doubles as a racial slur and carries sexist connotations.
That set off a long exchange between Domingos and Anima Anandkumar, a professor at Caltech and director of machine learning research at NVIDIA who led a petition to change the name of the conference. Pornography came up in a discussion about web search results for the term “nips,” sparking a response from Katherine Heller, chair of diversity and inclusion for NeurIPS 2020, and Ken Anderson, chair of the University of Colorado’s computer science department.
So you get porn sites and I don’t? Must be Google’s personalization algorithm.
— Pedro Domingos (@pmddomingos) December 11, 2020
Hi! This was flagged to me as an inappropriate conversation, that I will ask you to stop. Porn sites were associated with the old name for years and having that denied further hurts members of our community. We have now moved on. Thanks.
— Katherine Heller (@kat_heller) December 12, 2020
As a professor and chair of a department of computer science at a public university, I find this behavior unacceptable, as would many of my colleagues. CS departments must continue our work of broadening participation in computing to be united in opposing this behavior.
— Ken Anderson (@kenbod) December 12, 2020
As of Tuesday, Anandkumar’s Twitter was no longer active. She declined to comment for this story. Update: Anandkumar posted a public apology on her blog Wednesday. She also said she deactivated her Twitter account “in the interest of my safety and to reduce anxiety for my loved ones.”
NeurIPS posted a statement on ethics, fairness, inclusivity and code of conduct on its homepage. We’ve reached out to the conference for comment.
“Having observed recent discussions taking place across social media, we feel the need to reiterate that, as a community, we must be mindful of the impact that statements and actions have on our peers, and future generations of AI / ML students and researchers,” it reads. “It is incumbent upon NeurIPS and the AI / ML community as a whole to foster a collaborative, welcoming environment for all. Therefore, statements and actions contrary to the NeurIPS mission and its Code of Conduct cannot and will not be tolerated.”
The Twitter chatter also delved into the recent departure of Gebru, a top AI ethics researcher at Google, and whether she was fired by the company or resigned following a controversy related to an AI ethics paper. Domingos tweeted that Gebru “was creating a toxic environment within Google AI” and said that she was not fired, despite Gebru stating otherwise.
I had read it, and I am interested in facts. You, on the other hand, seem to be more interested in insulting people, which is perfect for an ethics researcher.
— Pedro Domingos (@pmddomingos) December 11, 2020
When the person insulting people accuses the people he’s currently insulting of insulting people….a lot of people have been learning about the phrase gaslighting recently and you continue to further educate us on it.
— Timnit Gebru (@timnitGebru) December 11, 2020
Heller then tweeted at Domingos and said he was violating the NeurIPS code of conduct.
Later that evening, the UW’s Allen School of Computer Science and Engineering issued a lengthy statement via Twitter. The school’s leadership took issue with Domingos “engaging in a Twitter flame war belittling individuals and downplaying valid concerns over ethics in AI,” and for his use of the word “deranged.” Here’s the statement in full:
#UWAllen leadership is aware of recent “discussions” involving Pedro Domingos, a professor emeritus (retired) in our school. We do not condone a member of our community engaging in a Twitter flame war belittling individuals and downplaying valid concerns over ethics in AI. We object to his dismissal of concerns over the use of technology to further marginalize groups ill-served by tech. While potential for harm does not necessarily negate the value of a given line of research, none of us should be absolved from considering that impact. And while we may disagree about approaches to countering such potential harm, we should be supportive of trying different methods to do so.
We also object in the strongest possible terms to the use of labels like “deranged.” Such language is unacceptable. We urge all members of our community to always express their points of views in the most respectful and collegial manner.
We do encourage our scholars to engage vigorously on matters of AI ethics, diversity in tech and industry-research relations. All are crucial to our field and our world. But we are all too familiar with counterproductive, inflammatory, and escalating social-media arguments.
We have asked Pedro to make clear he tweets as an individual, not representing the Allen School or the University of Washington. We would further argue that this whole mode of discourse is damaging and unbecoming.
The Allen School is committed to addressing AI ethics and equity in concrete ways. That work is ongoing, and many of our activities are listed on our website.
One key component is to expand the inclusion of ethics in our curriculum and prepare students to consider the very real impact that technology can have, especially on marginalized communities.
In recent years, we have added multiple classes on this topic at both the graduate and undergraduate levels, and we plan to continue to work toward expanding that aspect of our curriculum.
As a school, we have stated our commitment to be more inclusive and to consider the impact of our work on people and communities. We will not be deterred, by naysayers inside or outside of our community, from putting in the hard work required to achieve those aims.
Members of the Allen School Leadership
Magdalena Balazinska, Prof. and Director
Dan Grossman, Prof. and Vice Director
Tadayoshi Kohno, Prof. and Associate Director for Diversity, Equity & Inclusion
Ed Lazowska, Prof. and Associate Director for Development & Outreach
Domingos described the school’s response as “cowering before the Twitter mob.”
A heartfelt thanks to everyone who has expressed their support by tweet, email and voice. My department’s cowering before the Twitter mob was as craven and blinkered as you’d expect, but it’s heartening to see so many people who can still think. Keep up the fight!
— Pedro Domingos (@pmddomingos) December 13, 2020
We followed up with Magdalena Balazinska, a well-regarded researcher and educator who took over as the Allen School director last year. Here’s what she had to say about the matter:
“As leader of the Allen School, one of my highest priorities is to promote a culture and an environment that is diverse, equitable, and inclusive. I also deeply care about an environment in which people discuss issues, even potentially controversial ones, openly, with empathy, and without bullying. Witnessing what happened on Twitter this past week was disheartening. We need to find ways to come together. The entire tech industry should work toward all these goals, and we have much work to do.”
Ed Lazowska, a longtime leader at the Allen School, said the department is committed to academic freedom and freedom of speech.
“We encourage good-faith dialogue, including on controversial issues,” he said. “But we expect members of our community to engage in that dialogue in a respectful, collegial, and constructive manner that is free from personal attacks and is not dismissive of people’s lived experiences. Pedro failed to live up to those standards and we felt compelled to make clear where we stand.”
Lazowska added: “Pedro is within his rights to tweet. We felt it was important to distance the school from his views.”
In an email exchange with GeekWire, Domingos said the Allen School should have “stood by my right to voice my opinions, and back me up in my efforts to free the machine learning community from the miasma descending on it.”
“Instead, they chose to pay their obeisance to the ultra-left crowd, as they have before,” Domingos said, referencing Stuart Reges, another UW computer science professor who was criticized for his 2018 essay that claimed women are underrepresented in software engineering because of personal preference, not because institutional barriers deter them from pursuing careers in tech.
Reges told GeekWire he was disappointed that the Allen School “has thrown Pedro under the bus.”
“He has raised significant questions about the activism surrounding Timnit Gebru’s termination from Google and new efforts to inject ethics reviews into all aspects of AI research,” said Reges. “The greatest sin he has committed has been to refer to ‘deranged activists.’ The unified mob reaction to try to cancel him proves that his opponents and the Allen School leadership are not willing to engage in meaningful dialog to explore the issues.”
Domingos said the Twitter spat highlights how the machine learning community is being “progressively strangled by political correctness and extreme left-wing politics.”
“The larger problem is that academia and the tech industry, not just machine learning, are being strangled by a crowd that refuses to allow the free exchange of ideas on which research depends, and is successfully imposing an increasingly far-left orthodoxy,” he told GeekWire. “People live in fear of their attacks.”
If you’ve been targeted by the cancel crowd, don’t hide in shame. Shout it from the rooftops. Bring shame and opprobrium on them. That’s how we end this.
— Pedro Domingos (@pmddomingos) December 16, 2020
Daniel Lowd, an associate professor at the University of Oregon who earned his PhD from the UW in 2010, publicly distanced himself from the comments of Domingos, his PhD advisor and collaborator.
I would like to publicly disavow and distance myself from these comments by my PhD advisor and collaborator.
I have worked with Pedro on a number of projects, and I respect his insight in some areas, but his rhetoric here is both false and harmful. https://t.co/Lk3f0F0s6S
— Daniel Lowd (@dlowd) December 11, 2020
I’m sad, too, Pedro. I thought I had a colleague who respected people with different experiences and viewpoints, who listened to evidence and considered when he might be wrong, who argued in good faith. And I was wrong.
— Daniel Lowd (@dlowd) December 15, 2020
I sympathize, and now I understand better where you’re coming from. Of course I respect their humanity. But – crucial point – that doesn’t justify the cancel culture.
— Pedro Domingos (@pmddomingos) December 15, 2020
The reaction to Domingos’ original tweet about ethics reviews of AI papers also reflects the pressing dilemmas of AI ethics as the technology permeates everyday life.
Considering the ethical impact of AI research is “absolutely essential,” said Oren Etzioni, a UW computer science professor emeritus (retired) who is now CEO of Seattle’s Allen Institute for Artificial Intelligence.
“That said, it’s hard to argue with Pedro’s observations about online attacks and the refusal to allow the free exchange of ideas,” said Etzioni, who noted that he was speaking to GeekWire as an individual and not a representative of any institution.
Etzioni pointed to Civil Dialogues, a platform his father launched to encourage deliberation on pressing issues. He also noted the “Hippocratic oath” he proposed in 2018 to remind AI software developers of their ethical responsibilities.
Asked about Domingos’ comments on Twitter, Seattle University senior instructor and AI ethics expert Nathan Colaner said “it seems that his underlying attitude is that ethical concerns in AI are overblown, and that ethicists are making too much of their concerns, specifically when it comes to algorithmic bias.”
“I think that’s the wrong attitude to have,” Colaner said. “First of all, there is no legitimate debate to be had about whether algorithms are ‘neutral.’ It is also now clear that AI is not going to remove human bias, as we sometimes used to hear. But what is still unclear is whether human bias is a worse or less bad problem than algorithmic bias.”
Colaner said there are plenty of unanswered questions to address as AI innovation continues at a rapid pace. The AI ethics community is “basically scrambling,” he said, adding that he supports the Allen School’s statement. Colaner is managing director of the Initiative in Ethics and Transformative Technologies, an institute at Seattle U made possible through a donation from Microsoft.
“Healthy debate sharpens everyone’s minds,” Colaner said, “but since we in the AI ethics community have serious, time-sensitive work to do, distraction is not useful, which is why Twitter made the ‘unfollow’ button.”