Conference aims to make trust and safety a hot topic in computer science

Oct. 4, 2022, 11:27 p.m.

A woman with dyed hair and a branded TikTok jacket chatted with a man dressed like an academic in the palm-shaded Alumni Center pavilion Friday morning. Stanford’s inaugural Trust and Safety Research Conference was a gathering of all kinds.

Online trust and safety is an interdisciplinary field, and the two-day conference brought together experts in computer science, law, and the social sciences to unify their work on online harm and distrust — an unprecedented effort for the field, participants said. 

Across Thursday and Friday, attendees dropped in on panels and research presentations taking place on the first floor of the Alumni Center and networked outside in the courtyard. Popular presentation topics included improved tools for online moderation, the spread of misinformation and how organizations and companies can design and implement tailored policies for online safety.

The conference was hosted by the Stanford Internet Observatory and the Trust and Safety Foundation. Early bird tickets ran attendees from academia and civil society $100, with the entry fee hiked to $500 for attendees from industry.

Content moderation expert and assistant professor of law Evelyn Douek described the goal of the conference as connecting those working on internet safety across academia, industry and policy for the first time.

“Community building is really important,” Douek said. “Actually getting people from lots of different disciplines in a room, meeting each other, building those bridges.”

In Thursday’s introduction to the Journal of Online Trust and Safety’s research presentation, communication professor Jeff Hancock described how he co-founded the publication with other Stanford researchers in the field to “fill that gap” between those studying online safety from different disciplines. Alongside the Stanford Internet Observatory (SIO), the researchers aim to understand and prevent harm online.

Added SIO director and cybersecurity expert Alex Stamos in an interview, “One of our goals at SIO is to make [online] trust and safety a legitimate academic topic.”

In the past two years, the threat of internet-enabled violence and public mistrust has become difficult to ignore. Several mass shootings were preceded by hateful screeds posted on the online forum 8chan. Online misinformation has been linked to COVID-19 vaccine hesitancy, and conspiracy theories fueled the organization of last year’s Capitol insurrection on forums and social media sites.

“Security wasn’t seen by CS academics as a real field,” Stamos said. “But these days security is seen as one of the absolute hottest parts of computer science. We need to have the same kind of transition in trust and safety, but we don’t have fifteen years.”

Panelists emphasized that a one-size-fits-all framework for online safety simply cannot exist; the internet is too big, and it is run and used by too many people.

It would be impossible to create a single governing force to regulate online content and behavior, said Del Harvey, vice president of trust and safety at Twitter, on a panel.

“I keep hearing this: ‘What we need to do is make it so that the companies aren’t making the decisions, and instead this benevolent entity that we create, that will have all the information that is informed by all the things that are right and just and good in the world will [enforce online safety],’” Harvey said. However, Harvey added, “We are nowhere near the utopian world where that can exist.”

To panelist Mike Masnick, a blogger and tech policy expert, the recent deplatforming of hate forum Kiwifarms by infrastructure provider Cloudflare demonstrated how important decisions about online safety are often left in the hands of a handful of companies.

“The reality was that the situation was up to [Cloudflare],” Masnick said. “And a decision to do nothing meant that people were going to get harmed.”

Some participants said there may be no single system that can prevent the harms of the internet, but they expressed hope that actors across the internet ecosystem can take steps to reduce harm and preserve public trust.

“The fact of the matter is that there is no perfect decision,” Douek said. “Every decision is still going to involve harm. There needs to be trust that you’ve thought about those decisions.”

Lana Tleimat '23 (... maybe '24) is the Vol. 260 executive editor for digital. She was formerly managing editor of humor. She is from Columbus, Ohio, and isn't really studying anything. Contact her at ltleimat 'at' stanforddaily.com.

