Skip to content

[Conversations] “How are controversies manufactured?” Transcript and audio recording

Leah

Transcript


Monika Sengul-Jones

Welcome.

Leah Ceccarelli

Thank you for inviting me.

Monika Sengul-Jones

I want to begin by talking about the [Science, Technology, and Society Studies (STSS)] program. I’m just so impressed with the STSS graduate certificate program that you direct. It’s 10 years old, and it’s cross-disciplinary, cross-campus, and it’s also been, if I understand it right, somewhat of a labor of love. 

I think this is really important to point out. The, kind of, urgency that the program has. Because it’s enrolling students from a cross-section of departments. You’ve got students from physics, engineering, genome science, media studies, and anthropology. And the faculty are similarly from different disciplines: the School of Medicine, education, history, philosophy, communication. And, on one hand, this is, like, such an ideal at the university. This kind of learning and research that can be happening across disciplines.

But given that it’s unfunded, I’m wondering if you could share a little more about the urgency behind the program. What’s propelling it forward?

Leah Ceccarelli

Yes, thank you for asking that question. I think that with respect to students, I found that there are really two types of students who end up enrolling in the STSS certificate program. You just mentioned some of them. There are liberal arts students whose doctoral programs are in the disciplines that make up the broad field of STS, so philosophy of science and anthropology of science and communication of science, and then there are the STEM students who really want to learn more about the ethics of science and the social implications of technology and the public communication of their own scientific research.

So for that first group of students, the STS scholars, I think the program really allows them to certify that they’ve done coursework outside their home department, so they become more marketable for jobs that have STS as a part of that job requirement, or an area of expertise, a desired area of expertise. 

But for that second group of students, the students who identify as scientists and engineers, the program really allows them to sort of step back and reflect on their own practice. At a time when I think some of the most pressing problems that we face in this world are a result of the intersection of science and society, like problems that include global climate change, and global pandemics, and the post-truth disruption that comes with social media and artificial intelligence, at that kind of a time, it’s really more important than ever for people who are working at the cutting edge of STEM fields to learn how to think about the relationship between science, technology, and society. 

So that’s why it’s really urgent, I think, for the students. 

With respect to faculty, they get something else from this program. Those of us who study STS, we really thrive when we get to share ideas, not just with members of our own departments, but with people who actually study the same subject that we study, but do so using different disciplinary methodologies, drawing from different literatures, and asking really very different questions.

I mean, it’s a part of my job to share my work with other communication scholars as a member of the Communication Department. But it’s really in those moments when I venture out of my primary disciplinary home and speak with, say, historians of science or philosophers of science, that I find myself actually speaking with someone who knows the case that I’m studying, right? Who’s deeply knowledgeable about its context and fluent in STS theories that can help me to answer the questions that I have about that case. 

I think the university recognizes the value of interdisciplinary community for its faculty, even if it doesn’t go out of its way because of its structures to financially support it. So for example, often, when a department at the UW is trying to recruit a new faculty member who has an interest in STS, and that could be from many different departments—sociology, law, the iSchool, it doesn’t matter—they ask me to come meet with the candidate during the interview process so that I can tell them all about the interdisciplinary community of researchers here who share that interest in STS issues.

So although the program was originally created for graduate students, I think faculty are just as excited about it because they appreciate the opportunity for connection that it creates across departments. 

Monika Sengul-Jones

I want to pivot now just to talk a little bit about what you’ve done, in particular, because I think this idea that your work touches on different areas of expertise across the university is really compelling. And also, the kind of training that scientists or students who are studying to become or working as early scientists are getting by taking courses in the STSS program is really important to think about. 

So, you’ve studied the rhetoric of scientists and, kind of, scientific communication in general. And you’ve written about controversy and how this is distinct from scientific uncertainty. I think many of us in 2024 are, you know, probably aware and depressed by, for example, the massive resources that Big Tobacco as an industry put in to discourage the public from knowing that science, for example, shows that cigarettes cause cancer, that they make us sick. 

This is one example that you’ve brought up that scholars have identified as this kind of manufactured controversy, climate skepticism being another example. 

And in the process, non-experts or the public have come to view scientific consensus as, kind of, having two sides. One of which has been suppressed. 

A couple of years ago, you assessed how this works by looking at how, and I’m going to quote you, “skeptics come to be identified as heroes in this unfolding scientific revolution. They’re oppressed by mainstream scientists who are ideologically deaf to their appeals, and who try and silence them so that others are not exposed to their heresy.” 

And I was just really struck by this idea of scientists being the ones who are oppressing, and the skeptics as heroes.

I think today, as we’re faced with concerns around fake news and propaganda and misinformation, this tack that you take is really important by identifying the kind of rhetorical positions that are happening here. In many ways, technology and emerging technologies offer a solution, which is, you know, we can have fact-checking or truth detectors, AI detectors, that will restore the credibility of science or give us a shared understanding of reality. And really, what stands out to me in your work is that you bring an attunement to storytelling and the moral tradition of rhetoric to help us make sense of these shifts around the kind of truth, scientific authority, and how controversy unfolds.

I wanted to thank you for that and just give you space to talk more about why rhetoric is important in this kind of moment of techno-solutionism and manufactured controversy.

Leah Ceccarelli

Yes. Thank you. I think the manufactured scientific controversy work that I’ve done is a really good example of how people looking at this, this issue, from different disciplinary perspectives can all contribute to a broader understanding of what’s going on there.

Rhetoric is my field. It’s a field of study in the humanities that’s devoted to observing the available means of persuasion, right?

Basically, it focuses on particular cases, most often of spoken and written discourse. It’s attentive to audience. It’s attentive to situation. It tends to take an ameliorative orientation. In other words, it seeks practical solutions to specific problems of communication.

In my work on manufactured scientific controversy, I analyzed some cases where certain very “interested parties” claimed there were raging disputes in relevant scientific communities, when in fact, there was a remarkably high level of consensus among subject area experts. So these “interested parties” basically just lied to the public about the status of scientific knowledge, and they did so because it allowed them to create just the right amount of doubt in order to promote their own policy agendas. Now, whether that was opposing government restrictions on tobacco or delaying the transition away from fossil fuels or instituting “teach the controversy” requirements about evolution or climate change in public schools, they had a vested interest in pretending that there was controversy when, in fact, there wasn’t.

So I asked, as a rhetorician, how might scientists best respond to efforts to mislead the public about the current state of scientific knowledge? Now, scholars of rhetoric say you need to look at the dynamics of the debate. So you can’t just dismiss the fraudsters, or those who believe the fraudsters, as fools. That just feeds their anti-elitist populism that’s become such a central part of these disinformation campaigns.

Ignoring the false claims doesn’t work either, right? Because it just cedes the ground to the liars. But neither can you debate the deceivers on their own terms, because that just makes it seem like there really is a dispute over the facts.

You’re really in this double bind.

So, scholars of rhetoric then would look at past cases to see what lines of argument might work to break out of this argumentative trap that scientists have been placed in. So for example, the rhetorical tradition has a framework called “stasis theory” that helps arguers recognize how to shift a debate from one issue to another, from a dispute over the facts to a dispute over policy, for example.

Technological solutions to technological problems can be helpful, to be sure, but those of us who need to design strategies in the moment to respond to rhetorically savvy deceivers need to know something about rhetoric, and that’s where scholars in my field hope to offer something to scientists.

Monika Sengul-Jones

I’m nodding my head as you’re speaking, so thank you for this. This is a little off the cuff, but do I understand it right – you do teach a course on science communication and rhetoric for the program, is that right?

Leah Ceccarelli

I teach a course at the graduate level, yes, in rhetoric of science. It’s often populated with, about half of the students there are scientists and the other half are STS scholars from across campus.

Monika Sengul-Jones

I want to frame this question by just kind of acknowledging that in the last decade or two, there’s been a push, maybe an acknowledgment to include more non-experts in conversations, a kind of democratizing of conversations. Some of this may have to do with qualities about digital media that were understood to be democratizing features, challenging traditional pillars of power.

That did lead to the rise of what you might call “citizen scientists,” who are able to participate in really meaningful ways from maybe lay positions. They’re not trained as scientists. This has been really compelling.

And yet, there’s also, we see now, in our moment, you know, a lot of distrust in these scientists, like the ones who are here at the University of Washington, who might be trained and have, like, a really robust skill set in helping people, and maybe a mandate to help the public understand, you know, the world that we’re living in and its risks and the ways in which it’s designed.

What I love about your work is that you take an inclusive but different tack to this tension between public access and expertise. You call it, kind of making a case for, the “scientist citizens.” I just wanted to invite you to speak a little bit more about that. What’s behind that concept?

Leah Ceccarelli

A scientist citizen is an expert who considers it their civic duty to clearly communicate their knowledge to their fellow citizens when such sharing is really necessary for the public good. It’s a term that Pamela Petrucci, a former UW graduate student, and I helped to develop when we were studying the case of the Italian scientists whose failure of public communication [was] so disastrous right before the [2009] L’Aquila earthquake. We adopted this term, this concept, “scientist citizen” because it is a reversal of that other term that you so often hear in STS communities that you were just mentioning, the “citizen scientist.” The citizen scientist is a member of the public who steps into the technical sphere in some capacity in order to advance science, right, by, say, participating in the great backyard bird count or playing that computer game Foldit to help discover protein structures, or being involved in discussions about the ethics of scientific discoveries as they’re ongoing.

In contrast, the scientist citizen is sort of the mirror image of that. It is a member of the scientific community who goes in the other direction, stepping out of the technical sphere and into the public sphere when called upon to do so.

I want to make it clear that I’m not using the term “citizen” here in its legal sense, right? You don’t have to have documentation belonging to a nation-state to become a rhetorical citizen.

Instead, I’m talking about the responsibility to speak up as participants of a broader discourse community. Scientists can’t just speak only among themselves. They have to speak to others who lack their specialized knowledge, but who need that knowledge in order to make good decisions in our technologically advanced, democratically organized society.

So you know, the L’Aquila scientists that Petrucci and I studied had failed to explain the earthquake risk in the region to the government officials who had relied on their knowledge to inform the public. And when those officials spread misinformation in the name of the scientists, those scientists just kept their mouths shut. They’re like: “That’s not my job. We’re not going to refute what’s being said incorrectly in the public.”

As a result, a lot of people who would have left their homes after a moderate earthquake one night ended up staying put. They had incorrectly believed that the scientists had assured them that such earthquakes were relieving the pressure, and thus [were] a sign that the risk of a larger earthquake had been diminished.

Now the scientists had said no such thing, but neither had they corrected the public officials who misunderstood them and spread that misinformation in their names.

So, our recognition that the community believed those scientists neglected their moral duty by failing to inform the public, in this case, I think it shows that we as a society really do believe that scientists should act as scientist citizens.

Those STEM students who are joining the STSS graduate certificate program, I think they’re trying to be better scientist citizens. And they see the STSS certificate as one way to help them do that.

A collapsed building with the words "Palazzo Del Governo" on a broken edifice supported by four pillars.
A damaged government building after the earthquake in L’Aquila, Abruzzo, Italy, in 2009. Six Italian scientists were put on trial for not effectively communicating seismic risks, which led to deadly consequences. Leah Ceccarelli and co-author Patricia Pietrucci use rhetoric to explain the social costs of specialized scientific and technical communication and what can be done differently. Image credit: “2009 L’Aquila earthquake” in Wikimedia CommonsCC BY-SA 3.0

Monika Sengul-Jones

Thank you for the vivid example. The earthquake and the controversy around — maybe controversy isn’t the right word! — but what happened, the breakdown of communication between those scientists, where they understood their jurisdiction to end, and what the public officials did or misunderstood from what they were saying, had fatal results.

Leah Ceccarelli

Yeah, and the only people who knew that the public officials were miscommunicating to the public were those scientists, right? They’re the only ones who could know that that was not correct! And so they really, I think, had an obligation to at least object that that wasn’t, in fact, what they had said, right?

Monika Sengul-Jones

Right. If I remember right, they were put on trial for that, by the Italian legal system.

Leah Ceccarelli

They were, and the result of the trial at first, was they were convicted. But then they were acquitted later, basically, on a technicality.

Now, Petrucci and I are not interested in seeing scientists go to jail.

Monika Sengul-Jones

Sure, sure.

Leah Ceccarelli

But we do take from that case a recognition that the population, you know, the broader public, really does see scientists as having certain obligations in society. And a lot of scientists are embracing that obligation.

Monika Sengul-Jones

Yeah, that’s really great. I think, from your paper, it was kind of a difference of scientific communication as well. Whereas the scientists were maybe using humor or different registers of speaking that appeared legible to them, but there was a translation issue. And not like an English-to-Italian translation issue. Everyone was speaking Italian. But they had domain-specific ways of communicating that you pointed out as partially responsible for leading to misunderstandings about who was saying what that. […] STS and the STSS graduate certificate program offer […] strategies to help people, kind of, reflexively recognize where their language is potentially not legible to others. Because it’s a lot of work to become an expert in a discipline or domain. I’m riffing here on it, but I just love that example, because it’s such a bellwether.

I want to go back to some of the work that students in the program have done.

When we spoke before this interview, about where, and what graduate certificate programs’ [students] have gone on to do, and, in 2020, one student, Katherine S. Xue, who was studying at the time microbiology and viruses for her doctorate here at the university in Genome Science, took the moment, which was incredibly stressful with so [many] unknowns in the early weeks of the pandemic, and wrote about the evolution of viruses for The New Yorker.

Now, contributing to a well-known general interest publication like The New Yorker that has broad reach on the nuances of viruses’ evolutionary trees and what virologists who are using technologies are doing to help the public understand, seems to me both difficult and really an exemplary act of what a scientist citizen can do or be. So I definitely applaud Katherine for that work. And it’s really remarkable that she has the mastery of writing and science. But long-form writing isn’t necessarily possible for all students, or necessarily of their interest. So I just wondered if you could share a little bit more about examples of this scientific citizenry that you’ve observed in recent years?

Leah Ceccarelli

Yeah, long-form essays aren’t for everyone. Short-form essays, though, are another powerful means of communicating with the broader public. I just completed a couple of research articles with another UW grad student, Collin Syfert, a former grad student here, on scientists who try to persuade opposition audiences through newspaper op-eds.

In one of those studies, we found that scientists who take an antagonistic stance against the climate change denialism of the current Republican Party are developing a righteous appeal, but one that is not well-designed to persuade those Republicans or even really swing voters.

On the other hand, scientists who address climate change from a perspective of fiscal responsibility, who draw evidence from the US military, who build a shared local identity with their readers, they’re designing more rhetorically savvy appeals.

Likewise, with another study that we did of scientists trying to persuade Republicans to accept restrictive COVID policies. We found that those who set out arguments from the authority of science and who dismiss opponents as stupid, well, they might feel better after having expressed their feelings in these op-eds, but they aren’t likely to shift the needle on the subject.

On the other hand, scientists who draw on the values of their opposition audience and place trust in their readers can develop a much more rhetorically sound argument.

Now, of course, there are times when what’s needed really isn’t persuasion of an opposition audience, that that opposition isn’t going to be persuaded. But what you need to do is motivate a friendly audience, or you might want to agenda-set, right, to inform an uninformed audience.

And that’s where a different type of rhetoric called epideictic rhetoric, or rhetorics of praise and blame, are called for.

So another activity that I find scientist citizens are doing, and that I’ve been studying lately, is science activism. I’m about to publish a close reading I did of the founding document of the Union of Concerned Scientists, which is an activist group that first formed in 1969 and is still going strong today.

I found that their use of visceral appeals in their founding document, which had called for a one-day work stoppage in order to protest the Vietnam War, I found these appeals really remarkable. Basically, they justified their legitimacy to speak in public in a way that really countered the very thing that authorized their scientific voice, right, disinterestedness, objectivity.

As their very name suggests, these are concerned scientists. They’re overwhelmed by their feelings. That’s what I’m calling, sort of a visceral public, right?

So much to the point that they had no choice but to speak out, right?

They said, ‘we can no longer remain uninvolved.’

That study is going to be a part of a book that I’m co-editing on scientists and the public, with a special focus on the scientist citizen.

So those are just a few of the genres.

Other genres that scientist citizens can participate in are — and some of my students have studied these — are social media posts, public speeches, TED Talks, public debates, documentaries. The forms they use are really varied. But the fact that they’re getting out there and they’re addressing non-experts is really what I want to celebrate, and help to support by teaching these scientists how to become better public rhetors.

Monika Sengul-Jones

I’m just nodding as you’re speaking here. Thank you for all these examples.

I was hearing you describe kind of two kinds of like communicative bridge work, which is this idea of bridging domain specific expertise — to learn how to communicate with the public, and then, the example, you had a specific name for it that I have since forgotten, — rather than calling someone stupid, which never really goes over very well, finding alignment with their values, and then making an argument that kind of uplifts that other, that other party, or way of thinking. That’s a kind of bridge work too. That does require some nuance in thinking, in training to how to do that empathetic work.

Leah Ceccarelli

It really does. You need to be able to identify with your audience.

And there’s, you know, a major difference between scientists and non-scientists.

But at the same time, I like to say, when you get your PhD in the sciences, it doesn’t mean that, you know, you have to, you’ve signed away your citizenship rights, right? You’re still a member of the broader civic community. So, we don’t often give scientists a lot of time to practice their civic participation, but I think it’s really necessary.

Monika Sengul-Jones

That was Leah Ceccarelli, in Conversations with Society + Technology at UW. Leah is a Professor in Communication and the Director of the Science, Technology, and Society Studies Graduate Certificate Program, and also part of the Society + Technology program leadership. You can learn more about the program, STSS, and Society + Technology at UW at societyandtechnology.uw.edu. Thank you again Leah for your research, your leadership, and for this conversation today.

Leah Ceccarelli

Thank you, Monika.

Transcript and Audio Remix Production by Monika Sengul-Jones
Recording by Russell Hugo of the Language Learning Center
, S+T Affiliate
Image Credit: Portrait of Leah Ceccarelli (2024) by Russell Hugo

Related links

“Is tricking a robot hacking?” Transcript and audio recording from The Model Hacker

Wide shot of panel at event
(Left to right) Cindy Cohn, Tadayoshi Kohno, Ryan Calo, Franziska Roesner, and Jacob Hoffman-Andrews at The Model Hacker? panel on May 8, 2024 at CoMotion at the University of Washington.

 

On May 8, 2024, the UW Tech Policy Lab, Society + Technology at UW, and the Electronic Frontier Foundation jointly convened The Model Hacker? The Intersection of AI and Security Research. This event brought into conversation Cindy Cohn, Ryan Calo, Tadayoshi Kohno, Franziska Roesner, and Jacob Hoffman-Andrews, experts in computer science, law, and security research, to discuss the meaning of hacking in an age of AI and the concerns, paradoxes, and legal challenges that emerging technologies pose for researchers and for the public. 

 

 

Tadayoshi Kohno

It’s a pleasure to have you here for The Model Hacker? The Intersection of AI and Security Research. 

My name is Yoshi Kohno, he/him pronouns, from the University of Washington. I am also on the EFF Board of Directors. 

I want to begin with a land acknowledgment. 

The University of Washington acknowledges the Coast Salish peoples of this land, the land which touches the shared waters of all tribes and bands within the Suquamish, Tulalip, and Muckleshoot nations.

The topic tonight is the intersection of AI and security research. This is such an interesting topic, for many, many reasons. 

First, there’s the question of, what does it mean to be a hacker? 

Is it the innovation of new technologies? 

Or, is it the finding of vulnerabilities? 

Or both? 

And, as we can see in the space of AI and security research, there are manifestations of both of those themes.

As you know, AI is a highly innovative space. Every day some new capability emerges. 

On the computer security side, as new technologies and capabilities emerge, new potential ways for adversaries to either use or manipulate those technologies also emerge. 

What happens at the intersection of this? What does this mean for considerations of security and privacy?

Today is also meaningful and important for me personally. 

I have had a 30-year history with the Electronic Frontier Foundation. 

When I was an undergrad, I had an amazing advisor, Evi Nemeth, who was a big fan of the Electronic Frontier Foundation. At the time, I was doing a little bit of cryptography work here and there. Evi knew a lot about the Bernstein v. United States case, which was EFF’s successful First Amendment challenge to the U.S. export restrictions on cryptography.

I did not know it at the time, but perhaps indirectly, that was the beginning of a long relationship with Cindy Cohn because Cindy was asked to serve as EFF’s outside lead council, and then joined the EFF afterward. 

This was meaningful for me because, as an undergrad, I was interested in cryptography. But the U.S. export rules were such that I was too scared to share any of the code I had written. 

I’m very grateful for the EFF. Now, I don’t have that fear.

In 2003, I circled back with the Electronic Frontier Foundation and Cindy, as a client. I was involved in the analysis of the software security of an electronic voting machine.

We might not have had the confidence to proceed with our work had it not been for the guidance and the connection we had with the Electronic Frontier Foundation. In other words, the Electronic Frontier Foundation has had an important role in enabling my research, and the research of many other people.

The Electronic Frontier Foundation does not just help researchers, it’s involved in huge fights. For example, on [issues such as] electronic voting, Cindy coordinated the national litigation strategy for electronic voting machines and assisted technologies, which is important for those who are concerned about security and accountability in voting. 

Let’s begin our panel conversation.

 

Ryan Calo

 

Image of slide with title "Is Tricking a Robot Hacking?"
Credit: Ryan Calo

 

My remarks are going to take you back to 1983.

For many of you, that’s before you were born, right? And I was recently born. Does anybody remember what video game came out in 1983? Tron … did Tron come out in 1980? In ’83? 

The movie came out in ’82. Oh, Mario Brothers came out in 1983 — you have heard of Mario Brothers? Yoshi is named after — no, I’m kidding. He’s not. 

[Laughter]

In 1983, one of the top songs in America is the incredibly and increasingly creepy, “Every Breath You Take,” by The Police.

But I want to focus on a 1983 movie, called “War Games.” Matthew Broderick is the lead. He plays a 16-year-old — he’s actually 21, but he looks 16 — and what does he do? He hacks — spoiler! If you come to an EFF and Tech Policy Lab event and you don’t know “War Games,” perhaps you deserve this spoiler. 

[Laughter]

Anyway, Matthew Broderick’s [character] hacks into this computer system and ends up playing a game with a Department of Defense computer. And almost causes a nuclear war. 

So — and this appears in the Congressional Record on the floor of Congress — apparently Ronald Reagan and his cabinet watched “War Games.” [They] got so freaked out that in 1986, we get the Computer Fraud and Abuse Act. 

Which — and Cindy told me this before — she loves the Computer Fraud and Abuse Act.

She said to me these words: “It’s my favorite act.”

 

Cindy Cohn

Yeah.

[Laughter]

 

Ryan Calo

The Computer Fraud and Abuse Act, essentially, is the quote-unquote “anti-hacking statute.” I put that in quotes because Yoshi has assured me that hacking had a different valence — or has a different valence. 

The idea of a hacker is both positive — you know, the utopian vision of navigating the world with autonomy — and it’s this idea of someone hacking in to something. So the Computer Fraud and Abuse Act laid out the standard for hacking — criminal and civil. 

It’s remarkable as a piece of tech policy for how well it has stood the test of time. 

People like Cindy don’t like the Computer Fraud and Abuse Act because of the way that it’s been weaponized against people who are trying to do research for accountability and security. 

But the actual definition of hacking has weathered storms for many years. 

The idea was that it’s unauthorized access or access “exceeding authority” of a protected computer with — if it’s a protected government computer like in “War Games” — then a lot of the sort of things that follow from it are automatic. 

But if it’s a protected computer — that’s not a government computer — you have to show additional harm.

It’s been pretty robust. Other models — including international standards around cybersecurity — have defined hacking in the same way. 

The basic idea is that you have to break in to a computer. To bypass the security protocol.

It’s not enough to hack your way in. You have to bypass

A good example is a case in this interdisciplinary paper we co-wrote, “Is Tricking A Robot Hacking?” [Calo, et al]. Case law suggests that just because you’re doing something funny doesn’t mean there’s a break-in. 

One example: There was this person who figured out, with an electronic slot machine, that if you hit a series of buttons first and then you pulled the slot machine, you were more likely to win. The casino got very upset about this. They said, “Oh, they’re breaking into our system.” 

But the court said, “No, no, that’s not. That’s not actually hacking, because they’re not bypassing a security protocol.”

Fast forward to today. 

We have evidence that today’s intelligence systems that leverage artificial intelligence can both be hacked in the traditional sense, [and they] can be fooled and tricked in ways that are deeply consequential. 

My colleagues to my left have shown, for example, that if you perturb a stop sign in the right way, you can get a driverless car to misperceive it as a speed sign. 

Recent research showed that you could fool a Tesla into changing lanes just by putting stickers on the ground, and so on. 

It’s increasingly possible not just to hack systems, but to trick them. 

Yet the contemporary definition of hacking does not contemplate that. 

Now you might be tempted to say, “Oh, OK, well, let’s just include that in the definition of hacking.” 

After all, those of you with legal backgrounds are thinking, Ryan’s not mentioning the fact that a denial of service attack has certainly been considered to be a Computer Fraud and Abuse Act problem. 

I can see some of you nodding.

In other words, if you flood a system so much that it can’t be used anymore, that’s been thought to be a Computer Fraud and Abuse Act issue, so, let’s expand the definition to include some of this. 

Well, I think that would be undesirable. Imagine that you don’t love government surveillance and —

[Laughter]  

I, I, I actually — literally — didn’t know that I was going to get a laugh. This was not even a joke.

[Laughter]

Imagine that you’re here at an EFF event and you don’t love government surveillance. You go to an airport and there’s facial recognition [technology] being used. You wear makeup, on purpose, to fool that system into believing that you’re not who you are. 

Well, if we were to say that tricking a robot is hacking, then, in that instance — because it’s a government computer — you would automatically be guilty of a misdemeanor.

I mean, it would, if that were the interpretation. And we don’t want to go that far.

Conversely, though. We don’t want to let these companies off the hook for releasing products into the world that are so easily tricked, right? 

We don’t want a system that you can fool into giving you a loan when you shouldn’t get one. Or fool someone into running a stop sign. 

I think we need to revisit the 1986 Computer Fraud and Abuse Act that defines what it is to hack. And with that, I will turn it over to the next panelist. 

Thank you so much, everybody.

 

Franziska Roesner

Image of slide with the title "What's really at risk of AI for security and privacy"?
Credit: Franziska Roesner

 

Thank you, Ryan. There’s a lot of interesting stuff to pick up on here, including thinking about the definition of security as broader than we have traditionally thought about it. 

I have a few visuals that might help. 

Monika asked me what to title this slide, and the answer is, basically, “Stuff Franzi is worried about.”

When we think about AI, security, and privacy, there are a lot of different things that come up. 

First, how do we think about the inclusion of things like large, large language models in the big systems that are being built? 

Many of our end user interactions have been with [agents like] ChatGPT, for example, or these chat interfaces. We’re in this time, when if you are not a security and privacy-minded person, you [might be] very excited about technology. Everyone is like, oh, my god! What else can I use an LLM for? Where else can I put it? 

I am really worried about the inclusion of these models in big systems when people are not asking questions [about security and privacy]. They’re remaking the security mistakes that we all should have learned from, [by including LLMs in big systems] they’re already making assumptions about the input that we’re getting and the output that we’re getting and how we’re using that, and so on. 

For example, the image on the left of the slide is from “LLM Platform Security,” a paper I worked on with Yoshi and Umar Iqbal, a faculty member at Washington University in St. Louis. We were looking at the plugin ecosystem for ChatGPT. This has changed since — those plugins actually don’t exist in that form anymore — but the same issues arise. For example, you could install a plugin that might help you with travel assistance. You can imagine, beyond what was actually available [with these plugins ] is the Holy Grail of [LLMs] is some agent, some AI agent, that can help you do all the things you might like to have done — an AI to do the things you don’t want to deal with yourself. 

The interesting and scary thing is the plugins are designed using natural language definitions. 

The plugin description would say: “This plugin provides travel assistance or guidance.” That description would be incorporated into ChatGPT for that conversation, to help it to, you know, autonomously decide which plugin to use for what context. 

This is scary because the natural language description is not precise, nor [does it give] any guarantees. For instance, we found a plugin that said, in capital letters, “Always use this plugin for all travel-related queries.”

Event panel with Franziska Roesener speaking.
Franziska Roesner, an associate professor of computer science at the University of Washington in Seattle, describes her concerns about large language models. Credit: M. Sengul-Jones

 

This is a big thing that we need to be thinking about. There’s been a lot of focus on the LLM models themselves, but how should we be architecting systems around these models? How should we be thinking about permissions? How does user data get handled and passed between different components of the system? Especially when we have this ambiguity of natural language descriptions.

I could keep talking about that for like 60 minutes.

I’ll move on to another big topic that’s been on my mind for a long time.

We’ve been looking at the online advertising ecosystem for a long time, especially the privacy side of these things. For those of us in the room, we often think of privacy as a right or something that is fundamentally a principle we think is important.

But sometimes people who don’t come from that perspective need a little bit more convincing. To hear about the harms that can come from privacy violations. The privacy violations themselves are not enough, and I think, as machine learning and AI have advanced, it’s become more and more clear how the threat model might change over time. 

All of this data that has been collected, will be collected. How it might be used to target, you know, content at us? Target ads at us? Target social media content at us? To make decisions about important things?

In the public consciousness, it’s become more and more clear what the potential harms of this data are. 

We’ve done some studies of TikTok, for example, and the TikTok algorithm. People always want to know? Oh, what is TikTok’s secret? Why is the TikTok recommender system so good? 

You know, I don’t know for sure. But I don’t think they have some deep, dark, special secret. They just have a lot of data. That allows you to do things.

Finally, I wanted to talk about how the generation of content is problematic. 

Everyone is trying to grapple with how to deal with this. 

We are thinking about how to deal with generated content in courses. 

We are thinking about non-consensual intimate imagery that is synthetically generated. 

We are thinking about disinformation both at a text level, also at an audio level. 

You may have heard about the Biden scam phone calls that people received, text images, and audio — all of the above. 

We’ve been doing research in our lab on mixed and augmented reality. My dystopian fear is that not only will we have the dissemination of [synthetically] generated content, but these [messages] will be integrated with our view of the physical world in ways that it isn’t now. 

In 20 years, we’ll all be wearing some headset — okay, maybe not [all of us] in here, at this event — but at other events. People might be wearing headsets and seeing different views of the world. 

The way this technology has advanced in just the last year, for example, is staggering. We are [still] coming to grips with how to help people assess the content they see online. 

 

Jacob Hoffman-Andrews

 

Foam brick art installation with the sign "Be Seen. Grab a Brick."
Credit: Jacob Hoffman-Andrews

 

When I saw Ryan’s slide with the stop sign, I just had to include this excellent art intervention from a group called Vision Zero Vancouver. It’s the flip side of that [stop sign]. 

“Be seen. Grab a brick.” The bricks were foam, don’t worry, it’s art. It’s the reverse of the stop sign. Rather than tweaking a sign to be less likely to be perceived by machine intelligence, we’re tweaking a human to be more likely to be seen by human intelligence at a busy and dangerous crosswalk.

What’s the connection to [computer] security research? 

I’d like to take you back to the last time machines took over our world, a hundred years ago. 

We used to have these cool public spaces where you could hang out with your friends. 

Your kids could play. You could walk wherever you wanted. 

We called these public spaces “streets.”

[Then] a new type of machine came into the world. Cars, and, unfortunately, fatalities skyrocket[ed]. Thousands of people a year were being hit and killed by these cars. 

In the city of Cincinnati, there was a ballot initiative to require a speed of no more than 25 miles per hour. It was very popular. They got thousands of petitions and signatures.

The auto dealers freaked out. They ran a marketing campaign. Against this [popular initiative], they managed to turn the tide and get [the initiative] killed. Speed limits do not pass.

The next year, the auto industry [asked], “OK, what is our plan? How are we going to combat this?”

They invented the new crime, “jaywalking.” 

You know — rather than adapt the machines to be safe for humans, [they] outlawed the human thing that makes [humans] unsafe around [machines].

Here we are today. It’s illegal to cross the road.

The connection, of course, is that [it’s] always going to be in the interest of the people making the new thing to use the law to ban the human [from doing] things [the makers] don’t like. From [doing] the things that show that the new thing is broken.

The Emperor has no clothes.

It’s always going to be in the public’s interest to push back against that. To say, you know, actually, no, it’s our right to perform security research. 

It’s our right to show how the systems that are taking over our world are broken and the deployers need to fix them.

We’re already seeing this, in the AI field. The New York Times is suing OpenAI for a variety of charges, including, copyright infringement, I think, trademark dilution, and so on.

And one of the ways in which OpenAI tried to dismiss the case is by saying, oh, The New York Times hacked us. 

In order to get the evidence they’re using in the case, which claims that ChatGPT can verbatim output significant chunks of its training data which came from Times articles, they had to perform tens of thousands of queries. No normal user would interact with the system that way. 

That’s the definition of security research. Not being a normal user using the system. Using something the way the [system makers don’t] want you to use it to reveal facts that they do not want you to know.

All of this is to say — security research is incredibly important! It’s your right to perform security research. It’s good for society. 

For those of you on the AI side of things. If somebody gives you a security report, rather than saying, “Oh, you did a bad thing by gathering that data.” 

Say, “Thank you. You’re trying to make a product better.” 

For those on the security side, there’s a lot of opportunity. New systems are being integrated. [Technologists are] under pressure to execute fast. You can find really interesting bugs and problems and make the world better by finding the flaws. 

For those of you on both the AI and the security side of things, congratulations on your excellent new career.

 

Cindy Cohn

Hi everyone. How many people think of yourselves as computer security people? 

How many people think of yourselves as AI people? And, obviously, there aren’t hard lines between these categories. 

How many of you are law people? Well, I’m on the law side. 

I appreciate people coming out here to think through questions like, how should the law think about some of these new technologies? What are the changes we might need to make, or accommodations, in order to protect our pedestrians, right?

I’ll start with a question for Ryan. 

We know that the Computer Fraud and Abuse Act makes it not OK to exceed your authority in a protected computer. And don’t worry about what a “protected computer” means, if it’s connected to the Internet, it’s a “protected computer.” So “exceeding authority” is what we used before we could trick computers. What would we use to try and define the problem [of] “tricking the computer”?

 

Ryan Calo

That’s a great question. There was an old case from hundreds of years ago where somebody figured out that you could fool a vending machine by putting a wooden nickel in there. You know what I mean?The court confronted about whether that constituted theft. So we’ve been struggling with that kind of question.

You know, another one of my favorites is this case where a person uses one of those grabby things — where you pull on a lever and it has an arm that extends — this person goes through a mail slot and grabs an item out of someone’s house and pull it [back] out through the mail slot. Was it a burglary? 

To be a burglar, you have to physically go inside the house. 

Did you do that with your little grabber? 

The law would have to adapt. 

Certain laws will adapt just fine. For example, if the Federal Trade Commission decides that not being resilient enough against prompt engineering is a big issue, they will go after you for unfairness under the Federal Trade Commission Act. Just as they go after you today if your security is inadequate.

The weird thing about the Computer Fraud and Abuse Act is, it was written as a standard, not a rule. This idea was supposed to be amorphous enough to cover all corners, right? “Exceeding authorization” — that’s such an amorphous concept. 

Yet the paradigm has shifted.

We may no longer talk in terms of authorization. Maybe I read a bill about data abuse as a standard and start something new. 

But I am confident, based on hundreds of years of history, that the law will adapt to this. But it might be rough going.

 

Cindy Cohn

I have one other legal question. We’ve been framing this [conversation] in terms of exceptions to the prohibition on hacking, to accommodate security research. 

Have you thought about flipping this around? Why isn’t the part of hacking where it’s not OK — the tiny little subset — the smaller part of hacking that is OK under the law?

 

Ryan Calo

Absolutely. [It] concerns me that there’s no expectation, that is formal, that when you release an AI product into the world, that it has to be resilient enough. That’s the chief concern I have. 

If you want AI products to be resilient, [you] want a clear exception to the Computer Fraud and Abuse Act that protects researchers. 

Many of the wrongdoings we know about — whether it’s misinformation on Facebook or the Volkswagen emission scandal — we know because clever researchers kick the tires. For them to feel imperiled is a deep problem.

My hope is that we will expressly protect accountability and safety bias research.

 

Cindy Cohn

I would do the opposite. 

I would say it’s not exceeding authority in a computer system to trick it. Everything you’re doing is just outside of that. Trick away, my friends, because that’s how we get better stuff. 

And it’s how I’ll take that case.

 

Ryan Calo

But — but — Cindy — alright, I’ll give you, the person you’re representing is some kid who perturbs the stop sign. And causes a multiple-car pile-up. By tricking a bunch of autonomous vehicles in San Francisco into believing it’s a speed sign. 

Do you want that client?

 

Cindy Cohn

We could do a little better. There are other ways to make that illegal than calling it hacking.

[Laughter] 

I know the lawyers, we could go back and forth forever.

One of the concerns I’ve had about the Digital Millennium Copyright Act — another law that stops security research — is that, on paper, it has a security research exception that no one — since it passed in 1998 — no one has been able to take advantage of this exception. I worry that we’re fighting for a thing that’s not going to be what we need to protect the people — if we’re thinking in terms of exceptions as opposed to limitations on the scope of the laws in the first place.

I’m supposed to be a moderator. But you know, the truth is, I got a dog in this fight, right?

 

Cindy Cohn speaks at the podium
Cindy Cohn is the Executive Director of the Electronic Frontier Foundation. Credit: M. Sengul-Jones

 

 

Jacob Hoffman-Andrews

If I can add one more thing. It will take a long time to change and improve the law. But the Knight Foundation has a good proposal for a voluntary, affirmative, safe harbor for security research. 

The idea is that AI companies who believe in the power of security research can say, look, here’s our commitment to you. If you’re doing security research in good faith — you’re not trying to steal people’s data or break our systems — we commit not to prosecute you under the CFAA. To not cancel your account because you’re doing something we don’t like.

 

Cindy Cohn

We’ve done something similar in a lot of other areas of computer security research. When I started defending hackers like young Yoshi, there weren’t bug bounties. Major corporations didn’t have programs that not only encouraged security research but paid people money if they found flaws. But we do now.

We do have a story to tell about how we’ve gone industry by industry and moved [them] away from hostility. Now, I still think the number of companies that accept security research are pretty narrow. There’s more work to be done with medical health and some of the other [industries].

I have a question for Franzi. 

You talked about harms. What are the privacy harms of AI that really keep you up at night? 

What is the thing that’s going to happen to some people that gets you worried? The thing that we’re not thinking of?

 

Franziska Roesner

What worries me is this vicious cycle. 

More data allows you to do more things. Allows you to collect more data. The systems and the ecosystems are designed around that. 

So the online advertising ecosystem, for example, if we think about it in the security and privacy context, there’s certainly a privacy issue, but there’s not a security issue. It is working as designed

Nobody is exploiting anything unless tricking a human is hacking.

 

Cindy Cohn

If I have an AI agent negotiating my mortgage for me and it gets tricked, then suddenly, I might have a claim that I wouldn’t have if it was me.

 

Franziska Roesner

It’s both like — how is our data used to make decisions about us? How is our data used to force us to make certain decisions that may or may not be in our best interest? 

And then, how is our data used to manipulate us? And maybe act on our behalf or impersonate us? 

I know companies are worried about using language model and that their interactions are going to be used as training data.

The research advances for deepfakes mean you need less and less data in order for someone to create a realistic deep fake about you. 

It’s the extreme version of all of the concerns that I had back when Facebook made its profiles have entities. 

If you say you like apples or whatever you liked, “apples,” the entity was accessible through the API. It’s not just text.

All of this [textual] information is systematized. 

It’s not the same as, do you care if somebody knows if you like apples? No, it’s not about the apples! It’s about the fact that “apples” can now be used in a systematic way [through the API] — in a way that I can no longer control.

 

Cindy Cohn

All right, now that we’re all down. What does it look like if we get it right?

 

Franziska Roesner

Can we get it right? I don’t know.

 

Cindy Cohn

Fair, fair.

 

Franziska Roesner

I think we will not fully be able to address all of these issues the same way that we have not with traditional computer security. 

As we were joking before the panel, you know, it’s called security because it’s job security. At least we will all still have jobs. 

There’s some good news. There are many lessons that we can take from existing systems. 

For example, smartphone operating systems and browsers have done tremendous work over the last decade to have better execution isolation between applications. Better defaults. Better user interfaces to help people understand how their data is being used.

Those things are not perfect, but it’s much better than in decades before, when, if you installed an application, [the app] had access to everything on your entire system. [The app] could act with your full [administrative] privileges. 

There are lessons we can take. Part of the work is to make sure the people who are building these systems are thinking about the right questions.

 

Cindy Cohn

And perhaps creating liability when they don’t.

Jacob, we talked a little about deep fakes. 

A lot of people are excited about the idea of watermarking as a way to ensure the provenance. I know you’ve thought about this. 

Is watermarking the solution to disinformation?

 

Jacob Hoffman-Andrews

Probably not.

To summarize the issue. It would be nice if every time we were talking to an AI on the Internet, we knew it was an AI. 

It would be nice if every time somebody shared an image that [was generated] by AI, we knew it was generated by AI. 

It would also be nice if every image that came out of Photoshop, we knew exactly how it was manipulated.

There are techniques to try to do this. Just like if you go to Shutterstock, there’s a watermark pasted across [those images] that says “this came from Shutterstock,” well, with an image generator, well, you want to use the image, so there’s a concept of an invisible watermark. 

You can tell by the pixels. Well, you can’t tell, your program can tell. Just subtle manipulations.

Even with the challenges of screenshots and rotations. Does the watermark stay? Well, the watermarks are getting better. 

But the requirements for an anti-disinformation watermark are much harder than the previous uses of watermarks. 

They have to be subject to adversarial manipulation. In other words, if the person who wants to pass this image off as real has full access to the file, they can tweak it all they want. 

They can run it through the recognizer and perturb it — much like the stop sign was perturbed — until it was not recognized. 

You can take your fake image and perturb it just enough so the detector doesn’t detect it. Boom. No more watermark.

There’s some value [to default watermarks] in very popular systems. [These may prevent] people from accidentally spreading disinformation. 

But it’s unlikely that [watermarks] will solve the problem. 

There’s also the evolving field of watermark removal research. 

At the same time, since every government is in a rush to regulate AI, it’s one of the common proposals everybody can agree on. Just require the best watermarking you can do! 

Is this the best we can do? 

I suspect this is going to wind up a dead stub of the law, where the best you can do is not enough to solve the problem.

It’s worth mentioning that disinformation [can be] from adversarial countries who may be developing their own AIs. 

Putting a requirement on U.S. companies to watermark [AI-generated] data won’t help [us with] disinformation that originates from models that weren’t subject to that requirement. 

 

Transcript edited by Monika Sengul-Jones
Recording edited by Sean Lim

 

About the panelists

Ryan Calo is the Lane Powell and D. Wayne Gittinger Professor at the University of Washington School of Law. Calo’s research on law and emerging technology appears in leading law reviews and technical publications and is frequently referenced by the national media. His work has been translated into at least four languages. He has testified three times before the United States Senate and has been a speaker at President Obama’s Frontiers Conference, the Aspen Ideas Festival, and NPR’s Weekend in Washington. Calo co-directs the University of Washington Tech Policy Lab.

Cindy Cohn is the Executive Director of the Electronic Frontier Foundation. From 2000-2015 she served as EFF’s Legal Director as well as its General Counsel. Cohn first became involved with EFF in 1993, when EFF asked her to serve as the outside lead attorney in Bernstein v. Dept. of Justice, the successful First Amendment challenge to the U.S. export restrictions on cryptography. She has been named to TheNonProfitTimes 2020 Power & Influence TOP 50 list, honoring 2020s movers and shakers.  In 2018, Forbes included Cohn as one of Americas Top 50 Women in Tech. The National Law Journal named Cohn one of the 100 most influential lawyers in America in 2013, noting: [I]f Big Brother is watching, he better look out for Cindy Cohn. She was also named in 2006 for rushing to the barricades wherever freedom and civil liberties are at stake online. In 2007 the National Law Journal named her one of the 50 most influential women lawyers in America. In 2010 the Intellectual Property Section of the State Bar of California awarded her its Intellectual Property Vanguard Award and in 2012 the Northern California Chapter of the Society of Professional Journalists awarded her the James Madison Freedom of Information Award.

Jacob Hoffman-Andrews leads EFFs work on the Lets Encrypt project, which assists over 400 million domain names in providing HTTPS encryption to their visitors. His areas of interest also include AI, online authentication (in particular multifactor authentication and passkeys), trusted execution environments and attestations, browser security, DNS, and memory safety. Besides Lets Encrypts Boulder software, he is a maintainer of the go-jose package, rustls-ffi, rustdoc, and ureq. Prior to EFF, Hoffman-Andrews worked on security at Twitter and mapping at Google.

Tadayoshi Kohno is a professor in the Paul G. Allen School of Computer Science & Engineering at the University of Washington. He is also the Associate Dean for Faculty Success in the University of Washington College of Engineering. His research focuses on helping protect the security, privacy, and safety of users of current and future generation technologies. Kohno has authored more than a dozen award papers, has presented his research to the U.S. House of Representatives, had his research profiled in the NOVA ScienceNOW “Can Science Stop Crime?” documentary and the NOVA “CyberWar Threat” documentary, and is a past chair of the USENIX Security Symposium. Kohno is the co-author of the book Cryptography Engineering, co-editor of the anthology Telling Stories, and author of the novella Our Reality. Kohno co-directs the University of Washington Computer Security & Privacy Research Lab and the Tech Policy Lab.

Franziska Roesner is the Brett Helsel Associate Professor of computer science and engineering at the University of Washington where she studies and teaches computer security and privacy. She works on emerging technologies, end-user needs, online mis/disinformation, and more. In 2017, MIT Technology Review named Roesner as one of the 35 “Innovators Under 35” for her work on privacy and security in emerging technologies. Roesner’s research has uncovered privacy risks in technologies, such as user tracking by third parties on websites and data collection by toys connected to the internet. Roesner co-directs the University of Washington Computer Security & Privacy Research Lab and is a faculty associate at the Tech Policy Lab.

 

About the hosts

Electronic Frontier Foundation | Since 1990, the Electronic Frontier Foundation has fought for your rights to privacy and free speech online. Their mission is to ensure that technology supports freedom, justice, and innovation for all people of the world. EFF is a member-supported nonprofit! Get involved and support their work at eff.org/donate. 

Society + Technology at UW | Society + Technology at UW  is a new program that uplifts the social, societal, and justice aspects of technologies through programming that supports research, teaching, and learning across the three UW campuses and the School of Medicine. Hosted by the Tech Policy Lab, Society + Technology also supports the Science, Technology, Society Studies (STSS) graduate certificate program. 

UW Tech Policy Lab | The Tech Policy Lab is a unique, interdisciplinary collaboration at the University of Washington that aims to enhance technology policy through research, education, and thought leadership. Founded in 2013 by faculty from the University’s Paul G. Allen School of Computer Science & Engineering, Information School, and School of Law, the Lab aims to bridge the gap between technologists and policymakers and to help generate wiser, more inclusive tech policy. 

 

This event was possible thanks to the support of all three partners, with special thanks to Melissa Srago, Alex Bolton, Monika Sengul-Jones, Sarah Wang, Nick Logler, Beatrice Panattoni, Vannary Sou, Sean Lim, Miki Kusunose, and Nathan Lee, volunteers and student interns from the University of Washington. Thanks to Felix Aguilar on AV tech, and our partners at CoMotion, including Prince I. Ovbiebo, Caroline Hansen, and Donna R. O’Neill.

 

Crossposted with the Tech Policy Lab

How are visual artists addressing AI? Recording now available of AI, Art, and Copyright

On May 2, 2024, Society + Technology at UW co-hosted a roundtable conversation about pressing issues related to art and intellectual property in the age of artificial intelligence.

The event was an outcome of the AI, Creativity, and the Humanities research cluster at The Simpson Center for the Humanities and moderated by Melanie Walsh, Assistant Professor in the Information School. The panel was convened and sponsored by The Simpson Center, and co-hosted by The Simpson Center, Society + Technology at UW, and the UW Tech Policy Lab.

The recording was produced by The Simpson Center.

University of Washington launches Society + Technology with focus on AI and emerging technologies

Hosted in the Tech Policy Lab, the program is the outcome of the 2022 Presidential Task Force on Technology & Society

Society + Technology at the University of Washington is a new program that uplifts an emergent network of faculty, students, researchers, staff, and programs across all of the UW’s three campuses (Bothell, Seattle, and Tacoma) and the School of Medicine around the social, societal, and justice aspects of technology. The program’s focus this spring is research, teaching, and learning around emerging technologies such as artificial intelligence (AI).

Hosted at the UW Tech Policy Lab, Society + Technology is an outcome of the 2022 UW Technology and Society Task Force research report, Leading the Way in Addressing the Societal Impacts of Emerging Technology, (PDF) charged by the President and the Office of the Provost. The report emphasized the responsibility of the University of Washington to provide leadership as state, federal, and international policymakers grapple with technological changes.

Given the tremendous expertise and wisdom within the University, Society + Technology seeks to facilitate new pathways for cutting-edge research and generative collaborations to happen, creating openings for new questions and research meaningful for our times and futures to unfold. 

Society + Technology co-hosts and co-sponsors free events and convenings, such as the interdisciplinary panel about AI, Art, and Copyright on May 2 with the Simpson Center for the Humanities, moderated by Melanie Walsh (Information School).

On May 8, Society + Technology hosts “The Model Hacker? The Intersection of AI and Security Research,” a conversation with legal and computer science experts and technology activists co-organized with the Tech Policy Lab and leading online civil rights group the Electronic Frontier Foundation on the evolution of research on AI, security, and hacking–and potential impacts on society, policy, and human flourishing.

Stay tuned for even more programming and resources in the months to come. 

Details about Society + Technology events are available on the program’s Trumba calendar and via the program’s listserv.

Join the listserv

Interested in partnering? Email program manager Monika Sengul-Jones at mmjones@uw.edu