Ben Zhao, Neubauer Professor of Computer Science at the University of Chicago, joins us today. Our conversation covers his work at the intersection of generative AI and security, focusing on his most recent projects, Fawkes, Glaze, and Nightshade, which use data "poisoning" techniques to defend users against unwanted uses of AI. The first tool we discuss, Fawkes, prevents people from being recognized by facial recognition models by subtly "cloaking" their photographs: the changes are imperceptible to humans but make the images appear severely distorted to the models. Next, we delve into Glaze, a tool that computes minute changes, invisible to the human eye, that deceive models into perceiving a substantial shift in an artwork's style, giving artists protection against style mimicry. Finally, we discuss Nightshade, a tool for artists that functions like a "poison pill": it lets them make subtle adjustments to their images that "break" generative AI models trained on them.
Fighting Generative AI with Data Poisoning
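The common idea behind these tools is to optimize a small, bounded perturbation in pixel space so that a model's *feature-space* view of the image shifts dramatically. The sketch below illustrates that optimization pattern only; it is not the actual Fawkes/Glaze/Nightshade algorithm. A fixed random linear map stands in for a real deep feature extractor, and all names and parameters (`cloak`, `eps`, the target image) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a model's feature extractor: a fixed random linear map,
# scaled for stable gradient steps. Real tools use deep network features.
W = rng.normal(size=(16, 64)) / 8.0

def features(x):
    return W @ x

def cloak(x, target, eps=0.05, steps=100, lr=0.1):
    """Nudge x, staying within an L-infinity ball of radius eps, so its
    features move toward those of a decoy 'target' image (projected
    gradient descent on the squared feature distance)."""
    delta = np.zeros_like(x)
    f_t = features(target)
    for _ in range(steps):
        # Gradient of ||features(x + delta) - f_t||^2 with respect to delta
        grad = 2 * W.T @ (features(x + delta) - f_t)
        delta -= lr * grad                    # step toward the decoy features
        delta = np.clip(delta, -eps, eps)     # keep the change imperceptible
    return np.clip(x + delta, 0.0, 1.0)

x = rng.uniform(size=64)   # "original image", flattened to a vector
t = rng.uniform(size=64)   # decoy image in another style/identity
x_cloaked = cloak(x, t)

print(np.max(np.abs(x_cloaked - x)))                       # tiny pixel change
print(np.linalg.norm(features(x_cloaked) - features(t)))   # features pulled toward decoy
```

The pixel change stays bounded by `eps`, yet the image's features drift toward the decoy's; a model trained or run on such images is misled while humans see essentially the original picture.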