
Fighting Generative AI with Data Poisoning

Ben Zhao, Neubauer Professor of Computer Science at the University of Chicago, joins us today. Our conversation centers on his work at the intersection of generative AI and security, focusing on his most recent projects, Fawkes, Glaze, and Nightshade, all of which use “poisoning” techniques to defend users against unwanted AI uses of their data.

The first tool we discuss, Fawkes, prevents people from being recognized by facial recognition models by subtly “cloaking” photographs: the changes are barely perceptible to humans but make the images appear severely distorted to the models.

Next, we delve into Glaze, which uses machine learning algorithms to compute minute changes, invisible to the human eye, that deceive models into perceiving a substantial change in an artwork’s style, giving artists a distinctive protection against style mimicry.

Finally, we discuss Nightshade, a tactical protection tool for artists that functions like a “poison pill”: it lets them make subtle adjustments to their images that “break” generative AI models trained on them. A minimal sketch of the perturbation idea underlying these tools appears below.
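To make the shared idea concrete, here is a minimal, illustrative sketch of the kind of adversarial perturbation these tools build on: nudge an image within a tight pixel budget so a model’s embedding of it moves far from the truth, while the image looks unchanged to people. This is not the actual Fawkes, Glaze, or Nightshade code; the resnet18 backbone stands in for a face-embedding or style-encoding model, and the `cloak` function name, step count, and budgets are hypothetical choices for illustration.

```python
import torch
import torch.nn.functional as F
import torchvision.models as models

# Stand-in feature extractor (NOT the model the real tools target;
# any face-embedding or style-encoding network plays this role).
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()  # use penultimate features as the embedding
backbone.eval()

def cloak(image, steps=50, epsilon=0.03, alpha=0.005):
    """Perturb `image` (a (1, 3, H, W) tensor in [0, 1]) so its embedding
    is pushed away from the clean image's, under an L-infinity budget."""
    with torch.no_grad():
        target = backbone(image)  # embedding of the unmodified image
    # Random start within the budget; a zero start would give a zero gradient.
    delta = torch.empty_like(image).uniform_(-epsilon, epsilon)
    delta = ((image + delta).clamp(0, 1) - image).requires_grad_(True)
    for _ in range(steps):
        emb = backbone(image + delta)
        # Gradient *ascent*: maximize distance from the clean embedding.
        loss = F.mse_loss(emb, target)
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()
            delta.clamp_(-epsilon, epsilon)                    # stay imperceptible
            delta.copy_((image + delta).clamp(0, 1) - image)   # stay a valid image
        delta.grad.zero_()
    return (image + delta).detach()

cloaked = cloak(torch.rand(1, 3, 224, 224))  # toy input; a real photo in practice
```

The L-infinity clamp is what keeps the change invisible while the ascent step drives the model’s embedding away from the truth. The production tools are far more sophisticated, using perceptual constraints and carefully chosen target embeddings rather than this simple “push away” objective, but the tension they exploit is the same one this sketch shows.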

