3️⃣ AI Ethics with Lauren
Stable Diffusion is an incredible tool that has demonstrated its power and versatility since its release! Given Robin Rombach et al.'s work showing that a compressed latent representation can stand in for an image itself, with a full image generated from it, the ethical application that came to mind for me was obfuscation.
Loosely defined, obfuscation is the practice of protecting one's privacy, and protesting the overuse of personally identifiable information, by adding noise in various forms. A great application of obfuscation is Fawkes, a tool from the University of Chicago Department of Computer Science co-led by Emily Wenger and Shawn Shan. This innovative tool adds subtle noise to personal images to prevent facial recognition algorithms from properly identifying a person, misidentifying them in most cases and adding a layer of privacy protection. This obfuscates a person's biometric data and respects their wish to remain anonymous, often in settings where there is no other way to opt out.
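As a rough illustration only: Fawkes' actual "cloaking" is a targeted, feature-space optimization against recognition models, not random noise, but the general idea of a small, bounded perturbation that leaves an image visually unchanged can be sketched like this (the function name and epsilon bound are illustrative assumptions, not Fawkes' API):

```python
import numpy as np

def obfuscate(image, epsilon=0.03, seed=0):
    """Toy noise-based obfuscation: add a small bounded perturbation
    to an image with pixel values in [0, 1].

    NOTE: illustrative only. Fawkes computes targeted perturbations
    via optimization in a face-recognition model's feature space;
    random noise this small would not fool a real recognizer.
    """
    rng = np.random.default_rng(seed)
    noise = rng.uniform(-epsilon, epsilon, size=image.shape)
    # Clip back to the valid pixel range after perturbing
    return np.clip(image + noise, 0.0, 1.0)

# A stand-in 64x64 grayscale "photo" of random values
photo = np.random.default_rng(1).random((64, 64))
cloaked = obfuscate(photo)

# The perturbation stays within the epsilon bound, so the
# cloaked image is visually indistinguishable from the original
print(np.abs(cloaked - photo).max() <= 0.03)  # True
```

The key property, which real cloaking tools share, is that the change is imperceptible to humans while (in the real, optimized case) disrupting the features a recognition model relies on.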
If we combine these two incredible technologies, Stable Diffusion may counteract the efforts of Fawkes by reversing its obfuscation: processing the noise-cloaked image to reconstruct a person's likeness, revealing biometric data and violating their wish to remain anonymous. This is one specific application and is not meant as a generalization about Stable Diffusion, but it is an interesting use case that warrants further investigation and analysis through different privacy lenses. On the flip side, we already have many positive use cases of Stable Diffusion, including some of the most hilarious and entertaining memes of recent years. So far the applications have been incredibly useful and positive, and I am excited to see the many more to come!
- AI Ethics segment by Lauren Keegan