The MIT Technology Review shares how a new tool, called Nightshade, lets artists add invisible changes to the pixels in their art before they upload it online. Nightshade messes up scraped training data and causes the resulting image-generating AI models to break in chaotic and unpredictable ways.
Nightshade is intended as a way to fight back against AI companies that use artists’ work to train their models without the creators’ permission. Using it to “poison” this training data could damage future iterations of image-generating AI models such as DALL-E, Midjourney, and Stable Diffusion by rendering some of their outputs useless: dogs become cats, cars become cows, and so forth. MIT Technology Review got an exclusive preview of the research, which has been submitted for peer review at the computer security conference Usenix.