The article I read this week, “AI Images in Google Search Results Have Opened a Portal to Hell,” was written by Emanuel Maiberg and published on the 404 Media website. According to the 404 Media About page, “404 Media is a journalist-founded digital media company exploring the ways technology is shaping–and is shaped by–our world.”
This is a group of independent journalists who publish “investigative reports, longform features, blogs, and scoops about topics including: hacking, cybersecurity, cybercrime, sex, artificial intelligence, consumer rights, surveillance, privacy, and the democratization of the internet.”
It’s great to read articles produced by honest, unbiased journalists, and I was glad to see them take on AI and image-generation websites. Basically, the article discusses how these sites often produce images of adult celebrities depicted as children in bikinis or nude.
I wasn’t surprised by this, as there are many accounts of “AI” producing inappropriate, offensive, or downright vulgar images.
Maiberg describes how he contacted Google with questions about some of the inappropriate images its programs produced. Google responded with what I see as unconvincing statements about its good intentions to create clean, appropriate content and pledged to continue to watch out for problems.
The Google spokesperson did admit that the company “could do a better job labeling AI images created with its own generator,” and provided a link to a company blog post titled “New and better ways to create images with Imagen 2.” Hopefully this is a good start for Google, but I fear others may not be so diligent.
Maiberg also writes about sites including Lexica, Neural Love, Prompt Hunt, NightCafe, and DeviantArt that have produced AI-generated images of celebrities as children. There is no mention of any celebrities giving permission for this. Maiberg also notes that these companies often pledge to do better and have policies that “prohibit nonconsensual pornography and CSAM (Child Sexual Abuse Material),” but he adds, “That doesn’t mean these policies are enforced perfectly.”
These kinds of responses make me angry. When talking about abuse and inappropriate treatment of children, there is no gray area, no room for mistakes. People should never permit this, and shame on these companies for ever allowing it to happen. Now some might say these programs just make mistakes and that nothing is intentional, but that’s not good enough. Are they really willing to believe that some problems are bound to happen, and that this is okay because “AI” is serving some higher purpose we should all just accept?
I have five grandchildren, and I would never accept them being used by some “AI” company for any dubious purpose. Instead, why don’t these companies hit the brakes and ensure their shiny new programs only produce safe content? Why does “AI” get a free pass here?
Another sentence from Maiberg suggests that Google may not be so dedicated to doing the right thing after all: “The Google spokesperson said that the company isn’t able to comment on any actions taken against specific sites, but I have not noticed any of the sites mentioned in this story being blocked from search results after I flagged them to Google.” That’s not good enough, Google.
I’ve been thinking about how to use this newsletter to do more than just complain about AI. I did a little searching and found a U.S. government organization called the National AI Advisory Committee (NAIAC). Its website has tons of information on AI safety and recommendations, and there is an email address to contact the committee: naiac@nist.gov.
I’m going to send them a link to the newsletter, and I hope you will consider contacting them too. I’ll take a deeper look at it this week.
Have a great week,
Will Granger
Don’t be a sheep