It’s Friday so you know what that means: It’s weekly roundup time.
On the podcast this week, we talked about Google search being taken over by AI images, how you might have helped fund a big raise for Elon Musk without even realizing it, and in the paid subscribers’ section, a remote Amazon tribe and the “porn addiction” story that went viral recently.
Listen to the weekly podcast on Apple Podcasts or Spotify, or watch it on our YouTube channel. Paid subscribers get a link to the bonus feed in their inbox!
Contact us here about advertising in the free version of our newsletter.
Ok, let’s get into it:
BIKINI SEASON
Even when search terms do not explicitly ask for it, Google's image search is serving AI-generated images of celebrities in swimsuits, with the celebrities made to look like underage children. Emanuel documented how clicking on the images leads to AI image generation sites, and in a couple of cases the recommendation engines on those sites lead users to AI-generated nonconsensual nude images and AI-generated nude images of celebrities made to look like children.
SCREENGRAB: TESLA
DAMN IT
If you have any investments whatsoever, including a 401(k) or an IRA, there's a very good chance you voted to give Elon Musk a $56 billion pay package as the CEO of Tesla. You probably did this without even knowing it. Jason wrote about how investing in exchange-traded funds (ETFs), index funds, or mutual funds that buy shares across a large swath of the stock market means some of your money ends up in Tesla, and how Vanguard announced that it has changed its vote from "no" to "yes" on whether Musk deserved a massive new compensation package.
THAT’S A MOUTHFUL
Earlier this week, Senator Ted Cruz introduced the "Tools to Address Known Exploitation by Immobilizing Technological Deepfakes on Websites and Networks (TAKE IT DOWN) Act," the first federal bill that would penalize platforms for hosting non-consensual deepfakes. Interesting timing, considering I wrote about the risks inherent to such laws just a few weeks ago.
Collage via 404 Media
MAYBE DON’T?
TikTok is making AI-generated avatars of real people available for brands to use in ads, offering both stock and custom digital avatars. But as Jules wrote, the Federal Trade Commission warned about the dangers of using AI in marketing contexts just last week. "Don't use consumer relationships with avatars and bots for commercial manipulation," the FTC said.
Paid subscribers get commenting privileges and we love hearing from you! Responding to “AI Images in Google Search Results Have Opened a Portal to Hell,” Cooper Quintin wrote:
“It’s not just google that has this problem, bing and the engines powered by it such as yahoo and duck duck go also have an AI generated image problem. Since google gets a lot more traffic it’s more of a problem arguably but this problem is endemic not limited only to google.”
And L.K. replied to Cooper:
“That’s true, however I do think Google’s case is particularly notable given it is actively developing generative AI tools of the kind that is rapidly deteriorating its search abilities. It’s diving headfirst into AI without taking the time to insulate itself from the consequences.”
Replying to “Game Studio's Job Requirement: 'Non Negotiable' Nude Sauna Sessions,” someone named “Afj8YdSydMqwznsDpQw” said:
“I couldn't be paid enough to get into a sauna with co-workers. That's next-level weird af.”
Scratching “giant sauna for all of our subscribers” off our first anniversary party wishlist...
BEHIND THE BLOG
This is Behind the Blog, where we share our behind-the-scenes thoughts about how a few of our top stories of the week came together. This week, we discuss Joe Biden "cheapfakes," the idea of porn-addicted tribes, and feedback.
EMANUEL: Don’t ask me why, but at some point this week I ended up scrolling the Twitter account of conservative commentator Erick Erickson. While scrolling, I saw a tweet of a very short clip from a White House press conference in which press secretary Karine Jean-Pierre said that videos of Joe Biden looking lost and confused are “deepfakes.”
The clip, which went viral in Let’s Go Brandon circles, has since broken containment, and I understand why. Say what you will about whether the media, especially right-wing talking heads and cable news, is treating Biden’s age fairly, but nothing I’ve seen online, and certainly not the clips in question, has anything to do with “deepfakes,” which involve using AI to manipulate video, typically to face-swap someone into a video they didn’t actually appear in.
It’s an outrageous and misleading statement for the press secretary to make, and something I considered covering, so the first thing I did was look up the full press briefing to get the full context for how, why, and when she made the statement. Doing this made it immediately clear that the viral clip of Jean-Pierre was itself misleading. Read the rest of Emanuel's Behind the Blog, as well as Jason and Sam's, by becoming a paid subscriber.
Okay, I need to go to the library. See you next week.