
Author of "The Algorithm" on Dystopian AI-fueled Labor Practices

Plus, Taylor Swift AI Deepfakes: the Tipping Point!

Welcome to The Upgrade

Welcome to my weekly newsletter, which focuses on the intersection of AI, media, and storytelling. A special welcome to my new readers from UCLA, FutureNet, Microsoft, and many other top organizations — you’re in good company!

In today’s issue:

  • The Week’s Top AI Stories 📰

  • 🎓 Next Course: AI Boost starts in February!⚡️

  • Taylor Swift AI Deepfakes: the Tipping Point! 🤖

  • 🎙️The Big Interview: Journalist & Author of The Algorithm, Hilke Schellmann, investigates AI’s use in labor practices

The Week’s Top AI Stories

Top AI Headlines

  • Explicit Deepfake Images of Taylor Swift Elude Safeguards and Swamp Social Media — The New York Times

  • Taylor Swift AI Pictures Spark Fury — Newsweek

  • FTC Launches Probe of Microsoft, Alphabet, and Amazon’s AI Investments — Barron’s

  • Researchers Say the Deepfake Biden Robocall Was Likely Made With Tools From AI Startup ElevenLabs — WIRED

Regulation & Policy

  • The Sleepy Copyright Office in the Middle of a High-Stakes Clash Over A.I. — The New York Times

  • Feds Launch Inquiry Into OpenAI and Microsoft’s Messy Relationship — Gizmodo

Ethics & Safety

  • Satya Nadella says the explicit Taylor Swift AI fakes are ‘alarming and terrible’ — The Verge

  • How We Can Control AI — The Wall Street Journal

  • Victoria's Secret's new AI shopping partnership exposes new dangers, experts say — FOX Business

  • AI rise will lead to increase in cyberattacks, GCHQ warns — Reuters

  • Algorithms Are Everywhere. How You Can Take Back Control — The Future of Everything (Podcast)

Legal & Copyright

  • Most Top News Sites Block AI Bots. Right-Wing Media Welcomes Them — WIRED

  • We Asked A.I. to Create the Joker. It Generated a Copyrighted Image. — The New York Times

  • Anthropic hits back at music publishers in AI copyright lawsuit — VentureBeat

In the Workplace

  • Tech companies are slashing thousands of jobs as they pivot toward AI — CBS News

  • How A.I. Fakes May Harm Your Business—and What This Founder Is Doing to Help — Inc.

  • Media companies should turn to the Three R’s in order to navigate the tech landscape — Fast Company

  • News industry off to brutal 2024 start as mass layoffs devastate publishers, raising questions about the future of journalism — CNN

  • The Media Is Melting Down, and Neither Billionaires Nor Journalists Can Seem to Stop It — The Hollywood Reporter

Taylor Swift AI Deepfakes: the Tipping Point!

Offensive, sexually explicit, and nonconsensual AI-generated images of Taylor Swift are circulating widely on X and other social media platforms, raising widespread concerns about privacy violations and the misuse of AI technology. Despite users’ efforts to report the posts and suppress the trending topic, the explicit images remain accessible on various sites. The disturbing incident highlights the darker side of AI, showcasing its potential to create damaging content without consent for targeted online harassment and abuse, and it underscores the urgent need for effective legislation and platform guidelines to protect individuals’ digital rights.

The images, originating from a Telegram chat dedicated to creating nonconsensual explicit images, spread across social media platforms earlier this week. Despite efforts to remove them, their reach was alarming — estimated in the tens of millions and still growing. Swift’s passionate fanbase, the "Swifties," mobilized in a remarkable counter-campaign to report and flag the content. They also flooded social platforms with positive content and supportive messages, trying to bury the AI-generated content with trending phrases like “PROTECT TAYLOR SWIFT” to raise awareness and defend the pop idol. Sadly, Pandora’s box has been opened.

The Bigger Picture: A Post-Truth Era & The Need for Action

The gravity of the deepfake issue finally reached governmental levels, with the White House expressing alarm over the spread of the explicit Swift deepfakes. This incident is a form of gender-based violence and sexual harassment, with female celebrities and politicians facing disproportionate and increasingly severe impacts. The widespread availability of generative AI tools has amplified these abuses. We are entering a new, unsettling phase in the digital age – a post-truth era where discerning reality becomes increasingly challenging. The ease with which malicious actors can create and disseminate false yet convincing digital content means we can no longer trust anything we see, read, hear, or watch online.

As lawmakers scramble to address this through legislation, it becomes evident that we need a multifaceted approach. This includes technological solutions for better detection of deepfakes, legal frameworks to punish perpetrators, and, perhaps most importantly, a societal shift in how we consume and share digital content. This incident with Taylor Swift is not just an attack on a celebrity; it's a bellwether for the vulnerabilities we all face online today. The scandal should be an urgent call to action for all of us – tech leaders, developers, lawmakers, and ordinary citizens – to come together to address these technologies' ethical, legal, and social challenges. As we navigate this post-truth era, our collective response to these incidents will determine our digital future.

🎓 AI Boost: Starts in February! 💻

AI Boost for Professional Communicators and Marketers covers the essentials of Generative AI for media and marketing professionals with beginner-level experience using AI tools. The live 90-minute sessions will take place on Tuesdays, starting on February 13th, at 2pm ET / 11am PT. Spots are already filling up!⚡️

SAVE 20% WITH CODE: THEUPGRADE20

🎙️Interview: Author Hilke Schellmann

Hilke Schellmann is an Emmy Award-winning investigative journalist, the author of The Algorithm, and an Assistant Professor of Journalism at New York University.

Note: This interview has been edited for brevity and clarity.

Peter: Tell me why you focused your reporting on AI.

Hilke: AI is very present in hiring and at work, and we may or may not feel or see that. So part of my job is also to tell people. It's already there: if you apply via a job platform, be it LinkedIn, Indeed, or Monster — all of them use AI. Chances are that your resume will be processed by AI. We know that 99% of Fortune 500 companies use AI in their hiring funnel. So it's pretty likely that your application will be touched by AI and not by humans. A human hiring manager can only discriminate against a few people — and I'm sorry if they do that. The scope of an algorithm is just unprecedented, though. Imagine: Google gets about 3 million applications yearly to sort with its resume screener. The scope is just terrifying in this regard.
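To make the scale point concrete, here is a deliberately crude sketch of what automated keyword-based resume screening can look like. Everything in it — the keywords, the threshold, the scoring logic — is invented for illustration; it does not represent how LinkedIn, Indeed, Monster, Google, or any real vendor actually screens applicants. The point is only that a few lines of code can reject applicants en masse with no human review:

```python
# Toy keyword-based resume screener -- purely illustrative, NOT any
# vendor's real system. Keywords and threshold are hypothetical.

REQUIRED_KEYWORDS = {"python", "sql", "etl"}  # hypothetical job requirements
PASS_THRESHOLD = 2                            # hypothetical cutoff

def score_resume(text: str) -> int:
    """Count how many required keywords appear in the resume text."""
    words = set(text.lower().split())
    return len(REQUIRED_KEYWORDS & words)

def screen(resumes: list[str]) -> list[bool]:
    """Return a pass/fail decision for each resume -- no human in the loop."""
    return [score_resume(r) >= PASS_THRESHOLD for r in resumes]

decisions = screen([
    "Data engineer with python and sql experience",
    "Marketing specialist, social media campaigns",
])
print(decisions)  # [True, False]
```

A filter this naive would silently reject a qualified candidate who wrote "Postgres" instead of "sql" — which is exactly the kind of opaque, unappealable decision, multiplied across millions of applications, that makes the scale so alarming.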

And then we've also seen where, during video screening interviews, companies use emotional recognition on our faces to figure out if we are happy or angry while we answer these interview questions. And when you start looking into it, you ask, “What do facial expressions in a job interview have to do with you being successful in the job?” And as I kept looking into that, it turns out this is not based on sound science. Not to mention, the use of most of these systems has no level of disclosure or consent.

Peter: Is there anyone we should be looking to and calling to say, hey, can you advocate for me?

Hilke: Yes and no. Launching an AI tool for hiring doesn't require any special permissions or disclosures. You and I could do it today without any licensing or anything. We do have some watchdogs, like the U.S. Equal Employment Opportunity Commission, or EEOC, who are vigilant about fair play in job assessments. They're big on preventing discrimination against protected groups, ensuring equal hiring opportunities regardless of gender, race, or disability. But here's the catch: neither the AI tool users nor the providers are obligated to report how their tools work or to conduct audits unless a discrimination complaint surfaces. That's where the dilemma lies. Job rejections are ubiquitous; we’ve all faced many, but it's hard to tell whether one is due to merit or a flawed algorithm. So, to get the EEOC involved, there has to be a formal complaint. In my view, this system needs a serious overhaul. There's talk about allowing people to raise concerns if they suspect AI bias in hiring, without needing concrete evidence upfront.

So, we're really not seeing much regulation, right? What's interesting is the National Labor Relations Board peeking into this. They're worried about how companies might use AI to snoop on employees' communications, especially those about union stuff. This kind of surveillance could lead to retaliation against those involved in union activities, which is a big no-no. Now, I didn't expect the Labor Relations Board to dive into this, but they seem to be eyeing some kind of disclosure rules. But outright banning employers from monitoring workplace communications? That seems unlikely, unfortunately.

Peter: Wow, this is really tricky. After navigating the algorithmic hiring process, what happens next in terms of surveillance? Specifically, how are the top corporations using AI to monitor their employees? I'm curious about the steps following the initial AI-based hiring.

Hilke: Right, so after someone lands a job through AI hiring processes, companies don't just stop there. They're using AI in all sorts of surveillance ways, but nobody's really keeping tabs on this. Employees know they're being watched, like every click they make, every website they visit. I tried out one of these tools myself, and it's eye-opening. It basically judges your productivity by the websites you're on, which isn't always accurate. And it's not just about web browsing – call centers are using AI to analyze calls in real-time. They're giving workers live feedback – talk faster, show more empathy, that kind of thing. Then there's this whole 'insider risk' angle where companies are monitoring everything employees do on their computers, looking for unusual patterns. It's all about spotting risks before they happen. And it's not just about security; it's about behavior, too. Are you being a team player in meetings, or are you a problem? Everyone's under the AI microscope.

Don’t be shy—hit reply if you have thoughts or feedback. I’d love to connect with you!

Until next week,

Psst… Did someone forward this to you? Subscribe here!
