Poynter Faculty Tony Elkins on AI: "I feel like the doom and gloom person..."

Plus, AI Fundamentals course starts Monday, and headlines!

Welcome to The Upgrade

Welcome to my weekly newsletter, which focuses on the intersection of AI, media, and storytelling. A special greeting to my new readers from NASA, Microsoft, UC Berkeley, and many other top organizations — you’re in good company!

In today’s issue:

  • The Week’s Top AI Stories 📰

  • 🎓 Last call for AI Fundamentals: still 30% off! ⚡️

  • 🎙️ The Big Interview: Poynter Institute faculty Tony Elkins on why AI “places an enormous burden on consumers”

The Week’s Top AI Stories

Top AI Headlines

  • Sam Altman Asserts Control of OpenAI as He Rejoins Its Board — The New York Times

  • Anthropic Ups Its AI Chatbot Game With Claude 3, Rival to ChatGPT and Gemini — CNET

  • Google-Reddit AI Deal Heralds New Era in Social Media Licensing — Bloomberg Law

  • TikTok is becoming swamped with AI-generated conspiracy theory content — PCGamer

  • OpenAI and Elon Musk keep trading barbs. Meanwhile, trust in AI is fading — Fast Company

Regulation & Policy

  • AI Is Getting Compared to Nuclear Weapons. Everyone Needs to Calm Down. — Barron’s

  • Whistleblowers call out AI's flaws — Axios

  • The Fear That Inspired the Creation of OpenAI — WIRED

Ethics & Safety

  • Bay Area Google employee charged with sharing AI trade secrets with Chinese firms — The San Francisco Chronicle

  • AI chatbot models ‘think’ in English even when using other languages — New Scientist

  • The Lifeblood of the AI Boom — The Atlantic

  • Google’s Genie game maker is what happens when AI watches 30K hrs of video games — Ars Technica

Legal & Copyright

  • Musk wanted control over OpenAI, emails released by the company allege — The Washington Post

  • OpenAI’s legal battles are not putting off customers—yet — The Economist

  • Microsoft’s AI Tool Generates Sexually Harmful and Violent Images, Engineer Warns — The Wall Street Journal

In the Workplace

  • LLMs can predict the future as well as—and sometimes better than—humans — Fast Company

  • AI Is Coming! Tips for Keeping Calm and Carrying On — The Wall Street Journal

  • Executives are spending on AI—but just 38% are actually training their workers on it — CNBC

🎓 Class Starts on Monday!

Ready to level up your AI knowledge? Want to become the AI lead on your team? Join me and an amazing group of marketing, communications, and journalism professionals on Monday! ⚡️

The next cohort starts on Monday, March 11th, at 7pm ET / 4pm PT. Eight spots are left, so don’t wait! Grab your discounted ticket using the code below!

30% OFF NOW WITH “THEUPGRADE30”

💡 Poynter Faculty Tony Elkins on AI Risk

Tony Elkins is faculty at the Poynter Institute. He is an expert in leadership coaching, AI ethics, design thinking, product development, and innovation. His past roles include Product Director at Gannett Media, Director of Innovation at GateHouse Media, and a range of editorial positions.

Note: This interview has been edited for brevity and clarity.

Peter: Tony, your perspective on AI in journalism is fascinating. You mentioned feeling like the "doom and gloom" person regarding AI's future in the field. Can you elaborate on that?

Tony: My journey through media innovation and product development, especially at Gannett, involved managing teams working on cutting-edge machine-learning projects. It feels surreal how quickly the tech industry has moved, and we’re now grappling with the implications of generative AI in every form of media. It's astonishing how what was once the frontier of innovation is now seemingly outdated. There are a lot of risks that come with this flood of AI tools into the market.

Peter: At Poynter, you're involved in a variety of initiatives. Could you explain your role and Poynter Institute's mission?

Tony: Poynter Institute is renowned for its journalism training, especially in leadership. We offer hard skills workshops, and I conduct many in-person sessions in St. Petersburg that focus on leadership. Beyond training, Poynter supports several organizations like MediaWise, PolitiFact, and the International Fact-Checking Network, all aimed at empowering journalists to promote truth and democracy. Due to my design background, my role extends to publishing—particularly in visual AI. I feel it's crucial to address this area as new visual tools emerge, ensuring it's part of our broader conversation about AI's impact on journalism.

Peter: I’m a multimedia journalist by training, and I've been watching this area closely too. How do you see these visual AI tools shaping the future of journalism?

Tony: With the advent of text-to-image generators like DALL-E, Midjourney, and Stable Diffusion, and the more recent text-to-video model Sora, we're at the cusp of a significant transformation. These tools offer unprecedented capabilities for visual storytelling but also pose challenges in ensuring authenticity and ethical use. My focus is on understanding and integrating these technologies thoughtfully, recognizing their potential to enrich journalism while being mindful of the ethical considerations they bring.

Peter: The rapid advancement of visual AI is profoundly changing the media industry. Tony, how do you view this shift, especially with easily accessible technology enabling anyone to create highly realistic images and videos?

Tony: The landscape is definitely shifting dramatically. The ability for anyone with basic software to produce content that's indistinguishable from professional work poses a significant challenge to creatives and the business models of many industries. More importantly, the implications for the media ecosystem and democracy are profound and, frankly, alarming. With the proliferation of generative AI, the potential for misuse in critical times, like elections, is a major concern.

Peter: Your cautious optimism is notable. Can you share more on your concerns, particularly about the visual aspects of generative AI technology?

Tony: My primary worry lies in the low barrier to creating ultra-realistic fake content. Previously, producing convincing fake images or videos required significant skill or resources. Now, anyone with a free or low-cost app can generate highly realistic visuals, undermining our trust in what we see and hear. This accessibility to powerful tools without the necessary media literacy among the general public magnifies the potential for misinformation and disinformation, significantly impacting our perception of reality.

Peter: It seems the boundary between real and fake is blurring. How do you see this affecting the public's ability to discern truth?

Tony: The shift towards hyper-realistic AI-generated content has reached a point where distinguishing between real and fake is increasingly challenging, even for savvy consumers. The rapid improvement of these technologies means we're entering a realm where everything we encounter must be questioned. This erosion of trust places an enormous burden on consumers, demanding a higher level of media literacy than ever before. It's a concerning development that shifts the responsibility for discerning truth entirely onto the audience, a task for which many are not prepared.

Peter: The Content Authenticity Initiative is gaining traction among various stakeholders, aiming to embed provenance in images. How do you see this evolving?

Tony: The initiative is a promising collaboration between media, tech companies, NGOs, and academics to establish an open industry standard for image provenance. It's exciting to see big names like Canon, Leica, Nikon, Getty Images, and Adobe supporting this, potentially transforming how authenticity is communicated and verified. However, ensuring this metadata remains intact through various uploads and downloads is critical but very challenging.

Peter: The challenge of maintaining authenticity in the digital age is daunting. How do you perceive the efforts to mitigate the risks associated with generative AI?

Tony: While the conversation around content authenticity is crucial, I'm concerned we might be too late. The pace at which AI tools are being adopted and content is being generated and shared online outstrips the development and implementation of these initiatives. This gap leaves room for misuse and misinformation, with bad actors finding ways to circumvent protections. The recent incidents involving deepfakes, like the horrifying explicit fake images of Taylor Swift, underscore the urgency of real policy changes. It's not just public figures at risk but anyone. We need robust legislative solutions to protect individuals from digital victimization and extortion.

Don’t be shy—hit reply if you have thoughts or feedback. I’d love to connect with you!

Until next week,

Psst… Did someone forward this to you? Subscribe here!
