
What you need to know about Google's Gemini launch 🚀

Plus, interview with Innovation in Focus editor on AI chatbots & news!

Welcome to The Upgrade

Welcome to my weekly newsletter, which focuses on the intersection of AI, media, and storytelling. A special welcome to my new readers from Google, Uniomedia Group, Columbia Business School, and many other top organizations — you’re in good company!

In today’s issue:

  • The Week’s Top AI Stories 📰

  • 🚀 Google Launches Gemini: Initial Insights

  • 🎓 Sign up for my live, online AI Fundamentals course⚡️

  • 🎙️The Big Interview: Emily Lytle, editor of Innovation in Focus on AI chatbots in newsrooms

The Week’s Top AI Stories

The Arrival of Gemini

  • Google launches Gemini, the AI model it hopes will take down GPT-4 — The Verge

  • Google Just Launched Gemini, Its Long-Awaited Answer to ChatGPT — WIRED

  • Google admits that a Gemini AI demo video was staged — Engadget

  • How to Use Google’s Gemini AI Right Now in Its Bard Chatbot — WIRED

Regulation & Policy

  • How Nations Are Losing a Global Race to Tackle A.I.’s Harms — The New York Times

  • Europe was set to lead the world on AI regulation. But can leaders reach a deal? — The AP

  • EU's AI Act could exclude open-source models from regulation — Reuters

Ethics & Safety

  • Google researchers say they got OpenAI's ChatGPT to reveal some of its training data with just one word — Business Insider

  • How Moral Can A.I. Really Be? — The New Yorker

Legal & Copyright

  • AI-Generated Jimmy Stewart Narrates Bedtime Story for Calm App — Variety

  • Runway incorporates Getty Images into its AI-generated video — Axios

  • The Generative AI Copyright Fight Is Just Getting Started — WIRED

In the Workplace

  • The Workplace Security Risk of ‘Bring Your Own AI’ — SHRM

  • Outsmarting AI: The New Challenge For Job Seekers And Employers — Forbes

Google Launches Gemini: Initial Insights💡

This Wednesday, Google launched its highly anticipated flagship AI model, Gemini. The long-awaited answer to ChatGPT, from Microsoft-backed OpenAI, is a big development for the industry and may be the most important algorithm the company has ever released. Google has been working on AI for years but was caught flat-footed last November by OpenAI's release of ChatGPT, built on its GPT-3.5 model. Here's what you should know about Gemini.

As demonstrated in its (misleading) viral demo video, Gemini is designed to be multimodal, meaning it can process and understand a combination of different types of information, including text, code, audio, images, and video. This ability makes it remarkably versatile and powerful. Google has optimized the first version of Gemini into three different sizes to cater to various needs: Gemini Ultra for complex tasks, Gemini Pro for a wide range of tasks, and Gemini Nano for on-device tasks. This flexibility allows Gemini to run efficiently on everything from data centers to mobile devices, enhancing its applicability across different platforms and uses. Right now, its most powerful version, Ultra, is not accessible to everyday users — and won’t be for some time.

One of the notable aspects of Gemini is its performance on industry benchmarks. Gemini Ultra outperformed human experts on Massive Multitask Language Understanding (MMLU), a benchmark spanning 57 subjects such as math, physics, history, law, medicine, and ethics, scoring 90.0% versus GPT-4's 86%. This achievement highlights Gemini's power and potential applications.

Gemini is already being incorporated into Google’s existing products, including its AI-powered chatbot Bard and the Pixel 8 Pro smartphone. It is expected to be integrated into Google’s search engine, enhancing its functionality and efficiency. The roll-out of Gemini will happen in phases, with the most advanced version, Gemini Ultra, set to launch Bard Advanced, a more sophisticated version of the chatbot.

Recently, it was reported that Google was going to delay the Gemini launch until next year because it wasn’t ready. With OpenAI’s recent board debacle, Google’s leadership no doubt sensed blood in the water and pushed to release it before the end of the year. So, how does it stack up against ChatGPT in real-life use cases?

Despite the impressive benchmarks set by Google's Gemini, the early consensus among AI experts is that OpenAI's GPT-4 remains more powerful and relevant for everyday use cases. GPT-4's advanced language processing capabilities and versatility will be hard to beat. Its ability to engage in nuanced conversations, understand complex queries, and provide informative and contextually relevant responses makes it particularly suited for everyday interactions and problem-solving scenarios. This is reflected in the much higher marks GPT-4 achieved on commonsense-reasoning benchmarks (95% vs Gemini's 88%). Time will tell how Gemini stacks up, but the AI race is officially on.

🎓 Sign up for AI Fundamentals! 💻

AI Fundamentals for Professional Communicators and Marketers covers the essentials of Generative AI for media and marketing professionals with novice and beginner-level experience with AI tools. The live 90-minute sessions will take place on Wednesdays, starting January 17th, at 7pm ET / 4pm PT. Start 2024 by leveling up in AI!⚡️

🎙️The Big Interview: Emily Lytle of IIF Newsletter

Emily Lytle is the editor of Innovation in Focus, a monthly newsletter from the Reynolds Journalism Institute at the University of Missouri.

Note: This interview has been edited for brevity and clarity.

Peter: Tell me about Innovation in Focus.

Emily: The Innovation in Focus series is a monthly newsletter where we partner with news organizations across the country on what we call short-term experiments. So it's anything where we're trying something new. It could be a new technology, a tool, a new storytelling method, a different way of reporting, or anything that is new and practical and is going to help journalists do their everyday work and serve their communities better. We're here to collaborate with news organizations, try out new ideas, usually over a short period, and then write about those experiments or projects for articles on RJI's website and in the newsletter. On top of that, we interview experts related to that month’s topic.

Peter: I've read several AI-focused issues you all have done this year. How did you start thinking about AI experiments in newsrooms?

Emily: When all the conversations shifted to focus on AI, we knew we wanted to stay on top of that trend and try out some of the tools that were popping up. But we also didn't want to give in to the hype and try something for the sake of trying it. So that was a really big takeaway very early on: we wanted to make sure we were trying things or tools that were solving a problem for a newsroom or addressing a real challenge.

We started looking at ChatGPT when it was first launching and asked our partners, “How can we use this? Can we trust it to work in a newsroom setting at this point?” I remember talking to Ryan Sorrell at the Kansas City Defender, and he was telling me very early on that he was using it to help create restaurant guides. He had to provide ChatGPT with all this context: “I'm a startup news organization in Kansas City. I focus on this young Black community, and I'm creating this restaurant guide.” It was really interesting talking to him about how he did that. Initially, it was a lot of research and reaching out to journalists like Ryan, and it took us a while to figure out what an experiment might look like.

Peter: What did you initially home in on?

Emily: We began looking at what platforms were out there that you could use to build a chatbot for your newsroom if you didn't have any coding experience or technical knowledge. What options existed? That's where we started.

Peter: And how did it go?

Emily: We started out working with the Missouri Independent. They wanted to create a chatbot that would answer questions about policy and legislation because that's their focus. So, we spent a lot of time discussing what topic made the most sense. What is something that readers would be motivated to ask questions about?

But then we also had to think about sensitivity. One of the biggest legislation issues was anti-trans legislation in Missouri. They had all this deep, incredible reporting about what was going on and what was coming next, and it seemed like something a lot of people were talking about. So we thought that would be a great option. But we were also concerned. We needed to be very cautious when testing these tools to make sure the chatbot wasn't pulling anything out of context or citing extremist points of view, or points of view that might be offensive or hurtful or damage the Missouri Independent's reputation.

Another aspect that made it challenging was that they felt more confident putting a bot on their social channels than on their website. It felt like a better place for them to experiment and say, hey, we can try this for a little bit to see how it goes. It turned out it was more complicated than we expected. We had to figure out APIs and things I didn't know anything about. It required significant staff time to navigate that. So that was a big barrier we ran into with them and part of why we shifted to work with the Queen City Nerve.

Peter: So what were your findings, the results of the experiment?

Emily: Ultimately, we found a lot of potential in all of the tools we tested, but sometimes the tools were not quite ready for what we wanted to do. For example, with Queen City Nerve, a big part of what they wanted to accomplish was uploading a lot of the police documentation they had in CSV files. This was an option with one of the tools we were testing, ChatThing. We ran into a lot of issues with it being able to read those files and correctly refer to information from them. We also noticed that citing sources was a great option in a few of the tools. But sometimes it cited sources that were not correct. Other times, the answer could not be found in the article it was citing.

The errors were concerning enough for a news organization with a high standard of accuracy that we hit pause and didn't end up publishing anything public-facing for the chatbots. We're hopeful that there will be improvements, and maybe this is something we can try again next year. But at that point, we didn't feel the tools we tried were quite ready for what we wanted to do.

Peter: Got it. I know you're hyper-focused on news, but do you think that the tech is there yet for other types of organizations?

Emily: I think a big part of it is data privacy. No matter what type of organization you are, you should be concerned about that aspect. If you’re uploading documents into a tool for it to train a chatbot, you want to make sure that information is not being shared elsewhere or used to train other AI models or something like that. That's something I examine when I'm looking at new products or new tools. How are they treating data? It can take a lot of work to look into the fine print. I have a lot of respect for tools that are clear and transparent about what they do with your data.

Peter: Beyond the hype, what are you most excited about regarding AI and news?

Emily: Something that I've been excited about is the use of AI in a search function or in surfacing news content. Many news organizations, whether legacy outlets or newer ones, have this vast resource of evergreen content and public service information. All these articles are not being put in front of their audience as much as they could be, but AI could help surface them. I also think it's fascinating how AI might help in the research phase: sorting through long, jargon-filled documents and easing some of that painful work of journalism when you're digging through lots of paperwork.

Don’t be shy—hit reply if you have thoughts or feedback. I’d love to connect with you!

Until next week,

Psst… Did someone forward this to you? Subscribe here!

