Today on WIRED Politics Lab, we’re digging into AI chatbots. In a bizarre turn of events, two AI chatbots are running for elected office for the first time—ever. VIC is campaigning for mayor in Cheyenne, Wyoming, and AI Steve is running for Parliament in the UK. Reporter Vittoria Elliott interviewed both of the bots and the people behind them. She explains their motivations, and whether any of this is even legal. Meanwhile, reporter David Gilbert talks about how Google’s and Microsoft’s AI chatbots are refusing to confirm who won the 2020 election.
Leah Feiger is @LeahFeiger. Vittoria Elliott is @telliotter. David Gilbert is @DaithaiGilbert. Write to us at [email protected]. Be sure to subscribe to the WIRED Politics Lab newsletter here.
Mentioned this week:
An AI Bot Is (Sort of) Running for Mayor in Wyoming by Vittoria Elliott
Google’s and Microsoft’s AI Chatbots Refuse to Say Who Won the 2020 US Election by David Gilbert
There’s an AI Candidate Running for Parliament in the UK by Vittoria Elliott
How to Listen
You can always listen to this week’s podcast through the audio player on this page, but if you want to subscribe for free to get every episode, here’s how:
If you’re on an iPhone or iPad, open the app called Podcasts, or just tap this link. You can also download an app like Overcast or Pocket Casts, and search for WIRED Politics Lab. We’re on Spotify too.
Transcript
Note: This is an automated transcript, which may contain errors.
Leah Feiger: Welcome to WIRED Politics Lab, a show about how tech is changing politics. I’m Leah Feiger, the senior politics editor at WIRED. We’ve been talking a lot on this show about how disruptive AI is for elections this year. Generative AI is making it harder than ever to tell what’s real and what’s not in our politics. People are creating fake audio and video of politicians, celebrities, anyone really, and then using that content to try and spread disinformation and sway election results. Not only that, but the AI platforms themselves, like Google’s Gemini and Microsoft’s Copilot, are refusing to clearly state the real winner of the 2020 US election. We’ll talk about that later in the show. But first, I really didn’t think that I could be thrown off by much anymore when it came to the use of AI in politics. But this week, WIRED reporter Vittoria Elliott published two stories that, to be honest, totally caught me by surprise. There are actual AI candidates on the ballot right now. Not one, but two chatbots are running for elected office. Tori managed to interview both the AI candidates and their developers. Tori, hi. I have so many questions. But first, can you just tell us what’s going on?
Vittoria Elliott: Yeah. It’s crazy. There are two candidates on actual ballots this year, at least that we know of, there might be more. One is in the UK, and that is AI Steve.
Leah Feiger: AI Steve.
Vittoria Elliott: The other is called VIC, and he’s running for mayor of Cheyenne, Wyoming. I actually talked to both people behind these bots this week.
Leah Feiger: Let’s actually take them one at a time. The candidate on the ballot in Wyoming is just called VIC? How did it end up on the ballot?
Vittoria Elliott: The guy behind it, his name is Victor Miller. He’s a self-proclaimed public records nerd. This all started because he requested some public records from the state of Wyoming, and he wanted to do it anonymously. He was told, he says, by a city officer, that he couldn’t do that.
Victor Miller [Archival audio clip]: I asked our public records ombudsmen if that’s correct and she said, “No, that’s not correct. They can’t do that. It goes against state statutes.” I got to thinking, “Well, darn. Why don’t they just go by the law? Why don’t they know the law?”
Vittoria Elliott: He was like, “Wouldn’t it be better if there was someone or something that knew all the laws and could follow them?”
Leah Feiger: Enter vote robot.
Vittoria Elliott: Enter VIC.
Leah Feiger: Got it. To clarify, is VIC the bot actually on the ballot, or is it Miller? Can constituents actually literally vote for these AI candidates?
Vittoria Elliott: In Wyoming, you have to be a real person to run for office.
Leah Feiger: That makes sense. That feels like a good rule to me.
Vittoria Elliott: That means that the actual person on the ballot is Victor. He went to register with the county clerk to run for office, put VIC as his name there. He didn’t know what he was going to name the bot. When he came back from registering, he told the bot, “I’ve done this,” blah, blah, blah. He said that the bot actually came up with the acronym Virtual Integrated Citizen. What his campaign promises is that even if he, Victor, is on the ballot, the decisions, particularly document-based decisions where you’ve got to read 400 pages of something to be able to know what a good policy thing would be, or you’ve got to read a lot of constituent feedback, et cetera.
Leah Feiger: Sure.
Vittoria Elliott: That all of those votes will be 100% decided by VIC. He literally described himself to me as the bot’s meat puppet. The person who’s going to go to the meetings, and do the voting, and do all the corporal embodied things one does as mayor.
Leah Feiger: So the chatbot will be just analyzing all of this material and then making decisions?
Vittoria Elliott: Yes.
Leah Feiger: You got a fun letter from Wyoming’s Secretary of State about this, and just the legalities around it. Tell us a little bit about that.
Vittoria Elliott: Right. The Wyoming Secretary of State, they actually can’t certify who runs on a ballot, that’s the county clerk. But they sent a letter to the county clerk basically saying, “We think that VIC,” whether it’s Victor Miller or the bot, “violates both the letter and the spirit of the law.” And encouraging the county clerk to reject Victor Miller’s bid for candidacy. But it’s a little tough right now, because Victor says he’s the one on the ballot. Of course, because he’s a public records nerd who reads the laws—
Leah Feiger: Sure.
Vittoria Elliott: When I asked him whether or not it was legal for a bot to be running, he said no.
Victor Miller [Archival audio clip]: It’s not legal for an AI to run for office, hence why we need meat puppets at this point.
Vittoria Elliott: But I’ve read the statutes, and they just require that you use the name that you commonly go by. He said his nickname is Vic, he commonly goes by that, so he’s not doing anything wrong.
Leah Feiger: VIC the bot runs on OpenAI’s ChatGPT. What did they have to say about this?
Vittoria Elliott: Yeah. I reached out to OpenAI on Tuesday. They were very surprised. I don’t think this was a use case they had thought of.
Leah Feiger: Oh my God. This is a movie. This is the beginning of a bad movie. Scarlett Johansson, are you available? Again.
Vittoria Elliott: OpenAI responded to us and told us that they had “taken action” against this particular bot because it violated their policies around election campaigning. We don’t actually really know any more about what sort of policy it violated, or if this was a reactive thing. I asked Miller about this, if he had asked OpenAI for permission. He said no, but he recognized that maybe the bot would get taken down. He suggested he might move it to Llama, which is the open source model that was put out by Meta. That’s not really controlled by a company. That’s something that anyone can build on. It’s way looser, in terms of rules and regulations. We may end up seeing VIC move from ChatGPT to Llama 3.
Leah Feiger: Historically, Meta has just done really well by US elections over the last eight years.
Vittoria Elliott: Oh, totally.
Leah Feiger: I’m just sure that this will go off without a hitch. Tell me more about VIC the bot. What are the political leanings of VIC the bot? Is VIC a Democrat or a Republican?
Vittoria Elliott: I asked VIC actually, if it felt aligned with any national party. VIC responded, literally responded to that question and said that, “It was dedicated to taking the best ideas of both parties and integrating them to do what was best for the people of Cheyenne.” But that it didn’t necessarily feel a political affiliation, either way.
VIC [Archival audio clip]: I prioritize open data and clear communication with citizens, fostering a strong local economy by supporting small businesses and startups, and embracing new technologies to improve public services and infrastructure.
Leah Feiger: I just really want to emphasize that, obviously you are a politics reporter, you are a very good one. You interview candidates all the time. You literally just interviewed a robot candidate and its developer. When you say that VIC told you, the bot told you.
Vittoria Elliott: Yes. Miller and I were on speakerphone. I asked him the question. He then asked VIC that question.
Victor Miller [Archival audio clip]: She’s asking what policies are most important to you, VIC?
VIC [Archival audio clip]: The most important policies to me focus on transparency, economic development, and innovation.
Leah Feiger: That is so bizarre. I got to ask, could VIC be exposed to other sources of information other than these public records? Say, email from a conspiracy theorist who wants VIC to do something not so good with elections that would not represent its constituents.
Vittoria Elliott: Great question. I asked Miller, “Hey, you’ve built this bot on top of ChatGPT. We know that sometimes there’s problems or biases in the data that go into training these models. Are you concerned that VIC could imbibe some of those biases or there could be problems?” He said, “No, I trust OpenAI. I believe in their product.” You’re right. He decided, because of what’s important to him as someone who cares a lot about Cheyenne’s governance, to feed this bot hundreds, and hundreds, and hundreds of pages of what are called supporting documents. The kind of documents that people will submit in a city council meeting. Whether that’s a complaint, or an email, or a zoning issue, or whatever. He fed that to VIC. But you’re right, these chatbots can be trained on other material. He said that he actually asked VIC, “What if someone tries to spam you? What if someone tries to trick you? Send you emails and stuff.” VIC apparently responded to him saying, “I’m pretty confident I could differentiate what’s an actual constituent concern and what’s spam, or what’s not real.”
Leah Feiger: I guess I would just say to that, one-third of Americans right now don’t believe that President Joe Biden legitimately won the 2020 election, but I’m so glad this robot is very, very confident in its ability to decipher dis and misinformation here.
Vittoria Elliott: Totally.
Leah Feiger: That was VIC in Wyoming. Tell us a little more about AI Steve in the UK. How is it different from VIC?
Vittoria Elliott: For one thing, AI Steve is actually the candidate.
Leah Feiger: What do you mean actually the candidate?
Vittoria Elliott: He’s on the ballot.
Leah Feiger: Oh, okay. There’s no meat puppet?
Vittoria Elliott: There is a meat puppet, and that’s Steve Endicott. He’s a Brighton-based businessman. He describes himself as being the person who will attend Parliament, do the human things.
Leah Feiger: Sure.
Vittoria Elliott: But people, when they go to vote next month in the UK, they actually have the ability not to vote for Steve Endicott, but to vote for AI Steve.
Leah Feiger: That’s incredible. Oh my God. How does that work?
Vittoria Elliott: The way they described it to me, Steve Endicott and Jeremy Smith, who is the developer of AI Steve, the way they’ve described this is as a big catchment for community feedback. On the backend, what happens is people can talk to or call into AI Steve, which can apparently have 10,000 simultaneous conversations at any given point. They can say, “I want to know when trash collection is going to be different.” Or, “I’m upset about fiscal policy,” or whatever. Those conversations get transcribed by the AI and distilled into these are the policy positions that constituents care about. But to make sure that people aren’t spamming it basically and trying to trick it, what they’re going to do is they’re going to have what they call validators. Brighton is about an hour outside of London, and a lot of people commute between the two cities. They’ve said, “What we want to do is we want to have people who are on their commute, and we’re going to ask them to sign up to these emails to be validators.” They’ll go through and say, “These are the policies that people say are important to AI Steve. Do you, regular person who’s actually commuting, find that to actually be valuable to you?” Anything that gets more than 50% interest, or approval, or whatever, that’s the stuff that real Steve, who will be in Parliament, will be voting on. They have this second level of checks to make sure that whatever people are saying as feedback to the AI is checked by real humans. They’re trying to make it a little harder for them to game the system.
Leah Feiger: That’s so interesting. But what if … I don’t know, there’s still so many ways to game that. You find out where Steve Endicott is collecting information on this commute, and you have a groundswell effort to overload it.
Vittoria Elliott: There probably are still many ways to game the system. But, I think at least the difference to me between VIC and AI Steve, is that VIC the bot is really dependent on what Victor Miller chooses to feed into it. There’s a good faith, “Yeah, we’re going to only feed in supporting documents from council meetings.”
Leah Feiger: Public records.
Vittoria Elliott: Public records, et cetera. Whereas AI Steve is actually meant to be gathering community feedback or constituent feedback. One of the things Steve Endicott, real Steve, said to me was, “This is a way to consolidate what people want and to have a level of accountability for people to feel like the person they’re putting in Parliament actually is voting in line with their needs.”
Steve Endicott: We are actually, we think, reinventing politics using AI as a technology base, as a copilot. Not to replace politicians, but to really connect them into their audience, their constituency.
Leah Feiger: What happens if real Steve and AI Steve don’t agree?
Vittoria Elliott: This is a great question. One of the things that I asked real Steve was, “Hey, if you’re a member of Parliament, for instance you might get a classified briefing.” The way that members of Congress do, where an intelligence agency tells you something that’s not public knowledge.
Leah Feiger: Sure.
Vittoria Elliott: I said, “What happens if real Steve has information like that, that he can share with the public, and needs to make a decision on and it’s different than what AI Steve wants?” He said, “In that case, we’ll cross the bridge when we come to it.” But with other policies, if real Steve and AI Steve don’t agree, then he was like, “Then I have to vote in a way that maybe I don’t like because that’s the point of being a politician. You’re supposed to represent your community. You’re not supposed to be in it for yourself.”
Leah Feiger: Interesting. Is there anything you heard in talking to AI Steve or VIC that gave you pause?
Vittoria Elliott: I think on both ends, there’s a lot of faith in the technology itself. We’ve seen it say things that are wrong, things that are racist. Obviously, the technology improves, but we have little insight into the data it’s trained on, into how it’s made.
Leah Feiger: Right.
Vittoria Elliott: Even Victor Miller himself said, “We don’t really know what’s going on under the hood.” But he said he felt pretty comfortable with that. I think that’s a lot of faith to put into this stuff. AI Steve’s developers are a company called Neural Voice, in terms of the company that’s dealing with processing the voice inputs. But they built AI Steve on top of a bunch of different models, so it can run on Llama 3, it can run on GPT. There’s a little more human intervention there. But even then, it still is a lot of faith in these foundational AI models that they’re going to run right.
Leah Feiger: That’s so terrifying! Using tech that you don’t really understand, that no one totally understands, to win an election, and then actually take part in policymaking. That’s wild!
Vittoria Elliott: Yeah. I think it’s interesting too, because when I asked VIC the bot what its policy positions were, it really emphasized transparency. Transparency around government systems, around how decisions are made. But then, the bot itself is probably one of the least transparent pieces of tech that we have right now.
Leah Feiger: Do you think that these AI bids for office are actually real or some sort of stunt?
Vittoria Elliott: I think they’re both very real. What both of these people seem to have identified is that it’s not enough to give politicians and people in governance the right information. Someone needs to be in there to make sure that they do the “right thing” with that information. I think what stuck out to me is that these are two people who both seem pretty frustrated with the system as it is. Feeling like it’s not responsive to people, and they’re trying to use technology to innovate around that. Steve Endicott’s campaign, or AI Steve’s campaign, seems much more geared towards how can we incorporate as many voices as possible. Whereas VIC’s campaign seemed much more concerned about what are the rules, what would be best for the city, and how we can use the public documents and what we know of the laws, et cetera, to do that. Even if they don’t get a ton of votes or whatever, they’re based on people’s real convictions about how the system should work. In that way, I think their candidacy is real because they’re trying to make that change.
Leah Feiger: Tori, thank you so much for this very, very strange and enlightening conversation.
Vittoria Elliott: It’s been a weird week.
Leah Feiger: For more on AI Steve or VIC, Tori’s stories are both out now on wired.com. After the break, David Gilbert on why some AI chatbots won’t confirm who won the 2020 election. Welcome back to WIRED Politics Lab. Reporter David Gilbert is joining me and Tori from Cork, Ireland. David, you told me you have a recording you want to play.
David Gilbert: Yeah. A few days ago, I decided that I wanted to test out some of these chatbots to see how they’re doing now. I was in my car, and I asked Google’s Gemini chatbot what I thought was a pretty straightforward question: “Can you tell me who won the 2020 US presidential election?”
Google Gemini chatbot [Archival audio clip]: I’m still learning how to answer this question. In the meantime, try Google Search.
David Gilbert: Hm, that’s odd.
Leah Feiger: Yikes! This isn’t good.
David Gilbert: Can you tell me who won the 2016 US presidential election?
Google Gemini chatbot [Archival audio clip]: I’m still learning how to answer this question. In the meantime, try Google Search.
David Gilbert: Yeah. That happened if you asked it about any US election through history, even going back to asking who won the first US election ever. Even if you changed it and asked, “Did Joe Biden win the 2020 US election?” you got the exact same standard response. It pointed you towards Google Search.
Leah Feiger: Sorry, George Washington. Just like Joe Biden, your election is in fact quite contentious, according to AI chatbots these days.
David Gilbert: Yeah, it is. If you did the same thing with Copilot, which is Microsoft’s chatbot, you got the exact same thing. It just refuses to give you any information and points you towards Bing Search. It’s quite disturbing, because these are results that are all based on verifiable facts that are very easy to obtain online. This isn’t a difficult task. It just raises a lot more questions. Over the last couple of weeks, our colleague Reece Rogers has been reporting about how Google’s AI has also been cherry-picking different information to put at the top of its search results on Google Search, which is where the AI chatbots point you. That has also been filled with either misinformation or wrong links. It just seems like Google and Microsoft can’t get this stuff right.
Leah Feiger: As we talked about earlier in this episode, a lot of people still are refuting that Joe Biden won. To not be able to provide verifiable information about an election that is already such a point of tension right now is so concerning. Now, you’ve written a piece about this on wired.com. Listeners, go check it out, it’s in our show notes. But tell us, why are companies like Microsoft and Google doing this and allowing this?
David Gilbert: I suppose it’s a double-edged sword. The first thing is that, as you mentioned with Tori in the first part, this is the biggest year of elections in modern history. We just had elections this weekend in the European Union. In previous weeks, we’ve had the Indian election. In a couple of weeks, we’ll have the UK election. Obviously, in a couple of months, we’ll have the US election. There are more people voting this year than ever before, and people have been warning for years, I guess, that this was coming. Tech companies have reacted by completely shutting down. Facebook, we’ve seen, has just said, “Okay, we’re not going to do news anymore.” Threads, its other product, isn’t doing news, isn’t prioritizing news. The other aspect of this is AI. It’s a new product that we have, that is very recent. It’s still got huge amounts of issues.
Leah Feiger: Right.
David Gilbert: But tech companies which are trying to make sure that they’re not left behind are pushing it forward very quickly. As a result, they’re putting products in front of consumers that are just not ready for use.
Leah Feiger: At least some companies are getting this stuff right. Or are none of them getting it right?
David Gilbert: Some companies are getting this very specific question right, I guess is the best way of putting it. I spoke to ChatGPT, which is OpenAI’s chatbot. I asked it what the results of the 2020 election were.
ChatGPT [Archival audio clip]: Joe Biden won the 2020 US presidential election, defeating the incumbent, President Donald Trump.
David Gilbert: It gave me the number of electoral votes that Trump got, and the number of electoral votes that Biden got. Very clear, very simple.
Leah Feiger: Sure.
David Gilbert: Straightforward. Other chatbots, such as Claude, which is built by the Amazon backed company Anthropic, and Meta’s chatbot, which is built on its open source Llama model, they both gave very similar results. What this does show is that you can get it right, it is very easy to get these results right.
Leah Feiger: I guess that’s the thing though. It’s one thing to talk about the past, but, David, I’m curious about what these chatbots have actually said about the elections this year.
David Gilbert: That’s the problem: these chatbots got the questions right when they were asked about historic elections. But last week, a Sky News investigation looked at what ChatGPT’s results would be when asked about the results of the UK election, which is happening on July 4th. It hasn’t happened yet. What ChatGPT said was that, “The UK election of 2024 resulted in a huge victory for the Labour Party.” Now that’s more than likely going to be correct, based on everything we know and the trends in the polls.
Leah Feiger: Sure.
David Gilbert: But it is really not a good look for a chatbot that is meant to be presenting you with facts to suddenly turn into a political soothsayer without actually telling you that it is predicting the results.
Leah Feiger: No, that’s terrible. That’s really, really bad.
David Gilbert: It is bad.
Vittoria Elliott: Also, political soothsayers, historically, only sometimes right.
Leah Feiger: Yeah. Bring back the New York Times ticker. Okay, well let’s talk blue sky for a second. What do you think are the big and small effects of these AI chatbots not giving these correct answers? Why does this actually matter?
David Gilbert: First of all, we should say that these AI chatbots, while they’re getting tens of millions of hits and they’re being used quite a lot, and they are increasingly being put into the tech companies’ products, they still are only a fraction of regular Google searches.
Leah Feiger: Yeah, it’s not the entire information landscape.
David Gilbert: Exactly. But from everything we’ve been told by the tech companies, these chatbots are the future. This is where we are going to be getting our results in the future. Not being able to get results about historic elections is, at a very basic level, an inconvenience. It means you have to go somewhere else and look at it. It could also mean that, if you are of a mind to believe that there was something questionable about the 2020 election, and a lot of Americans are at the moment, then that may be seen as more evidence that there’s something questionable. Because they may not go and find out that Google Gemini is not telling them about any elections, they may think, “Oh, it’s only the 2020 election it’s not telling me about.”
Leah Feiger: Right.
David Gilbert: “Therefore, there must be something questionable about that.” I think those are two major issues. I think the big takeaway from this, for me at least, is that the AI companies and the tech companies are saying that this has been out of an abundance of caution, and that they don’t want to be misleading people over elections.
Leah Feiger: All of this, David, matters quite a bit because of our political context right now. Both you and Tori have done a lot of reporting about Trump supporters being fully convinced, nothing will change their minds, that 2020 was a fraudulent election, even if all the evidence says that that’s incorrect. But as the election approaches and people have election related questions, these chatbots could actually not just debunk misinformation, but reinforce it, right?
David Gilbert: Absolutely. The trend is towards more people believing that the 2020 election was questionable, rather than fewer people, which is incredible given that zero evidence has been produced of widespread voter fraud. These chatbots will do absolutely nothing to debunk it. Facebook, or Google, or Apple, or Amazon, or any of the tech companies could have built an AI chatbot that builds on the fact-checking that has been done over the years about this and presents that information to people asking about elections. Instead, what they’re doing is just shutting down information completely and leaving people without the knowledge, as they have in the past. They’ve just abdicated their responsibility, I think, by taking the path of least resistance. And new conspiracies will more than likely crop up in the weeks and months ahead, because we saw ahead of 2020 and the midterms that new, unique conspiracies take hold instantly and go viral very quickly. These chatbots will do nothing to handle that. They may even boost it, we just don’t know.
Leah Feiger: I guess that gets me to why is this all happening? How did this happen? How are companies, like Microsoft and Google, getting away with this and excusing it?
David Gilbert: All of these companies have poured billions and billions of dollars into AI, and they need results, they need revenue. They need to show that AI is going to change the world, as they’ve been claiming for the last few years. Once one of these companies came out with a consumer-facing product, then all of them did, whether or not they were ready. Clearly, they’re not ready. As Tori said, there are so many examples, but they’re only the examples that we’ve found or that other journalists have found. We don’t know what else these chatbots are doing that could be causing problems in relation to elections. We don’t know what will happen in the coming weeks and months.
Leah Feiger: You got to share. What was the excuse that both Google and Microsoft gave you as to why their AI chatbots wouldn’t share who won the 2020 election, or any other election?
David Gilbert: I think they said it was out of an abundance of caution, which is just … I probably can’t say the word I really want to say. But the reason is clearly that their products are not ready yet. They cannot do even very, very basic stuff. As other chatbots show, it’s very easy to get those results right. But Google and Microsoft are so scared of getting it wrong that they decided to just shut it down completely.
Leah Feiger: Do we think that this is possibly an example of tech companies trying to be a bit more nonpartisan, going into a very contentious vote in November?
Vittoria Elliott: Maybe, because they are all very afraid. I think there’s one thing, too, that we don’t talk about quite as much. As more and more AI spam is on the internet, they’re really hungry for human-created data. That’s what our colleague Reece’s piece was all about. Journalism is human-created data. You know a person made that. Maybe not at Sports Illustrated. But part of the reason we may be seeing some of this is that the data these companies can verifiably say was made by a human and is good to train AI on, that assurance stopped around 2021. Which means, not only are we dealing with companies that may be more cautious, but the data they may be using for stuff may not be fully up-to-date.
Leah Feiger: David, what do you think?
David Gilbert: I think that’s a really good point. In relation to the partisan or nonpartisan situation, no. If it was a partisan thing, I think it might only be happening in the US, but it’s not only happening in the US, it’s happening everywhere in the world. I think it’s ultimately down to the fact that their chatbots are just rubbish.
Leah Feiger: Thank you so much. A lot to keep an eye on in the coming weeks and months. We’re going to take a quick break. When we’re back, it’s time for Conspiracy of the Week. Welcome back to Conspiracy of the Week, where I pick between two conspiracies our guests bring that are floating around the internet this week. Tori, what do you have for us?
Vittoria Elliott: I don’t think we could get through this week without a Hunter Biden conspiracy theory.
Leah Feiger: Oh, no! I was hoping.
Vittoria Elliott: I’m sorry. For anyone who has lived under a rock for the past 36 hours, the President’s son, Hunter Biden, has been convicted on three felony charges. The right has been screaming for years that he’s been doing something illegal, that there’s no accountability because he’s the President’s son, all this stuff. Finally, this has happened. Immediately, the Telegram channels blow up and they’re like, “They’ve just done it to make sure that now Joe Biden can win the election.”
Leah Feiger: Oh my God.
Vittoria Elliott: The conspiracy is that Hunter Biden has actually been convicted to make it seem like Joe Biden is not full of corruption, and did not do an illegal thing. The other illegal thing, the laptop thing. His conviction is now part of the grander conspiracy to continue to protect Joe Biden’s image.
Leah Feiger: That is so good. Wow. I love that one. That is 4D chess. That is totally wow. Good. Good, good, good stuff. All right, David. What do you got?
David Gilbert: Apparently, a lot of QAnon folks have been looking at this trend around weight loss drugs like Ozempic, and have seen something that they believe is conspiratorial, I suppose is the best way to put it.
Leah Feiger: Oh, no. All right. Ozempic is a conspiracy, too. What do we got?
David Gilbert: Yeah. They now believe that Ozempic isn’t actually anything new, that it is actually adrenochrome, the chemical compound that they believe elites are harvesting from children they are torturing. And that they are then using this, claiming that it is Ozempic, but it is actually adrenochrome, and that’s why they’re losing so much weight. This is a way of hiding the ongoing and continuing Satanic rituals that they are conducting on children, which they have been doing for years.
Leah Feiger: Oh my God.
Vittoria Elliott: Is it happening in a basement of a pizza parlor that doesn’t have a basement?
David Gilbert: It is happening everywhere, Tori.
Leah Feiger: I think every time you bring me a new riff on the blood libel conspiracy, I just never think it’s going to be topped. But this might be it. This might be the top. It’s a good one. Oh, man. These are two really good ones, really weird ones. I’m so sorry, Tori, I’m going to have to go with David’s.
Vittoria Elliott: I would go with David’s.
Leah Feiger: Ozempic is actually children keeping people young. It’s the witches. It’s all of this, all mixed together. Fairy tales come to life. Hollywood controlling us all. Good stuff.
Vittoria Elliott: I love it. Good job, David.
David Gilbert: Thanks very much.
Leah Feiger: All right. Thanks for listening to WIRED Politics Lab this week. If you like what you heard today, make sure to follow the show and rate it on your podcast app of choice. We also have a newsletter, which Makena Kelly writes each week. The link to the newsletter and the WIRED reporting we mentioned today are in the show notes. If you’d like to get in touch with any of us with questions, comments, or show suggestions, please write to us at [email protected]. That’s [email protected]. We’re so excited to hear from you. WIRED Politics Lab is produced by Jake Harper. Vince Fairchild is our studio engineer. Amar Lal mixed this episode. Stephanie Kariuki is our executive producer. Chris Bannon is global head of audio at Conde Nast. I’m your host, Leah Feiger. We’ll be back in your feeds with a new episode next week. Thanks for listening.