Late Thursday night, Oprah aired a special on AI titled “AI and Our Future,” with guests including OpenAI CEO Sam Altman, tech influencer Marques Brownlee, and current FBI Director Christopher Wray.
The prevailing tone was one of skepticism and caution.
In her prepared remarks, Oprah noted that, for better or worse, the AI genie is out of the bottle and humanity will have to learn to live with the consequences.
“AI is still beyond our control and, to a large extent, beyond our understanding,” she said. “But it is here, and we are going to be living with a technology that can be our friend as well as our rival… We are the most adaptable creatures on this planet. We will adapt again. But let's keep it real: the stakes have never been higher.”
Sam Altman overpromises
In Oprah's first interview of the night, Altman made the dubious claim that today's AI learns concepts within the data it is trained on.
“We show the system 1,000 words in a row and ask it to predict what's coming next,” he told Oprah. “The system learns to predict, and in the process, it learns the underlying concepts.”
Many experts would disagree.
AI systems like ChatGPT and o1, which OpenAI announced on Thursday, do indeed predict the most likely next word in a sentence. But they are simply statistical machines that learn the probability of certain data occurring based on patterns — they have no intent; they only make educated guesses.
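To make the "statistical machine" point concrete, here is a deliberately tiny illustrative sketch — not how ChatGPT actually works (modern models use neural networks with billions of parameters over subword tokens, not raw word counts) — of predicting the next word purely from observed frequencies:

```python
from collections import Counter, defaultdict

# Toy corpus; a real model is trained on trillions of words.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each preceding word (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the statistically most likely next word, or None if unseen."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # prints "cat": it follows "the" most often here
```

The model has no notion of what a cat is; it simply reproduces the most frequent continuation it has seen, which is the essence of the "educated guesses" critique.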
While Altman may have overstated the capabilities of today's AI systems, he stressed the importance of finding ways to test their safety.
“One of the first things we need to do, and that's what's happening right now, is get the government to think about how to do safety testing of these systems, like you do with airplanes or new medicines,” he said. “I myself am probably talking to someone in the government every few days.”
Altman's push for regulation may be self-serving: OpenAI opposes California's AI safety bill, known as SB 1047, arguing that it would “stifle innovation.” But former OpenAI employees and AI experts like Geoffrey Hinton have come out in support of the bill, arguing that it would impose necessary safeguards on AI development.
Oprah also asked Altman about his role as OpenAI's leader, and why people should trust him. He dodged the question, saying his company is trying to build trust over time.
Altman has previously been outspoken in saying that people shouldn't trust him, or any single person, on whether AI will benefit the world.
The OpenAI CEO later said it was odd to hear Oprah ask whether he was, as headlines have suggested, “the most powerful and dangerous man in the world.” He said he disagreed, but felt a responsibility to steer AI in a positive direction for humanity.
Oprah on Deepfakes
But as AI-powered video generators like OpenAI’s Sora show, you can get quite far with predictions and guesses.
Midway through the special, Brownlee showed Oprah sample footage from Sora. “Even now, when I watch some of this footage, I can tell something's not quite right,” Brownlee said.
“No, I can't,” Oprah replied.
The showcase served as a segue into an interview with Wray, in which he recounted his first exposure to AI deepfake technology.
“I was in a conference room, and a bunch of [FBI] folks got together to show me how AI-powered deepfakes are made,” Wray said, “and they created a video of me saying things I had never said before and would never say.”
Wray spoke about the rise of AI-enabled sextortion, which cybersecurity firm ESET says increased 178% between 2022 and 2023, with AI technology being part of the reason.
“Someone posing as a peer targets teenagers and uses compromising AI-generated photos to convince them to send back real photos,” Wray said. “In reality, it's someone behind a keyboard in Nigeria who gets the images and then blackmails the kid, saying, 'Pay me or I'm going to share these images, and they will ruin your life.'”
Wray also addressed disinformation surrounding the upcoming US presidential election, declaring that “this is not the time to panic,” but stressing that it is the responsibility of “all Americans” to “exercise greater care and vigilance in our use of AI” and to recognize that AI “can be used by bad actors against all of us.”
“Too often we discover that someone who appears on social media to be Bill from Topeka or Mary from Dayton is in fact a Russian or Chinese spy in Beijing or outside Moscow,” Wray said.
In fact, a Statista poll found that more than a third of US respondents had come across misleading or suspected misinformation on a major topic towards the end of 2023. This year, misleading AI-generated images of vice presidential candidate Kamala Harris and former President Donald Trump attracted millions of views on social networks, including X.
Bill Gates discusses the disruptive changes brought about by AI
In an optimistic change of pace on the tech front, Oprah interviewed Microsoft co-founder Bill Gates, who expressed hope that AI will revitalize the education and healthcare sectors.
“With AI [in a medical appointment], instead of the doctor facing a computer screen, they're having a conversation with the patient, and the software creates a really good record,” Gates said.
But Gates glossed over the potential for bias introduced by inadequate training data.
A recent study demonstrated that speech recognition systems from major technology companies are twice as likely to mistranscribe Black speakers compared to White speakers. Other studies have shown that AI systems reinforce long-held false beliefs that there are biological differences between Black and White people, which leads clinicians to misdiagnose health problems.
Gates said that AI is “always available” in the classroom and “can understand how to motivate students, regardless of their level of knowledge.”
In reality, this is not the case in many classrooms.
Last summer, schools and universities rushed to ban ChatGPT over fears of plagiarism and misinformation. Some have since lifted their bans. But research from the UK Safer Internet Centre shows that more than half of children (53%) have seen peers use GenAI in negative ways, such as creating believable false information or images intended to upset someone.
Late last year, the United Nations Educational, Scientific and Cultural Organization (UNESCO) called on governments to regulate the use of GenAI in education, including by imposing age restrictions on users and introducing guardrails on data protection and user privacy.