OpenAI Develops Tool to Create Realistic AI Videos
OpenAI has introduced new technology that uses artificial intelligence to create high-quality videos from text descriptions.
The company released short clips showcasing vivid, seemingly realistic videos, including woolly mammoths trekking across a snowy field, ocean waves crashing against a cliff-lined shoreline, and people doing everyday things like reading a book or walking down a city street.
OpenAI calls the new system Sora. It takes a written prompt and, through AI, renders a richly detailed video. OpenAI is one of many companies, including Alphabet and Microsoft, developing this kind of generative video technology.
OpenAI previously released a program called DALL-E 2 that produces still images based on text descriptions.
Sam Altman, OpenAI's chief executive, on Thursday asked users on X to submit text descriptions for Sora. He then shared their creations.
One person asked for "a bicycle race on the ocean with different animals as athletes riding the bicycles with drone camera view." Altman posted a Sora-generated video of penguins, dolphins and other aquatic creatures on bikes in his reply.
Another video showed a smiling, white-haired woman in an apron inviting viewers into her kitchen. Sora generated the AI video after Altman was asked for a cooking lesson "for homemade gnocchi hosted by a grandmother social media influencer set in a rustic Tuscan country kitchen with cinematic lighting."
The technology still has flaws, the company said, including difficulty rendering spatial details accurately.
The company said it is aware of Sora's potential to create misinformation and hateful content, among other things. AI-powered deepfakes have emerged as a risk that could confuse the public ahead of the 2024 presidential election, researchers said. OpenAI has said it is taking actions to get ready for the election, including prohibiting the use of its tools for political campaigning.
Putting watermarks on AI-generated videos and images, as many companies have said they would do, may help, according to Siwei Lyu, director of the Media Forensic Lab at the University at Buffalo. But in many cases, watermarks can be removed or altered, he said.
As AI programs like Sora pop up, the technology will add to existing challenges with image and audio deepfakes, Lyu said.
OpenAI said a group of experts chosen to probe Sora for potential abuses will provide feedback on how to strengthen its protections.
The use of an image classifier, which analyzes video before its release to flag problematic material like nudity or violence, is a positive step, said Arthur Holland Michel, a senior fellow at the Carnegie Council for Ethics in International Affairs who studies AI and surveillance technologies. Things get messier, however, when tools like Sora fall into the hands of sophisticated actors intent on doing harm, he said.