
Meta, challenging OpenAI, announces new AI model that can generate video with sound

Movie Gen artwork samples provided by Meta included videos of animals swimming and surfing, as well as videos using real photos of people to depict them doing activities such as painting on canvas

Reuters

October 5, 2024, at 13:05


The Meta AI logo is seen in an illustration taken on May 20, 2024. Photo: REUTERS/Dado Ruvic/Illustration/File Photo

Facebook owner Meta announced Friday that it has built a new artificial intelligence model called Movie Gen that can create realistic-looking video and audio clips in response to user prompts, saying it could compete with tools from leading media generation startups such as OpenAI and ElevenLabs.

Movie Gen can also generate background music and sound effects in sync with video content, Meta said in a blog post, and the tool can be used to edit existing videos.

In one such video, Meta used the tool to place pom-poms in the hands of a man running alone in the desert, while in another it transformed a dry parking lot where a man was skateboarding into one covered in splashing puddles.

Videos created by Movie Gen can be up to 16 seconds long, and audio can be up to 45 seconds long, Meta says. It shared blind-test data showing the model compares favorably with offerings from startups including Runway, OpenAI, ElevenLabs and Kling.

The announcement comes as Hollywood grapples with the use of generative AI video technology this year, after Microsoft-backed OpenAI in February first showed how its Sora product could create feature-film-like videos in response to text prompts.

Entertainment technologists are eager to use such tools to improve and speed up video creation, while others are concerned about adopting systems that appear to be trained on copyrighted works without permission.

Lawmakers have also raised concerns about how artificial intelligence-generated fakes or deepfakes are being used in elections around the world, including in the U.S., Pakistan, India and Indonesia.

Meta spokespeople have said the company is unlikely to make Movie Gen available for open use as it did with its Llama series of large language models, saying it weighs the risks on a model-by-model basis. They declined to comment on Meta’s assessment of Movie Gen specifically.

Instead, they said, Meta has been working directly with the entertainment community and other content creators on uses of Movie Gen, and will integrate it into Meta’s own products next year.

According to a blog post and a research paper on the tool released by Meta, the company used a combination of licensed and publicly available datasets to build Movie Gen.

OpenAI has been meeting with Hollywood executives and agents this year to discuss possible partnerships involving Sora, although no deals have yet been reported from those talks. Concerns about the company’s approach grew in May, when actress Scarlett Johansson accused the ChatGPT maker of imitating her voice without permission for its chatbot.

Lions Gate Entertainment, the company behind “The Hunger Games” and “Twilight,” announced in September that it would give artificial intelligence startup Runway access to its film and TV library to train an artificial intelligence model. In return, the studio and its filmmakers can use this model to improve their work.