Long live the real Moo Deng.
Boo Deng
Meta has just announced Movie Gen, its new AI-powered video generation tool for creating realistic-looking footage — and one-upping OpenAI’s Sora, it can automatically add matching audio to its output.
But a notable detail is its de facto choice of mascot. Featured front and center on the product’s announcement page — and in the marketing material shared with the press — is an AI-generated video of a cute baby hippo swimming underwater.
We’d put money on that being a shameless evocation of Moo Deng, the young, plucky pygmy hippo whose adorable antics have endeared her to millions.
It may be a cynical marketing move, but based on the instructions it was given, the AI video generation tool was spot on — if you can overlook the AI sheen. “A baby hippo swimming in the river. Colorful flowers float at the surface, as fish swim around the hippo,” reads the prompt entered into Movie Gen, as shared by Wired. “The hippo’s skin is smooth and shiny, reflecting the sunlight that filters through the water.”
And impressively, we must admit, all of that’s there. It doesn’t compare to the perfect creature that is Moo Deng — but it’s all there.
Super proud of our Movie Gen reveal because it can generate:
— Edits (way more fun than restyles)
— Personalization (imagine yourself – in video!)
— Moo Deng pic.twitter.com/KfQ5QfTrBq

— Danny Trinh (@dtrinh) October 4, 2024
Filmfaker
For new footage, Movie Gen can generate clips up to 16 seconds long at 16 frames per second, according to The New York Times. (That falls short of the filmmaking standard of 24 fps, despite Meta claiming in its blog post that the tool can help Hollywood filmmakers.)
It can also be used as an editing tool for existing footage. In a series of examples shared in the announcement, the AI is used to add pom-poms to the hands of a man running across a scenic landscape and, in an even more ridiculous showing, to place him in a dinosaur suit.
Movie Gen can also generate audio for the footage it produces by using both video and text inputs. That includes sound effects, background music, or even entire soundtracks, Meta said.
One example depicts a man gazing over a cliff as water crashes down around him, with music swelling in the background. Another shows firecrackers being shot into the sky, and the audio seems pretty well timed to the explosions, down to the crackling that follows.
Product Pending
There’s just one small thing: Movie Gen isn’t available to the public yet, and it probably won’t be for a while.
“We aren’t ready to release this as a product anytime soon — it’s still expensive and generation time is too long — but we wanted to share where we are since the results are getting quite impressive,” Chris Cox, Meta’s chief product officer, wrote on Threads.
It’s also far from perfect. In the NYT’s testing, it mistakenly grafted a human hand onto a phone that was meant to be held by a dog.
To be fair, OpenAI’s Sora, which generated a lot of hype when it was unveiled in February, has also yet to be made public.
In the meantime, other companies like Google-backed Runway have stepped in with impressive AI video generation tools of their own — so the race is on.
More on AI: Gullible Trump Cronies Losing Their Minds Over Fake AI Slop on Twitter