By Patrick Worrell
July 12, 2024
In 1873, Arthur O'Shaughnessy wrote the poem "Ode," best known for the line, "We are the music makers." The poem celebrates creators with the words, "We are the movers and shakers of the world for ever, it seems."
Fast forward to today, and anyone can be a creator. Technology has advanced to where we can type a line of text and see an entire world come to life in seconds. While the debate about whether generative AI (GenAI) is truly art continues, O’Shaughnessy’s sentiment still rings true.
Whether it's the person, the prompt, the technology, or a mix of all three, we are still “the music makers and the dreamers of dreams.”
Dream Machine is a text-to-video model created by Luma Labs. Some examples of the imaginative and mesmerizing scenes found on their website include a lifelike polar bear strolling through a serene winter landscape, butterflies with vibrant wings fluttering around an antique television set in an enchanted forest, and a man walking along the shore of an otherworldly beach as the sun sets in a breathtaking array of colors.
Much like OpenAI’s Sora, Dream Machine lets users enter a text prompt to generate a video that matches the input. What's happening behind the scenes? We're not entirely sure, as the details of the model's training data aren't public. But anyone can currently try it out for free, so as one of our resident video experts, I decided to give it a spin to see what it can do.
Free users can create up to 30 generations per month, but no more than 5 per day. Each generated video is 5 seconds long, though you can extend it at the cost of an additional credit. At the time of testing (June 2024), it took between 10 and 45 minutes for the model to generate one video.
Here are some examples of what it can do at this stage of the technology.
"Fast FPV drone flythrough of a European castle, beginning with an aerial view of the towering stone walls and turrets, then gliding through an open window into the grand interior halls."
The outcome appears quite impressive overall. There is some noticeable distortion, particularly on the upper window beneath the left spire, but the motion is smooth and realistic. I anticipated a more seamless transition between the exterior and interior of the castle, yet the cut is somewhat abrupt. The continuity of movement is also interrupted: the camera unexpectedly pulls back instead of continuing forward from the preceding shot into the interior.
"Grazing cows move slowly across an idyllic meadow, the camera tracking alongside them in a smooth side-angle motion."
At first glance, this looks like another satisfactory result. However, a closer examination reveals some significant issues. The front legs of the cow closest to us seem to switch sides as they cross, changing from left to right. And the other cow appears disproportionately large and has twice the number of legs it should, suggesting an attempt to depict two cows moving together that blended into one. Interestingly, the surrounding grass and trees look perfectly fine and maintain their intended appearance seamlessly.
"’90s style commercial. Rapid cuts between close-up shots of sleek, colorful remote-controlled cars zooming around a variety of terrains. Through dirt tracks, over ramps, and even inside a neon-lit indoor course."
I really love the energy in this one, but it feels like it's trying to do too much at once. It starts off with a bang, with the car transforming in midair while spinning and attempting to land (but not quite succeeding) on the track above. The world around the car looks fantastic—it definitely nailed the “colorful” vibe from the prompt. Plus, the camera movement adds a nice touch. However, the main focus of the shot doesn't quite hit the mark.
As you can see, the current model has trouble with certain types of movement and with body parts like limbs. Luma Labs says as much on its site under "current limitations," which lists movement alongside morphing, text, and "Janus," with accompanying examples.
These problems can ruin an otherwise great-looking result, but Dream Machine shows great promise. I believe these issues will be resolved with time. My greater concern lies with the challenges surrounding the technology. Given that most, if not all, GenAI tools are trained on existing content, navigating the legal landscape will inevitably be a lengthy process.
The consensus seems to be that for every impressive generation, you have to sift through dozens of bad ones. It's a reminder that for aesthetically pleasing, cohesive visuals without legal complexities, you probably still want to hire a professional.
(Or you could end up with corporate videos that look like these.)
We take pride in a team loaded with smarts, wit, and ideas. If you'd like to have a smarter, wittier inbox filled with ideas each month, subscribe here to the MarketReach Blog, and we will let you know when there is something new you might like!
Need us now? Just want to learn more? We’d love to talk.