Google’s new AI Lumiere creates 5-second videos from still images
New Delhi: Google has introduced a new video generation AI model called Lumiere that uses a new diffusion architecture called Space-Time U-Net, or STUNet. Lumiere creates 5-second videos in a single pass instead of stitching together small still frames.
This technique figures out where things are in the video (space) and how they move and change over its duration (time). “We present Lumiere – a text-to-video diffusion model designed for synthesizing videos that portray realistic, diverse and coherent motion – a pivotal challenge in video synthesis,” Google researchers said in a paper.
“We present a Space-Time U-Net architecture that generates the entire temporal duration of the video at once, through a single pass in the model,” they wrote. The design facilitates a wide range of content creation tasks and video editing applications, including image-to-video, video inpainting, and stylized generation.
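To make the “space-time” idea concrete, the sketch below mimics joint space-time downsampling with simple average pooling over a video tensor. This is an illustrative toy only, not Google’s implementation: the function name, the pooling factors, and the example clip dimensions are all assumptions for demonstration.

```python
import numpy as np

def spacetime_downsample(video, t_factor=2, s_factor=2):
    """Toy joint space-time pooling (NOT Lumiere's actual STUNet).

    video: array of shape (T, H, W).
    Returns an array of shape (T//t_factor, H//s_factor, W//s_factor),
    averaging over time and space together -- the rough intuition behind
    letting deeper layers see the whole clip at reduced cost.
    """
    T, H, W = video.shape
    # Trim so each dimension divides evenly by its pooling factor.
    v = video[: T - T % t_factor, : H - H % s_factor, : W - W % s_factor]
    v = v.reshape(T // t_factor, t_factor,
                  H // s_factor, s_factor,
                  W // s_factor, s_factor)
    return v.mean(axis=(1, 3, 5))

# A hypothetical 80-frame 64x64 clip shrinks to 40 frames at 32x32.
clip = np.random.rand(80, 64, 64)
print(spacetime_downsample(clip).shape)  # (40, 32, 32)
```

Pooling over the time axis alongside the spatial axes is what distinguishes this from a per-frame image model, which would process each frame independently.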
Lumiere can perform text-to-video generation, convert still images to video, produce video in a specific style using a reference image, apply consistent video editing using text-based prompts, and create cinemagraphs by animating selected regions of an image.
Google researchers said the AI model outputs five-second-long 1024×1024 pixel videos, which they describe as “low-resolution.” Lumiere also produces 80 frames, compared to the 25 frames of Stable Video Diffusion.
“There is a risk of misuse for creating fake or harmful content with our technology, and we believe that it is crucial to develop and apply tools for detecting biases and malicious use cases in order to ensure a safe and fair use,” the paper’s authors said.