Latest videos
Dive into the captivating world of AI-generated visuals with "THE NIGHT FALLS." Witness the mesmerizing sci-fi clips of a bustling city and its nocturnal rhythms, brought to life through the collaborative efforts of MidjourneyV6 and PikaLabs.
Experience the magic of image generation by MidjourneyV6 and the seamless animation crafted by PikaLabs, all expertly woven together using CapCut video editing software.
#ai #aiart #aianimation #aishorts #aigenerated #pikalabs #runway #synthography #stablediffusion #dreamstudio #scifi #ambientmusic #chill #StudioGhibli
Prompt: The Glenfinnan Viaduct is a historic railway bridge in Scotland, UK, that crosses over the west highland line between the towns of Mallaig and Fort William. It is a stunning sight as a steam train leaves the bridge, traveling over the arch-covered viaduct. The landscape is dotted with lush greenery and rocky mountains, creating a picturesque backdrop for the train journey. The sky is blue and the sun is shining, making for a beautiful day to explore this majestic spot.
Prompt: A Samoyed and a Golden Retriever dog are playfully romping through a futuristic neon city at night. The neon lights emitted from the nearby buildings glisten off their fur.
Introducing Stable Video Diffusion
It's been a little over a month since Stable Video Diffusion hit the scene, but we're still buzzing with excitement about its game-changing capabilities. This groundbreaking model, inspired by the success of Stable Diffusion in the image realm, is our first step into generative video.
In case you missed it, Stable Video Diffusion is now available in research preview, offering a peek into the future of generative AI video models. We're talking about a model that represents a significant leap forward in our quest to democratize access to cutting-edge AI.
We've been thrilled to see the community's response since we dropped the model. The code for Stable Video Diffusion is up on our GitHub repository, and you can grab the weights needed to run the model locally from our Hugging Face page. And if you're craving technical details, check out our research paper for the full scoop.
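If you'd rather not clone the GitHub repository, the weights on Hugging Face can also be loaded through the `diffusers` library. The snippet below is a minimal sketch, not an official recipe: it assumes the `stabilityai/stable-video-diffusion-img2vid-xt` image-to-video checkpoint, a CUDA GPU with enough VRAM, the `diffusers`, `torch`, and `transformers` packages installed, and a conditioning image saved as `input.png`.

```python
# Minimal sketch: image-to-video with Stable Video Diffusion via diffusers.
# Assumes a CUDA GPU and that "input.png" exists on disk.
import torch
from diffusers import StableVideoDiffusionPipeline
from diffusers.utils import load_image, export_to_video

# Download the published image-to-video checkpoint from Hugging Face
# (this fetches several GB of weights on first run).
pipe = StableVideoDiffusionPipeline.from_pretrained(
    "stabilityai/stable-video-diffusion-img2vid-xt",
    torch_dtype=torch.float16,
    variant="fp16",
)
pipe.to("cuda")

# Condition generation on a single still image, resized to the
# resolution the model was trained at.
image = load_image("input.png").resize((1024, 576))

# decode_chunk_size trades VRAM for speed when decoding frames.
frames = pipe(image, decode_chunk_size=8).frames[0]
export_to_video(frames, "generated.mp4", fps=7)
```

Lowering `decode_chunk_size` (or calling `pipe.enable_model_cpu_offload()`) can help on GPUs with limited memory, at the cost of slower generation.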
Now, let's talk applications. Our video model isn't just a one-trick pony. It's adaptable to a plethora of downstream tasks, from multi-view synthesis to fine-tuning on multi-view datasets. We're already envisioning a lineup of models that will build upon and expand the capabilities of Stable Video Diffusion, much like the ecosystem that's grown around Stable Diffusion.
So, if you're ready to dive into the world of generative video and explore the endless possibilities, Stable Video Diffusion is your ticket to ride. Join us as we push the boundaries of what's possible with AI-generated video.