Runway Gen-3 Alpha

Gen-3 Alpha is built on entirely new infrastructure and introduces an array of fresh features and improvements, including a better understanding of complicated prompts and more lifelike clips with authentic movement, detail, and physics. Gen-3 Alpha is available to try for free.


Key Features

  • Superior Video Output: Gen-3 Alpha can create cinematic-level video output, with detailed and lifelike visuals in each frame.
  • Act One Animator: Animate still character images with realistic, consistent movements.
  • Video Extender: Make your videos longer by adding up to 10 seconds of fresh content with each use.
  • High-Fidelity Output: Gen-3 Alpha videos are more consistent, fluid, and rich, resulting in more natural visuals across the board.
  • Advanced Temporal Controls: Make incredibly unique and dynamic transitions between scenes, using only text prompts.
  • Lifelike People: Humans look better than ever in Gen-3 Alpha, with authentic movements and reactions.
  • Lip Syncing Audio: This allows you to sync up the audio of people speaking with accurate lip movements of video subjects.
  • Video to Video: Use a video as your baseline and transform it in various ways.

Superior Video Output

Runway's Gen-3 Alpha model has been extensively trained using both video and image-based content to become smarter than the prior version. It offers three forms of AI generation: Text to Video, Image to Video, and Text to Image. And it brings exciting features to the table, like Motion Brush, Advanced Camera Controls, and Director Mode.

It allows users to create highly detailed videos with complex scene changes, a wide range of cinematic choices, and detailed art directions.

| Prompt | Input image | Output video |
| High-speed, dynamic angle, the camera locks onto a plastic bag floating through the air across a sandy scene. The bag is semi-transparent and floats up and down on the breeze, but remains clearly visible and in focus throughout the scene. | None | [plastic bag video] |
| The gloved hands pull to stretch the face made of a bubblegum material | [glove image] | [glove video] |
| The sea anemones sway and flow naturally in the water. The camera remains still. | [sea image] | [sea video] |

Act One

Act One is one of the most exciting additions to the Gen-3 Alpha version of Runway. It's a character animation tool, aimed at producing the most realistic, authentic facial movements, speech patterns, and expressions in human subjects.

With this, users can create their own character performances that look just like the real thing, and it works in various styles, from photorealistic videos of people talking to cute animated scenes in the styles of major animation studios, like Disney and Pixar.

It works so well because the tool was trained on reams of facial animation data and mocap, opening lots of new opportunities for creative expression.

| Driving performance | Output video |
| [driving performance clip] | [output video] |

Extend Video

With Gen-3 Alpha, you can make your AI-generated videos longer than before, adding new sections of content in 5-10 second increments. You can extend a single video up to three times, for a maximum total length of around 40 seconds.

The process of extending videos is also quick and easy – you just have to select the "Extend" tool in the "Actions" menu beside your video.


By default, the tool will add on new content following the final frame of your original video. And you can enter your own text prompt to tell the AI model what to include in the extension, describing the scene, the camera movement, and other factors.

Gen-3 Alpha also lets you choose between five second or 10 second additions, as desired.
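As a quick sanity check on those numbers, here's a minimal sketch in plain Python (no Runway API involved; the constants assume the 40-second figure refers to a 10-second base clip plus three 10-second extensions):

```python
BASE_CLIP_SECONDS = 10   # longest single generation
EXTENSION_SECONDS = 10   # each extension adds 5 or 10 seconds
MAX_EXTENSIONS = 3       # a single video can be extended three times

def max_total_length(base: int = BASE_CLIP_SECONDS,
                     step: int = EXTENSION_SECONDS,
                     extensions: int = MAX_EXTENSIONS) -> int:
    """Total runtime after applying every allowed extension."""
    return base + step * extensions

print(max_total_length())  # 10s base + 3 x 10s extensions = 40 seconds
```

Swapping in five-second extensions (`step=5`) gives a 25-second maximum instead, which is why the increment you pick matters for longer projects.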

| Original video | Extended video |

High-Fidelity Output

Runway's Gen-3 Alpha takes consistency and high-fidelity output to new levels when compared to previous versions, like Gen-2.

It's able to create videos depicting the most authentic, lifelike movements, including the likes of walking, running, jogging, and jumping. Each movement flows neatly and realistically, without weird glitches or other AI artifacts, keeping the audience engaged and immersed.

What's more, the new model is able to maintain consistent movement and visuals from frame to frame. This is thanks to its superior understanding of prompts and better algorithms than the prior Gen-2 version.

All of this is also delivered faster than ever – Gen-3 Alpha is 2x faster than Gen-2.


Advanced Temporal Controls

Runway's Gen-3 Alpha model has been trained to comprehend complex, layered prompts that involve multiple scenes and temporal ideas all wrapped together. Because of that, it's very good at understanding prompts that describe scenes or visuals which go through various changes or transitions.

For the end user, what this means is that Gen-3 Alpha does an excellent job of generating smooth, fluid transitions between scenes or segments of each video. It also gives you more control over key-framing, so you can set precise moments in the video where specific things occur or elements appear.

With so much control, users can make the exact videos they want, without having to compromise.


Lifelike People

Runway's Gen-3 Alpha has been trained to make its human subjects look as realistic as they possibly can. In other words, humans in these AI videos are very hard to tell apart from real people.

Whether they're talking, making facial expressions, or performing various activities like running and jumping, the people in these videos are highly authentic. This gives users a lot of possibilities in terms of telling stories and making human-oriented content.


Lip Syncing Audio

The "Lip Sync" feature lets you synchronize an audio track or speech with realistic lip movements of your characters and subjects.

You can type out your own script for your AI characters to say, record your own voice directly in Runway, or upload an audio file. Lip Sync will then sync it up with the character's mouth movements, so it really looks like they're delivering that audio.

There are also various voice options to configure and choose between when you use Lip Sync.

Video to Video Generation

Simply upload (or generate) a video to use as the baseline or reference point for the AI model. It will then let you transform or change that video in various ways, such as turning a realistic video into an animated one, or vice versa.

| Input video | Text prompt | Output video |
| [input video] | 3D halftone CMYK style. halftone print dot. comic book. vibrant colors in layers of cyan blue, yellow, magenta purple, and black circular dots. | [output video] |

Best Prompts for Using Runway Gen-3 Alpha

To get the most out of Gen-3 Alpha, you need to use the best kinds of prompts. Runway itself encourages users to stick to the "[camera movement]: [establishing scene]. [additional details]." formula.

For example, you could put in a prompt like "slow horizontal pan, a young woman walks through the desert, it then begins to rain."

The more detail you can put into your prompts, the more specific and tailored results you'll get. So, if you have a certain vision in mind of how you want your video to look, it's best to be detailed.

We can apply this to the previous example, to get something like:

"The camera pans slowly from left to right across a desert of golden sand. A woman in her mid-20s walks slowly across the scene. She's wearing a floral dress. She looks up as rain starts to fall from grey clouds above."

Here are a few bonus tips to make your prompts the best they can be:

  • Provide as much detail as you can, without going overboard. Gen-3 Alpha works best with "sweet spot" prompts that aren't excessively simple or complex.
  • Learn about terms for camera movements and shot types, so you can give the tool precise instructions for framing and camera angles.
  • Use the same style in your prompts when creating similar or follow-on scenes so that they maintain a level of consistency.
  • Try playing around with Runway's own custom preset prompts and save your favorite ones to reuse in the future.
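The "[camera movement]: [establishing scene]. [additional details]." formula is easy to template before pasting a prompt into Runway. Below is a minimal sketch in plain Python (the `build_prompt` helper is illustrative, not part of Runway's product):

```python
def build_prompt(camera_movement: str, scene: str, *details: str) -> str:
    """Assemble a prompt in Runway's suggested shape:
    "[camera movement]: [establishing scene]. [additional details]."
    """
    # Normalize trailing periods so each clause ends with exactly one.
    parts = [f"{camera_movement}: {scene.rstrip('.')}."]
    parts += [d.rstrip('.') + "." for d in details]
    return " ".join(parts)

prompt = build_prompt(
    "slow horizontal pan",
    "a young woman walks through the desert",
    "it then begins to rain",
)
print(prompt)
# slow horizontal pan: a young woman walks through the desert. it then begins to rain.
```

Keeping the camera movement, scene, and extra details as separate pieces also makes it easy to reuse the same scene with different camera instructions, which helps maintain consistency across follow-on clips.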

Runway Gen-2 vs Gen-3 Alpha vs Gen-3 Alpha Turbo

| Features | Gen-2 | Gen-3 Alpha | Gen-3 Alpha Turbo |
| Video duration | 4s | 5/10s | 5/10s |
| Video resolution | 1408×768px (upscale off), 2816×1536px (upscale on) | 1280×768px | 1280×768px, 768×1280px |
| Text to Video |  |  |  |
| Image to Video |  |  |  |
| Video to Video |  |  |  |
| Motion Brush |  |  |  |
| Camera Control |  |  |  |
| Custom Styles |  |  |  |
| Lip Sync |  |  |  |
| Act-One |  |  |  |
| Expand Video |  |  |  |

Reddit Reviews of Gen-3 Alpha

So, what do general users think about Runway's Gen-3 Alpha? Well, at this time, opinions on the tool are somewhat mixed.

Some users have claimed that they're going to stick with using Gen-2 for now, since Gen-3 is only in the "Alpha" stage of development and still needs some improvements before it fulfills its potential.

(Comment by u/West_Persimmon_6210 in r/runwayml)

Others have complained about the high cost of using Gen-3 Alpha, but some have still praised the technology and key features it has introduced.

(Comment by u/Puzzled-Emphasis1116 in r/runwayml)


FAQs

  • What is Gen-3 Alpha of Runway?
  • What is the difference between Gen-3 Alpha and Gen-3 Alpha Turbo?
  • How is Gen-3 Alpha different from Gen-2 Text/Image to Video?
  • How do you try Gen-3 Alpha?
  • How do you use Gen-3 Alpha?
  • Is Runway Gen-3 free?
  • What is the use of Gen-3 Alpha?
  • How many credits does Runway Gen-3 take?
  • What is the maximum length of Gen-3 Alpha generations?
  • What is the resolution of Gen-3 Alpha video?
  • How much does Runway Gen-3 cost?