Draft:Gen-4 (AI image and video model)
Submission declined on 20 October 2025 by Tenshi Hinanawi (talk).
Comment: In accordance with the Wikimedia Foundation's Terms of Use, I disclose that I have been paid by my employer for my contributions to this article. Itsjuliamartins (talk) 22:04, 17 October 2025 (UTC)
Runway Gen-4 is a text-to-video artificial intelligence model developed by Runway AI, Inc. and released on March 31, 2025.[1][2] It is the fourth iteration of the company's video generation models. The model generates video clips up to 10 seconds in length from text prompts and reference images.[3]
Technical description
Gen-4 uses a transformer-based architecture with diffusion techniques to generate video content.[4] Runway has not disclosed the model's exact specifications, but researchers estimate it contains 10–20 billion parameters based on its computational requirements.[5]
The model accepts text prompts up to 1,000 characters and can use reference images as the initial frame.[6] Output specifications include:
- Duration: 5 or 10 seconds
- Resolution: 720p with multiple aspect ratios (1280×720 for 16:9, 960×960 for 1:1, etc.)
- Frame rate: 24 frames per second
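Taken together, these specifications fix the size of every clip. A quick arithmetic check (plain Python, no Runway-specific code) illustrates the resulting frame and pixel budgets:

```python
FPS = 24                   # fixed output frame rate
DURATIONS = (5, 10)        # supported clip lengths in seconds
WIDTH, HEIGHT = 1280, 720  # 16:9 output resolution

for seconds in DURATIONS:
    frames = seconds * FPS
    pixels_per_frame = WIDTH * HEIGHT
    print(f"{seconds}s clip: {frames} frames, {pixels_per_frame} pixels per frame")
# A 10-second 16:9 clip therefore contains 240 frames of 921,600 pixels each.
```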
Features
According to Runway, the model maintains visual consistency of characters within individual clips using reference images, without requiring additional training.[7][8] Technical reviews indicate that this consistency holds within a single clip but does not extend across separately generated clips.[7]
Additional reported capabilities include:
- Improved simulation of real-world motion[9]
- Camera movement simulation (pans, zooms, tracking shots)
- Style transfer from reference images[10]
Limitations
Technical reviews have identified several constraints:[14]
- Maximum output length of 10 seconds requires concatenation for longer content[11]
- Character consistency breaks across separately generated clips[7]
- Fixed 24 fps output causes compatibility issues with 30/60 fps footage
- Motion artifacts and physics violations in complex scenes
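The frame-rate limitation can be quantified: retiming 24 fps output to 30 fps without interpolation forces roughly one in five frames to repeat. A minimal sketch of the simplest, interpolation-free conversion (nearest-earlier-frame mapping; this is an illustration, not any particular editor's resampler):

```python
def retime_indices(dst_frames, src_fps=24, dst_fps=30):
    """Map each destination frame to the nearest earlier source frame,
    the simplest (interpolation-free) frame-rate conversion."""
    return [i * src_fps // dst_fps for i in range(dst_frames)]

one_second = retime_indices(30)                   # 30 output frames at 30 fps
repeats = len(one_second) - len(set(one_second))  # source frames shown twice
print(repeats)  # 6 of every 30 output frames duplicate an earlier frame
```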
Applications and availability
Gen-4 is available through Runway's web interface on a credit-based subscription model and via an API for developers.[12] Primary uses include previsualization, storyboarding, and concept development in film and advertising production.
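The article does not specify the API's request schema, so the sketch below is purely illustrative: the function name, field names, and "gen4" model identifier are assumptions, not Runway's documented interface. It encodes only the constraints reported above (1,000-character prompts, 5- or 10-second clips, an optional reference image used as the initial frame):

```python
def build_gen4_request(prompt, duration=10, ratio="1280:720", image_url=None):
    """Assemble an illustrative JSON-style payload for a text-to-video job.

    Hypothetical schema: the field names and "gen4" model id are NOT
    Runway's documented API. The validation mirrors only the publicly
    reported Gen-4 limits.
    """
    if len(prompt) > 1000:
        raise ValueError("prompt exceeds the reported 1,000-character limit")
    if duration not in (5, 10):
        raise ValueError("reported clip lengths are 5 or 10 seconds")
    payload = {"model": "gen4", "prompt": prompt,
               "duration": duration, "ratio": ratio}
    if image_url is not None:
        payload["reference_image"] = image_url  # used as the initial frame
    return payload

print(build_gen4_request("A fox crossing a snowy street at dawn", duration=5))
```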
In September 2024, Lionsgate announced a partnership with Runway to use the technology for visual effects pre-production.[13] Runway has produced demonstration films entirely with Gen-4, including "The Herd" and "New York is a Zoo."[9]
Reception
Technology publications have provided mixed assessments. VentureBeat described Gen-4 as addressing "AI video's biggest problem" of character consistency,[14] while Ars Technica characterized the advances as incremental.[5]
References
- ^ Wiggers, Kyle (2025-03-31). "Runway releases an impressive new video-generating AI model". TechCrunch. Retrieved 2025-10-17.
- ^ Vasani, Sheena (2025-04-01). "Runway says its latest AI video model can actually generate consistent scenes and people". The Verge. Retrieved 2025-10-17.
- ^ "Runway launches new Gen-4 AI video generator". SiliconANGLE. 2025-03-31. Retrieved 2025-10-17.
- ^ Ezz, Mohamed (2025-05-18). "Runway Gen‑4: How AI Is Supercharging Video Creation". MPG ONE. Retrieved 2025-10-17.
- ^ a b Axon, Samuel (2025-03-31). "With new Gen-4 model, Runway claims to have finally achieved consistency in AI videos". Ars Technica. Retrieved 2025-10-17.
- ^ "Creating with Gen-4 Video". Runway. Retrieved 2025-10-17.
- ^ a b c Wang, Gerui. "Runway AI's Gen-4: How Can AI Montage Go Beyond Absurdity". Forbes. Retrieved 2025-10-17.
- ^ Eric Hal Schwartz (2025-04-09). "Sora needs to up its game to match the new Runway AI video model". TechRadar. Retrieved 2025-10-17.
- ^ a b Nield, David (2025-04-02). "Runway Says That Its Gen-4 AI Videos Are Now More Consistent". Lifehacker. Retrieved 2025-10-17.
- ^ "Runway's New Gen-4 AI System Promises the Most Predictable Media Creation Yet | No Film School". nofilmschool.com. Retrieved 2025-10-17.
- ^ "Runway launches new Gen-4 AI video generator". SiliconANGLE. 2025-03-31. Retrieved 2025-10-17.
- ^ "Runway News | Introducing the Gen-4 Image API". runwayml.com. Retrieved 2025-10-17.
- ^ "Lionsgate announce collaboration with AI video company Runway". www.bbc.com. 2024-09-19. Retrieved 2025-10-17.
- ^ Nuñez, Michael (2025-03-31). "Runway Gen-4 solves AI video's biggest problem: character consistency across scenes". VentureBeat. Archived from the original on 2025-07-21. Retrieved 2025-10-17.
