URL: https://opensora-video.com/
Submission: On July 31 via api from BE — Scanned from DE

CREATING VIDEO FROM TEXT

Sora is an AI model that can create realistic and imaginative scenes from text
instructions.

All videos on this page were generated directly by Sora without modification.

We’re teaching AI to understand and simulate the physical world in motion, with
the goal of training models that help people solve problems that require
real-world interaction.

Introducing Sora, our text-to-video model. Sora can generate videos up to a
minute long while maintaining visual quality and adherence to the user’s prompt.

Prompt: A stylish woman walks down a Tokyo street filled with warm glowing neon
and animated city signage. She wears a black leather jacket, a long red dress,
and black boots, and carries a black purse. She wears sunglasses and red
lipstick. She walks confidently and casually. The street is damp and reflective,
creating a mirror effect of the colorful lights. Many pedestrians walk about.

Prompt: Several giant wooly mammoths approach treading through a snowy meadow,
their long wooly fur lightly blows in the wind as they walk, snow covered trees
and dramatic snow capped mountains in the distance, mid afternoon light with
wispy clouds and a sun high in the distance creates a warm glow, the low camera
view is stunning capturing the large furry mammal with beautiful photography,
depth of field.

Prompt: A movie trailer featuring the adventures of the 30 year old space man
wearing a red wool knitted motorcycle helmet, blue sky, salt desert, cinematic
style, shot on 35mm film, vivid colors.

Prompt: Drone view of waves crashing against the rugged cliffs along Big Sur’s
garay point beach. The crashing blue waters create white-tipped waves, while the
golden light of the setting sun illuminates the rocky shore. A small island with
a lighthouse sits in the distance, and green shrubbery covers the cliff’s edge.
The steep drop from the road down to the beach is a dramatic feat, with the
cliff’s edges jutting out over the sea. This is a view that captures the raw
beauty of the coast and the rugged landscape of the Pacific Coast Highway.

Prompt: Animated scene features a close-up of a short fluffy monster kneeling
beside a melting red candle. The art style is 3D and realistic, with a focus on
lighting and texture. The mood of the painting is one of wonder and curiosity,
as the monster gazes at the flame with wide eyes and open mouth. Its pose and
expression convey a sense of innocence and playfulness, as if it is exploring
the world around it for the first time. The use of warm colors and dramatic
lighting further enhances the cozy atmosphere of the image.

Prompt: A gorgeously rendered papercraft world of a coral reef, rife with
colorful fish and sea creatures.

Prompt: This close-up shot of a Victoria crowned pigeon showcases its striking
blue plumage and red chest. Its crest is made of delicate, lacy feathers, while
its eye is a striking red color. The bird’s head is tilted slightly to the side,
giving the impression of it looking regal and majestic. The background is
blurred, drawing attention to the bird’s striking appearance.

Prompt: Photorealistic closeup video of two pirate ships battling each other as
they sail inside a cup of coffee.

Prompt: A young man at his 20s is sitting on a piece of cloud in the sky,
reading a book.

Today, Sora is becoming available to red teamers to assess critical areas for
harms or risks. We are also granting access to a number of visual artists,
designers, and filmmakers to gain feedback on how to advance the model to be
most helpful for creative professionals.

We’re sharing our research progress early to start working with and getting
feedback from people outside of OpenAI and to give the public a sense of what AI
capabilities are on the horizon.

Prompt: Historical footage of California during the gold rush.

Prompt: A close up view of a glass sphere that has a zen garden within it. There
is a small dwarf in the sphere who is raking the zen garden and creating
patterns in the sand.

Prompt: Extreme close up of a 24 year old woman’s eye blinking, standing in
Marrakech during magic hour, cinematic film shot in 70mm, depth of field, vivid
colors, cinematic

Prompt: A cartoon kangaroo disco dances.

Prompt: A beautiful homemade video showing the people of Lagos, Nigeria in the
year 2056. Shot with a mobile phone camera.

Prompt: A petri dish with a bamboo forest growing within it that has tiny red
pandas running around.

Prompt: The camera rotates around a large stack of vintage televisions all
showing different programs — 1950s sci-fi movies, horror movies, news, static, a
1970s sitcom, etc, set inside a large New York museum gallery.

Prompt: 3D animation of a small, round, fluffy creature with big, expressive
eyes explores a vibrant, enchanted forest. The creature, a whimsical blend of a
rabbit and a squirrel, has soft blue fur and a bushy, striped tail. It hops
along a sparkling stream, its eyes wide with wonder. The forest is alive with
magical elements: flowers that glow and change colors, trees with leaves in
shades of purple and silver, and small floating lights that resemble fireflies.
The creature stops to interact playfully with a group of tiny, fairy-like beings
dancing around a mushroom ring. The creature looks up in awe at a large, glowing
tree that seems to be the heart of the forest.

SAFETY

We’ll be taking several important safety steps ahead of making Sora available in
OpenAI’s products. We are working with red teamers — domain experts in areas
like misinformation, hateful content, and bias — who will be adversarially
testing the model.

We’re also building tools to help detect misleading content, such as a detection
classifier that can tell when a video was generated by Sora. We plan to include
C2PA metadata in the future if we deploy the model in an OpenAI product.

In addition to developing new techniques to prepare for deployment, we’re
leveraging the existing safety methods that we built for our products that use
DALL·E 3, which are applicable to Sora as well.

For example, once in an OpenAI product, our text classifier will check and
reject text input prompts that violate our usage policies, such as those
requesting extreme violence, sexual content, hateful imagery, celebrity
likeness, or the IP of others. We’ve also developed robust image classifiers
that review the frames of every generated video to help ensure it adheres to
our usage policies before it’s shown to the user.
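
As a rough illustration of this screening step, here is a minimal prompt filter
built on a hand-written deny-list. The category names and keywords are
hypothetical placeholders; the actual classifier is a learned model, not a
keyword match.

```python
# Hypothetical policy categories and trigger terms, for illustration only.
BLOCKED_CATEGORIES = {
    "extreme violence": ["gore", "torture"],
    "celebrity likeness": ["celebrity"],
}

def screen_prompt(prompt):
    """Return (allowed, violated_category) for a text prompt.

    A toy stand-in for a learned text classifier: it only matches
    keywords, but the control flow (check, then reject with a reason)
    mirrors the screening step described above.
    """
    lowered = prompt.lower()
    for category, terms in BLOCKED_CATEGORIES.items():
        if any(term in lowered for term in terms):
            return (False, category)  # reject and report which policy was hit
    return (True, None)

ok, reason = screen_prompt("A papercraft coral reef with colorful fish")
```

A production classifier would score the prompt’s semantics rather than match
keywords, but the reject-with-reason interface is the part worth noting.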

We’ll be engaging policymakers, educators and artists around the world to
understand their concerns and to identify positive use cases for this new
technology. Despite extensive research and testing, we cannot predict all of the
beneficial ways people will use our technology, nor all the ways people will
abuse it. That’s why we believe that learning from real-world use is a critical
component of creating and releasing increasingly safe AI systems over time.

RESEARCH TECHNIQUES

Sora is a diffusion model, which generates a video by starting off with one that
looks like static noise and gradually transforms it by removing the noise over
many steps.
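
The denoising loop can be sketched in a few lines of toy Python. The “model”
here is a stand-in that simply pulls the sample toward a hard-coded clean
target (all zeros); a real diffusion model replaces that with a learned
network’s prediction.

```python
import random

def toy_denoise_step(x, step, total_steps):
    # One reverse-diffusion step: move the sample toward the predicted
    # clean signal by a weight that grows as fewer steps remain.
    # The "prediction" is hard-coded to zeros; a real model learns it.
    alpha = 1.0 / (total_steps - step)
    return [xi + alpha * (0.0 - xi) for xi in x]

def generate(n=64, steps=50, seed=0):
    rng = random.Random(seed)
    x = [rng.gauss(0.0, 1.0) for _ in range(n)]  # start from pure static noise
    for t in range(steps):                       # gradually remove the noise
        x = toy_denoise_step(x, t, steps)
    return x

sample = generate()  # reaches the clean target after the final step
```

The shape of the loop is the point: start from noise, refine over many small
steps, end at a coherent sample.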

Sora is capable of generating entire videos all at once or extending generated
videos to make them longer. By giving the model foresight of many frames at a
time, we’ve solved a challenging problem of making sure a subject stays the same
even when it goes out of view temporarily.
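
A hedged sketch of the extension idea: condition the generator on the trailing
frames of the existing clip so the new segment joins it seamlessly.
`generate_chunk` is a hypothetical stand-in for the model, and frames here are
placeholder values.

```python
def extend_video(video, generate_chunk, context=8, n_new=16):
    # Condition the generator on the last `context` frames so the new
    # segment stays consistent across the seam, then append its output.
    tail = video[-context:]
    return video + generate_chunk(tail, n_new)

# Toy generator: repeats the last conditioning frame. A real model would
# predict genuinely new frames that continue the scene.
def repeat_last(tail, n_new):
    return [tail[-1]] * n_new

clip = list(range(24))            # 24 placeholder "frames"
longer = extend_video(clip, repeat_last)
```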

Similar to GPT models, Sora uses a transformer architecture, unlocking superior
scaling performance.

We represent videos and images as collections of smaller units of data called
patches, each of which is akin to a token in GPT. By unifying how we represent
data, we can train diffusion transformers on a wider range of visual data than
was possible before, spanning different durations, resolutions and aspect
ratios.
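
A minimal sketch of the patch idea, assuming non-overlapping spacetime patches
of fixed size: each patch flattens into one token-like vector, and videos of
different durations or resolutions simply yield different numbers of patches.
The actual patch size and embedding used by Sora are not public.

```python
def patchify(video, pt=2, ph=4, pw=4):
    """Split a video (a list of frames, each a 2-D grid of pixel values)
    into non-overlapping spacetime patches of pt x ph x pw pixels, each
    flattened into one vector: the video analogue of a text token.
    """
    T, H, W = len(video), len(video[0]), len(video[0][0])
    assert T % pt == 0 and H % ph == 0 and W % pw == 0
    patches = []
    for t0 in range(0, T, pt):
        for y0 in range(0, H, ph):
            for x0 in range(0, W, pw):
                patch = [video[t][y][x]
                         for t in range(t0, t0 + pt)
                         for y in range(y0, y0 + ph)
                         for x in range(x0, x0 + pw)]
                patches.append(patch)
    return patches

# A tiny 4-frame, 8x8 "video". Longer or larger inputs change only how
# many patches come out, not the size of each token.
video = [[[t + y + x for x in range(8)] for y in range(8)] for t in range(4)]
tokens = patchify(video)
```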

Sora builds on past research in DALL·E and GPT models. It uses the recaptioning
technique from DALL·E 3, which involves generating highly descriptive captions
for the visual training data. As a result, the model is able to follow the
user’s text instructions in the generated video more faithfully.
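
Conceptually, recaptioning is a preprocessing pass over the training set,
replacing each clip’s short caption with a richly descriptive machine-generated
one. `captioner` below is a hypothetical stand-in for the captioning model.

```python
def recaption_dataset(dataset, captioner):
    # Pair each training video with a fresh, highly descriptive caption
    # from the captioning model, discarding the original short caption.
    return [(video, captioner(video)) for video, _short_caption in dataset]

dataset = [("clip_001", "a dog"), ("clip_002", "a beach")]
recaptioned = recaption_dataset(dataset, lambda v: f"detailed caption for {v}")
```

Training on these denser captions is what lets the model map fine-grained
phrases in a prompt onto details in the generated video.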

In addition to being able to generate a video solely from text instructions, the
model is able to take an existing still image and generate a video from it,
animating the image’s contents with accuracy and attention to small detail. The
model can also take an existing video and extend it or fill in missing frames.
Learn more in our technical report.

Sora serves as a foundation for models that can understand and simulate the real
world, a capability we believe will be an important milestone for achieving AGI.



