Why did OpenAI make a Sora App?
I have several theories
Epistemic Status: I live in Australia, and so cannot access the Sora app yet. This limits the amount of useful data I can gather right now.
And here it comes.
OpenAI has released their TikTok-like app, Sora. Sora worries me. I have written in the past about why I feel this way: GTFO of the Social Internet Before you Can’t: The Miro & Yindi Story.
The thing is, I feel quite confused as to why OpenAI has done this. What value do they expect this app to create for them? I have several theories.
H1: Revenue
My first guess is that the app will basically be ‘passive income’ for OpenAI. Based on demos I’ve seen, the app takes ~2 minutes to generate a video. During that wait, people might as well interact with the app. So for every video a person generates, Sora gets a chance to implant hooks in a brain.
Revenue for Sora will come from ads embedded in the user feed, the same way all other short-form platforms make money. That is the basic incentive for keeping users on the platform as long as possible.
Since Sora will currently generate other people’s intellectual property, I imagine OpenAI will take some legal hits for releasing this app. Perhaps in the long term there will be some sort of revenue split between OpenAI and copyright holders.
H2: Data
Maybe OpenAI wants a lot of some sort of data, and they’re going to use the Sora app to get it. Here is a short list of types of data I can imagine OpenAI wanting:
User interaction data: If they’re collecting this, I might expect the Sora app to have a lot of different ways of controlling the interface, so that a future agent could be trained on the user inputs. I think this is the least likely type of data they want.
User Engagement Data: What faces hold people’s attention the longest? Which voices instill a feeling of trust in a user? I can imagine this type of data being used in a more advanced version of GPT voice mode: the video-call mode we’ve all been expecting for some time.
User Personas: Perhaps somewhere in the ToS there is a clause that says something like “You grant OpenAI the full right to the use of your likeness, including voice, for any purpose.” OpenAI has already played its hand; we’re pretty sure they’re 𝔢𝔳𝔦𝔩. But are they this evil? I doubt it.
User Facial Expressions: We all kinda know a video is AI, partly because the face is off. I’ve now watched many videos of Sam Altman being a freak on my Twitter timeline, and his eyes are always so wide. I can imagine enough data on facial expressions might fix this. The iPhone has pretty incredible face scanning: Face ID projects thousands of tiny points of infrared light onto your face and uses the resulting depth map to verify whether or not the face is yours, and even to tell whether you are looking at the screen. There is, of course, also the front-facing camera, which could capture video of the user. I imagine the loop going like this: 1) the user generates a video, 2) GPT runs sentiment analysis on the content of the video, 3) the video is marked with sentiment tags, 4) the user’s reactions to the video are recorded, 5) the recordings are used as training data.
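The five-step loop could be sketched like this. To be clear, this is a purely hypothetical pipeline: every function, class, and field name below is my own invention for illustration, not anything OpenAI has documented, and the sentiment and camera steps are canned stand-ins for real models and sensors.

```python
from dataclasses import dataclass, field

@dataclass
class VideoRecord:
    """A generated video plus everything (hypothetically) collected around it."""
    video_id: str
    sentiment_tags: list = field(default_factory=list)
    user_reactions: list = field(default_factory=list)

def tag_sentiment(video_id: str) -> list:
    # Stand-in for step 2: a GPT sentiment pass over the video content.
    # A real system would return model-derived labels; these are canned.
    return ["joy", "surprise"]

def capture_reaction(frame_count: int) -> dict:
    # Stand-in for step 4: front-camera capture while the user watches.
    return {"frames": frame_count}

def collection_loop(video_id: str) -> VideoRecord:
    # 1) the user generates a video
    record = VideoRecord(video_id=video_id)
    # 2–3) sentiment analysis on the content; video marked with tags
    record.sentiment_tags = tag_sentiment(video_id)
    # 4) the user's reaction is recorded while they watch
    record.user_reactions.append(capture_reaction(frame_count=300))
    # 5) the (tags, reaction) pair is now a labeled training example
    return record

example = collection_loop("vid_001")
```

The point of the sketch is just that steps 2–3 produce the label and step 4 produces the paired signal, so each watched video yields one supervised example for free.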
H3: Altruistic
Altman says he wants Sora to improve people’s lives. Maybe he actually means it? When OpenAI was founded, the goal was to be the good guys who would align an AI responsibly, unlike Google, which didn’t have much of an alignment culture.
Perhaps OpenAI took a look at the current state of short form video, and thought they could have an outsized impact for good, given their tech advantage.
If this is the case, I hope their second attempt at usurping the power of a large company does not also turn out to make the problem worse.