
Reality was fun while it lasted

Welcome to Sora, the AI video platform so powerful it risks throwing the entire internet, and all its users, into confusion

Image: TNW/Getty

Michael Jackson and Prince have an amicable discussion about whose funeral was better. Amy Winehouse records a piece to camera while pushing a trolley through Walmart. Sam Altman – so many Sam Altmans! – steals Nvidia computer chips and is apprehended by a shop security guard as he tries to make a getaway; inspires a mass dance-along on the New York subway; raps about OpenAI’s success while tricked out like Biggie.

None of these things, of course, happened, despite there being video after video circulating online that shows each of the above examples in high definition. Welcome to the new now, when you really can’t believe your eyes anymore. 

OpenAI has launched the second iteration of Sora, its text-to-video model, and it doesn’t feel hyperbolic to say that our ability to distinguish fact from fiction online has crossed a Rubicon. Only available to users in North America for now (or to enterprising folk who can be bothered to circumvent geoblocks, something people in the UK have recently become significantly more familiar with), the technology allows anyone to spin up high-quality, AI-generated video clips from just a few typed prompts – so far, so much the same as current models from the likes of Google.

Sora, though, comes with its own app, which is where things start to get interesting. On opening it for the first time, users are invited to create a “Cameo”, an AI model of themselves, generated in seconds from a few photos taken within the app from various angles. That done, you’re free to make videos of yourself in any setting or scenario you wish, with the obvious caveats around sex and violence. Even more interestingly, if you so choose you can make your likeness available for anyone else to create with, effectively letting yourself be rendered as an AI puppet by anyone with the inclination. 

While the idea of a feed of AI-generated video and imagery isn’t new per se – Meta has had one for a few months, recently renamed “Vibes” – the fact that Sora makes real-seeming videos featuring real-seeming people feels like a killer feature. It turns out that we really like watching videos of people, and it’s quite possible we won’t much care if those people aren’t, well, real.

A smart part of the app’s launch strategy was Altman’s decision to make his countenance freely available for third-party use, hence the hundreds and hundreds of different riffs on “Sam Altman does something weird/satirical/gently-horrifying” that have popped up since launch. It turns out that dunking on the founder makes for good viral content. 

The company’s experimental approach to copyright is also driving interest. While you can’t create images of living celebrities without their consent, there’s no such prohibition on spinning up as many videos of dead famous people as you like – and the copyright issues don’t stop there. Want a clip of Jaws in which Brody and Quint are played by Mario and Luigi? Want to see an old Reagan press conference in which the Gipper proudly shows off his Pokémon card collection? OK, fine, whatever floats your boat, you weirdo!

How exactly this is going to work with regard to copyright law is unclear. Altman’s initial position was that rights holders who didn’t wish their works to be used within Sora would have to explicitly opt out; this stance appears to be softening, perhaps in response to some strongly worded letters from companies such as Disney and the famously litigious Nintendo.

Frankly, though, the copyright issue feels like the least pressing of the many concerns that spring to mind here. The most obvious is that our ability to accurately determine what is “real” online is set to be utterly banjaxed. While it’s true that every video produced by Sora comes complete with a floating watermark that clearly reads “Sora”, it’s not at all clear that the average Joe seeing a video in their feed will know what that word means, or that it signifies what they’re seeing is machine-generated.

It’s also entirely possible to export and crop Sora-generated videos so as to remove the visible watermark, or to use purpose-built apps to strip it. And while videos generated by the model are also watermarked digitally via the C2PA industry standard – meaning their metadata labels them as AI-made – that metadata can be discarded just as easily, by taking a screen recording of the video and sharing that instead of the original.

Now add to this the fact that the model can produce footage that looks poor-quality enough to be believable. Hand-held mobile footage, bodycam footage, dashcam footage, CCTV footage: all of it is now spoofable to convincing quality, in seconds. All of a sudden, anyone with access to the app can create a plausibly lo-fi-looking bit of video to support, or deny, any narrative they choose.

It’s not hard to imagine how this might play out. Examples already shared across social media include an eyewitness account of a fire at the UN building in NYC, students captured on CCTV fleeing a college campus in terror, a far-right protest taking place at the Eiffel Tower complete with clashes with riot police, and bodycam footage of a black man being apprehended for shoplifting. It doesn’t take a great deal of imagination to predict how this is going to be used by motivated actors looking to elicit strong emotions for reasons of profit or ideology (or even just online clout).

As of now, it is entirely possible for realistic video supporting pretty much any position or agenda you like to be created in seconds and disseminated online to an audience – that is, you and me – that long since stopped putting the work in when it comes to fact-checking. Be honest: if a convincing-looking video that chimed with your beliefs floated across your feed, would you take the time to check its veracity before hammering the “share” button? Current behaviour suggests most of us would not, and it’s only going to get harder to distinguish real from fake from here on in.

Still, you’re a New World reader! You’re educated! You’re savvy! You won’t get caught out by the slop machine! It might be worth having a word with your parents, though, lest they get radicalised by fake security cam videos of asylum seekers lording it at one of these four-star hotels we hear so much about these days. RIP, any collective agreement as to what is real! Consensus reality was fun while it lasted.