Last week we witnessed the announcement of Sora, a new text-to-video AI model developed by OpenAI. Its name, which means "sky" in Japanese (空), reflects the ambition of Sam Altman and his team: the sky is the limit.
Like ChatGPT, Sora lets you generate content from natural-language requests: no complex commands are needed, because the tool understands everyday speech, although requests can always be made more precise with so-called prompts. Once a request has been processed, Sora can produce highly realistic videos of up to one minute in length, as Altman demonstrated with examples on his X account.
The tool is currently being tested by "red teams", independent groups tasked with detecting flaws and vulnerabilities. OpenAI emphasises that safety is a priority, with the aim of preventing bias, hateful content and, above all, misinformation. The company promises that Sora will block the creation of videos simulating famous people, along the same lines as DALL-E, although experts warn that, sooner or later, any such technology ends up being used to generate fake content.
A recent example of the potential risks was seen with AI-generated sexual images of Taylor Swift, an event that provoked international controversy and prompted the White House to urge Congress to develop specific regulation. This underlines that the dangers of disinformation are not limited to public image: a fake negative product video, or a simulated CEO announcing aggressive corporate moves, could severely damage a brand's reputation or trigger stock market crashes.
These risks are compounded by concerns about job substitution. In the case of Sora, audiovisual professionals and, more broadly, communication workers could see certain repetitive or content-producing tasks become automated.
However, the adoption of AI should not be seen as a threat, but as an opportunity if it is handled logically and judiciously. Back in 2023, our fellow MCPC alumnus Yago Sánchez Reig asked in PRECISA/MENTE whether digital transformation changed the mission of the Dircom. The conclusion at the time was clear: no. And in 2024, with or without Sora, that premise remains valid. AI is a tool; responsibility, creativity and judgement remain human.
In this regard, Spain is among the countries most optimistic about the adoption of AI in the workplace. According to the Boston Consulting Group study AI at Work: What People Are Saying, 59% of Spanish managers have a positive view of its implementation, and many already use generative AI in their day-to-day work.
For communicators, tools like Sora are a powerful ally. They can speed up the creation of audiovisual content, facilitate the personalisation of messages, respond quickly to frequently asked questions and even simulate crisis scenarios to train teams. However, they will never replace the human capacity to understand context, interpret cultural, emotional and strategic nuances, or manage authentic relationships with stakeholders.
The value of the communication professional lies in his or her experience, knowledge of the client or organisation, and the personal relationships he or she builds. AI should be seen as a capability multiplier, not a substitute. Its scope is bounded by algorithms and data; ours is defined by empathy, judgement and creativity. While Sora generates impressive videos, it is the human who gives meaning, direction and purpose to the narrative.
Generative AI will change many ways of working, but it will not replace what is essential: our ability to understand people, relationships and intentions. Tools like Sora are here to empower our work, extend our reach and streamline processes. But human potential remains boundless. And, as the name itself suggests, the sky is the limit.