The ability to trust what is seen online has long been questioned, but OpenAI’s new app, Sora, has brought this concern to the forefront. The app allows users to generate realistic videos from simple text prompts, creating almost any imaginable scenario. Every scene, from news reports to personal encounters, can now be simulated, making reality and fiction increasingly difficult to tell apart.
Sora is currently free on the Apple App Store, carries a 13+ age restriction and holds a 4.6-star rating with approximately 20,000 reviews. Once users enter a birth date for verification and choose a username, they are ready to create videos. To generate a video, users type a short prompt describing their idea, and the video appears after a brief processing time.
The Apple App Store’s reviews reflect a mix of excitement and concern. Many users praised the app’s creativity, while others warned about potential risks.
One reviewer stated, “Spend some time on the Sora app and you find yourself craving reality. Spend a few hours in it and afterwards TikTok suddenly feels sane and normal.”
Another user commented, “Sora is really amazing. Big leap ahead. It renewed my faith in the pace of progress. I can’t see how we won’t have Hollywood blockbusters within a couple years.” In contrast, some users expressed caution, noting that artificial intelligence-generated videos “could easily be misused.”
Sora has raised concerns among families and educators regarding its use. Teens are becoming increasingly aware of the benefits and risks of generative AI. A Harvard University survey found that 41% of students believe AI’s development will have both positive and negative effects on their lives over the next decade.
This highlights the critical role of parents and guardians, who should engage with children about their experiences on apps like Sora, monitor their activity on devices and provide guidance to help them navigate AI responsibly. By combining supervision with open conversation, adults can ensure that children explore AI safely while developing critical thinking, creativity and healthy digital habits.
For years, video gave viewers confidence that what they saw reflected reality. Unfortunately, just as Photoshop revealed that images could be manipulated, AI-generated videos now threaten that fundamental trust.
This shift has major implications for how the public consumes and trusts information. AI-generated videos can create entirely fabricated events, statements or scenarios that appear convincingly real, making it difficult for viewers to distinguish fact from fiction.
As technology advances, the line between authentic and artificial content blurs, undermining traditional standards of proof and challenging the credibility of news, social media and online content. Apps like Sora illustrate both the creative possibilities and the risks of such a powerful technology.
AI-generated content has the potential to manipulate public perception, influence elections, fabricate events and erode trust in social institutions.
The U.S. government recognizes both the promise and risks of AI. In the administration’s America’s AI Action Plan, the White House described AI as “a transformative technology that will revolutionize the way we live and work,” emphasizing public awareness and responsible oversight. Yet the rapid adoption of apps like Sora suggests that regulation may already be insufficient to prevent misuse, highlighting the urgent need for critical media literacy and verification skills in the AI era.
News organizations have echoed these concerns. The New York Times has called this period “the era of fakery,” warning that realistic AI videos may signal “the end of visual fact as we know it.” Similarly, NBC News reports that educators and journalists are urgently seeking strategies to help the public distinguish fact from AI-generated fiction, emphasizing the critical role of media literacy and verification in today’s information landscape.
Ultimately, Sora shows both the promise and the risk of AI. It can be difficult for viewers to fact-check these creations, and not everyone will take the time to verify what they see. Many will trust the videos, just as they once trusted that what appeared on screen reflected reality.
Governments must implement stricter regulations, or risk the disorder that could follow. Something intended as art or creative expression could easily be turned to harmful ends. Rather than making life easier, AI-generated media may complicate it and raise serious questions about whether people can trust their own eyes.
