Arion Polozani

Those Are Fake

OpenAI, the creators of ChatGPT, unveiled a new AI, Sora, in a YouTube video on February 17th. The new model can create lifelike videos that many claim are nearly indistinguishable from reality. The range of videos is broad as well, with the creators showcasing realistic clips of puppies playing in the snow alongside an animated rabbit dancing on stage. The videos intended to look real come remarkably close to reality, with some limitations, while the videos intended to look animated resemble a modern Disney movie.

“Honestly, I think everyone knew that the technology would get here at some point, but it being this realistic, this fast, is really impressive,” NHS Senior Matthew Tomaj said.

The prompts are not overly complex either, with most running only one or two short sentences.

The technology is not publicly available yet, but it will soon be released to a select group of artists and filmmakers, as well as a group of researchers known as “red teamers,” who will look for malicious applications of the technology so the creators can block them before public use.

However, the technology does come with some limitations. Longer videos, as well as videos where the “camera” moves quickly and erratically, are difficult to create.

A little over a year ago, ChatGPT made its debut. Its release sparked debate over whether it could take jobs from anyone in the writing industry, or even from mathematicians. That concern was part of the reason for the screenwriters' strike over the summer, as writers wanted protections against AI-written scripts. At that time, generated images could barely hold any semblance of an actual figure. Now, people are debating whether anyone in the camera or animation industry can keep a job, since we now hold the power to create almost anything with a computer.

“I just hope workers can plan out something similar to the writers' strikes from earlier, to try and stop companies from using AI instead of people,” NHS Senior Roman Verna says.

The controversy extends beyond just the job market, unfortunately. AI generation now spans both visuals and audio, and each comes with its own issues. Many are worried about deepfaking, where someone creates fake videos of another person, for any number of reasons, without their consent. The audio side is another potential problem. Beyond faking voices for something like blackmail, the technology already looms over our political world. TikTok users have been using AI voices of past presidents to make song covers, but at the time it was clear what was AI and what wasn't.

“I mean, back when you had AI Trump and Biden singing a song, you could clearly tell the voices were fake, but with how fast the image generating is progressing, I think the voices will be indistinguishable pretty soon,” NHS Senior Maxwell McCarthy says.

Now, we could hold the power to make our candidates “say” things that could jeopardize their campaigns. And if we choose to dismiss all audio as potential blackmail, we are also choosing to ignore real evidence about a candidate's character.

Although it is unlikely that Sora will let users generate the faces of real people, many social media users have joked along the lines of “Me watching in terror as I see an AI video of me committing a crime I never did,” remarking that the blackmail potential of the technology is dangerous. For now, there are some key features that can distinguish AI from reality; however, the speed at which this technology is progressing, especially in image and video generation, is alarming for some.
