Tuesday, July 23, 2024

Growth in Artificial Intelligence Makes It Easy to Fake Images and Video

AI researchers have been successful in creating 3D face models from still 2D images, generating sound effects from silent video, and even mapping the facial expressions of actors onto other people in videos.

Smile Vector is a Twitter bot that can make any celebrity smile. It scrapes the web for pictures of faces and then morphs their expressions using a deep-learning-powered neural network.
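The morphing trick behind a bot like this can be sketched in a few lines. The sketch below is purely illustrative (the names, dimensions, and statistics are invented, not White's actual code): a generative model maps a latent code to a face image, and an attribute like "smiling" corresponds to a direction in that latent space.

```python
import numpy as np

rng = np.random.default_rng(0)
LATENT_DIM = 64  # illustrative latent-space size

# Pretend we have encoded two labeled sets of faces into latent codes.
smiling_codes = rng.normal(loc=0.5, scale=1.0, size=(100, LATENT_DIM))
neutral_codes = rng.normal(loc=-0.5, scale=1.0, size=(100, LATENT_DIM))

# The "smile vector" is the difference of the class means: a direction
# in latent space that, when added to a code, makes the decoded face smile.
smile_vector = smiling_codes.mean(axis=0) - neutral_codes.mean(axis=0)

def adjust_smile(z, strength):
    """Move a face's latent code along the smile direction.

    strength > 0 means 'more smile', strength < 0 means 'less smile'.
    """
    return z + strength * smile_vector

z = rng.normal(size=LATENT_DIM)       # latent code of some face
more_smile = adjust_smile(z, 1.0)
less_smile = adjust_smile(z, -1.0)
```

The adjusted code would then be fed back through the generative model's decoder (omitted here) to render the modified face.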

Imagine a version of Photoshop that can edit an image as easily as you can edit a Word document — will we ever trust our own eyes again?

Tom White, the creator of Smile Vector, told The Verge that the shift lies "not only in our ability to manipulate images but really their prevalence in our society."

"I don't think many people outside the machine learning community knew this was even possible," says White, a lecturer in creative coding at the Victoria University School of Design. "You can imagine an Instagram-like filter that just says 'more smile' or 'less smile,' and suddenly that's in everyone's pocket and everyone can use it."

Smile Vector is just the tip of the iceberg. It's hard to give a comprehensive overview of all the work being done on multimedia manipulation in AI right now, but here are a few examples:

  • Creating 3D face models from a single 2D image.
  • Changing the facial expressions of a target on video in real-time using a human “puppet”.
  • Modifying the light source and shadows in any picture.
  • Generating sound effects based on muted video.

Inspired by research done on the human brain in 2005, researchers identified the neurons in a network that lit up when shown certain images and taught the network to produce the images that maximized this stimulation.
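This technique, often called activation maximization, can be illustrated with a deliberately tiny stand-in for a network: a single linear "neuron." Everything below is a toy assumption; real work uses deep convolutional networks and regularized gradient ascent, but the core loop of nudging an input image toward whatever makes a chosen unit fire hardest is the same.

```python
import numpy as np

rng = np.random.default_rng(1)
w = rng.normal(size=256)            # weights of the "neuron" to visualize
image = 0.01 * rng.normal(size=256)  # start from faint noise

def activation(x):
    """How strongly the neuron fires for input x."""
    return float(w @ x)

before = activation(image)

# Gradient ascent on the input: for a linear neuron, the gradient of
# (w @ x) with respect to x is just w, so each step moves the image
# in the direction that increases the activation.
for _ in range(100):
    image = image + 0.1 * w

after = activation(image)
```

After the loop, the input has drifted toward the pattern the neuron "prefers," which is exactly how the preferred-stimulus images described above are produced.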

"The field is progressing extremely rapidly," says Jeff Clune, an assistant professor of computer science at the University of Wyoming.

Pictures Created in 2015 and 2016


To create these images, the neural network is trained on a database of similar pictures.

Then, once it has absorbed enough images of ants, redshanks, and volcanoes, it can produce its own versions on command — no instruction other than "show me a volcano" is needed.
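The train-then-generate-on-command workflow can be sketched with a toy class-conditional model. This is an invented stand-in, not any real system's code: real work uses deep generative networks, while this toy merely memorizes per-class statistics. The interface, though (train on labeled examples, then sample by name), is the same idea.

```python
import numpy as np

rng = np.random.default_rng(2)

class ToyGenerator:
    """Toy class-conditional generator: one Gaussian per label."""

    def __init__(self):
        self.stats = {}

    def train(self, label, examples):
        # Absorb a database of examples for this label.
        examples = np.asarray(examples, dtype=float)
        self.stats[label] = (examples.mean(axis=0), examples.std(axis=0))

    def generate(self, label):
        # "Show me a <label>": sample a new example from what was learned.
        mean, std = self.stats[label]
        return mean + std * rng.normal(size=mean.shape)

gen = ToyGenerator()
# Invented 16-dimensional "images" with different class statistics.
gen.train("volcano", rng.normal(5.0, 1.0, size=(50, 16)))
gen.train("ant", rng.normal(-5.0, 1.0, size=(50, 16)))

volcano = gen.generate("volcano")
ant = gen.generate("ant")
```

Each generated sample reflects the statistics of the class it was asked for, which is the (vastly simplified) sense in which "show me a volcano" suffices as an instruction.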

"Our current limitation isn't the capability of the models but the existence of data sets at higher resolution," says Clune.

Once these techniques have been perfected, they spread quickly. An important paper on this subject was published in September 2015, and researchers turned that work into an open-source web app by January 2016.

Later, a Russian startup adapted the code into a mobile app named Prisma, which allowed anyone to apply various art styles to pictures on their phones and share them on social networks.

The app exploded in popularity, and that November Facebook unveiled its own version, adding a couple of new features along the way.
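The style-transfer line of research behind Prisma represents "style" as correlations between a convnet's feature channels, captured in Gram matrices. The sketch below computes that style loss for toy feature maps; the convolutional feature extractor real systems use is omitted, and the data here is random stand-in.

```python
import numpy as np

def gram_matrix(features):
    """Channel-correlation matrix of a (channels, pixels) feature map."""
    channels, pixels = features.shape
    return features @ features.T / pixels

def style_loss(features_a, features_b):
    """Mean squared difference between the two Gram matrices."""
    ga = gram_matrix(features_a)
    gb = gram_matrix(features_b)
    return float(np.mean((ga - gb) ** 2))

rng = np.random.default_rng(3)
painting = rng.normal(size=(8, 100))  # stand-in for convnet features
photo = rng.normal(size=(8, 100))

same = style_loss(painting, painting)  # identical features: zero loss
diff = style_loss(painting, photo)     # different features: positive loss
```

Style transfer then optimizes an output image so that its Gram matrices match the painting's while its raw features stay close to the photo's, blending one image's content with another's style.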

Clune says that in the future, AI-powered image generation will be useful in the creative industries.

A furniture designer could use it as an "intuition pump," he says, feeding a generative network a database of chairs and then asking it to generate its own variants, which the designer could refine.


For another trick, consider a program called Face2Face, which maps one person's facial expressions onto another person's face in video in real time.

The researchers demonstrate it using footage of Trump and Obama. Now combine that with prototype software recently unveiled by Adobe that lets you edit human speech (the company says it could be used for fixing voice-overs and dialogue in films).

Then anyone could create video footage of politicians or celebrities saying, well, whatever you want them to. Post your clip on any moderately popular Facebook page and watch it spread around the internet.

That's not to say these tools will steer society into some fact-less free-for-all. However, we can't deny that they will allow more people to create these sorts of fakes.

AI researchers involved in this field are already getting firsthand experience of the coming media environment.

“I currently exist in a world of reality vertigo,” says Clune. “People send me real images and I start to wonder if they look fake. And when they send me fake images I assume they’re real because the quality is so good. Increasingly, I think, we won’t know the difference between the real and the fake. It’s up to people to try and educate themselves.”





