Saturday, December 2, 2023

Growth in Artificial Intelligence Makes It Easy to Fake Images and Video

AI researchers have been successful in creating 3D face models from still 2D images, generating sound effects from silent video, and even mapping the facial expressions of actors onto other people in videos.

Smile Vector is a Twitter bot that can make any celebrity smile. It scrapes the web for pictures of faces and then morphs their expressions using a deep-learning-powered neural network.

Imagine a version of Photoshop that can edit an image as easily as you can edit a Word document — will we ever trust our own eyes again?

Tom White, the creator of Smile Vector, tells The Verge that he wants to raise awareness, “not only in our ability to manipulate images but really their prevalence in our society.” He adds:

“I don’t think many people outside the machine learning community knew this was even possible,” says White, a lecturer in creative coding at Victoria University School of Design. “You can imagine an Instagram-like filter that just says ‘more smile’ or ‘less smile,’ and suddenly that’s in everyone’s pocket and everyone can use it.”
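Conceptually, this kind of “more smile / less smile” edit is often done with latent-vector arithmetic: encode a face into a compact vector, add a scaled “smile” direction, and decode the result. Below is a minimal sketch of that idea in Python; the `encode` and `decode` functions are hypothetical placeholders for a pretrained generative model, not Smile Vector’s actual code.

```python
# Minimal sketch of latent-vector expression editing (assumed approach, not
# Smile Vector's real implementation). encode/decode stand in for a trained
# generative model such as a VAE or GAN.
import numpy as np

def encode(image: np.ndarray) -> np.ndarray:
    """Map an image to a latent vector (placeholder for a trained encoder)."""
    raise NotImplementedError

def decode(latent: np.ndarray) -> np.ndarray:
    """Map a latent vector back to an image (placeholder for a trained decoder)."""
    raise NotImplementedError

def smile_direction(smiling_images, neutral_images) -> np.ndarray:
    """Estimate a 'smile' direction as the difference of mean latent codes."""
    z_smiling = np.mean([encode(img) for img in smiling_images], axis=0)
    z_neutral = np.mean([encode(img) for img in neutral_images], axis=0)
    return z_smiling - z_neutral

def adjust_smile(image: np.ndarray, direction: np.ndarray, strength: float = 1.0):
    """Move the face's latent code along the smile direction and decode it."""
    return decode(encode(image) + strength * direction)  # strength < 0 = less smile
```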

Smile Vector is just the tip of the iceberg. It’s hard to give a comprehensive overview of all the work being done on multimedia manipulation in AI right now, but here are a few examples:

  • Creating 3D face models from a single 2D image.
  • Changing the facial expressions of a target on video in real-time using a human “puppet”.
  • Modifying the light source and shadows in any picture.
  • Generating sound effects based on muted video.

Inspired by research done on the human brain in 2005, researchers identified the neurons in the network that lit up when shown certain images and then taught it to produce the images that maximized this stimulation.
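One closely related technique is activation maximization: start from noise and adjust the input by gradient ascent until a chosen unit responds as strongly as possible. The snippet below is a bare-bones sketch of that idea using an off-the-shelf torchvision classifier; the model, class index, and hyperparameters are illustrative assumptions, and real visualizations add regularization and priors that are omitted here.

```python
# Bare-bones activation maximization (illustrative sketch, not the cited work).
import torch
import torchvision.models as models

# Any pretrained classifier works; VGG16 is just a convenient stand-in.
model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).eval()

def maximize_class(class_index: int, steps: int = 200, lr: float = 0.05) -> torch.Tensor:
    """Optimize a noise image so the chosen output unit fires strongly."""
    img = torch.randn(1, 3, 224, 224, requires_grad=True)
    optimizer = torch.optim.Adam([img], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        activation = model(img)[0, class_index]  # logit of the chosen class
        (-activation).backward()                 # gradient ascent on the activation
        optimizer.step()
    return img.detach()

volcano_like = maximize_class(980)  # 980 is ImageNet's "volcano" class (assumption)
```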

“The field is progressing extremely rapidly,” says Jeff Clune, an assistant professor of computer science at the University of Wyoming.

Pictures Created in 2015 and 2016


To create these images, the neural network is trained on a database of similar pictures.

Then, once it’s absorbed enough images of ants, redshanks, and volcanoes, it can produce its own versions on command — no instruction other than “show me a volcano” is needed.

“Our current limitation isn’t the capability of the models but the existence of data sets at higher resolution,” says Clune.

Once these techniques have been perfected, they spread quickly. An important paper on this subject was published in September 2015, and researchers turned the work into an open-source web app in January 2016.

Later, a Russian startup turned the code into a mobile app named Prisma, which let anyone apply various art styles to pictures on their phone and share them on social networks.

This app exploded in popularity, and in November 2016, Facebook unveiled its own version, adding a couple of new features along the way.
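Under the hood, these apps build on neural style transfer, which optimizes an image so its deep features match a content photo while its feature correlations (Gram matrices) match a piece of art. Below is a compressed sketch of that objective in Python, assuming the caller has already extracted per-layer feature maps from a pretrained network; the layer choices and weights are illustrative rather than the paper’s exact configuration.

```python
# Condensed sketch of the style-transfer objective (illustrative weights/layers).
import torch
import torch.nn.functional as F

def gram_matrix(features: torch.Tensor) -> torch.Tensor:
    """Channel-to-channel correlations that summarize 'style' at one layer."""
    b, c, h, w = features.shape
    flat = features.view(b, c, h * w)
    return flat @ flat.transpose(1, 2) / (c * h * w)

def style_transfer_loss(gen_feats, content_feats, style_feats,
                        content_weight=1.0, style_weight=1e4):
    """Content loss (match deep features) plus style loss (match Gram matrices).

    Each argument is a list of feature maps from the same pretrained network,
    one entry per chosen layer.
    """
    content_loss = sum(F.mse_loss(g, c) for g, c in zip(gen_feats, content_feats))
    style_loss = sum(F.mse_loss(gram_matrix(g), gram_matrix(s))
                     for g, s in zip(gen_feats, style_feats))
    return content_weight * content_loss + style_weight * style_loss
```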

Clune says that in the future, AI-powered image generation will be useful in the creative industries.

A furniture designer could use it as an “intuition pump,” he says, feeding a generative network a database of chairs and then asking it to generate its own variants, which the designer could refine.
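As a rough sketch of how that workflow could look: a generative model trained on a chair catalog proposes new variants by jittering or blending latent codes, and the designer picks what to refine. The `decode` function below is a hypothetical placeholder for such a trained generator.

```python
# Toy sketch of the "intuition pump" workflow; decode is a hypothetical
# placeholder for a generator trained on chair images.
import numpy as np

def decode(latent: np.ndarray) -> np.ndarray:
    """Placeholder for a trained chair generator (e.g. a GAN or VAE decoder)."""
    raise NotImplementedError

def variants_around(seed_latent: np.ndarray, n: int = 8, spread: float = 0.3):
    """Jitter an existing design's latent code to propose nearby variants."""
    return [decode(seed_latent + spread * np.random.randn(*seed_latent.shape))
            for _ in range(n)]

def blend_designs(z_a: np.ndarray, z_b: np.ndarray, steps: int = 5):
    """Interpolate between two designs to explore the space between them."""
    return [decode((1 - t) * z_a + t * z_b) for t in np.linspace(0, 1, steps)]
```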


For another trick, consider Face2Face, a program that maps a performer’s facial expressions onto a target’s face in video in real time.

The researchers demonstrate it using footage of Trump and Obama. Now combine that with prototype software recently unveiled by Adobe that lets you edit human speech (the company says it could be used for fixing voice-overs and dialogue in films).

Then anyone could create video footage of politicians or celebrities saying, well, whatever you want them to. Post your clip on any moderately popular Facebook page and watch it spread around the internet.

That’s not to say these tools will steer society into some fact-less free-for-all. However, we can’t deny that digital tools will allow more people to create these sorts of fakes.

AI researchers involved in this field are already getting firsthand experience of the coming media environment.

“I currently exist in a world of reality vertigo,” says Clune. “People send me real images and I start to wonder if they look fake. And when they send me fake images I assume they’re real because the quality is so good. Increasingly, I think, we won’t know the difference between the real and the fake. It’s up to people to try and educate themselves.”

References:

http://www.wired.co.uk/article/art-algorithm-recreates-paintings

https://deepart.io/page/about/

http://www.theverge.com/2016/12/20/14022958/ai-image-manipulation-creation-fakes-audio-video 
