Google releases free AI tool to help companies identify child sexual abuse material

Stamping out the spread of child sexual abuse material (CSAM) is a priority for big internet companies. But it’s also a difficult and harrowing job for those on the frontline — human moderators who have to identify and remove abusive content. That’s why Google is today releasing free AI software designed to help these individuals.

Most tech solutions in this domain work by checking images and videos against a catalog of previously identified abusive material. (See, for example: PhotoDNA, a tool developed by Microsoft and deployed by companies like Facebook and Twitter.) This sort of software works by hash matching: it compares a digital fingerprint of each file against a database of fingerprints of known CSAM, which makes it an effective way to stop people sharing previously identified material. But it can’t catch content that hasn’t already been marked as illegal. For that, human moderators have to step in and review content themselves.
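As a rough illustration of the hash-matching idea (not PhotoDNA itself, which is proprietary and far more robust), here is a minimal sketch using the open-source imagehash library: compute a perceptual hash of an incoming image and compare it against hashes of known material. The example hash value and the distance threshold are placeholders.

```python
# Sketch of hash matching against a list of previously identified images.
# PhotoDNA's actual algorithm is proprietary; the imagehash library, the
# example hash value, and the distance threshold here are stand-ins.
from PIL import Image
import imagehash

# Hypothetical database: perceptual hashes of known, previously flagged images.
KNOWN_HASHES = [imagehash.hex_to_hash("d1d1b9a3c8f0e4e4")]

def matches_known_material(path: str, max_distance: int = 5) -> bool:
    """Return True if an image's perceptual hash is close to any known hash."""
    candidate = imagehash.phash(Image.open(path))
    # A small Hamming distance means the image is visually near-identical,
    # even if it has been resized, re-compressed, or lightly edited.
    return any(candidate - known <= max_distance for known in KNOWN_HASHES)
```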

This is where Google’s new AI tool will help. Using the company’s expertise in machine vision, it assists moderators by sorting flagged images and videos and “prioritizing the most likely CSAM content for review.” This should allow for a much quicker reviewing process. In one trial, says Google, the AI tool helped a moderator “take action on 700 percent more CSAM content over the same time period.”
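Google hasn’t published implementation details, but the behavior it describes amounts to ranking a review queue by a classifier’s confidence score so moderators see the likeliest matches first. A minimal sketch of that triage step, with a made-up score field standing in for the model’s output:

```python
from dataclasses import dataclass

@dataclass
class FlaggedItem:
    item_id: str
    score: float  # stand-in for a classifier's confidence that the item is CSAM

def triage(queue: list[FlaggedItem]) -> list[FlaggedItem]:
    """Order the moderation queue so the highest-confidence items are reviewed first."""
    return sorted(queue, key=lambda item: item.score, reverse=True)

# Example: three flagged items, reviewed in the order b, c, a.
queue = [FlaggedItem("a", 0.12), FlaggedItem("b", 0.97), FlaggedItem("c", 0.54)]
print([item.item_id for item in triage(queue)])
```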

Speaking to The Verge, Fred Langford, deputy CEO of the Internet Watch Foundation (IWF), said the software would “help teams like our own deploy our limited resources much more effectively.” “At the moment we just use purely humans to go through content and say, ‘yes,’ ‘no,’” says Langford. “This will help with triaging.”

The IWF is one of the largest organizations dedicated to stopping the spread of CSAM online. It’s based in the UK but funded by contributions from big international tech companies, including Google. It employs teams of human moderators to identify abuse imagery, and it operates tip lines in more than a dozen countries where internet users can report suspect material. It also carries out its own investigative operations, identifying sites where CSAM is shared and working with law enforcement to shut them down.

Langford says that, given the “fantastical claims made about AI,” the IWF will be testing Google’s new tool thoroughly to see how it performs and how it fits into moderators’ workflow. He added that tools like this were a step toward fully automated systems that can identify previously unseen material without any human involvement. “That sort of classifier is a bit like the Holy Grail in our arena.”

But, he added, such tools should only be trusted with “clear cut” cases to avoid letting abusive material slip through the net. “A few years ago I would have said that sort of classifier was five, six years away,” says Langford. “But now I think we’re only one or two years away from creating something that is fully automated in some cases.”

Deepfakes for dancing: you can now use AI to fake those dance moves you always wanted

Artificial intelligence is proving to be a very capable tool for manipulating videos of people. Face-swapping deepfakes have been the most visible example, but new applications are being found every day. The latest? Call it deepfakes for dancing — it uses AI to read someone’s dance moves and copy them onto a target body.

The actual science here was done by a quartet of researchers from UC Berkeley. As they describe in a paper posted on arXiv, their system consists of a number of discrete steps. First, a video of the target is recorded, and a sub-program turns their movements into a stick figure. (Quite a lot of video is needed to get a good-quality transfer: around 20 minutes of footage at 120 frames per second.) Next, a source video is found and a stick figure is made of its subject’s movements. Finally, the swap happens, with a neural network synthesizing video of the target individual based on the stick-figure movements of the source.
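The paper’s models are GAN-based and considerably more involved, but the data flow described above can be sketched in three stages. The functions below are illustrative stubs standing in for a pose detector and the pose-to-image network, not the Berkeley researchers’ code:

```python
import numpy as np

def extract_poses(frames: list[np.ndarray]) -> list[np.ndarray]:
    """Stand-in for a pose detector: turn each frame into 'stick figure' joint coordinates."""
    raise NotImplementedError("replace with a real pose-estimation model")

def train_synthesis_model(target_frames, target_poses):
    """Stand-in for training: learn to render the target person from a given stick figure."""
    raise NotImplementedError("replace with a pose-to-image network trained on ~20 min of target footage")

def transfer(source_frames, target_frames):
    target_poses = extract_poses(target_frames)        # step 1: target video -> stick figures
    render = train_synthesis_model(target_frames, target_poses)
    source_poses = extract_poses(source_frames)        # step 2: source video -> stick figures
    return [render(pose) for pose in source_poses]     # step 3: target performs the source's moves
```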


The system can be used to transfer all sorts of styles — from modern dance to ballet.

Described like this it sounds simple, of course, but there’s a lot of clever engineering at work. For example, there’s a subroutine that smooths the movement of the stick figures so the dancers don’t jerk about too much, and a completely separate neural network dedicated to re-tracing the target’s face to ensure realism.
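The paper’s exact smoothing scheme isn’t spelled out here, but the basic idea can be illustrated with a centered moving average over the detected joint positions across frames; the array shape and window size below are assumptions:

```python
import numpy as np

def smooth_poses(poses: np.ndarray, window: int = 5) -> np.ndarray:
    """Temporally smooth a (frames, joints, 2) array of joint coordinates
    with a centered moving average, so the stick figure doesn't jitter."""
    frames = poses.shape[0]
    smoothed = np.empty_like(poses, dtype=float)
    half = window // 2
    for t in range(frames):
        lo, hi = max(0, t - half), min(frames, t + half + 1)
        smoothed[t] = poses[lo:hi].mean(axis=0)
    return smoothed

# Example: 120 frames, 18 joints, (x, y) per joint.
noisy = np.random.rand(120, 18, 2)
print(smooth_poses(noisy).shape)  # (120, 18, 2)
```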

There are also limitations to the program. The network can’t accurately model loose fabrics, for example, so the target individual has to wear tight-fitting clothes. In the video above, you can also see quite a few visual anomalies: moments where the joints of the target and source dancers don’t quite line up, or where the AI software couldn’t reproduce complex movements, like a hand flipping from back to front.

Still, it’s impressive work. It’s the sort of video manipulation that would previously have taken a whole team days to produce. Now, all it takes is some source video and the right AI software. Expect it to be turned into an app before too long, and try not to think too hard about what technology like this will do to our trust in video evidence.

A porn company promises to insert customers into scenes using deepfakes

The concept of “deepfakes” hit the internet late last year like a weirdly rendered bombshell after amateur coders discovered that they could use AI to quickly and easily face-swap celebrities into pornographic clips. The phenomenon raised important questions about consent and revenge porn, but advocates for the technology have always maintained it can have non-harmful uses, too.

One company betting on that is porn company Naughty America. This week, it launched a service that lets customers pay to customize adult clips to their liking using AI. They’ll be able to insert themselves into scenes alongside their favorite actor or actress, or edit the background of an existing clip. “We see customization and personalization as the future,” Naughty America’s CEO, Andreas Hronopoulos, told Variety in an interview.

The company demoed the service with a pair of sample clips (link very much not safe for work). One blends the faces of two actresses and another swaps the background of a scene from a bedroom to a beach. It’s not the most advanced use of the technology, but the face-blending is relatively seamless, and it shows how accessible this sort of AI-powered video manipulation has become.

Customers who want to be inserted into a scene will have to send Naughty America a set of photos and videos of themselves, including different facial expressions that help the software accurately replicate their likeness. The company says its legal team will get consent from the actors involved. Simple edits will cost just a few hundred dollars, and longer, more complicated changes will run into the thousands.

There are a number of potential problems with this service. For example, how will Naughty America know that the photos and videos submitted by customers were themselves provided with consent? They could be taken under false pretenses and submitted to the site by a third party. (The Verge has reached out to Naughty America with questions but has yet to hear back.)

It might also be important for the company to indelibly watermark the resulting videos, so they’re not confused with original pornographic clips. This is something that a number of companies and researchers are looking into, especially as experts worry that AI video editing will be used to create fake videos for the purpose of political manipulation.
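There’s no agreed-on standard for marking synthetic clips yet, but the crudest version of the idea, stamping a visible label onto every frame, is straightforward to sketch with Pillow. The label text and placement below are arbitrary choices, and robust, tamper-resistant watermarking remains an open research problem:

```python
from PIL import Image, ImageDraw

def watermark_frame(frame: Image.Image, label: str = "SYNTHETIC / AI-EDITED") -> Image.Image:
    """Overlay a visible label on a single video frame."""
    marked = frame.convert("RGBA")
    overlay = Image.new("RGBA", marked.size, (0, 0, 0, 0))
    draw = ImageDraw.Draw(overlay)
    # Bottom-left corner, semi-transparent white text using the default font.
    draw.text((10, marked.height - 20), label, fill=(255, 255, 255, 180))
    return Image.alpha_composite(marked, overlay).convert("RGB")

# A real pipeline would decode the video, apply this per frame, and re-encode.
```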

But Naughty America is presenting this as a natural use of the technology. “It’s just editing, that’s all it is,” said Hronopoulos of the “deepfake” concept. “People have been editing people’s faces on pictures since the internet started.” He told Fast Company: “Deepfakes don’t hurt people, people using deepfakes hurt people.”