May 6, 2024

Brighton Journal


Google DeepMind’s SynthID watermark can detect AI-generated images


The Google DeepMind team has believed for years that building great generative AI tools also requires building great tools for detecting AI-generated content. The reasons are obvious and high-stakes, says Demis Hassabis, CEO of Google DeepMind. "Every time we talk about it and other systems, the question is: what about deepfakes?" And with another contentious election season looming in 2024 in both the US and the UK, Hassabis says building systems to identify and detect AI-generated images is more important than ever.

Hassabis and his team have been working on such a tool for the past few years, and today Google is releasing it publicly. It's called SynthID, and it's designed to essentially watermark an AI-generated image in a way that's imperceptible to the human eye but easily picked up by a dedicated AI detector.

The watermark is embedded in the image's pixels, but Hassabis says it doesn't change the image in any noticeable way. "It doesn't change the image, or the quality, or the experience," he says. "But it's robust to various transformations: cropping, resizing, all the things you might do to try and beat regular, traditional, simple watermarks." As the underlying models of SynthID improve, Hassabis says, the watermark will become even less perceptible to humans but more easily detected by DeepMind's tools.

And that's about as technical as Hassabis and Google DeepMind want to get right now. Even the launch blog post is light on details, because SynthID is still a new system. "The more you reveal about how it works, the easier it will be for hackers and bad actors to get around it," Hassabis says. SynthID is rolling out first in a very Google-centric way: Google Cloud customers using the company's Vertex AI platform and the Imagen image generator will be able to embed and detect the watermark. As the system gets more real-world testing, Hassabis hopes it will improve. Then Google will be able to use it in more places, share more about how it works, and gather more data on how well it works.


Google's SynthID tool will tell you how likely it is that an image was generated by AI.
Image: Google

Ultimately, Hassabis seems hopeful that SynthID can serve as an internet-wide standard. The same core ideas could also be applied to other media, such as video and text. Once Google has proven the technology, "the question is scaling it, sharing it with other partners that want it, then scaling the consumer solution, and then having the debate with civil society about where we want to take this." He says over and over that this is a beta, a first attempt at something new, "not a panacea for the deepfakes problem." But he clearly thinks it could be a big one.

Of course, Google is not the only company with this particular ambition. Far from it. Just last month, Meta, OpenAI, Google, and many of the biggest names in AI promised to build more protections and safety systems into their AI. A number of companies are also working on a protocol called C2PA, which uses cryptographically signed metadata to tag AI-generated content. Google is in many ways playing catch-up with all of its AI tools, detection included. It seems likely that we'll get lots of AI detection standards before we get ones that actually work, but Hassabis is confident that watermarking will be at least part of the answer across the web.

Google is not the only company with this particular ambition. Far from it.
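The C2PA approach mentioned above attaches provenance to a file as signed metadata rather than hiding it in the pixels. The real spec defines certificate-based signatures in a binary container, none of which is reproduced here; this is a deliberately simplified sketch of the idea, using an HMAC with a made-up key in place of a real certificate signature:

```python
import hashlib
import hmac
import json

# Toy illustration only: real C2PA manifests are certificate-signed and
# far more structured. The point is the contrast with pixel watermarks:
# here, provenance lives in metadata that travels alongside the content,
# protected by a cryptographic signature.

SECRET = b"hypothetical-signing-key"  # stand-in for a signing certificate

def make_manifest(image_bytes, generator):
    """Build signed provenance metadata for some image content."""
    claim = {
        "generator": generator,  # e.g. the AI model that made the image
        "content_sha256": hashlib.sha256(image_bytes).hexdigest(),
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return {"claim": claim, "signature": sig}

def verify_manifest(image_bytes, manifest):
    """Check the signature, then check the content hasn't changed."""
    payload = json.dumps(manifest["claim"], sort_keys=True).encode()
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, manifest["signature"]):
        return False  # metadata was tampered with
    return manifest["claim"]["content_sha256"] == \
        hashlib.sha256(image_bytes).hexdigest()

image = b"\x89PNG...fake image bytes..."
manifest = make_manifest(image, "hypothetical-image-model")

print(verify_manifest(image, manifest))            # True
print(verify_manifest(image + b"edit", manifest))  # False: content changed
```

The tradeoff the article hints at: metadata like this can simply be stripped from a file, at which point the provenance is gone, whereas a pixel watermark travels with the pixels themselves. That is one reason signed metadata and watermarking are generally seen as complementary rather than competing.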

SynthID is launching during the Google Cloud Next conference, where the company is pitching its business customers on new features across Google Cloud and Workspace products. Use of the Vertex AI platform is growing rapidly, says Thomas Kurian, CEO of Google Cloud, and that growth, along with improvements to the SynthID system, made Kurian and Hassabis feel it was the right time to launch.

Customers are certainly worried about deepfakes, Kurian says, but they also have more mundane AI detection needs. For example, he says, "We have a lot of customers who use these tools to create images for ad copy, and they want to verify the original image, because oftentimes the marketing department has a central team that actually creates the original image outline." Retail is another big one: some retailers use AI tools to generate descriptions for their huge product catalogs, and they need to make sure the product photos they upload don't get mixed up with the generated images they use for brainstorming and iteration. (You've probably already seen AI-generated descriptions like these, both on retail sites and in places like YouTube Shorts.) These cases may not be as momentous as fake shots of Trump or the pope, but they are the ways AI is already showing up in everyday business.


There's one thing Kurian says he's watching for with the SynthID rollout (other than whether the system, you know, works), and that's how and where people want to use it. He's pretty sure Slides and Docs will need SynthID integration, for example: "When you use presentations, you want to know where the images are coming from." But where else? Hassabis suggests that SynthID could eventually be offered as a Chrome extension, or even built into the browser, so that it can recognize generated images across the web. But suppose that happens: should the tool preemptively flag anything that might be generated, or wait for some kind of query from the user? Is a huge red "This was made with AI" label the right approach, or should it be something more subtle?

SynthID could eventually be offered as an extension for Chrome or even built into the browser so it can recognize images created across the web.

There may eventually be more user experience options, Kurian suggests. He believes that as long as the underlying technology works reliably, users can choose exactly how they want it surfaced. It may also vary by subject: you probably don't care much whether the background in a presentation was created by a human or an AI, but "if you're in a hospital scanning tumors, you really want to make sure that wasn't an artificially generated image."

The release of any AI detection tool is practically the start of an arms race, and in many cases a losing one: OpenAI has already abandoned a tool intended to detect text written by its own ChatGPT chatbot. If SynthID succeeds, it will inspire hackers and developers to find innovative ways around the system, which will force Google DeepMind to improve it, and so on. With a bit of resignation, Hassabis says the team is ready for that. "It's probably going to be something we have to keep updating, like antivirus software or something like that," he says. "You'll always have to be on the lookout for a new type of attack and a new kind of workaround."


For now, this remains mostly a far-off concern, because the entire initial pipeline for creating, using, and detecting AI images is controlled by Google. But DeepMind built this with the whole internet in mind, and Hassabis says it's ready for the long road to getting SynthID everywhere it needs to be. Then he catches himself: one thing at a time, he says. "It would be premature to think about scaling and the civil society debates until we've proven that the core part of the technology works." That's job one, and it's why SynthID is launching now. If and when SynthID or something like it works, we can start figuring out what it means for life online.