New AI image detection tools by OpenAI
In a new blog post, OpenAI, the developers of ChatGPT, announce new tools and techniques for content authentication, a hot-button topic in our post-AI world. They also discuss joining the steering committee of the Coalition for Content Provenance and Authenticity (C2PA) and how, through it, OpenAI aims to improve human-AI relations.
Setting The Stage
Generative AI is used today to improve code for developers, compose original pieces for musicians, and even paint whole new pictures for businesses. Advocates for AI in creative fields envision a world where its use is indispensable and ubiquitous, as common a tool as Adobe Photoshop is for artists or Ableton is for musicians today. Not everyone is comfortable with this, however. OpenAI argues that society as a whole is threatened by what these new AI tools introduce to creative work, and it hopes to address that threat by encouraging, developing, and adopting an open standard for digital content.
Enter a new authenticity standard
OpenAI has joined the Coalition for Content Provenance and Authenticity (C2PA), which maintains a standard for digital content certification already popular among various actors in the digital space. Essentially, C2PA proves a piece of content's point of origin through digitally signed metadata. OpenAI hopes this new partnership will boost adoption of the standard and improve the way digital content is authenticated online.
It’s Not All Talk
C2PA metadata is already being added to generated content in ChatGPT, DALL-E 3, and even the new Sora video generation model. Because this metadata is present, an image created by these models now carries data that can be used to determine whether it is AI-generated and which model was used to create it. OpenAI hopes this metadata will become commonplace and more widely adopted as the world craves authentication for AI in the creative and news spaces.
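To give a flavor of what "metadata is present" means in practice: the C2PA standard embeds its signed manifests in JUMBF boxes inside the image file. The sketch below is a deliberately crude check for those markers, assuming only the raw bytes of a file; it is not a substitute for a real C2PA library, which would parse the manifest and cryptographically verify its signature.

```python
# Minimal sketch: crude check for an embedded C2PA manifest.
# C2PA manifests are stored in JUMBF boxes (in JPEGs, inside APP11
# segments). This naive byte scan only looks for the "jumb" box type
# and the "c2pa" label together -- real verification requires a C2PA
# library that parses and cryptographically validates the manifest.

def has_c2pa_marker(path: str) -> bool:
    """Return True if the file appears to carry C2PA metadata."""
    with open(path, "rb") as f:
        data = f.read()
    # Both markers appearing together is a strong hint of a manifest.
    return b"jumb" in data and b"c2pa" in data
```

Note that C2PA metadata is easily stripped (for example, by re-encoding or screenshotting the image), which is one reason OpenAI pairs it with the other detection research described below.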
Research on the Horizon
But that’s not all. OpenAI has also been hard at work on new authentication techniques, such as a tamper-resistant watermark for AI creations. Like a physical watermark, it could set an AI-generated image apart from a human-created one. New tools called detection classifiers are also in the works: they read visual signs that an AI created a given image and report the probability to the user. In the future, such tools may make it harder to counterfeit or commit fraud using AI-generated digital content.
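OpenAI has not published an API for its detection classifier, so the sketch below is purely illustrative: every name in it is an assumption. It shows the general shape of such a tool — a model scores an image with a probability of being AI-generated, and the caller turns that score into a verdict via a threshold.

```python
# Hypothetical sketch of consuming a detection classifier's output.
# `ai_probability` stands in for a real trained model; OpenAI's actual
# classifier is not publicly available, so this is a placeholder.

def ai_probability(image_bytes: bytes) -> float:
    """Placeholder: return P(image is AI-generated) in [0, 1].

    A real classifier would run a trained model over the pixel data;
    here we return a fixed demo value.
    """
    return 0.97

def label_image(image_bytes: bytes, threshold: float = 0.5) -> str:
    """Turn the classifier's probability into a human-readable verdict."""
    p = ai_probability(image_bytes)
    verdict = "likely AI-generated" if p >= threshold else "likely human-made"
    return f"{verdict} (p={p:.2f})"
```

The key design point is that the classifier outputs a probability, not a yes/no answer — the caller chooses a threshold appropriate to the stakes (a newsroom might demand far higher confidence than a casual check).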
Conclusion
OpenAI is hopeful that innovations like these, in both technology and standardization, can turn the tide of public opinion and help AI tools gain wider acceptance and trust. Only time will tell how the world will react to this next stage in the evolution of AI in the digital asset creation field.
Source:
https://openai.com/index/understanding-the-source-of-what-we-see-and-hear-online