OpenAI, Google, others pledge to watermark AI content for safety, White House says

On July 21, 2023, President Joe Biden announced that major AI companies, including OpenAI, Alphabet (the parent company of Google), and Meta Platforms (formerly Facebook), have made voluntary commitments to the White House to implement measures aimed at enhancing the safety and security of artificial intelligence technology. The move comes as concerns have been growing about the potential disruptive applications of AI and the need to safeguard U.S. democracy from potential threats posed by emerging technologies.

During a White House event, President Biden emphasized the importance of being vigilant about the risks posed by AI and acknowledged that the voluntary commitments made by these tech giants are a promising step forward. However, he also emphasized that there is still much more work to be done in collaboration with the industry to address these concerns effectively.

The companies involved in the commitment include Anthropic, Inflection, Amazon.com, and Microsoft, which is a partner of OpenAI. They pledged to adopt several key measures to ensure the responsible development and deployment of AI technology. One of the notable commitments is to thoroughly test AI systems before releasing them to the public, thereby minimizing potential risks. Additionally, the companies agreed to share information on how to reduce AI-related risks and invest in cybersecurity measures to protect users and data.

The Biden administration sees this commitment as a positive development in its efforts to regulate the AI industry. With AI experiencing significant investment and widespread popularity, there is a pressing need to establish guidelines and safety measures to protect national security and the economy.

Comparatively, the U.S. has lagged behind the EU in addressing AI regulation. In June of the same year, EU lawmakers had already agreed on a set of draft rules that would require systems like ChatGPT (a popular AI language model) to disclose AI-generated content, distinguish deep-fake images from real ones, and implement safeguards against illegal content. In the U.S., Senate Majority Leader Chuck Schumer called for comprehensive legislation to ensure proper oversight of artificial intelligence.

To address concerns related to political advertising, Congress is considering a bill that would mandate disclosure of the use of AI in creating imagery or content for such ads. This is part of a broader effort to establish a robust framework for AI technology in the United States.

President Biden mentioned that he is actively working on developing an executive order and bipartisan legislation focused on AI technology. Underscoring the pace of technological advancement, he noted that the world may see more technological change in the next decade than it has in the past 50 years.

As part of the commitments, the companies involved agreed to develop a system to “watermark” all AI-generated content, including text, images, audio, and videos. This watermarking would serve as a technical indicator, making it easier for users to identify deep-fake images or audio that may depict false violence or create deceptive scams. The specific method for embedding these watermarks and their visibility during content sharing is yet to be clarified.
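The article notes that the actual watermarking methods have not been specified. Purely as an illustration of the general idea (not any company's real approach), the sketch below marks AI-generated text with an invisible zero-width Unicode sequence that a checker can later detect; the marker sequence and function names are hypothetical.

```python
# Toy illustration only: real AI-content watermarks are far more robust
# (e.g., statistical signals embedded during generation). This sketch just
# shows the detect-a-hidden-marker concept using zero-width characters.

ZW_MARK = "\u200b\u200c\u200b"  # arbitrary invisible marker (assumption)

def watermark(text: str) -> str:
    """Append an invisible zero-width marker to generated text."""
    return text + ZW_MARK

def is_watermarked(text: str) -> bool:
    """Report whether the invisible marker is present."""
    return text.endswith(ZW_MARK)
```

A scheme this simple is trivially stripped by copy-editing the text, which is why the commitments' harder open question, flagged in the paragraph above, is whether watermarks can survive ordinary content sharing.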

Furthermore, the companies pledged to focus on protecting users’ privacy as AI continues to evolve and to ensure that AI technology remains free from bias and is not used to discriminate against vulnerable groups. Beyond these commitments, the AI companies also expressed their dedication to leveraging AI solutions to tackle scientific challenges, such as medical research and climate change mitigation.

The voluntary commitments from these prominent AI companies represent a significant step toward establishing a safer and more responsible AI landscape. However, the path ahead involves continuous collaboration between the industry and policymakers to address potential risks and harness the full potential of artificial intelligence for the benefit of society.
