
New Australian Regulations Mandate Removal of AI-Generated Child Abuse Material from Search Results

On September 8th, Australia took a significant step towards combating the dissemination of child sexual abuse material generated by artificial intelligence (AI). The country’s internet regulator, the eSafety Commissioner, announced that it would be working with tech giants like Google and Bing to establish a new code aimed at preventing the sharing of AI-created child abuse content and deepfakes. This move underscores the challenges posed by the rapid growth of generative AI technology, which has caught much of the world off guard.

The forthcoming code requires search engines to implement measures that ensure AI-generated child abuse content does not appear in search results. Additionally, it mandates that AI functions integrated into these search engines cannot produce synthetic versions of such material, which are commonly referred to as deepfakes.

eSafety Commissioner Julie Inman Grant emphasized the urgency of addressing this issue, as generative AI technology has evolved at a pace that few anticipated. The code’s development is a response to growing concerns about the use of AI to generate lifelike and harmful content.

Notably, Google (owned by Alphabet) and Bing (owned by Microsoft) had previously drafted a code, but it did not address AI-generated content. In response, Inman Grant asked these industry leaders to revisit and revise the code to cover the emerging challenges posed by AI-generated material. This collaborative effort reflects a regulatory and legal landscape that is adapting to the proliferation of AI-driven content creation tools.

A spokesperson for the Digital Industry Group Inc., an Australian advocacy organization that counts Google among its members, expressed satisfaction with the regulator’s approval of the new code. They commended the effort to adapt and include generative AI developments in the code, emphasizing the importance of codifying industry best practices and enhancing community safeguards.

This initiative aligns with Australia’s broader approach to regulating internet platforms. Earlier this year, the regulator introduced safety codes for various internet services, including social media platforms, smartphone applications, and equipment providers. These codes are set to become effective in late 2023. However, the regulator is still working on developing safety codes related to internet storage and private messaging services, which have faced resistance from privacy advocates worldwide.

Australia’s move to address AI-generated child abuse material through a revised code for search engines highlights the ongoing effort to adapt to a changing digital landscape and the challenges posed by advanced AI technologies. This development reflects a growing awareness of the need to safeguard online spaces and protect vulnerable users from harmful content.
