Government Pledges Stricter Regulations and Transparency for AI


02 July, 2024

Australia Moves Towards Safer and More Transparent Artificial Intelligence Use

As Australia stands on the cusp of a technological revolution, the federal government is taking decisive action to ensure the burgeoning artificial intelligence (AI) industry operates responsibly. Following a consultation process on the matter, Minister for Industry and Science Ed Husic has unveiled the government’s response, aimed at the safe and secure integration of AI into society. The move is expected not only to bolster Australia’s technological prowess but, according to McKinsey research, potentially to add as much as $600 billion to the country’s GDP annually.

Despite the promising financial upside, there is palpable public concern about the rapid evolution of AI and its implications. This skepticism is reflected in surveys in which only about one-third of Australians believe sufficient safeguards are currently in place for AI development. Addressing these concerns, Husic emphasizes the government’s commitment to nurturing low-risk AI applications, such as email filtering tools, while implementing stringent regulations for high-risk areas like autonomous vehicles and job recruitment algorithms. “Australians understand the value of artificial intelligence but they want to see the risks identified and tackled,” Husic remarks.

The government’s strategy includes forming an expert advisory group to develop AI policy, encompassing both voluntary and, potentially, mandatory measures to promote AI safety standards. The aim is to give businesses a single, reliable reference point for integrating AI technology safely into their systems, while also introducing a framework for rigorous pre-deployment risk assessments and enhancing the skills of software developers.

Furthering its commitment to transparency, the interim response points to the possibility of public disclosure protocols covering the data on which AI models are trained. There is also a proposal to work with industry stakeholder groups on a voluntary code for watermarking or labeling AI-generated content. This could pave the way for an easier distinction between human-created content and AI-generated images, audio, and text.

Amid these sweeping changes, Communications Minister Michelle Rowland has made pledges of her own. The goal is to revise existing online safety laws to compel technology companies to tackle the proliferation of AI-originated harmful material, such as deepfake media and hate speech.

In parallel, the government is considering how to address the use of generative AI, including AI text and image generation tools, in educational settings, and has convened a task force focused on AI within government operations. Indeed, AI tools continue to evolve at a pace that could outstrip current legislative frameworks, which are often designed without specific technologies in mind.

The consultation process highlighted the need to address potential legal issues surrounding AI, ranging from deceptive deepfakes falling under consumer law scrutiny to possible copyright infringement when AI models, such as AI video generators, are trained on pre-existing content.

For creators, from journalists to artists, who see their original work repurposed into new AI-generated forms without consent or compensation, this has been an area of particular vexation. Recent legal action, such as the New York Times’ lawsuit against OpenAI over the use of its content, underlines the urgency of clear policies and remedies.

Husic assures that the government is not only focused on crafting thoughtful guidelines fit for the era of AI but is also striving to match the velocity of technological advancements. By assembling an advisory body comprising specialists in artificial intelligence, the government intends to chart a prudent course for AI integration. “We want safe and responsible thinking baked in early as AI is designed, developed and deployed,” Husic states, signaling a proactive stance in aligning tech developments with governmental agility and foresight.

In sum, as Australia grapples with the dual challenges of embracing AI’s economic potential and ensuring its ethical use, the government is paving the way for a future in which AI tools are used responsibly and in harmony with public trust and well-being.