Regulating AI

The debate on the moral alignment of artificial intelligence (AI) has been intensifying, and calls to regulate it are growing louder. Geoffrey Hinton, often called the godfather of AI, quit his job to speak freely about the technology’s dangers. An open letter signed by Elon Musk, Steve Wozniak, and others called for a moratorium on the development of the most powerful AI systems, warning of profound risks to society and humanity. The signatories argued that AI developers are “locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.” The proliferation of generative AI tools has made disinformation easy to produce. So far, there has been little oversight of AI and virtually no auditing or licensing. That is now set to change.

President Biden has unveiled sweeping rules to regulate emerging AI technology. A lengthy executive order released earlier this week sets new standards for security and privacy protections in AI. Companies will be directed to put powerful AI models through safety tests and submit the results to the government before public release. The order is designed to vet technology that poses potential risks to national or economic security, as well as to health and safety. It recommends that photos, videos, and audio generated by such systems be watermarked to make clear that they were created by AI, and the Commerce Department will issue guidance on labelling and watermarking AI-generated content.

The order also touches on privacy, civil rights, consumer safety, scientific research, and worker rights. It asks immigration officials to ease visa requirements for overseas talent seeking to work at US AI companies. The order is aimed at the most advanced future systems and largely does not address the immediate risks posed by existing chatbots.

The regulation builds on voluntary commitments already made by tech firms. Microsoft, OpenAI, Google, and Meta are among 15 companies that have agreed to voluntary safety and security commitments, including having third parties stress-test their systems for vulnerabilities. The tech industry generally supports the regulations, though companies disagree on the appropriate level of government oversight.

The EU, China, and Israel have drafted AI laws, proposals, or regulations. Many other nations are trying to catch up.

The proliferation of generative AI tools is making disinformation easier to produce. AI is getting better at generating images and text, threatening the ability to distinguish fact from fiction.

With elections due in major democracies in 2024 (India, the US, the UK, Indonesia, Mexico, Taiwan), disinformation campaigns may be supercharged.

MitKat will continue to monitor developments in AI and cutting-edge technologies and their impact on business.

To know more, reach us at contact@mitkatadvisory.com.

Disclaimer: This article references open sources, executive orders, and the Times of India.
