What does it mean to regulate AI?
In this edition of TechTalk, experts talk about how challenging regulating AI can be, the ways in which regulation can work and how to strike a balance between its possibilities and risks
Over the last few months, Artificial Intelligence (AI) has seen a rapid pace of development and a surge in recognition. Industries all over the world are trying to make the best use of the technology, and that has prompted authorities to look into the dangers that AI brings with it.
Along with data protection issues, the fast-paced growth of AI has compelled authorities around the world to take action to regulate it. China last week released draft regulations requiring its tech companies to register generative AI products with the country’s cyberspace agency and submit them for security assessment before releasing them to the public. The European Union has come up with an AI Act that will govern anyone who provides a product that uses AI, and EU lawmakers have also called on world leaders to control the development of AI systems like ChatGPT. France and Italy have voiced concerns about privacy violations related to OpenAI’s ChatGPT. The United States, although it has not officially set out any regulation, is seeking public comments on potential accountability measures for AI systems.
India, however, has said that it is not looking at any sort of regulation for AI for now. Regulating a technology as powerful as AI can be a tricky business. Experts in the field discuss what it means to regulate AI, the complexities around it and whether it is a necessary task to attend to.
Regulation of AI can be challenging
Talking about how authorities can find a way to regulate AI while still enabling innovation, Ashray Malhotra, CEO of Rephrase.ai, says, “No one knows exactly what they’re setting out to build. They get down to it and, along the way, figure out everything that a particular model can do. So there’s no way to push the brakes on a particular niche development.”

He adds, “What can be done, perhaps, is to set up a government body that works with the same or comparable agility as AI companies, which can help regulate what’s good and bad. The key, again, would be to educate the members of this body not just about the intricacies of public policy but also about the immense positive impact that AI can have on humankind. That’s when such a body can bring balance and cater to both regulation and innovation.”
Need for mandates?
Speaking on the safety measures recently published by OpenAI and whether companies need to give disclosures, Ashray Malhotra agrees that there should be a government-mandated structure specifying the categories of disclosures that companies need to make.
He says, “This includes providing warnings and safety notes to the public about the platform beforehand, particularly in cases where the technology could have significant impacts on individuals or society as a whole. By being transparent and accountable, companies can help to build trust in their technology and ensure that it is used in a responsible and ethical manner. At the same time, user awareness that content can be manipulated is very important because it empowers individuals to make informed decisions about how they interact with the technology, so they can protect themselves and make choices that align with their values. This, in turn, builds trust in the technology and helps prevent unintended consequences that may arise from a lack of understanding or awareness.”
Rajat Ojha, CEO of Gamitronics, also believes AI must be regulated, but in a way that doesn’t slow down research and instead makes for more responsible use.
“We have already started seeing the signs of how AI is changing everything around us and also how Italy and other countries in Europe are banning particular AI tools. The problem is that advancement in AI is happening so rapidly that people are not able to quickly assess the opportunities and risks, so outright bans are happening.”
“AI must be regulated by listing high-risk areas, setting expectations for AI in those areas and wrapping it all in a governance structure. Similarly, for any new AI system to be deployed, the journey from assessment to governance must be charted. AI safety is as important as (or more important than) AI research and the amazing use cases it offers. I’m personally excited about the future of AI, but I also understand that regulations should be enforced, and in such a way that AI research doesn’t slow down or cause panic but tech companies simply become more responsible,” he says.