With incredible potential for both good and harm, AI needs worldwide regulation to ensure it isn't misused.

In a recent speech, Google and Alphabet CEO Sundar Pichai called for new regulations in the world of AI, with an obvious focus on the fact that AI has been commoditized by cloud computing. This is no surprise now that we're debating the ethical questions surrounding the use of AI technology: most especially, how easily AI can weaponize computing, for businesses as well as bad actors.

Pichai highlighted the dangers posed by technologies such as facial recognition and "deepfakes," in which an existing image or video of a person is replaced with someone else's likeness using artificial neural networks. He also stressed that any legislation must balance "potential harms … with social opportunities."

AI is much more powerful today than it was just a few years ago. AI once resided in the realm of supercomputers that cost budget-busting sums to use. Cloud computing made AI an on-demand service, affordable even for small businesses.

Moreover, there is a huge boom in R&D spending on AI services. AI providers are racing to out-innovate one another in the sheer number of features and functions they can offer. This includes knowledge models that are easy to build and train and that integrate readily with new and existing applications.

I would make the analogy that AI is much like nuclear power. Both have potential that needs to be captured. Both need limits to ensure they are not misused. Nuclear power provides cheap, carbon-light electricity, and AI has the potential to give us driverless cars and to save hundreds of thousands of lives in the healthcare vertical. Don't both need regulation?

Most technology has the potential to be used for good and bad, and AI and nuclear power certainly fall into that category. The risk with AI is that some organizations may adopt it for perfectly sound reasons but end up doing ethically questionable things with it.
For example, facial recognition in a retail store can build a database of images and personal information that can be sold to marketing firms. It's one thing to have security cameras always present; it's another when they can determine who you are, your marital status, sexuality, demographics, and other information that can be culled using AI-driven big data analytics.

The law of unintended consequences is really what's at stake here. If regulations are created and adopted but not implemented worldwide, they will do little to limit the misuse of AI. Public clouds are international: if some pattern of AI usage is illegal in one country, it's simple to move that workload to another region. We already do that with data processing to meet security requirements. AI processing won't be any different.