
Why regulating AI is a lost cause

Explore the challenges posed by the rapid evolution of AI, where advancements outpace regulatory efforts.

The inherent inertia of regulators in responding to rapidly evolving sectors like AI can be attributed to several factors rooted in their nature, design, and skill sets. First, regulatory bodies are typically structured to be cautious and deliberative, prioritizing stability and risk aversion over rapid adaptation. This approach, while beneficial for maintaining systemic integrity in traditional markets, often results in a lag when faced with fast-paced technological innovation. Additionally, the design of these institutions, which are often bureaucratic and bound by complex legislative processes, hampers their ability to swiftly enact new policies or adapt existing ones to novel contexts. Lastly, there is often a skill and knowledge gap: regulators may lack the specialized expertise required to understand and effectively govern cutting-edge technologies, leading to a reliance on outdated frameworks or overly cautious approaches that fail to address the unique challenges and opportunities presented by emerging sectors such as cryptocurrency and AI.

This pattern of slow and inadequate responses was most recently highlighted by the rise and fall of FTX. In 2021, FTX quickly grew into one of the world’s largest cryptocurrency exchanges; in 2022, it collapsed in one of the largest financial fraud cases in US history. This failure served as a wake-up call, demonstrating the risks inherent in the crypto market and the consequences of the US government’s slowness in establishing a comprehensive regulatory framework.

Enter AI

The rapid advancement of AI is already outpacing regulatory efforts, making it particularly challenging to govern and potentially harmful to consumers if not properly regulated. Driven by breakthroughs in machine learning algorithms, vast amounts of data, and increasing computational power, the pace of AI development far exceeds the traditional timelines of regulatory bodies, which often take years to formulate and implement new rules.

Unlike more traditional industries such as healthcare and finance, AI is an extremely dynamic and diverse industry, and it touches almost every other industry – from healthcare to finance. Regulating AI therefore requires a nuanced understanding of all of these domains, and the technical nature of AI systems compounds the challenge further. On the other hand, over-regulation can stifle innovation, leading to reduced competition and slower advancement in beneficial AI applications. In the best case, it will leave domestic AI development lagging behind countries with a better handle on regulation. In the worst case, it will ultimately harm consumers by limiting access to improved services, increasing costs, and slowing the development of AI solutions that address critical societal issues. Regulating AI is thus a tightrope walk: ensuring consumer protection and ethical use while not impeding the technological progress that delivers significant benefits.

Regulating AI

Since the release of ChatGPT, AI regulation has been the hot topic of 2023’s congressional hearings. Several major tech companies have advocated for AI regulation, often with modifications that align with their business interests (surprise, surprise).

Perhaps the most vocal proponent of AI regulation has been Microsoft. By positioning itself as a responsible leader in AI, Microsoft hopes to gain trust from both consumers and business clients. Google is the next obvious culprit: along with its parent company Alphabet, it has shown support for regulatory frameworks around AI, particularly in areas like facial recognition and ethical AI. Google benefits from regulation because it reduces the risks of uncontrolled AI deployment, which could lead to public backlash or harmful incidents that might tarnish its reputation or draw it into costly legal battles. Meta, meanwhile, has faced heavy scrutiny over its use of AI in content moderation and data privacy, and hopes to guide the formation of policies in a way that aligns with its own practices, potentially mitigating some of the public and regulatory pressure it faces.

Of course, these companies never state the true intention behind these regulations – which is to close the door behind them after creating one of the most powerful technologies mankind has ever encountered. By advocating for regulation, these companies can not only push to monopolize the technology for themselves but also earn brand reputation and consumer trust by championing “safe and ethical AI”. We are already seeing the effects of this through President Biden’s executive order, which entails increased compliance costs, barriers to entry and innovation, and market consolidation – all of which will help the incumbents dominate the market and kick out the little guys.

Consumers are getting screwed

All these regulations will create negative consequences for consumers if they are not carefully crafted, or if they “inadvertently” favor large companies at the expense of smaller ones or of innovation in general. A reduction in innovation and diversity, slower access to advanced technologies, and decreased competition are just a few of the concerns. The best example of this is Canada, where the telecom industry consists of only three players: Rogers Communications Inc., Telus Corporation, and Bell Canada. This became possible as they lobbied and bullied their way to the top, introducing regulations that stifle any competition in mobile phone and internet services.

As a result, Canadians have significantly worse coverage plans, both locally and globally, than Americans do. Mobile phone bills have skyrocketed to eye-watering prices, and Canadians are often the last to receive new and innovative services. This stifling of competition has even had fatal consequences: on July 8th, 2022, Rogers Communications experienced a service outage that knocked roughly a quarter of Canada offline, taking down crucial services including 911.

If Canada is suffering this much from the monopolization of telecom services through regulation, it pains me to imagine what catastrophic consequences the regulation of AI will have on consumers in a country like the United States.

What do we do then?

AI must be decentralized. Period. Full stop. Allowing something as game-changing and powerful as AI to be centralized and to serve the bottom lines of corporations is akin to allowing the internet to be controlled by corporations – which would have resulted in a far less free and open internet than the one we have today. Consumers of the internet today have freedom of choice in where they shop, how often they do it, and what they wish to pay, thanks to the many services the open internet allows. Had it been locked down the way Microsoft attempted in 1995, the experience would be akin to shopping in a random strip mall in midwestern America.

The same fate awaits AI if we do not push for decentralization. Other countries may embrace decentralization earlier and end up leading the way in innovation and AI power, while the US is still stuck exchanging emails with a product manager who promises their feature is “coming in the next quarter,” then proceeding to thank them for “touching base.”


