Why AI Should Be Regulated: 5 Compelling Reasons

One of the foremost concerns surrounding AI is the ethical questions it raises. As AI systems become more sophisticated, they can make decisions that affect people's lives and even entire societies.

Many nations are attempting to regulate AI. Yet there is an increasingly heated debate on social media and in the press about whether doing so is really necessary.

In today's article, we will delve deeper into the multifaceted reasons why AI should be regulated, exploring the potential risks it poses, the ethical considerations it raises, and the various aspects that effective regulations must encompass.

What Could AI Regulation Look Like?

In 2018, the United Kingdom, alongside other European member states, took a pioneering step by introducing legislation that allows individuals to challenge automated decisions.

This significant development came in the form of the General Data Protection Regulation (GDPR), which represents an initial stride towards creating a legal framework around artificial intelligence (AI).

The consensus is that, for any established standards or guidelines to be implemented and followed effectively, a governing body must be entrusted with oversight.

Presently, stakeholders engaged in the ongoing debates surrounding AI regulations have explored several key considerations.

These include the notion that AI should not be weaponized, emphasizing the importance of preventing the misuse of AI technology for harmful purposes.

Additionally, there is a strong push for the inclusion of an impenetrable "off-switch," providing humans with the ability to deactivate AI systems when necessary.

Manufacturers and developers are also being encouraged to voluntarily adhere to general ethical guidelines mandated by international regulations, promoting responsible and accountable AI practices.
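
To make the "off-switch" idea above concrete, here is a minimal, purely illustrative sketch in Python (all names are hypothetical; a real deactivation mechanism would need tamper-proof, out-of-band controls, not an in-process flag):

```python
import threading


class KillSwitch:
    """A human-controlled off-switch the AI system must consult before acting."""

    def __init__(self):
        self._halted = threading.Event()

    def engage(self):
        # Invoked by a human operator to deactivate the system.
        self._halted.set()

    def is_engaged(self) -> bool:
        return self._halted.is_set()


class GuardedAgent:
    """Wraps an AI system so that every action checks the off-switch first."""

    def __init__(self, kill_switch: KillSwitch):
        self.kill_switch = kill_switch

    def act(self, action: str):
        if self.kill_switch.is_engaged():
            raise RuntimeError("System deactivated by human operator")
        print(f"Executing: {action}")


switch = KillSwitch()
agent = GuardedAgent(switch)
agent.act("recommend a playlist")  # runs normally
switch.engage()                    # the human pulls the off-switch
try:
    agent.act("recommend a playlist")
except RuntimeError as err:
    print(err)  # -> System deactivated by human operator
```

The point of the sketch is that the override sits outside the AI's own decision loop: the agent never gets to decide whether the switch applies to it.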

Another critical aspect under discussion is the question of liability. Should a deployed AI system malfunction or cause harm, the responsibility and liability for such incidents need to be clearly defined.

Determining the accountability for AI-related errors or adverse outcomes is a complex matter that necessitates thorough examination.

5 Reasons Why AI Should Be Regulated

In short, well-designed regulations can establish clear guidelines for developers and organizations, ensuring responsible and ethical AI practices.

Responsibility And Transparency Are Not Guaranteed

Humans cannot be counted on to use AI entirely responsibly

The complexity and opacity of AI systems pose challenges for individuals and organizations in comprehending their decision-making processes.

It is also important to acknowledge that not all applications of AI adhere to desirable principles or values. AI has the potential to acquire immense power, resembling a deity in its capabilities.

Where only self-proclaimed ethical safeguards are relied upon, AI has demonstrated tendencies towards discrimination and abuse. One notable example is the "social credit" system in China, where AI plays a crucial role.

This system evaluates the trustworthiness of individuals, penalizing infractions ranging from minor offenses like jaywalking to spending excessive time playing video games.

Under this system, the consequences may involve the loss of certain rights, such as the ability to book tickets, or restrictions on internet speed.

Moreover, imposing mandatory rules on AI would help prevent the technology from infringing human rights. Regulation has the potential to ensure that AI has a positive, rather than negative, effect on people's lives. The EU has proposed an AI Act intended to address exactly these kinds of issues.

The EU's AI regulation is the first of its kind from a major regulator anywhere, but other jurisdictions, such as China and the UK, are also entering the regulatory race to have a say in shaping the technologies that will govern our lives this century.

AI Might Lead To Autonomous Systems

AI may open the door to decisions made without human involvement

As AI moves beyond narrow, supervised applications such as facial recognition, there is a looming concern about the development of autonomous systems capable of making decisions independently.

Such systems carry the risk of unintended consequences that could harm individuals and the environment. Regulating AI plays a crucial role in identifying and mitigating these risks before such systems are deployed.

In 2015, a collective of robotics and AI researchers, alongside public intellectuals and activists, penned an open letter presented at the International Joint Conference on Artificial Intelligence (IJCAI).

This letter urged the United Nations to prohibit further advancements in weaponized AI that could operate "beyond meaningful human control."

The signatories, who surpassed 20,000 in number, included Stephen Hawking, Elon Musk, Noam Chomsky, and leading researchers in AI and robotics.

The call for action followed a statement made in 2013 by Christof Heyns, the UN special rapporteur on extrajudicial, summary, or arbitrary executions.

Heyns advocated for a moratorium on testing and deploying armed robots, emphasizing the importance of a collective pause when considering the global deployment of machines capable of taking human lives, regardless of the weaponry employed.

Temporarily halting the development of lethal machines until nations worldwide reach a consensus on limitations for autonomous weapons appears to be a sensible approach.

The Treaty on the Non-Proliferation of Nuclear Weapons stands as an example: signed by most nations, it led countries like South Africa, Brazil, and Argentina to abandon their nuclear weapons programs and prompted reductions in the arsenals of states that already possessed them.

Additional relevant treaties encompass the prohibition of biological and chemical weapons, as well as landmine bans.

It is important to acknowledge that existing treaties primarily address items with clear boundaries between what is prohibited and what is not.

However, when it comes to autonomous weapons, delineating such a line becomes exceedingly challenging.

Varying degrees of autonomy are already inherent in the algorithmic software integrated into numerous weapon systems. It is therefore useful to distinguish three levels of autonomy in weapons systems.

The first level, "human-in-the-loop" systems, is already operational: a human retains oversight of the robot's target selection and use of force. Israel's Iron Dome system is an example of this level of autonomy.

Moving forward, the next level involves "human-on-the-loop systems" that can independently select targets and employ force, yet a human has the capability to override the decisions made by the robot.

South Korea has deployed a sentry robot along the demilitarized zone adjacent to North Korea, which aligns with this level of autonomy.

Lastly, there exists a level of fully autonomous weapons that operate without any human input. Exploring the possibility of an international ban on fully autonomous weapons, involving nations like Russia, China, and North Korea, seems worthwhile.
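
To make the distinction between these three levels concrete, here is a purely illustrative Python sketch (the names and logic are hypothetical simplifications, not a description of any real weapon system) showing how the locus of human control shifts at each level:

```python
from enum import Enum


class AutonomyLevel(Enum):
    HUMAN_IN_THE_LOOP = 1  # a human must approve every engagement
    HUMAN_ON_THE_LOOP = 2  # the system acts on its own; a human may veto
    FULLY_AUTONOMOUS = 3   # no human input at all


def engage_target(level: AutonomyLevel,
                  human_approves: bool = False,
                  human_vetoes: bool = False) -> bool:
    """Return whether the system engages, given the oversight level.

    `human_approves` and `human_vetoes` stand in for a real operator's input.
    """
    if level is AutonomyLevel.HUMAN_IN_THE_LOOP:
        return human_approves      # nothing happens without explicit approval
    if level is AutonomyLevel.HUMAN_ON_THE_LOOP:
        return not human_vetoes    # the system decides; a human can override
    return True                    # fully autonomous: the machine decides alone


# The same situation yields three very different chains of accountability.
print(engage_target(AutonomyLevel.HUMAN_IN_THE_LOOP, human_approves=False))  # False
print(engage_target(AutonomyLevel.HUMAN_ON_THE_LOOP, human_vetoes=True))     # False
print(engage_target(AutonomyLevel.FULLY_AUTONOMOUS))                         # True
```

Note that at the third level no parameter representing a human appears at all; that absence is precisely what the ban proposals target.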

Given the complexity and potential risks associated with fully autonomous weapons, it is crucial to engage in discussions to establish consensus among nations regarding their prohibition.

Machines May Cause Job Displacement

Job collapse can be one of the consequences

It is becoming increasingly evident that the cyber revolution, initiated by the widespread use of computers and propelled further by more advanced machine learning, is having profound, transformative effects worldwide. One area of focus is the alarming destruction of jobs, a topic economists return to frequently.

Initially, blue-collar jobs on assembly lines were displaced by robots, followed by white-collar positions as banks reduced their back-office staff. More recently, even professional roles like legal research have been affected.

The Bureau of Labor Statistics has provided compelling evidence that jobs within the service sector, which employs some two-thirds of the US workforce, are being "obliterated by technology."

Between 2000 and 2010 alone, 1.1 million secretarial positions vanished, alongside 500,000 jobs for accounting and auditing clerks. Technological advancements have also led to steep declines in other job categories, such as travel agents and data entry workers.

The legal field has experienced the most recent consequences, as e-discovery technologies have diminished the need for large teams of lawyers and paralegals to sift through millions of documents.

According to Michael Lynch, the founder of an e-discovery company called Autonomy, the shift from human document discovery to e-discovery has the potential to enable a single lawyer to accomplish the workload previously undertaken by 500 individuals.

Throughout human history, job destruction has been a recurring phenomenon as new technologies have replaced old practices.

From weaving looms supplanting hand-weaving to steamboats displacing sailboats and Model T cars disrupting the horse-and-buggy industries, such developments have reshaped labor markets.

However, the current concern lies in the limited creation of new jobs by these technological advancements. A single piece of software, crafted by a handful of programmers, can now execute tasks previously undertaken by hundreds of thousands of individuals.

Consequently, there are growing fears of job collapse and an impending economic Armageddon gripping not only the United States but also the world.

Furthermore, the consequences of joblessness and widening income disparities can have significant societal ramifications.

Persistent high levels of unemployment in Europe, for example, have contributed to social unrest, including increased violence, political fragmentation, polarization, heightened anti-immigrant sentiments, xenophobia, and anti-Semitism.

However, some economists maintain a less worrisome perspective, asserting that new jobs will emerge. They argue that people will develop new preferences for products and services that even intelligent computers cannot provide or produce.

Examples cited include an increased demand for skilled chefs, organic farmers, and personal trainers. Additionally, these economists highlight the relatively low unemployment rate in the United States.

In response, the concerned group counters by pointing out that the new jobs often offer lower pay, fewer benefits, and reduced job security.

Untested solutions are also under consideration, such as a universal basic income, shorter workweeks or six-hour workdays, and taxes on overtime to distribute the remaining work more equitably.

However, any recommendations made to Congress and the White House must navigate major challenges arising from deeply entrenched beliefs and powerful vested interests.

AI Regulation Improves User Privacy

Data protection calls for AI regulation

Regarding privacy concerns, Italy was one of the first Western countries to block the advanced AI tool ChatGPT. The ChatGPT ban in Italy has sparked the same concerns among other countries' data-protection authorities.

Iu Ayala Portella, the CEO of Gradient Insight and a tech analyst, emphasized the importance of AI regulation as a data-protection measure against the misuse of personal data (e.g., credit card numbers, tax codes).

By implementing such rules, we can prevent AI systems from engaging in discriminatory practices, violating privacy rights, and causing harm to individuals and the environment.

The introduction of regulations for AI is necessary to establish stringent boundaries on high-risk data collection, processing, and utilization.

This is particularly important for preventing the creation of profit streams that infringe upon users' privacy rights and intellectual property ownership.
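
As a purely illustrative sketch of the kind of data-protection boundary such rules might demand, here is a minimal Python example of redacting personal data before it ever reaches an AI system (the patterns and names are hypothetical; production systems would rely on vetted PII-detection tooling, not two hand-written regexes):

```python
import re

# Hypothetical patterns for the kinds of personal data mentioned above.
PII_PATTERNS = {
    "CREDIT_CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}


def redact_pii(text: str) -> str:
    """Replace recognizable personal data with placeholders
    before the text is stored, logged, or sent to a model."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text


prompt = "My card is 4111 1111 1111 1111, reach me at jane@example.com"
print(redact_pii(prompt))
# -> My card is [CREDIT_CARD], reach me at [EMAIL]
```

Regulation cannot write this code for companies, but it can make failing to do something like it expensive.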

Machines Devalue Innovation

Creativity will not have room to grow

Contrary to the perception of regulation as an impediment to innovation, it can actually serve as a catalyst for its promotion.

When clear guidelines and standards are established for the development and implementation of AI, regulation can create an equitable environment for companies and researchers.

This, in turn, cultivates innovation by ensuring that all participants operate within a shared set of rules.

Regulation plays a crucial role in fostering innovation and competition by establishing a level playing field for businesses.

It prevents dominant companies from monopolizing the market and stifling innovation, instead promoting fair competition that benefits consumers.

By ensuring fair competition, regulation encourages companies to innovate and offer improved products and services, ultimately driving progress and benefiting society as a whole.

Is It Too Early To Regulate AI?

The discussion surrounding AI regulation is often met with skepticism as many argue that it is premature to regulate an industry that does not yet have specific requirements in need of regulation. While remarkable advancements have emerged from the world of AI algorithms, it is important to acknowledge that the field is still in its early stages of development.

Concerns arise that implementing regulations could stifle innovation within a rapidly expanding industry. Alex Loizou, the co-founder of Trouva, advocates for a comprehensive understanding of AI's full potential before rushing into regulation.

A study conducted by Stanford University supports the notion that attempts to regulate AI in a generalized manner would be misguided.

The lack of a clear and universally applicable definition of AI, coupled with the varying risks and considerations across different domains, further highlights the complexity involved in establishing regulations for the field.

Therefore, the consensus is that a nuanced approach is required, taking into account the unique characteristics and challenges of each AI application, before considering any comprehensive regulatory framework.

AI Regulation: Benefits Prevail Over Losses

We recognize the potential adverse impact on business interests that may arise from regulating AI. There is a concern that it could impede technological advancement and hinder competition.

However, drawing inspiration from the successful implementation of GDPR in the EU, governments have an opportunity to collaboratively establish AI-focused regulations that can yield positive long-term outcomes.

We firmly advocate for meaningful dialogues between governments to establish a common international framework for AI regulation.

It is important, though, for governments to keep AI developers' authority in check without assuming absolute powers themselves, which could hinder the long-term growth of the AI market.

As we discuss why AI should be regulated, it is essential to acknowledge the intricacies involved in striking the right balance.

Overregulation runs the risk of stifling innovation and impeding progress, while underregulation may lead to the proliferation of AI systems with significant negative consequences.

Achieving a harmonious coexistence between AI and humanity requires a nuanced approach that actively involves policymakers, researchers, industry experts, and the wider public in shaping the regulatory framework.