Innovation or regulation? Nations take contrasting paths in AI race.

Tech and AI companies are fond of grumbling about regulation, which supposedly stifles innovation. They seemingly want to let the chips fall where they may. However, both guidelines and guardrails need to be provided, most experts agree.

Does the development of artificial intelligence (AI) tools and models need to be reined in? Most countries and regions seem to think so – the rise of machine learning has been stunning, and its potential to disrupt life as we know it is significant.

However, while China, where the government always has the first and final say, already has tight AI regulations in place, debates about what precisely needs to be controlled and how risky AI actually is are only just beginning elsewhere in the world.

The three key players – the United States, the European Union, and China – are taking markedly different approaches.

While the Europeans are highly precautionary and, with the bloc’s Digital Markets Act and the AI Act, are now sometimes called global leaders in tech regulation, the US – where most of the new AI companies are based – is still taking a wait-and-see stance, mostly keeping its hands off the fast-expanding industry.

Which way is best? It’s probably too early to say, because AI regulation, just like the new generative AI ecosystem, needs to be created almost from scratch. To find answers about this grand creative and regulatory experiment, Cybernews spoke to numerous tech regulation experts.

Perfection is not the enemy of progress

For those who support tighter regulation, the EU is an example of how other countries and regions should act.

Sure, the AI Act is not yet finalized and might change, but the current draft already envisions banning the use of software that creates an unacceptable risk. This covers predictive policing, emotion recognition, and real-time facial recognition.

A whole series of other requirements could be implemented too – for instance, for AI systems guiding decisions in social welfare, criminal justice, or hiring.

The EU wants developers to show that their product is safe, effective, privacy-oriented, transparent, and non-discriminatory. Companies deemed to violate the bloc’s rules could be fined up to 7% of their annual global turnover.

“The EU’s approach is at least attempting to provide guidelines and guardrails around the issue, rather than burying the proverbial head in the sand and letting the capitalism chips fall where they may,” John Isaza, a data privacy and cybersecurity expert at Rimon Law, a firm advising tech companies, told Cybernews.

“Although the EU approach is arguably aspirational at the moment, it is a solid starting point. Some of the criticism is that the regulators simply do not understand how technology works. However, we cannot let perfection be the enemy of progress.”

In the digital age, it is, of course, hard to pinpoint the best approach for regulating AI development because technology is a moving target. But Ani Chaudhuri, the chief executive of Dasera, a data security platform, agrees with Isaza.

“The EU’s caution is commendable in that it prioritizes the rights of its citizens and seeks to pre-empt potential misuse,” Chaudhuri told Cybernews.

However, Gaurav Kapoor, co-CEO and co-founder of MetricStream, a company specializing in integrated risk management and governance, risk, and compliance, thinks the EU’s approach might be too aggressive.

“It’s clear that the EU has taken the most aggressive step toward regulating AI by taking a top-down approach. But driving regulations purely from a government oversight standpoint rarely sets the right balance and, in some cases, could create an issue of overregulation,” said Kapoor.

According to him, business leaders and regulators need to collectively agree on what’s most pertinent to protecting businesses and consumers. In other words, collaboration between regulatory bodies and the private sector would allow the industry to thrive.

Voluntary commitments aren’t enough

That collaborative balance seems to be the summit Joe Biden’s America is trying to climb at the moment. It’s not surprising – the US perspective has always been liberal and market-oriented, and the powers-that-be usually believe in the power of innovation with minimal intervention.

OpenAI, the creator of the viral ChatGPT bot, and other firms have long been publicly calling for voluntary commitments to responsible development of their products – not laws. The White House has heard their pleas.

In July 2023, Biden’s administration announced that seven leading AI companies in the US – Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI – had formally agreed to voluntary safeguards on the technology’s development.

“Voluntary commitments – underscoring safety, security, and trust – mark a critical step toward developing responsible AI,” the White House stressed in the press release.

This is a great idea, Isaza said – it’s not unlike existing international standards such as those promulgated by the International Organization for Standardization. But it should still “only be a good complement to regulations and could provide the details that regulators would not be able to or attempt to tackle.”

Michael DeSanti, president and chief technology officer at LightPoint Financial Technology, agrees – and says that voluntary commitments need to be combined with laws, because big businesses have already proven that they’re often untrustworthy.

“Profit motivation is a powerful force, and history is littered with examples of companies that have violated the trust of guidance bodies in order to make a profit. Some companies will inevitably break their commitments,” DeSanti told Cybernews.

“For example, look at the pharmaceutical industry, where Purdue Pharma manipulated the US Food and Drug Administration in order to sell more OxyContin.”

The US has no broad federal AI laws – nor significant data-protection rules – although, in October 2022, the White House Office of Science and Technology Policy did release a Blueprint for an AI Bill of Rights.

This is a white paper describing five principles intended to guide the use of AI. For example, automated systems should be safe and effective, non-discriminatory, protective of people’s privacy, and transparent.

People should also be notified when a system makes a decision for or about them; be told how the system operates; and be able to opt out or have a human intervene.

Bad actors lurking

Promising but, again, non-binding. Besides, many experts find it suspicious that tech companies contemplate the technology’s potential unintended consequences yet regularly complain that regulation would stifle innovation in the field.

They feel this kind of concern from tech leaders – eloquently dubbed “visionary oligarchs” by US economists Daron Acemoglu and Simon Johnson in their new book Power and Progress: Our Thousand-Year Struggle Over Technology and Prosperity – is exaggerated.

“I can tell you that in my space of information governance and privacy, the arrival of regulations was in many respects welcomed and a blessing. Entrepreneurs are better served if they know or understand from the get-go the obstacles and deal breakers in the technology they may be proposing,” said Isaza.

Ashu Dubey, co-founder and chief executive of Gleen, a generative AI company, doesn’t believe sensible regulation would kill off innovation, though he agrees the industry should have some guidelines around ethical use and deployment.

“I think voluntary commitments are good when adhered to, but the reality is that a voluntary commitment can be walked back at any time,” said Dubey.

Alon Yamin, chief executive of Copyleaks, a firm that detects plagiarized and AI-generated content, provides a timely reminder: guardrails can be established without stymying continued innovation.

“AI is so new, and we’re all attempting to figure it out in real-time. As a society, to inform the best path forward, it’s important for the industry to be open and transparent, working closely with the government to educate and capitalize as best as possible on AI’s positive implications while curbing its negative and disruptive consequences,” Yamin told Cybernews.

But the genie is already out of the bottle, believes Richard Gardner, chief executive of Modulus, a tech company. These are early days, sure, but some of the technologies coming out of the AI movement are already rife with issues, including hallucinations.

“That will create a necessity for governments to step in and offer guardrails, though in many countries, it may take a headlines-grabbing event to be the impetus – not unlike the FTX disaster that has renewed calls for regulation in digital assets,” said Gardner.

Finally, Gardner stresses, if we agree that China is actively working to use AI to the benefit of its government, we also have to admit that such foreign actors will not heed any voluntary commitments, international or not.

“This means that bad actors, including cyberterrorists and hackers, will continue to develop AI for nefarious means, regardless of national law or international agreements. The only way to stop them is to develop a better, more sophisticated counter-strategy at the national, or perhaps international, level,” Gardner told Cybernews.

Source: Cybernews
