The writer is the founder of Sifted, an FT-backed site about European startups.
The leaders of the G7 countries ticked off many global concerns over their Nomi oysters in Hiroshima last week: the war in Ukraine, economic resilience, clean energy and food security among others. But they also threw one more item into their goody bag of good intentions: the promotion of inclusive and trustworthy artificial intelligence.
While recognizing AI's innovative potential, the leaders worried about the risks it poses to public safety and human rights. Launching the Hiroshima AI process, the G7 commissioned a working group to assess the impact of generative AI models, such as ChatGPT, and to prime the leaders' discussions later this year.
The initial challenges will be how best to define AI, categorize its risks and frame an appropriate response. Should regulation be left to existing national agencies? Or is the technology so consequential that it demands new international institutions? Do we need a modern equivalent of the International Atomic Energy Agency, founded in 1957 to promote the peaceful development of nuclear technology and prevent its military use?
One can debate how effectively that UN body has fulfilled its mission. Besides, nuclear technology involves radioactive materials and massive infrastructure that are physically easy to spot. AI, by contrast, is cheap, invisible, pervasive and has countless use cases. At a minimum, it presents a four-dimensional challenge that must be addressed in more flexible ways.
The first dimension is discrimination. Machine learning systems are designed to discriminate: to spot patterns and outliers in data. That is good for spotting cancerous cells in radiology scans. But it is bad if black-box systems trained on flawed data sets are used to hire and fire employees or approve bank loans. Bias in, bias out, as they say. Banning these systems in high-risk areas, as the EU's AI Act proposes, is one strict, precautionary approach. Creating independent, expert auditors might be a more adaptable way forward.
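To make "bias in, bias out" concrete, here is a minimal sketch in Python, assuming scikit-learn and wholly invented data (the one-point group penalty and all variable names are hypothetical, not drawn from any real hiring system): a classifier trained on historically skewed decisions simply learns to reproduce the skew.

```python
# Toy illustration of "bias in, bias out" (hypothetical data, not a real system).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Features: a skill score (what decisions SHOULD depend on)
# and a group label (what they should NOT depend on).
skill = rng.normal(0, 1, n)
group = rng.integers(0, 2, n)

# Biased historical labels: past decision-makers penalized group 1
# by a full point of "skill", so equally skilled candidates fared worse.
hired = (skill - 1.0 * group + rng.normal(0, 0.5, n)) > 0

model = LogisticRegression().fit(np.column_stack([skill, group]), hired)

# The model assigns a large negative weight to group membership:
# it has faithfully learned the historical prejudice.
print("weights (skill, group):", model.coef_[0])

# Equally skilled candidates (skill = 0) get very different scores by group.
for g in (0, 1):
    p = model.predict_proba([[0.0, g]])[0, 1]
    print(f"P(hired | skill=0, group={g}) = {p:.2f}")
```

Nothing in the training step flags the group weight as illegitimate; that is precisely why independent audits of such systems matter.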
Second, disinformation. As the academic Gary Marcus warned the US Congress last week, generative AI could endanger democracy itself. Such models can generate plausible lies and counterfeit humans at lightning speed and on an industrial scale.
The onus should be on the technology companies themselves to watermark content and minimize disinformation, much as they suppress email spam. Failure to do so will only amplify calls for more drastic intervention. A precedent may have been set in China, where draft regulation places responsibility for the misuse of AI models on the producer rather than the user.
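The spam comparison points to machinery the platforms already run at scale. Purely as an illustration (the four-message corpus below is invented, and production filters are far more elaborate), a minimal sketch of such a classifier, again assuming scikit-learn:

```python
# Minimal sketch of the email-spam analogy (invented toy data).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Tiny made-up corpus standing in for real labeled mail.
emails = [
    "win a free prize now", "cheap pills limited offer",
    "meeting moved to 3pm", "quarterly report attached",
]
labels = ["spam", "spam", "ham", "ham"]

# Bag-of-words features feeding a naive Bayes classifier.
spam_filter = make_pipeline(CountVectorizer(), MultinomialNB())
spam_filter.fit(emails, labels)

print(spam_filter.predict(["free offer win now"]))   # ['spam']
print(spam_filter.predict(["see attached report"]))  # ['ham']
```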
Third, dislocation. No one can accurately predict AI's overall impact on the economy. But it seems all but certain that it will lead to the "deprofessionalization" of swaths of white-collar jobs, as the entrepreneur Vivienne Ming told the FT Weekend festival in DC.
Computer programmers have broadly embraced generative AI as a productivity-enhancing tool. By contrast, Hollywood screenwriters may be among the first of many trades to fear that their core skills will be automated. This messy story defies easy answers. Countries will have to adapt to the social disruption in their own ways.
Fourth, devastation. Incorporating AI into lethal autonomous weapons systems (LAWS), or killer robots, is a terrifying prospect. The principle that humans should remain in the decision-making loop can only be established and enforced through international treaties. The same applies to the debate around artificial general intelligence, the (possibly fictional) day when AI surpasses human intelligence in every domain. Some campaigners dismiss that scenario as a distracting fantasy. But it is worth heeding the experts who warn of potential dangers and call for international research cooperation.
Some would argue that trying to regulate AI is as futile as praying for the sun not to set: regulation evolves only incrementally, while AI advances exponentially. But Marcus says he was heartened by the bipartisan consensus for action in the US Congress. Fearful, perhaps, that EU regulators could set global norms for AI, as they did five years ago with data protection, US tech companies are also publicly backing regulation.
G7 leaders should encourage a competition of good ideas. They now need to trigger a regulatory race to the top, rather than presiding over a dangerous slide to the bottom.