A ‘global call for AI red lines’ sounds the alarm about the lack of international AI policy

On Monday, more than 200 former heads of state, diplomats, Nobel laureates, AI leaders, scientists, and others all agreed on one thing: There should be an international agreement on “red lines” that AI should never cross — for instance, not allowing AI to impersonate a human being or self-replicate. 

They, along with more than 70 organizations that address AI, have all signed the Global Call for AI Red Lines initiative, a call for governments to reach an “international political agreement on ‘red lines’ for AI by the end of 2026.” Signatories include British Canadian computer scientist Geoffrey Hinton, OpenAI cofounder Wojciech Zaremba, Anthropic CISO Jason Clinton, Google DeepMind research scientist Ian Goodfellow, and others. 

“The goal is not to react after a major incident occurs… but to prevent large-scale, potentially irreversible risks before they happen,” Charbel-Raphaël Segerie, executive director of the French Center for AI Safety (CeSIA), said during a Monday briefing with reporters. 

He added, “If nations cannot yet agree on what they want to do with AI, they must at least agree on what AI must never do.” 

The announcement comes ahead of the 80th United Nations General Assembly high-level week in New York, and the initiative was led by CeSIA, the Future Society, and UC Berkeley’s Center for Human-Compatible Artificial Intelligence. 

Nobel Peace Prize laureate Maria Ressa mentioned the initiative during her opening remarks at the assembly when calling for efforts to “end Big Tech impunity through global accountability.” 

Some regional AI red lines already exist. The European Union’s AI Act, for example, bans certain uses of AI deemed “unacceptable” within the EU, and the US and China have agreed that nuclear weapons should stay under human, not AI, control. But there is not yet a global consensus. 

In the long term, more is needed than “voluntary pledges,” Niki Iliadis, director for global governance of AI at The Future Society, told reporters on Monday. Responsible scaling policies made within AI companies “fall short for real enforcement.” Eventually, an independent global institution “with teeth” will be needed to define, monitor, and enforce the red lines, she said. 

“They can comply by not building AGI until they know how to make it safe,” Stuart Russell, a professor of computer science at UC Berkeley and a leading AI researcher, said during the briefing. “Just as nuclear power developers did not build nuclear plants until they had some idea how to stop them from exploding, the AI industry must choose a different technology path, one that builds in safety from the beginning, and we must know that they are doing it.” 

Red lines do not impede economic development or innovation, as some critics of AI regulation argue, Russell said. “You can have AI for economic development without having AGI that we don’t know how to control,” he said. “This supposed dichotomy, if you want medical diagnosis then you have to accept world-destroying AGI — I just think it’s nonsense.”

