AI Governance within the Framework of International Law: Challenges and Future Solutions

Khaled Salous

Academic researcher in international law and international relations.

The world is witnessing an unprecedented technological transformation driven by artificial intelligence (AI), which has become one of the central engines of global change. This rapid development raises a range of complex legal and ethical challenges, particularly because of the transnational nature of these technologies. Among the most pressing concerns are privacy protection, ensuring fairness and justice, strengthening cybersecurity, and clarifying legal liability. This article aims to analyze the current international legal framework governing AI, examine the key legal challenges associated with its use, and propose practical solutions for developing effective global governance mechanisms that balance technological innovation with the protection of fundamental human rights.

AI technologies have emerged as a major driver of economic growth and sustainable development, influencing nearly every aspect of daily life. Yet this digital revolution also raises legitimate concerns about the capacity of international law to keep pace with the accelerating evolution of these technologies. AI operates beyond national borders, rendering domestic regulatory frameworks insufficient to address the cross-border implications of its deployment. This reality underscores the need for a coherent and globally applicable governance system capable of ensuring responsible innovation while safeguarding fundamental rights and promoting international stability. The core question addressed in this article is the following: How can international law keep pace with the rapid development of artificial intelligence and establish an effective governance framework that balances innovation with the protection of rights? The article proceeds from the assumption that the current international legal framework is inadequate to address the challenges posed by AI technologies and argues that the development of clear international governance mechanisms is essential for achieving the necessary balance between technological progress and the protection of human rights.

Artificial intelligence can be defined as the capacity of technological systems to perform tasks that typically require human intelligence, including learning, reasoning, and decision-making. Prominent applications include autonomous vehicles, surveillance systems, predictive analytics, and intelligent robotics. In this context, AI governance refers to the legal, ethical, and institutional frameworks designed to regulate the development and deployment of AI systems. Effective governance seeks to ensure respect for human dignity, protect individual rights, and promote accountability, transparency, and fairness.

International human rights law provides a foundational framework for protecting individual rights affected by AI. The United Nations has long established principles that remain highly relevant in the digital era. The Universal Declaration of Human Rights (1948) and the International Covenants of 1966 constitute a legal basis for safeguarding the right to privacy, non-discrimination, and freedom of expression in the context of AI use. In recent years, several important international regulatory initiatives have emerged, including the 2021 Recommendation on the Ethics of Artificial Intelligence adopted by the United Nations Educational, Scientific and Cultural Organization (UNESCO), the draft Artificial Intelligence Act proposed by the European Commission, and the OECD Principles on Artificial Intelligence (2019). Together, these instruments provide an important initial foundation for building a more comprehensive and enforceable international legal framework for AI governance.

The challenges surrounding AI governance are both diverse and complex. One of the most pressing issues is privacy protection. AI systems can collect, process, and analyze personal data on an unprecedented scale, often without clear international safeguards or effective oversight. Another significant challenge concerns legal liability. Determining accountability for harms caused by autonomous systems remains difficult, as responsibility may involve developers, operators, or users—or a combination of them. Questions of fairness and non-discrimination also arise, as biased algorithms can reinforce existing social, racial, or economic inequalities, undermining justice and equality before the law. Moreover, the transnational operation of AI raises serious concerns regarding state sovereignty, cross-border data governance, and the extraterritorial application of laws. These issues demonstrate the inadequacy of fragmented national legal frameworks and highlight the need for coordinated global mechanisms of governance.

Addressing these challenges requires coordinated and practical international solutions. A critical step is the adoption of a binding international treaty that defines clear legal obligations, accountability mechanisms, and oversight structures. Such a treaty would provide a solid foundation for consistent regulation across jurisdictions. Complementing this, the establishment of a specialized international body under the auspices of the United Nations could enhance compliance monitoring and reporting, ensuring that states and private actors adhere to agreed-upon norms. Strengthening international cooperation between states, academic institutions, and the private sector is also essential to develop shared standards, exchange expertise, and coordinate enforcement efforts. Additionally, clear ethical standards must require governments and companies to respect principles of transparency, fairness, and human rights. Robust legal and institutional accountability mechanisms are equally necessary to ensure adherence to these standards. Finally, promoting education and public awareness about the legal and technological dimensions of AI will help build a more informed and resilient global society capable of engaging with these challenges.

Artificial intelligence represents both a transformative opportunity and an unprecedented challenge for the international legal order. Harnessing its benefits while mitigating its risks requires a comprehensive and cooperative international governance approach. Adopting a binding international treaty, creating an oversight body, and developing unified ethical and legal standards are fundamental steps toward building a global legal architecture that balances innovation with human rights protection. The future legal landscape of AI will not be shaped by fragmented national efforts but by genuine international collaboration that reflects a shared responsibility for safeguarding humanity in the digital age.

References

  1. Council of Europe. (2018). Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data (Convention 108+). Retrieved from https://www.coe.int.
  2. European Union. (2019). Ethics guidelines for trustworthy AI. Retrieved from https://digital-strategy.ec.europa.eu.
  3. Organisation for Economic Co-operation and Development (OECD). (2019). OECD recommendation on artificial intelligence. Retrieved from https://oecd.ai.
  4. United Nations. (2020). Roadmap for digital cooperation. Retrieved from https://www.un.org.
  5. United Nations Educational, Scientific and Cultural Organization (UNESCO). (2021). Recommendation on the ethics of artificial intelligence. Retrieved from https://unesdoc.unesco.org.


Reuse permitted under CC BY 4.0. Please credit the author and link to the original page.