Technologies for International Law & International Law for Technologies

Dr Berenice Boutin | b.boutin@asser.nl

Advanced technologies such as artificial intelligence (AI), and their societal and policy implications, are at the forefront of current public debates. The topic is also on the agenda of international organisations and, in September 2018, the United Nations Secretary-General launched a ‘Strategy on New Technologies’ outlining ‘how the United Nations system will support the use of these technologies to accelerate the achievement of the 2030 Sustainable Development Agenda and to facilitate their alignment with the values enshrined in the UN Charter, the Universal Declaration of Human Rights, and the norms and standards of international law.’

New technologies offer unprecedented opportunities to foster global positive change, but also bring critical challenges for governance. This post explores these two aspects from the perspective of international law. It first presents some of the ways in which new technologies could be used to facilitate or improve international legal practice (§1), before discussing the role that international legal norms and institutions could play to address the regulatory and accountability challenges posed by AI and other technologies (§2).

  1. What advanced technologies can do for international law

AI is one of the most promising, complex, and fast-advancing current technological developments. The term AI refers to computer systems able to perform problem-solving, predictive analysis, and other cognitive tasks. In recent years, advanced machine-learning algorithms able to learn on their own on the basis of large data sets have raised legitimate concerns, but have also brought about novel opportunities to enhance decision-making processes, to inform policy choices, and to advance common goals. AI is fuelled by related developments, including the increasing availability of data in the digitised society and advances in robotics. Another relevant technology for international law is blockchain. Blockchain technologies are decentralised cryptographic systems (notably used in the context of cryptocurrencies) which offer new possibilities to securely record and verify information.

The technical capabilities of AI, blockchain, and other emerging technologies have the potential to provide new tools for implementing and giving effect to norms of international law.

New technologies could first be used to monitor compliance with and prevent violations of international law. Advanced computer and robotic systems can access, capture, collect, and process data at a scale beyond what humans alone can achieve. They can be used to document and analyse data in order to identify factual patterns amounting to (risks of) violations of international law. For instance, the PAWS (Protection Assistant for Wildlife Security) project has developed and deployed a machine-learning algorithm that predicts where poaching attacks are likely to occur; the system has proven particularly useful in large, remote wilderness areas. Another example is Sentry, an AI-based technology developed by Hala Systems that predicts air strikes in the Syrian conflict so that civilians can seek shelter. A challenge proposed by the Asser Institute for a hackathon organised in The Hague in November 2018 will explore the potential of using AI and blockchain to address the issue of land grabbing.
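To make this concrete, the sketch below shows in simplified form how a supervised machine-learning model of the kind described above might be trained on historical incident data to rank locations by risk. The features, data, and model choice are hypothetical placeholders chosen for illustration; this is not the actual PAWS or Sentry implementation.

```python
# Minimal sketch of risk prediction from historical incident data.
# Features and labels are synthetic placeholders; real systems such as
# PAWS rely on far richer data (terrain, patrol routes, past incidents).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Each row describes a patrol cell: distance to road (km),
# distance to ranger post (km), animal density index (0-1).
X = rng.random((500, 3)) * [20.0, 50.0, 1.0]
# Synthetic label: incidents are more likely far from ranger posts
# and where animal density is high.
y = ((X[:, 1] > 25) & (X[:, 2] > 0.5)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Rank unseen cells by predicted incident risk to prioritise patrols.
risk = model.predict_proba(X_test)[:, 1]
print("Highest-risk cells:", np.argsort(risk)[::-1][:5])
```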

Second, advanced technologies could be used to support investigations into violations of international law. In particular, combining detection and blockchain technologies could facilitate the production of evidence admissible in court. Blockchain technologies offer new ways to record, authenticate, and securely preserve and transfer information. In the context of armed conflict, blockchain can be used to verify and share evidence so as to enable the prosecution of international crimes. For example, in March 2018, the Global Legal Action Network (GLAN) announced the launch of a project exploring opportunities to use blockchain to securely gather evidence in relation to the conflict in Yemen.
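The underlying mechanism can be illustrated with a minimal hash chain: each evidence record commits both to the evidence itself and to the previous record, so that any later alteration of an item, or of its order, becomes detectable. This is a simplified sketch of the general principle under assumed data structures, not the system used by GLAN or any specific project.

```python
# Minimal sketch of an evidence hash chain. Each record stores the
# SHA-256 hash of the evidence and of the previous record, so tampering
# with any stored item invalidates all subsequent hashes.
import hashlib
import json
import time

def add_record(chain, evidence_bytes, metadata):
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    record = {
        "prev_hash": prev_hash,
        "evidence_sha256": hashlib.sha256(evidence_bytes).hexdigest(),
        "metadata": metadata,
        "timestamp": time.time(),
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["hash"] = hashlib.sha256(payload).hexdigest()
    chain.append(record)
    return record

def verify(chain):
    prev_hash = "0" * 64
    for record in chain:
        body = {k: v for k, v in record.items() if k != "hash"}
        payload = json.dumps(body, sort_keys=True).encode()
        if record["prev_hash"] != prev_hash:
            return False
        if hashlib.sha256(payload).hexdigest() != record["hash"]:
            return False
        prev_hash = record["hash"]
    return True

chain = []
add_record(chain, b"<video file bytes>", {"source": "field report A"})
add_record(chain, b"<photo file bytes>", {"source": "field report B"})
print(verify(chain))                          # True
chain[0]["metadata"]["source"] = "altered"
print(verify(chain))                          # False: tampering detected
```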

Third, new technologies could be used to inform the formulation of global policies. It can indeed be envisaged that advanced AI able to analyse information on a global scale could help address complex issues such as climate change, sustainable development, migration, armed conflict, and terrorism. By compiling and analysing large amounts of intricate data, AI technology can turn unintelligible information into tangible insights. It can identify unseen patterns and connections, and possibly suggest new ways to tackle global issues. The predictive capabilities of AI could also be used to identify emerging challenges and anticipate risks at both small and large scales. Human interpretation and judgement remain essential, but using AI to support law- and policy-making could help uncover new solutions and improve global governance.

  2. What international law can do for advanced technologies

In order to realise the potential benefits of new technologies outlined above, it is important to confront the critical challenges they raise with regard to transparency, privacy, equality, and accountability. Transparency is one of the most crucial issues of current AI. Because of their complex inner workings and autonomous capabilities, machine-learning algorithms can reach results that humans are not able to explain. Experts agree that it is essential to confront the issue of the explainability and intelligibility of algorithms' reasoning processes. The opacity of new technologies also relates to the fact that many are developed by commercial corporations on the basis of privately-owned data. It has been suggested that public ownership and control of certain algorithms and data should be ensured in order to improve transparency.
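Explainability is an active research area, and one widely used, model-agnostic technique can be sketched briefly: permutation importance measures how much a trained model's accuracy degrades when each input feature is randomly shuffled, thereby indicating which features actually drive its predictions. The sketch below uses synthetic data and is only one of many possible auditing techniques, not a complete answer to the opacity problem.

```python
# Minimal sketch of a model-agnostic explainability technique.
# Permutation importance shuffles each feature in turn and measures the
# resulting drop in accuracy. Data here is synthetic; a real audit would
# use domain-relevant features.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(1)
X = rng.normal(size=(400, 4))
y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)  # only features 0 and 2 matter

model = GradientBoostingClassifier(random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature {i}: importance {score:.3f}")
```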

The second main challenge of new technologies concerns privacy, non-discrimination, and, more generally, respect for fundamental values and human rights. The privacy implications of data-driven technologies are well known, and initiatives such as the EU General Data Protection Regulation (GDPR) attempt to address them. The issue of bias in algorithmic decision-making (which stems from bias in humans and in data sets) has more recently been identified as a pressing challenge to be addressed in order to develop globally beneficial AI.

Another central issue of new technologies is accountability. Technologies that are able to take decisions and act autonomously are disruptive to concepts of responsibility, and it remains unsettled how to allocate responsibility for harm caused by technology operating with limited human involvement.

International law and international institutions can help address these challenges in a number of ways, by (1) coordinating the development of private standards, (2) adapting existing norms and concepts and filling regulatory gaps, and (3) providing frameworks for responsibility.

The current regulatory landscape of AI is characterised by a proliferation of private standards and guidelines developed by industry (e.g. Google and Microsoft). Private regulation is a useful step but remains voluntary and non-binding. Further, private standards are not harmonised and are driven by differing interests and values. In this context, the role of international law and institutions can be to coordinate the development of private regulation, possibly working towards internationally-agreed guidelines that would also ensure that fundamental values are integrated into the design and development of AI.

New technologies do not necessarily call for new rules, and existing norms and concepts are certainly useful and applicable. Legal notions are flexible and abstract enough to adapt to new scenarios. Nonetheless, some technological developments relating to autonomy are so disruptive that existing legal frameworks have difficulty grasping them. Where the need to develop new rules is identified, it will be important to adopt multilateral and multi-stakeholder approaches that involve all relevant public and private actors.

Finally, international law could help in the formulation of frameworks to allocate responsibility amongst a multiplicity of actors, each involved to different degrees in the design or use of technologies. Responsibility could, alternatively or cumulatively, be assigned to individual or collective, natural or fictional, private or public actors (operators, developers, deciders, corporations, states, etc.). In essence, the allocation of responsibility should reflect genuine connections between the acts and omissions of an actor and subsequent damage, while maximising opportunities for remedies and avoiding an excessive diffusion of responsibility amongst the plurality of actors. But with complex, decentralised, autonomous technologies, ascertaining human or institutional control over certain outcomes becomes increasingly difficult. In particular, the non-explainability of AI affects the grounds for holding human agents responsible: direct human operators can have limited control over and understanding of the technology they use, while designers and developers have only a far-removed connection with possible subsequent damage. At the collective level, models of negligence-based or strict liability could be envisaged. Under a negligence model, corporations and states would be responsible if they failed to diligently integrate ethical values and international norms when designing, developing, and using new technologies. Under a strict liability model, actors can be held responsible even if some diligence was exercised, on the basis of the inherent high risk of certain activities.

Apart from issues of individual and collective responsibility, the question of whether advanced AI systems with a high level of autonomy could and should be granted legal personality is heavily debated. Some consider that it would allow existing actors to evade responsibility, while others see it as an inevitable development that could contribute to better accountability. In this regard, it is important to note that holding one actor responsible does not exclude the responsibility of other actors, and that the responsibility of individuals, collectives, and AI could be cumulative. An open and informed debate is necessary to advance options and models of shared responsibility for advanced technologies.

In conclusion, addressing responsibility and other regulatory and governance challenges is essential to advance the promises of new technologies, and international law can provide a platform for this endeavour.

This blog post is based on a presentation given by the author at an event on ‘Artificial Intelligence & Blockchain for Good: Using Technology to Foster Trust’, organised on 5 July 2018 at the Asser Institute in cooperation with UNICRI’s Centre for Artificial Intelligence and Robotics.

Dr Berenice Boutin is a researcher in international law at the Asser Institute (The Hague). Her work focuses on responsibility, security, and new technologies. Berenice is currently leading a research project entitled ‘Conceptual and Policy Implications of Increasingly Autonomous Military Technologies for State Responsibility Under International Law’. She holds a PhD in international law from the University of Amsterdam (2015).
