Michaela Jamelska

Protecting Human Rights in the Era of AI: What Does It Even Mean?




There are many articles nowadays about AI threatening human rights. We know that AI technology can bring positive improvements or negative consequences in many areas of life, depending on how we use it. This well-rehearsed statement not only keeps us diplomatic but also gives us an alibi, since we do not have enough data to reach a single conclusion; most likely, even with more data, there will never be one. This article explores international efforts to place human rights at the center of technological development, but it also challenges common views on AI and human rights. It is written to promote discussion and critical thinking around this topic.


What we know is that international human rights law sets out globally accepted legal principles that uphold the dignity of all people. Because it applies globally, it requires international cooperation when adopting new technologies. Governments around the world are producing hundred-page reports on AI and accountability, the ethical use of AI, and AI and human rights.


Let’s take a look at a few country examples.

The first is Australia, which was one of the first countries to connect AI and human rights. Australia's overarching approach to new technologies, and especially the role of the Australian Government's Digital Australia Strategy, is to keep human rights at the center and to develop a strategy that protects human rights in an era of fast-emerging technologies. "The Commission recommends that the Digital Australia Strategy promote responsible innovation and human rights through measures including regulation, investment, and education. This will help foster a firm foundation of public trust in new and emerging technologies that are used in Australia." Effective regulation, education, training, funding for responsible innovation, and community action plans for the adoption of new technologies are just a few of the measures the report suggests and for which funding is allocated. (Human Rights and Technology Final Report)


Looking at Germany, the government strategy's main goal is to promote technological innovation for the good of society. The German national strategy also acknowledges the tension that can arise between effective rights protections and the pursuit of investment and growth, and it sets out a range of measures to guide the regulatory reform needed to protect rights while also funding the development of innovative applications that respect them.

We also know that technological development is fueled largely by venture capital. Some reports claim that the world's 10 largest venture capital firms have together invested over $150 billion in technology startups. Amnesty International recently analyzed 50 VC firms and found that they do not have adequate human rights due diligence policies and processes in place.


The UK also recently published a ten-year plan to make Britain a global AI superpower, in which the Government states that its aim is to build the most trusted and pro-innovation system for AI governance in the world and to work with its partners around the world. The Government seeks international agreements and standards that deliver prosperity and security and promote innovation that harnesses the benefits of AI while embedding values such as fairness, openness, liberty, security, democracy, the rule of law, and respect for human rights. The UK places importance on international collaboration with key actors and partners on the global stage to promote the responsible development and deployment of AI. The country will also act to protect against efforts to adopt and apply these technologies in the service of authoritarianism and repression. (National AI Strategy)

Recently, the United Nations human rights chief publicly stated that artificial intelligence systems pose a "negative, even catastrophic" risk. The UN report warned of AI's use as a forecasting and profiling tool, saying the technology could affect human rights such as the "rights to privacy, to a fair trial, to freedom from arbitrary arrest and detention and the right to life."


It is clear that the incentives and efforts to build a framework for the responsible adoption of AI are now captured in government reports. I remember, a few years back, that when I mentioned technology and human rights in one sentence, people looked at me as if I had claimed dinosaurs still exist. Well, better late than never. However, the question, or challenge, that remains is our actual understanding of human rights in the first place. We often refer to human rights while barely knowing our own 10 fundamental human rights, and we like to throw the term "human rights" around wherever it suits or serves us. (How dare you... it is my right!) We also demand more concrete protections from AI, since several human rights violations have been associated with its increasing use (which is great; it is better to be safe than sorry). That AI needs to respect human rights is often the conclusion drawn from these discussions.

Perhaps we should first start with people understanding human rights, and only then convey them to AI. Technology design should place human rights in the picture from the beginning. Companies and organizations, smaller and bigger, should conduct technology-specific human rights impact assessments. And let's not forget the boardroom, where the major decisions are taken; the business strategies and models drawn up there would benefit from the input of human rights consultants, experts, and human rights officers. It also comes down to individuals learning more about this topic, as it will affect us more and more. I hope this sparks a discussion, online or offline, about AI and human rights, or about your 10 fundamental human rights. (And now, can you name them?)
