AI and Gender Equality


The term AI has become a frequent buzzword, used in marketing to capture attention to the point of overuse. While researching alternatives, I encountered the term 'thinking' computer systems, or DoCS, but it is unclear whether this abbreviation will ever resonate as strongly as AI does. For clarity and ease, this article will therefore continue to use AI, recognizing our preference for familiar terms. The focus here is to identify and discuss the main risks AI poses to gender equality.

AI technology is significantly transforming the labor market by changing both the types and the number of jobs available. Automation is reshaping the employment landscape, phasing out some jobs while creating new opportunities, and it is crucial to understand how these changes affect gender dynamics within the workforce and society at large. A UNESCO report highlights that the growing adoption of technology is fueling the creation of new jobs. However, this is also the juncture at which AI can either advance gender equality or perpetuate gender bias, impeding the realization of diversity, equity, and inclusion (DEI) objectives. In line with its ethical standards, the EU emphasizes that AI technologies must guarantee respect for the fundamental rights of EU citizens. It seeks to counteract the potential harm arising from AI misuse and to tackle significant ethical concerns, including bias, discrimination, algorithmic opacity and lack of transparency, privacy issues, and technological determinism. While automation is likely to affect both female-dominated and male-dominated occupations, women are more likely to work in roles involving a high degree of routine and repetitive tasks, such as clerical support or retail jobs (Lawrence, 2018; Schmidpeter and Winter-Ebmer, 2018; Brussevich et al., 2019).


The Role of Data

The data fed into algorithms strongly shapes how they behave, and because those algorithms are designed by people, gender bias can be embedded in AI systems from the start. AI systems take the data they are given, identify patterns within it, and often amplify those patterns. One recent issue was that many models were trained in a unimodal way, meaning they were built for a single task, such as processing images; this narrow view of the data was a fundamental contributor to biased AI. Only recently have many of these algorithms begun to be trained with so-called multimodal systems. While such systems were previously used mainly for research, they are now becoming commercially available. Much as humans draw on several sources of information at once, multimodal training combines multiple sources so that data is processed with more context, allowing models to integrate different modalities and synthesize them. Although this new approach is an improvement, it is not without drawbacks: it often relies on datasets drawn from open-source frameworks, which may themselves exhibit biases.
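
To make the pattern-amplification point concrete, here is a minimal, hypothetical sketch in Python: a trivially simple "model" trained on historically skewed hiring records reproduces, and in fact exaggerates, that skew when applied to a perfectly balanced pool of new applicants. The dataset and the majority-vote rule are invented purely for illustration.

```python
# Minimal, hypothetical illustration: a model trained on skewed historical
# data reproduces and amplifies that skew. Dataset and rule are invented.
from collections import Counter

# Toy historical records: (gender, was_hired). Past decisions favoured group "M".
history = [("M", 1)] * 70 + [("M", 0)] * 30 + [("F", 1)] * 30 + [("F", 0)] * 70

def hire_rate(records, group):
    """Share of positive outcomes for one group in a list of (group, label) pairs."""
    labels = [label for g, label in records if g == group]
    return sum(labels) / len(labels)

# A trivially simple "model": predict the majority historical outcome per group.
majority = {
    g: Counter(label for gg, label in history if gg == g).most_common(1)[0][0]
    for g in ("M", "F")
}

# New, perfectly balanced applicant pool.
applicants = ["M"] * 50 + ["F"] * 50
predictions = [(g, majority[g]) for g in applicants]

print("Historical hire rate  M:", hire_rate(history, "M"), " F:", hire_rate(history, "F"))
print("Predicted hire rate   M:", hire_rate(predictions, "M"), " F:", hire_rate(predictions, "F"))
# The 70% vs 30% historical gap becomes a 100% vs 0% gap in predictions:
# the pattern in the data is not just reproduced but amplified.
```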


Another, still unaddressed challenge highlighted by Stanford’s Institute for Human-Centered Artificial Intelligence is that multimodal models can produce higher-quality machine-generated content that is easier to personalize for misuse. It is therefore unrealistic to expect completely unbiased technologies even with multimodal training, since we as human beings are not free of bias either. We can, however, mitigate our biases and habits by providing diverse data and information. One useful property of AI is that it reflects and mirrors some of the biases humans hold, which makes those biases easier to see and to address. Emerging algorithmic accountability policies also emphasize public participation as a way to develop more democratic and equitable systems. Only recently, Amsterdam and Helsinki launched AI registries detailing how each city government uses algorithms to deliver services. The registries also give citizens an opportunity to provide feedback on those algorithms and help ensure that these AI systems work for society rather than against it. This is hopefully one of many steps towards using AI to advance gender equality.


Policy and Education as Tools for Change

The importance of innovation within education cannot be overstated. It is a driving force for a progressive society, but only when it keeps pace with the times. While the number of women entering STEM fields is increasing, this does not necessarily mean we are close to bridging the gender gap in digital skills. According to the World Economic Forum, women represent less than 15% of ICT professionals across the G20 countries, and this gender and skills gap is widening every year. The European Institute for Gender Equality reports that the gender gap in the AI workforce widens with career length: women with more than 10 years of work experience in AI represent 12% of professionals in the industry, compared with 20% among those with 0–2 years of experience. So while more women are entering computer science roles, their numbers decline significantly over time, as lack of support, discrimination, or the glass ceiling pushes them to transition into other fields. And the number of women entering the technology field in the first place is still not sufficient.


To ensure that AI development supports gender equality and creates a more equitable society, several practical steps can be integrated into the technology and policy ecosystem. First, it is important to encourage the formation of diverse AI development teams that include varied genders, ethnicities, and socio-economic backgrounds to minimize unconscious biases. Ethics training should also be a mandatory part of the curriculum for all AI and computer science programs, emphasizing the societal impacts of AI, particularly concerning gender bias. Companies should additionally be required to use gender-inclusive datasets and to prepare AI Impact Statements that evaluate potential impacts on gender equality before new AI systems are deployed; a dataset representation check of the kind sketched below is one concrete starting point.
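
As a minimal illustration of what such a check might look like, the following Python sketch flags training data whose gender representation deviates from parity by more than an agreed tolerance. The file path, column name, and 10% threshold are assumptions for illustration only, not a prescribed standard.

```python
# Hypothetical pre-deployment check: flag training data whose gender
# representation falls outside an agreed tolerance. The CSV path, column
# name, and threshold below are illustrative assumptions.
import csv
from collections import Counter

def gender_representation_report(path, column="gender", tolerance=0.10):
    """Return per-group shares and the groups that deviate from parity
    by more than `tolerance` (e.g. 0.10 = 10 percentage points)."""
    with open(path, newline="") as f:
        counts = Counter(row[column] for row in csv.DictReader(f))
    total = sum(counts.values())
    shares = {group: n / total for group, n in counts.items()}
    parity = 1 / len(shares)  # equal share if all groups were balanced
    flagged = {g: s for g, s in shares.items() if abs(s - parity) > tolerance}
    return shares, flagged

if __name__ == "__main__":
    shares, flagged = gender_representation_report("training_data.csv")
    print("Group shares:", shares)
    if flagged:
        print("Representation check FAILED for:", flagged)
    else:
        print("Representation within tolerance.")
```

A check like this is only a first pass: a numerically balanced dataset can still encode biased labels or features, so it complements rather than replaces the impact assessment itself.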


Public awareness campaigns can play a significant role in educating the populace on AI’s workings and potential biases, empowering individuals to demand accountability. Additionally, enhancing support for women in STEM through increased funding for scholarships, mentorship, and career development programs by governments and private enterprises can help address the barriers women face in tech.


Furthermore, to enhance the effectiveness of testing protocols for AI systems, it's essential to extend these procedures beyond internal company evaluations to include audits by independent bodies. These protocols should be rigorously designed to ensure AI systems are tested in diverse environments that accurately simulate real-world scenarios, which helps identify potential gender-specific impacts and other biases. This approach not only tests the functionality of the AI but also its fairness and impartiality across different demographics. Independent auditing bodies, with expertise in both technology and ethical standards, should regularly review these AI systems. Their assessments would provide an unbiased view of how the AI operates in various scenarios, highlighting any discrepancies that could lead to unfair outcomes. By involving external auditors, companies can maintain transparency with the public and regulatory bodies, fostering trust and credibility in their AI products.
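
As one concrete example of what an independent audit might compute, the sketch below compares selection rates and accuracy across genders on a held-out sample and flags a large demographic-parity gap. The record format, the sample data, and the 0.1 disparity threshold are illustrative assumptions, not an established audit standard.

```python
# Sketch of a group-level check an independent audit might run: compare
# selection rates and accuracy across genders on audited decisions.

def group_metrics(records):
    """records: list of dicts with keys 'gender', 'y_true', 'y_pred'."""
    metrics = {}
    for g in {r["gender"] for r in records}:
        rows = [r for r in records if r["gender"] == g]
        metrics[g] = {
            "selection_rate": sum(r["y_pred"] for r in rows) / len(rows),
            "accuracy": sum(r["y_pred"] == r["y_true"] for r in rows) / len(rows),
        }
    return metrics

def demographic_parity_gap(metrics):
    """Largest difference in selection rate between any two groups."""
    rates = [m["selection_rate"] for m in metrics.values()]
    return max(rates) - min(rates)

# Illustrative audit sample (in practice: the system's real decisions).
audit_sample = (
    [{"gender": "F", "y_true": 1, "y_pred": 1}] * 30
    + [{"gender": "F", "y_true": 1, "y_pred": 0}] * 20
    + [{"gender": "M", "y_true": 1, "y_pred": 1}] * 45
    + [{"gender": "M", "y_true": 1, "y_pred": 0}] * 5
)

metrics = group_metrics(audit_sample)
gap = demographic_parity_gap(metrics)
print(metrics)
print(f"Demographic parity gap: {gap:.2f}",
      "-> review required" if gap > 0.1 else "-> within threshold")
```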


Finally, implementing standardized feedback mechanisms within AI applications can enable users to report biases directly, with dedicated teams responsible for monitoring and addressing these concerns. These comprehensive steps will foster an AI development environment that not only propels technological advancements but also ensures these advancements are utilized to promote a fair and equitable society.
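
A standardized feedback mechanism does not need to be elaborate. The sketch below shows one possible shape for it: a fixed report schema plus a queue that a dedicated review team polls. All names and fields here are illustrative assumptions rather than an existing standard.

```python
# Hypothetical sketch of an in-app bias-reporting mechanism: a fixed schema
# and a queue for a dedicated review team. Names and fields are illustrative.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class BiasReport:
    system_id: str            # which AI feature the report concerns
    description: str          # what the user observed
    affected_group: str = ""  # optional: who the user believes was disadvantaged
    created_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

review_queue: list[BiasReport] = []

def submit_bias_report(system_id: str, description: str, affected_group: str = "") -> BiasReport:
    """Called from the application's 'report a problem' UI."""
    report = BiasReport(system_id, description, affected_group)
    review_queue.append(report)
    return report

# Example: a user flags a hypothetical resume-screening feature.
submit_bias_report("resume-screener-v2", "Qualified female applicants ranked lower", "women")
print(f"{len(review_queue)} report(s) awaiting review by the dedicated team")
```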


Sources:


UNESCO. (n.d.). Technology. Available at: https://www.unesco.org/gem-report/en/technology


World Economic Forum. (2022). How to close the digital gender divide. Available at: https://www.weforum.org/agenda/2022/03/how-to-close-digital-gender-divide/


Brussevich, M., Dabla-Norris, E. and Khalid, S. (2019). Is technology widening the gender gap? Automation and the future of female employment. IMF Working Papers, Vol. 19, No. 91.


Written by Michaela Jamelska
