
“We must educate algorithms”

Sexist algorithms? The question may seem odd. Coded by humans, the algorithms used by artificial intelligence are not free of stereotypes. But while they can induce sexist or racist biases, they can also be used to advance the cause of gender equality. This is what Aude Bernheim and Flora Vincent demonstrate in their book, L'Intelligence artificielle, pas sans elles! (Artificial intelligence, not without women!).

Interview by Agnès Bardon, UNESCO

How did you become interested in the gender issue in artificial intelligence (AI)?

Aude Bernheim: Originally, our thinking focused on the links between gender equality and science. In 2013, we founded the association WAX Science (What About Xperiencing Science) to examine how the lack of gender diversity in scientific research teams could affect the products of science and technology. Our work on AI stems from this reflection.

Actually, we weren’t really surprised to find gender biases in these technologies, because they exist in many other fields. There was no reason for AI to escape them. But the consequences are numerous, and go beyond the usual issues of professional equality or salaries. The stereotypes contained in the algorithms can have a negative impact on the way job applications are screened – excluding women from technical positions, for instance – as well as on salary proposals and even medical diagnoses.

Flora Vincent: Scientific teams lack diversity – the phenomenon is well-known. What is not so well-known is that this has consequences on how research is developed and what subjects are given priority. An American science historian, Londa Schiebinger, has been working on this topic recently. She shows that the more women there are on a team, the more likely it is for the gender issue to be taken into account in the study itself.

There are many examples of this discrimination in research. One example is that drugs are tested more on male rats, because they have fewer hormonal fluctuations, which is considered to make side effects easier to measure. Another example: crash tests use standard dummies of 1.70 metres and seventy kilograms, modelled on the average size and build of a man. As a result, the seatbelt does not take certain situations into account, such as pregnant women.

Has computer science been a male-dominated discipline from the outset?

Bernheim: No, that was not always the case. In the early twentieth century, computer science was a discipline that required a lot of rather tedious calculations. At the time, these were often done by women. When the first computers came along, women were in the lead. The work was not seen as prestigious at the time. As recently as 1984, thirty-seven per cent of those employed in the computer industry in the United States were women. By comparison, in France in 2018, only ten per cent of students in computer science courses were women; it is estimated that only twelve per cent of students in the AI sector are women.

In fact, a significant change took place in the 1980s, with the emergence of the personal computer (PC). From then on, computer technology acquired unprecedented economic importance. The recreational dimension of computers also emerged in those years, developing a very masculine cultural imagery around the figure of the geek. This dual trend was accompanied by the marginalization of women. This shows that boys' affinity for computers is not natural, but that it is, above all, cultural and constructed.

One might think that algorithms are neutral by nature. To what extent do they contribute to reproducing gender bias?

Bernheim: Some whistleblowers realized quite quickly that algorithms were biased. They found, for example, that translation software [into French, which has masculine and feminine nouns] tended to give professions a gender by translating the English “the doctor” into “le docteur” (masculine), and “the nurse” into “l'infirmière” (feminine). When voice assistants appeared – whether Alexa, Siri, or Cortana – they all had feminine names and responded to orders in a rather submissive manner, even when they were insulted.
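
The translation bias the interviewees describe is easy to observe first-hand. The following is a minimal sketch, not taken from their book, assuming the Hugging Face transformers library and the publicly available Helsinki-NLP English-to-French model (neither is named in the interview); it probes the model with gender-neutral English sentences and prints the gendered French forms it produces.

```python
# A minimal probe for gendered translations. Assumes the Hugging Face
# "transformers" library and the public Helsinki-NLP English-to-French
# model; these are illustrative choices, not the interviewees' tools.
from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-en-fr")

# Gender-neutral English sentences; French grammar forces a choice.
sentences = [
    "The doctor is here.",
    "The nurse is here.",
    "The engineer finished the project.",
    "The secretary answered the phone.",
]

for sentence in sentences:
    translation = translator(sentence)[0]["translation_text"]
    print(f"{sentence:40} -> {translation}")

# Models trained on unbalanced corpora typically output masculine forms
# ("le docteur") for stereotypically male professions and feminine forms
# ("l'infirmière") for stereotypically female ones.
```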

In 2016, Joy Buolamwini, an African-American researcher at the Massachusetts Institute of Technology (MIT), became interested in facial recognition algorithms. Her work showed that these systems were trained on databases containing mostly photos of white males. As a result, they were much less effective at recognizing black women or Asian men than white men. You can imagine that if she had been part of the team developing these algorithms, the situation would have been different.
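
What made this work striking was the method: evaluating accuracy separately for each demographic group rather than in aggregate, where the gaps disappear into an average. A minimal sketch of such a disaggregated evaluation, using only the Python standard library and entirely hypothetical data:

```python
# Disaggregated evaluation: an overall accuracy figure can hide large
# gaps between demographic groups. All data below is hypothetical.
from collections import defaultdict

# (group, was_the_prediction_correct) pairs from some test set.
results = [
    ("white_male", True), ("white_male", True), ("white_male", True),
    ("white_male", True), ("black_female", False), ("black_female", True),
    ("black_female", False), ("asian_male", True), ("asian_male", False),
]

totals = defaultdict(int)
correct = defaultdict(int)
for group, ok in results:
    totals[group] += 1
    correct[group] += ok  # True counts as 1, False as 0

print(f"overall accuracy: {sum(correct.values()) / len(results):.0%}")
for group in totals:
    print(f"{group:13} accuracy: {correct[group] / totals[group]:.0%}")
```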

Vincent: Coding an algorithm is like writing a text. There’s a certain amount of subjectivity that manifests itself in the choice of words and turns of phrase, even if we have the impression that we are writing a very factual text. To identify the biases, our approach consisted of dissecting the different stages of what we call “sexist contagion”. That’s because there isn’t a single cause that creates a biased algorithm; rather, it’s the result of a chain of causes that intervene at different stages of its construction. In effect, if the people who code, test, control and use an algorithm are not aware of these potential biases, they reproduce them. In the vast majority of cases, there’s no wilful intention to discriminate. More often than not, we simply reproduce unconscious stereotypes forged in the course of our lives and education.

Is there an awareness of the bias in certain AI products today?

Bernheim: AI is a field where everything is evolving very quickly – the technology itself, but also the thinking about its use. Compared to other disciplines, the problem of discrimination emerged very early on. Barely three years after the onset of algorithm fever, whistleblowers started drawing attention to the differentiated treatment produced by certain algorithms. This is already a subject in its own right in the scientific community. It fuels many debates and has led to research work on the detection of bias and the implications of algorithms from an ethical, mathematical and computer science point of view. This awareness has also recently been reflected in the mainstream media. Not all the problems have been solved, but they have been identified; and once a problem is identified, solutions can be implemented.

How can algorithms be made more egalitarian?

Bernheim: To begin with, we must act at the level of databases, so that they are representative of the population in all its diversity. Some companies are already doing this and are working on databases that take into account differences in gender, nationality or morphology. As a result of work published on the shortcomings of facial recognition software, some companies have retrained their algorithms to be more inclusive. Companies have also emerged that specialize in developing tools to evaluate algorithms and determine whether they are biased.
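
Acting “at the level of databases” can be as simple as rebalancing: resampling the training data so that each group is equally represented. A minimal sketch under that assumption, with hypothetical field names and a deliberately skewed starting set:

```python
# Rebalancing a skewed training set so every group contributes the same
# number of examples. Field names and file names are hypothetical.
import random

random.seed(0)  # reproducible example

records = (
    [{"photo": f"m{i}.jpg", "group": "male"} for i in range(900)]
    + [{"photo": f"f{i}.jpg", "group": "female"} for i in range(100)]
)

by_group = {}
for record in records:
    by_group.setdefault(record["group"], []).append(record)

# Oversample each group up to the size of the largest one.
target = max(len(items) for items in by_group.values())
balanced = []
for items in by_group.values():
    balanced.extend(items)
    balanced.extend(random.choices(items, k=target - len(items)))

for group, items in by_group.items():
    count = sum(r["group"] == group for r in balanced)
    print(f"{group}: {len(items)} -> {count}")  # both groups end at 900
```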

Vincent: At the same time, in the scientific and research community, there has been reflection on how to implement a more independent evaluation, and on the need for algorithmic transparency. Some experts, such as Buolamwini, advocate the development and generalization of an inclusive code, just as there is for inclusive writing.

Among existing initiatives, we should also mention the work done by the collective Data for Good, which is thinking about ways to make algorithms serve the general interest. This collective has drafted an ethical charter called the Hippocratic Oath for Data Scientists, establishing a list of very concrete parameters to be checked before implementing an algorithm, to ensure it isn’t discriminatory. It is important to support this type of initiative.
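
The charter itself is not reproduced here, but one concrete parameter of the kind it calls for is a disparate-impact check: comparing the rate of favourable decisions across groups before an algorithm is deployed. A hypothetical illustration:

```python
# A disparate-impact check on an algorithm's decisions before deployment.
# The decisions are hypothetical; the 0.8 threshold follows the common
# "four-fifths rule", not anything specified in the Data for Good charter.
decisions = [
    ("female", 1), ("female", 0), ("female", 0), ("female", 1),
    ("male", 1), ("male", 1), ("male", 1), ("male", 0),
]

def selection_rate(group):
    outcomes = [d for g, d in decisions if g == group]
    return sum(outcomes) / len(outcomes)

rate_f = selection_rate("female")  # 0.50
rate_m = selection_rate("male")    # 0.75
ratio = min(rate_f, rate_m) / max(rate_f, rate_m)

print(f"female rate {rate_f:.0%}, male rate {rate_m:.0%}, ratio {ratio:.2f}")
if ratio < 0.8:
    print("Warning: possible disparate impact; review before deployment.")
```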

Could AI eventually become an example of how biases can be combated?

Bernheim: In a sense, yes, to the extent that we became aware fairly quickly of the biases these new technologies could induce. AI is in the process of revolutionizing our societies, so it can also make things evolve in a positive way. AI makes it possible to manage and analyze very large amounts of data. It enabled Google, in particular, to create an algorithm in 2016 to quantify the speaking time of women in major American film productions and show their under-representation. At the same time, the teams developing algorithms also need to become more gender-balanced. Today, however, for a number of reasons – including girls' self-censorship when it comes to scientific fields, and the sexism that reigns in high-tech companies – very few women study computer science. It will take time to reverse this trend.

Vincent: Of course, the algorithms need to be educated, but changing a few lines of code will not be enough to solve the problems. We must bear in mind that there will be no willingness to code for equality if the teams involved do not include women.

Read more:

Democratizing AI in Africa, The UNESCO Courier, July-September 2018.

Aude Bernheim and Flora Vincent

Biologists Aude Bernheim and Flora Vincent are researchers at the Weizmann Institute of Science in Israel. They are the founders of WAX Science, an association that promotes gender equality in the sciences.