Amazon’s Alexa was accused of sexism after failing to answer a question about the Lionesses’ match.

Amazon is facing accusations of sexism after its voice assistant, Alexa, was unable to provide the result of the Lionesses’ Women’s World Cup semi-final victory. When asked for the result of the England-Australia football match, Alexa responded that there was no match. Amazon acknowledged the error and said it has been rectified. Academic Joanne Rodda, who alerted the BBC to the issue, argued that the incident highlights ingrained sexism in Alexa’s football-related responses. Dr. Rodda, a senior psychiatry lecturer with a focus on AI, found that Alexa only gave an answer when she explicitly specified women’s football.

In response to Amazon’s statement that the situation had been addressed, Dr. Rodda expressed her disappointment that it had taken nearly a decade for the AI algorithm to recognise women’s World Cup football as simply “football.” Amazon explained that its information retrieval process pulls data from multiple sources, including Amazon itself, licensed content providers, and websites. The company employs automated AI systems to understand context and provide relevant information, but those systems faltered in this instance. Amazon said it expects the systems to improve over time and has dedicated teams working to prevent similar issues in the future.

Dr. Rodda questioned the extent of the fix, noting that she had encountered similar problems with Alexa’s responses about the Women’s Super League. The incident underlines the issue of bias within AI-powered systems, particularly in a rapidly growing AI sector. While some view AI’s rapid growth as a potential threat to humanity’s future, others, including the EU’s competition chief, Margrethe Vestager, are more concerned about AI entrenching existing prejudices. That concern stems from the fact that an AI system’s performance depends on the quality of its training data. Developers are responsible for ensuring their datasets are diverse, but this is not always achieved.

Furthermore, addressing bias embedded within AI systems can be complex and challenging. Sometimes the only option is to start over, which firms may resist because of the substantial costs of AI development. As AI increasingly shapes our lives, from how information is presented to financial decisions and healthcare, being overlooked by biased algorithms becomes an ever more serious problem.