The Risks of Bias in Artificial Intelligence

Artificial intelligence is not science fiction anymore. This technology has become part of our lives, and you might even be surprised to find out that you’re already using it. However, artificial intelligence models require large amounts of data to be trained, and in many cases that data carries bias.

This was the subject of our last “In Code We Trust” meetup, “The risks of bias in AI”, led by Cristina Aranda, CMO at Intelygenz, and Ana de Prado, Machine Learning Program Leader at Intelygenz and Terminus7.

Cristina welcomed us with some figures and trends concerning Artificial Intelligence, highlighting that 80% of enterprises are already investing in this technology.

Ana addressed the hype around Deep Learning and the need to understand how this technology will transform industry and the way the world does business. She also introduced some key concepts, such as “word embedding”, a language modeling and feature learning technique used in natural language processing (NLP) to represent words as numerical vectors.
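To make the idea of word embeddings concrete: each word is mapped to a vector, and semantically related words end up with similar vectors, usually compared with cosine similarity. Below is a minimal sketch using tiny, hand-made toy vectors for illustration only (real models such as word2vec or GloVe learn hundreds of dimensions from large corpora):

```python
import math

# Hypothetical 3-dimensional embeddings, invented for illustration.
embeddings = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.9, 0.2, 0.8],
    "apple": [0.1, 0.1, 0.9],
}

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 = same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Related words score higher than unrelated ones.
print(cosine_similarity(embeddings["king"], embeddings["queen"]))  # higher
print(cosine_similarity(embeddings["king"], embeddings["apple"]))  # lower
```

The important point for the bias discussion is that these vectors are learned from text written by people, so whatever associations appear in the text end up encoded in the geometry.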

Getting into the heart of the meetup’s subject, Ana showed us some interesting, but unfortunate, examples of bias in Artificial Intelligence that are clearly influenced by social stereotypes and prejudices.
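A well-documented example of this kind is that word embeddings trained on web text can absorb gender stereotypes, for instance associating certain professions more strongly with one gender. The sketch below probes for such a skew; the vectors are toy values constructed to mimic that pattern, not taken from any real model:

```python
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a))
                  * math.sqrt(sum(x * x for x in b)))

# Toy 2-dimensional vectors, invented for illustration; embeddings
# trained on real corpora have been shown to exhibit similar skews.
vectors = {
    "he":     [1.0, 0.0],
    "she":    [0.0, 1.0],
    "doctor": [0.8, 0.3],
    "nurse":  [0.3, 0.8],
}

def gender_skew(word):
    """Positive: the word sits closer to 'he'; negative: closer to 'she'."""
    return (cosine_similarity(vectors[word], vectors["he"])
            - cosine_similarity(vectors[word], vectors["she"]))

print(gender_skew("doctor"))  # positive: skewed toward "he"
print(gender_skew("nurse"))   # negative: skewed toward "she"
```

Probes like this are one way practitioners detect bias in a trained model before it reaches production.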

Without a doubt, identifying and mitigating bias is essential to ensure that Artificial Intelligence can have a positive impact on society. As Ana mentioned in her talk, these technologies need more women, more people of color, more people from different fields, and more ethics, inclusion, and regulation.

You can watch the full video of the event here. Don’t miss it!
