Description
PyData DC 2016
Algorithms have become an integral part of our everyday lives. While algorithms can make our lives simpler and our decisions faster, there is a growing need for algorithms to be transparent and for the people who use them to be held accountable for the automated decisions those algorithms make. This talk covers where algorithms are used, how they can go wrong, and how they can be investigated.
An algorithm is a set of steps that performs calculations, processes data, or automates tasks. Algorithms are everywhere we look (and even places we don’t look), controlling what we see, what we do, and where we go. They’re great for solving our problems and helping us make better, quicker decisions, or for taking the decision-making out of our hands entirely. Their guidance seems perfect: objective, unbiased calculation. Except it isn’t. Like everything else, algorithms are created by people, and people have biases that get encoded into what they create. Algorithms also learn from data, and that data is produced by people too, so the biases in the data become the biases of the algorithm. The problem comes when those learned biases are folded into the algorithm’s calculations and perpetuated at scale.
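To make that point concrete, here is a minimal sketch (not from the talk) using scikit-learn and synthetic data: a model trained on biased historical decisions reproduces that bias at prediction time. All names and numbers are invented for illustration.

    # Minimal sketch (synthetic data): a model trained on biased historical
    # decisions learns to reproduce that bias, even without being told to.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 5000

    # Two groups of applicants with identical underlying qualifications.
    group = rng.integers(0, 2, n)      # 0 = group A, 1 = group B
    skill = rng.normal(0, 1, n)        # true qualification, same distribution for both

    # Historical decisions were biased: group B was approved less often
    # than their skill alone would justify.
    approved = (skill - 0.8 * group + rng.normal(0, 0.5, n)) > 0

    # Train on the biased history, with group membership as a feature.
    X = np.column_stack([skill, group])
    model = LogisticRegression().fit(X, approved)

    # At identical skill, the model now assigns a markedly lower approval
    # probability to group B -- the bias has been learned from the data.
    same_skill = np.array([[0.0, 0], [0.0, 1]])
    print(model.predict_proba(same_skill)[:, 1])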
In this talk you will hear why we should care about algorithmic accountability, along with a case study showing how computational journalism can be used to investigate algorithms and advocate for transparency and accountability.
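As a hypothetical illustration of the kind of audit computational journalists perform (the talk’s actual case study is not reproduced here), the sketch below probes a toy black-box scorer with paired inputs that differ in only one attribute and compares the outcomes. The function names, attributes, and penalty are invented stand-ins.

    # Hypothetical "algorithm audit": score paired inputs that differ only
    # in one attribute, then compare the results for a systematic gap.
    from statistics import mean

    def toy_black_box(record):
        # Stand-in for the opaque system under investigation; here it
        # quietly penalizes applicants from neighborhood "B".
        score = 700 + 2 * record["income"]
        if record["neighborhood"] == "B":
            score -= 50
        return score

    def paired_audit(records, attribute, value_a, value_b, score_fn):
        """Score each record twice, changing only `attribute`, and compare averages."""
        a = [score_fn({**record, attribute: value_a}) for record in records]
        b = [score_fn({**record, attribute: value_b}) for record in records]
        return mean(a), mean(b)

    applicants = [{"income": i, "neighborhood": ""} for i in range(20, 120, 10)]
    avg_a, avg_b = paired_audit(applicants, "neighborhood", "A", "B", toy_black_box)
    # A persistent gap on otherwise identical inputs is the kind of
    # evidence a computational journalist would investigate and report.
    print(avg_a - avg_b)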