
Despicable machines: how computers can be assholes


Description

There's a widespread belief among machine learning practitioners that algorithms are objective and let us handle messy reality in a clean, neutral way, without worrying about all the yucky human-nature things. This talk will argue that this belief is wrong. Algorithms, just like the humans who create them, can be severely biased and despicable indeed.

Abstract

When working on a new ML solution to solve a given problem, do you think that you are simply using objective reality to infer a set of unbiased rules that will allow you to predict the future? Do you think that worrying about the morality of your work is something other people should do? If so, this talk is for you.

In this brief time, I will try to convince you that you hold great power over what the future world will look like, and that you should incorporate thinking about morality into the set of ML tools you use every day. We will take a short journey through several problems that have surfaced over the last few years as ML, and AI more generally, have become more widely used. We will look at bias present in training data, at some real-world consequences of ignoring it (including one or two hair-raising stories), and at cutting-edge research on how to counteract it.
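
To make the idea of "bias in training data" slightly more concrete, here is a minimal, purely illustrative sketch (not taken from the talk) of one simple fairness check: the demographic parity difference, i.e. the gap in positive-prediction rates between two groups. All names and numbers below are made up for demonstration.

```python
import numpy as np

# Hypothetical predictions (1 = favourable outcome) and group labels -- toy data.
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
group = np.array(["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"])

rate_a = y_pred[group == "a"].mean()  # positive-prediction rate for group a
rate_b = y_pred[group == "b"].mean()  # positive-prediction rate for group b

# A large absolute gap between the two rates is one (crude) signal that the
# model treats the groups differently.
print(f"positive rate (a): {rate_a:.2f}")
print(f"positive rate (b): {rate_b:.2f}")
print(f"demographic parity difference: {abs(rate_a - rate_b):.2f}")
```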

The outline of the talk is:

  • Intro to the problem: ML algos can be biased!
  • Two concrete examples.
  • What's been done so far (i.e. techniques from recently-published papers).
  • What to do next: unanswered questions.
