Explaining neural network predictions
April 16, 2018
Recently, deep neural networks have become the dominant approach in many machine learning tasks. However, they are harder to interpret than simpler models such as support vector machines or decision trees. One may say that neural networks are black boxes: they produce predictions, but we cannot explain why a given prediction was made. This is not acceptable in industries such as healthcare or law. In this talk, I will present known methods for understanding neural network predictions.
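One of the simplest families of methods the talk alludes to is gradient-based saliency: differentiate the model's output with respect to its input, and read the gradient magnitudes as local feature importance. A minimal sketch in plain NumPy, using a hypothetical one-unit toy model with made-up weights (the talk itself does not specify any particular model):

```python
import numpy as np

# Hypothetical toy model: a single sigmoid unit with made-up weights.
w = np.array([2.0, -1.0, 0.5])
b = 0.1

def predict(x):
    """Sigmoid output of the toy model."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

def saliency(x):
    """Gradient of the prediction with respect to the input features.

    For a sigmoid unit, d sigma(z)/dx = sigma(z) * (1 - sigma(z)) * w,
    so the magnitude of each entry indicates how strongly that feature
    influences the prediction around this particular input.
    """
    p = predict(x)
    return p * (1.0 - p) * w

x = np.array([1.0, 1.0, 1.0])
print(saliency(x))  # first feature has the largest magnitude, matching w
```

For a real deep network the same idea is applied via automatic differentiation (e.g. backpropagating the output score to the input image), which is the basis of the saliency-map techniques commonly covered in talks on this topic.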