January 26, 2018, 1:00 pm - 2:00 pm

Beyond the Black Box: How deep learning and AI can drive scientific discovery



Barbara Thompson (671)

Despite the seemingly mystifying science behind deep learning, few areas of research evoke more ready opinions from non-experts. Researchers active in machine learning, neural nets, or any of the other zoo of terms associated with this branch of statistics are accustomed to strong reactions. These reactions range from "this is a really tough problem; can't you give it to a machine and have it do magic?" to "it's a black box, you don't understand what's really happening" or "yeah, you get correlations, but it's not science."

For the most part, the tendency to overstate or understate the role of machine learning in scientific research decreases with understanding of how the various approaches work. Even a small foray into this arena can have benefits, as the ability to clearly articulate a problem and define "progress" can unexpectedly provide insight into the nature of a system. The added information and insight gained from exploring the data space can reveal paradoxes or mistaken assumptions, leading to a much deeper understanding of the underlying physics. The key is to understand the relationship between 'information' and 'knowledge': improved statistics and correlations have little impact on our field without the involvement of those who truly understand the physics of the systems being studied.

This is an "audience participation" presentation (participation is optional), with exercises designed to demonstrate "machine-thinking." Machine learning and neural nets can provide valuable insight, and the talk will demonstrate the potential of "HelioAnalytics" to transform our knowledge.