ISSIP

ISSIP Cognitive Systems Institute Group Speaker Series: September 20, 10:30 am US Eastern Time

AI Fairness 360
Kush Varshney, IBM

When: Thursday, September 20, 10:30 am US Eastern.

Zoom details below.

Background: 

Kush R. Varshney was born in Syracuse, NY, in 1982. He received the B.S. degree (magna cum laude) in electrical and computer engineering with honors from Cornell University, Ithaca, NY, in 2004. He received the S.M. degree in 2006 and the Ph.D. degree in 2010, both in electrical engineering and computer science from the Massachusetts Institute of Technology (MIT), Cambridge. While at MIT, he was a National Science Foundation Graduate Research Fellow.

Dr. Varshney is a principal research staff member and manager with IBM Research AI at the Thomas J. Watson Research Center, Yorktown Heights, NY, where he leads the Learning and Decision Making group. He is the founding co-director of the IBM Science for Social Good initiative. He applies data science and predictive analytics to human capital management, healthcare, olfaction, computational creativity, public affairs, international development, and algorithmic fairness, work that has led to recognitions such as the 2013 Gerstner Award for Client Excellence for contributions to the WellPoint team and the Extraordinary IBM Research Technical Accomplishment for contributions to workforce innovation and enterprise transformation. He also conducts academic research on the theory and methods of statistical signal processing and machine learning. His work has been recognized through best paper awards at the Fusion 2009, SOLI 2013, KDD 2014, and SDM 2015 conferences. He is a senior member of the IEEE and a member of the Partnership on AI's Safety-Critical AI working group.

Machine learning models are increasingly used to inform high-stakes decisions about people. Although machine learning, by its very nature, is always a form of statistical discrimination, the discrimination becomes objectionable when it places certain privileged groups at systematic advantage and certain unprivileged groups at systematic disadvantage. Biases in training data, due to either prejudice in labels or under-/over-sampling, yield models with unwanted bias.

In this presentation, we introduce AI Fairness 360, a new Python package that includes a comprehensive set of metrics for datasets and models to test for biases, explanations for these metrics, and algorithms to mitigate bias in datasets and models. We have developed the package with extensibility in mind, and we encourage the contribution of your own metrics, explainers, and debiasing algorithms. Please join the community to get started as a contributor.
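For a flavor of the check-then-mitigate workflow the abstract describes, here is a minimal sketch based on the aif360 package's documented API: load a dataset, compute a group-fairness metric, apply a pre-processing debiasing algorithm, and re-check the metric. The choice of the AdultDataset wrapper, the 'sex' protected attribute, and the Reweighing algorithm are illustrative assumptions for this sketch, not specifics from the talk abstract.

    # A minimal sketch of an AIF360 workflow (illustrative choices of
    # dataset, protected attribute, and algorithm; not from the abstract).
    # Install with: pip install aif360
    from aif360.datasets import AdultDataset
    from aif360.metrics import BinaryLabelDatasetMetric
    from aif360.algorithms.preprocessing import Reweighing

    # Protected attribute 'sex': 1 = privileged group, 0 = unprivileged group
    privileged_groups = [{'sex': 1}]
    unprivileged_groups = [{'sex': 0}]

    # UCI Adult income data wrapped in AIF360's dataset abstraction
    # (the raw data files must be downloaded separately per the docs)
    dataset = AdultDataset()

    # Test for bias: difference in rates of favorable outcomes between
    # groups (0 means parity; negative values mean the unprivileged
    # group is at a systematic disadvantage)
    metric = BinaryLabelDatasetMetric(dataset,
                                      unprivileged_groups=unprivileged_groups,
                                      privileged_groups=privileged_groups)
    print("Mean difference before:", metric.mean_difference())

    # Mitigate: Reweighing assigns instance weights so that the label is
    # statistically independent of the protected attribute
    rw = Reweighing(unprivileged_groups=unprivileged_groups,
                    privileged_groups=privileged_groups)
    dataset_transf = rw.fit_transform(dataset)

    # Re-check the same metric on the transformed (reweighted) dataset
    metric_transf = BinaryLabelDatasetMetric(dataset_transf,
                                             unprivileged_groups=unprivileged_groups,
                                             privileged_groups=privileged_groups)
    print("Mean difference after:", metric_transf.mean_difference())

Reweighing is just one of the package's mitigation algorithms; other pre-, in-, and post-processing approaches can be swapped in the same way, depending on where in the pipeline intervention is possible.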

Zoom meeting Link: https://zoom.us/j/7371462221

Zoom Call-in: (415) 762-9988 or (646) 568-7788, Meeting ID: 7371462221

Zoom International Numbers: https://zoom.us/zoomconference

Check the website in case the date or time changes: http://cognitive-science.info/community/weekly-update/ (where you can also find slides and recordings of prior calls).

Please retweet

Join the LinkedIn group: https://www.linkedin.com/groups/6729452