丁文璿 | Treat Everyone Fairly: Enabling an Unbiased and Explainable Algorithmic Decision Making

Posted: 2019-10-22

Time: Wednesday, November 20, 2019, 13:00–14:30

Venue: Conference Room 417, Building A1, Zizhu International Education Park (155 Tanjiatang Road, Minhang)

Title: Treat Everyone Fairly: Enabling an Unbiased and Explainable Algorithmic Decision Making

Speaker: Wenxuan Ding (丁文璿), Professor of Artificial Intelligence and Business Analytics; Associate Director of the Global Business Intelligence Center, emlyon business school, France

Abstract:

Driven by big data and rapid improvements in computing power, many firms and organizations are interested in deploying AI and machine learning methods to create value and improve their decision-making. Although conventional machine learning methods have demonstrated strong performance and substantial social impact in many application areas, the European Union’s new General Data Protection Regulation raises concerns about the potentially discriminatory and unexplainable decisions generated by conventional machine learning models. How to eliminate discrimination and produce transparent, explainable decision-making has therefore become an urgent and important challenge for the AI and machine learning communities.

There are two fundamental issues in current machine learning methods. First, they rely on aggregate-level learning over a data sample drawn from many different subjects. Sample selection bias may therefore occur, producing biased decisions when a subject does not belong to the same population as those in the training sample. Second, the learning processes are not theory-driven: they focus on correlations among feature variables in the data rather than on their causal effects, so the produced outcomes are uninterpretable.
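The sample-selection-bias issue described above can be made concrete with a small simulation. This is an illustrative sketch only, not the talk's method: two hypothetical subpopulations with different default-risk profiles are assumed, and a single rate estimated from a sample dominated by one group is shown to misrepresent the other.

```python
import random

random.seed(0)

def draw(group, n):
    """Simulate n subjects from a hypothetical subpopulation.
    Group A has low baseline default risk; group B, under-represented
    in the training sample, has a systematically higher risk."""
    data = []
    for _ in range(n):
        x = random.uniform(0, 1)  # a feature, e.g. credit utilization
        if group == "A":
            p_default = 0.2 * x
        else:
            p_default = 0.6 * x + 0.2
        y = 1 if random.random() < p_default else 0
        data.append((x, y))
    return data

# Biased training sample: 95% group A, 5% group B
train = draw("A", 950) + draw("B", 50)

# Aggregate-level "model": one default rate learned from the pooled sample
pooled_rate = sum(y for _, y in train) / len(train)

# Reference rates per group, estimated from large samples
rate_A = sum(y for _, y in draw("A", 5000)) / 5000
rate_B = sum(y for _, y in draw("B", 5000)) / 5000

print(f"pooled estimate: {pooled_rate:.2f}")
print(f"group A rate:    {rate_A:.2f}")
print(f"group B rate:    {rate_B:.2f}")
# The pooled estimate tracks the over-represented group A and
# understates group B's risk, so decisions about group B subjects
# inherit the sampling bias.
```

The pooled estimate sits close to group A's rate because the sample is dominated by group A; any decision rule built on it systematically misjudges subjects from group B.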

This paper presents a novel theory-based individual dynamic learning model that overcomes discrimination and emphasizes causal inference to enable explanation. The model (1) uses limited data from each individual subject, without employing other subjects’ data, to overcome the discrimination issue; and (2) learns the underlying data-generating process of each individual subject to identify the corresponding causal mechanism, achieving fair and interpretable decisions. Using a real-world credit and risk assessment as the context, we empirically test our model and demonstrate superior performance, in terms of fairness, transparency, and accuracy, compared to conventional supervised learning models and decision tree models.
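The first ingredient above, learning from each subject's own data alone, can be sketched in a few lines. This is a hypothetical illustration of the individual-learning idea, not the model presented in the talk: each subject carries an independent estimator updated only from that subject's own history, so no other subject's data can influence the decision made about them.

```python
class IndividualLearner:
    """Running risk estimate for one subject, updated online from
    that subject's own repayment history alone (illustrative)."""

    def __init__(self):
        self.n = 0        # events observed for this subject
        self.repaid = 0   # on-time repayments for this subject

    def update(self, repaid_on_time: bool) -> None:
        self.n += 1
        self.repaid += int(repaid_on_time)

    def risk_score(self) -> float:
        # Laplace-smoothed default probability from this subject only
        return 1.0 - (self.repaid + 1) / (self.n + 2)

# Hypothetical per-subject histories; decisions about "alice"
# never touch data from "bob", and vice versa.
histories = {
    "alice": [True, True, True, False, True],
    "bob":   [False, False, True, False, False],
}
learners = {}
for subject, events in histories.items():
    model = IndividualLearner()
    for event in events:
        model.update(event)
    learners[subject] = model

for subject, model in learners.items():
    print(subject, round(model.risk_score(), 3))
```

Because each estimate is derived solely from the subject's own record, the cross-subject sampling bias illustrated earlier cannot arise; the trade-off is that each learner starts from very limited data, which is why the talk's model additionally learns the subject's underlying data-generating process rather than a simple frequency.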

Speaker Bio:

Dr. Ding obtained her PhD in Cognitive Science and Information Technology from Carnegie Mellon University, USA. She conducts forefront theoretical and empirical research on human-level artificial intelligence (AI), machine learning, and their applications in business analytics, digital transformation and marketing, and mobile health. Her research has appeared in various top-tier journals, including Information Systems Research, Journal of the Academy of Marketing Science, Decision Support Systems, Springer Computational Intelligence Series, the Proceedings of the Association for the Advancement of Artificial Intelligence, Defense & Security Analysis, Journal of Defense Modeling and Simulation, Oxford Journal of Management Mathematics, and Safety Science. One of her recent research papers received the Best Paper Award in the Human-Computer Interaction Track at the prestigious International Conference on Information Systems in 2016.

Recently, she presented a novel dynamic model that enables a machine to generate real-time intelligence in response to unknown unknowns at one of the most prestigious annual conferences on artificial intelligence, the 28th International Joint Conference on Artificial Intelligence (IJCAI 2019). She is a member of the IEEE standards committee on the Wellbeing Metrics Standard for Ethical AI and Autonomous Systems. She co-organized the 2019 AAAI (Association for the Advancement of Artificial Intelligence) Spring Symposium on Interpretable AI at Stanford University, USA.
