III: Small: Towards Explainable Recommendation Systems
Expected Duration: 10/1/2019 – 9/30/2022.
 
Recommendation systems are essential components of daily life. Today, intelligent recommendation systems are embedded in many Web-based services, providing personalized information that supports human decisions. Leading examples include e-commerce recommendations for everyday shopping, job recommendations for employment markets, and social recommendations that help people connect. However, most recommendation systems merely present recommendations to users; they rarely explain why those recommendations are made. This is primarily due to the closed nature of the algorithms behind these systems, which are difficult to explain. The lack of explainability undermines the transparency, effectiveness, persuasiveness, and trustworthiness of recommendation systems. This research will enable personalized recommendations to be delivered in a more explainable manner, improving both search performance and transparency. The research will benefit users of real systems through the researchers’ industry collaborations with e-commerce and social network platforms. New algorithms and datasets developed in the project will supplement courses in computer science and iSchool programs. Presentations of the work and demos will help engage wider audiences interested in computational research. Ultimately, the project will make it easier for humans to understand and trust decisions made by machines.
 
This project will explore a new framework for explainable recommendation that involves both system designers and end users. System designers will benefit from structured explanations generated for model diagnostics, while end users will benefit from natural language explanations of algorithmic decisions. The project will address three fundamental research challenges. First, it will create new machine learning methods for explainable decision making. Second, it will develop new models to generate free-text natural language explanations. Third, it will identify key factors for evaluating the quality of explanations. In the process, the project will also develop aggregated explainability measures and release evaluation benchmarks to support reproducible research on explainable recommendation. The project will disseminate shared data and benchmarks to the Information Retrieval, Data Mining, Recommender Systems, and broader AI communities. A small illustrative sketch of the kind of user-facing explanation targeted by this work appears below.
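To make the idea of a user-facing natural language explanation concrete, the sketch below fills a simple feature-level template with an item aspect and an importance score. The template wording, aspect names, and scores are hypothetical illustrations only; they do not reproduce the project's actual models (see, e.g., the neural template explanation and neural logic reasoning papers under Key Outcomes).

```python
# Hypothetical sketch: template-based natural language explanation for a
# recommended item. Aspect names, scores, and wording are illustrative only
# and are not the project's actual models.

from typing import Dict

TEMPLATE = (
    "We recommend {item} because you care about {aspect}, "
    "and this item performs well on {aspect} (score: {score:.2f})."
)

def explain(item: str, aspect_scores: Dict[str, float]) -> str:
    """Pick the highest-scoring aspect and fill a fixed explanation template."""
    aspect, score = max(aspect_scores.items(), key=lambda kv: kv[1])
    return TEMPLATE.format(item=item, aspect=aspect, score=score)

if __name__ == "__main__":
    # Toy aspect-importance scores, e.g., as an explainable recommender might output.
    scores = {"battery life": 0.91, "screen quality": 0.74, "price": 0.63}
    print(explain("Phone X", scores))
```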
 
The project is in collaboration with co-PI Chirag Shah at the University of Washington. More details can be found on the National Science Foundation’s webpage: https://www.nsf.gov/awardsearch/showAward?AWD_ID=1910154

Project Personnel: PI: Yongfeng Zhang, Rutgers University (http://yongfeng.me/); co-PI: Chirag Shah, University of Washington (https://ischool.uw.edu/people/faculty/profile/chirags)

Key Outcomes

[1] Zuohui Fu, Yikun Xian, Ruoyuan Gao, Jieyu Zhao, Qiaoying Huang, Yingqiang Ge, Shuyuan Xu, Shijie Geng, Chirag Shah, Yongfeng Zhang and Gerard de Melo. “Fairness-aware Explainable Recommendation over Knowledge Graphs”. ACM SIGIR 2020.

[2] Yikun Xian, Zuohui Fu, Handong Zhao, Yingqiang Ge, Xu Chen, Qiaoying Huang, Shijie Geng, Zhou Qin, Gerard de Melo, S. Muthukrishnan, Yongfeng Zhang. “CAFE: Coarse-to-Fine Neural Symbolic Reasoning for Explainable Recommendation”. ACM CIKM 2020.

[3] Lei Li, Yongfeng Zhang, Li Chen. “Generate Neural Template Explanations for Recommendation”. ACM CIKM 2020.

[4] Yongfeng Zhang, Xu Chen, Yi Zhang, Min Zhang, Chirag Shah. “EARS 2020: The 3rd International Workshop on ExplainAble Recommendation and Search”. ACM SIGIR 2020.

[5] Yongfeng Zhang and Xu Chen. “Explainable Recommendation: A Survey and New Perspectives”. Foundations and Trends in Information Retrieval: Vol. 14: No. 1, pp 1-101.

[6] Shaoyun Shi, Hanxiong Chen, Weizhi Ma, Jiaxin Mao, Min Zhang and Yongfeng Zhang. “Neural Logic Reasoning”. ACM CIKM 2020.

[7] Hanxiong Chen, Shaoyun Shi, Yunqi Li and Yongfeng Zhang. “Neural Collaborative Reasoning”. WWW 2021.

[8] Yunqi Li, Hanxiong Chen, Zuohui Fu, Yingqiang Ge and Yongfeng Zhang. “User-oriented Fairness in Recommendation”. WWW 2021.

[9] Zelong Li, Jianchao Ji, Zuohui Fu, Yingqiang Ge, Shuyuan Xu, Chong Chen and Yongfeng Zhang. “Efficient Knowledge Graph Embedding without Negative Sampling”. WWW 2021.

[10] Yaxin Zhu, Yikun Xian, Zuohui Fu, Gerard de Melo and Yongfeng Zhang. “Faithfully Explainable Recommendation via Neural Logic Reasoning”. NAACL 2021.

[11] Yongfeng Zhang, Min Zhang, Hanxiong Chen, Xu Chen, Xianjie Chen, Chuang Gan, Tong Sun, Xin Luna Dong. “The 1st International Workshop on Machine Reasoning: International Machine Reasoning Conference (MRC 2021)”. WSDM 2021.

Acknowledgement: This material is based upon work supported by the National Science Foundation under Grant No. IIS-1910154.

Disclaimer: Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.