III: Small: Collaborative Research: Scrutable and Explainable Information Retrieval with Model Intrinsic and Agnostic Approaches
Expected Duration: 10/1/2020 – 9/30/2023.
Information Retrieval (IR) systems are important tools for information access. For example, intelligent search engines are widely used in Web-based services such as web search, product search, and job search. Recently, sophisticated data and complicated black-box models have made modern IR systems less transparent to users. However, as more and more people rely on IR systems to guide their daily lives and decision making, there is a growing need for explainable search results, both for technical communities and the general public, so that users understand why certain search results are provided. Meanwhile, governmental agencies are demanding that IR systems provide not only high-quality results but also reasonable justifications, so as to enhance the trustworthiness of the systems. This project focuses on developing algorithms and frameworks to improve the scrutability, explainability, and transparency of modern IR systems. It will inspire large-scale academic-industry collaboration, benefiting billions of users by facilitating the development of reliable and explainable information access services.

This project will develop general and reusable frameworks for scrutable and explainable IR. Research in this project will proceed along two directions. The first direction aims at developing new retrieval models for model-intrinsic explanation. This includes developing transparent inference processes and decision boundaries for retrieval actions, scrutable functions that support result exploration with user feedback, and traceable information flows that distinguish the contribution of individual model inputs. The second direction aims at building an analytical and simulation framework for model-agnostic explanation. This includes post-hoc explanation systems based on external knowledge, and a simulation framework over black-box retrieval models that produces explainable outputs. Beyond the model-intrinsic and model-agnostic approaches themselves, this project will also investigate crowd-sourcing tasks and systematic metrics to compare the effectiveness of intrinsic and agnostic explanations. The research outcomes will include multiple public benchmark datasets and evaluation platforms for explainable IR, which will support sustainable and reproducible future studies in the research community.
The project is in collaboration with Dr. Qingyao Ai at the University of Utah. More details can be found on the National Science Foundation’s webpage: https://www.nsf.gov/awardsearch/showAward?AWD_ID=2007907&HistoricalAwards=false

Project Personnel: PI: Yongfeng Zhang, Rutgers University (http://yongfeng.me/); PI: Qingyao Ai, University of Utah (http://ir.aiqingyao.org/home)

Key Outcomes

[1] Hanxiong Chen, Shaoyun Shi, Yunqi Li and Yongfeng Zhang. “Neural Collaborative Reasoning”. WWW 2021.

[2] Yunqi Li, Hanxiong Chen, Zuohui Fu, Yingqiang Ge and Yongfeng Zhang. “User-oriented Fairness in Recommendation”. WWW 2021.

[3] Zelong Li, Jianchao Ji, Zuohui Fu, Yingqiang Ge, Shuyuan Xu, Chong Chen and Yongfeng Zhang. “Efficient Knowledge Graph Embedding without Negative Sampling”. WWW 2021.

[4] Yaxin Zhu, Yikun Xian, Zuohui Fu, Gerard de Melo and Yongfeng Zhang. “Faithfully Explainable Recommendation via Neural Logic Reasoning”. NAACL 2021.

[5] Yongfeng Zhang, Min Zhang, Hanxiong Chen, Xu Chen, Xianjie Chen, Chuang Gan, Tong Sun, Xin Luna Dong. “The 1st International Workshop on Machine Reasoning: International Machine Reasoning Conference (MRC 2021)”. WSDM 2021.

[6] Shuyuan Xu, Yingqiang Ge, Yunqi Li, Zuohui Fu, Xu Chen, Yongfeng Zhang. “Causal Collaborative Filtering”. arXiv:2102.01868.

Acknowledgement: This material is based upon work supported by the National Science Foundation under Grant No. IIS-2007907.

Disclaimer: Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.