August 2022: Our paper "Mechanizing Soundness of Off-Policy Evaluation" was published at ITP 2022 (paper link). This paper provides a machine-checked proof of some of the safety components within Seldonian reinforcement learning algorithms.
August 2022: Our paper "Enforcing Delayed-Impact Fairness Guarantees," which presents the first method for providing delayed-impact (long-term improvement in equity) guarantees when the precise relationship between model predictions and long-term impact is not known a priori, is now available on arXiv here.
April 2022: Our paper "Fairness Guarantees under Demographic Shift" was published at ICLR 2022 (paper link). This paper presents a Seldonian algorithm for fair machine learning when the demographics in the training data differ from the demographics when the machine learning model will be deployed.
January 2022: Prof. Thomas gave a talk at the Johns Hopkins Institute for Assured Autonomy Seminar Series titled "Safe and Fair Machine Learning: A Seldonian Approach".
Fall 2021: Professors Philip Thomas and Yuriy Brun received the Google "Award for Inclusion Research" [link] and an award from Facebook's "Building Tools to Enhance Transparency in Fairness and Privacy" program [link].
December 2021: We published three papers at NeurIPS 2021 related to creating Seldonian RL algorithms: Universal Off-Policy Evaluation [link], SOPE: Spectrum of Off-Policy Estimators [link], and Multi-Objective SPIBB: Seldonian Offline Policy Improvement with Safety Constraints in Finite MDPs [link].
December 2021: Prof. Thomas gave a talk at the 2021 NeurIPS Offline Reinforcement Learning Workshop on new tools for safety tests in the reinforcement learning setting titled "Advances in (High-Confidence) Off-Policy Evaluation".
November 2021: Prof. Yuriy Brun gives a Let's Talk Tech seminar at Google, titled "Reducing Bias in Software Systems," advocating for the need for fairness research to improve ad recommendation systems.
November 2021: Prof. Thomas was an invited panelist at the Amherst College CHI Salon: "Making Robots Good People," where he discussed the need for Seldonian algorithms.
September 2021: Prof. Thomas gave a talk at the Brown BigAI Seminar titled "Safe and Fair Machine Learning: A Seldonian Approach".
August 2021: Prof. Thomas gave a talk at Google Brain (Montreal) titled "Safe and Fair Machine Learning: A Seldonian Approach".
July 2021: We published two papers at ICML 2021 related to creating Seldonian algorithms: High Confidence Generalization for Reinforcement Learning [link] and Towards Practical Mean Bounds for Small Samples [link].
May 2021: Prof. Yuriy Brun receives the 2021 IEEE CS TCSE New Directions Award for founding the research subfield of software fairness and for his advocacy for industrial uptake of software fairness testing, which has charted a roadmap for software engineering research in this new area. [link]
February 2021: We published a paper at AAAI 2021 related to creating Seldonian RL algorithms: High Confidence Off-Policy (or Counterfactual) Variance Estimation [link].
January 2021: Prof. Yuriy Brun outlines the state-of-the-art and future needs of funding in a talk titled "Engineering Software to Prevent Undesirable Behavior of Intelligent Machines" to the Networking and Information Technology Research and Development (NITRD) Program Interagency Working Group on Software Productivity Sustainability Quality, an organization of major federal funding agencies, including the National Science Foundation, the Department of Defense, the Department of Energy, the Department of Commerce, the Department of Homeland Security, the Department of Justice, and NASA, among others.
January 2021: Prof. Thomas presented at the Computing and Social Justice Lecture Series at UMass Amherst, where he described the need for Seldonian algorithms in a talk titled "Why are AI Systems Racist, Sexist, and Generally Unfair, and Can We Make Them Fair?"
December 2020: We published two papers at NeurIPS 2020 related to creating Seldonian RL algorithms: Security Analysis of Safe and Seldonian Reinforcement Learning Algorithms [link] and Towards Safe Policy Improvement for Non-Stationary MDPs [link].
August 2020: Prof. Thomas was an invited guest on the podcast "Computing Up", for an episode titled "Channeling Hari Seldon for Safer and Fairer AI - Computing Up 38th Conversation".
July 2020: We published a paper at ICML 2020 related to creating Seldonian RL algorithms: Optimizing for the Future in Non-Stationary MDPs [link].
July 2020: Prof. Thomas gave a talk at the Army Research Laboratory (R2AI Group) titled "Safe and Fair Machine Learning".
February 2020: Prof. Philip Thomas testified to the US House Committee on Financial Services, Task Force on Artificial Intelligence, in a hearing titled "Equitable Algorithms: Examining Ways to Reduce AI Bias in Financial Services." Details can be found here, and video of the hearing is provided below.
January 2020: According to Altmetric, the Science paper is now in the top 0.1% of all publications tracked by Altmetric. Media coverage includes Wired, LA Times, and The Economist, as well as international coverage from Spain (SINC), Russia (Popmech), and China (Sohu), among others. A partial list of coverage can be found here.
December 2019: We published a follow-up to the Science paper at the top conference NeurIPS, presenting a Seldonian algorithm for solving problems called contextual bandits, with example applications to loan approval and predicting criminal recidivism. The paper also includes a user study showing how adaptive online courses powered by Seldonian algorithms can ensure that they do not discriminate against minorities in the classroom. [link]
November 2019: Our paper introducing Seldonian algorithms was published in Science [link]. [Press releases 1, 2]