Research on AutoML and Explainability.
Code basis for the paper "Balancing Privacy and Explainability in Federated Learning".
Code for evaluating saliency maps with classification metrics.
Repository for the ReVel framework to measure local-linear explanations for black-box models.
ConsisXAI is an implementation of a technique for evaluating global machine learning explainability (XAI) methods based on feature-subset consistency.
Open and extensible benchmark for XAI methods.
Semantic Meaningfulness: evaluating counterfactual approaches for real-world plausibility.
Replication package for the KNOSYS paper "An Objective Metric for Explainable AI: How and Why to Estimate the Degree of Explainability".