
IBM Research Trusted AI 



AI EXPLAINABILITY 360


This extensible open-source toolkit can help you understand how machine learning
models predict labels, using a variety of methods, throughout the AI application
lifecycle. We invite you to use it and improve it.

API Docs ↗︎   Get Code ↗︎
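
If you want to dive straight in, the toolkit is a Python package. A minimal
getting-started sketch, assuming the PyPI package name and module layout from
the project's GitHub README:

    # pip install aix360
    # Each algorithm lives in its own submodule, for example:
    from aix360.algorithms.protodash import ProtodashExplainer

    explainer = ProtodashExplainer()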


NOT SURE WHAT TO DO FIRST? START HERE!


READ MORE

Learn more about explainability concepts, terminology, and tools before you
begin.



TRY A WEB DEMO

Step through the process of explaining models to consumers with different
personas in an interactive web demo that shows a sample of capabilities
available in this toolkit.



WATCH VIDEOS

Watch videos to learn more about the AI Explainability 360 toolkit.



READ A PAPER

Read a paper describing how we designed the AI Explainability 360 toolkit.



USE TUTORIALS

Step through a set of in-depth examples that introduce developers to code that
explains data and models in different industry and application domains.



ASK A QUESTION

Join our AI Explainability 360 Slack Channel to ask questions, make comments,
and tell stories about how you use the toolkit.



VIEW NOTEBOOKS

Open a directory of Jupyter notebooks in GitHub that provide working examples of
explainability in sample datasets. Then share your own notebooks!



CONTRIBUTE

You can add new algorithms and metrics in GitHub. Share Jupyter notebooks
showcasing how you have enabled explanations in your machine learning
application.



LEARN HOW TO PUT THIS TOOLKIT TO WORK FOR YOUR APPLICATION OR INDUSTRY PROBLEM.
TRY THESE TUTORIALS.


CREDIT APPROVAL

See how to explain credit approval models using the FICO Explainable Machine
Learning Challenge dataset.



MEDICAL EXPENDITURE

See how to create interpretable machine learning models in a care management
scenario using Medical Expenditure Panel Survey data.



DERMOSCOPY

See how to explain dermoscopic image datasets used to train machine learning
models that help physicians diagnose skin diseases.



HEALTH AND NUTRITION SURVEY

See how to quickly understand the National Health and Nutrition Examination
Survey datasets to hasten research in epidemiology and health policy.



PROACTIVE RETENTION

See how to explain predictions of a model that recommends employees for
retention actions from a synthesized human resources dataset.



THESE ARE EIGHT STATE-OF-THE-ART EXPLAINABILITY ALGORITHMS THAT CAN ADD
TRANSPARENCY THROUGHOUT AI SYSTEMS. ADD MORE!


BOOLEAN DECISION RULES VIA COLUMN GENERATION (LIGHT EDITION)

Directly learn accurate and interpretable ‘or’-of-‘and’ logical classification
rules.
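
A rough sketch of what using this algorithm looks like in the toolkit; the
class and parameter names follow the project's tutorials and may differ across
versions, and X_train and y_train stand in for your own tabular data:

    from aix360.algorithms.rbm import BooleanRuleCG, BRCGExplainer, FeatureBinarizer

    # Binarize numeric features into threshold tests so rules can use them
    fb = FeatureBinarizer(negations=True)
    X_train_bin = fb.fit_transform(X_train)        # X_train: a pandas DataFrame

    # lambda0/lambda1 trade accuracy against rule-set size and clause length
    explainer = BRCGExplainer(BooleanRuleCG(lambda0=1e-3, lambda1=1e-3))
    explainer.fit(X_train_bin, y_train)            # y_train: binary labels
    print(explainer.explain()['rules'])            # the learned OR-of-ANDs rules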



GENERALIZED LINEAR RULE MODELS

Directly learn accurate and interpretable weighted combinations of ‘and’ rules
for classification or regression.
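
A similar sketch for a generalized linear rule model (again, names follow the
toolkit's tutorials and may vary by version; X_train and y_train are
placeholders):

    from aix360.algorithms.rbm import GLRMExplainer, LogisticRuleRegression, FeatureBinarizer

    fb = FeatureBinarizer(negations=True)
    X_train_bin = fb.fit_transform(X_train)

    # A logistic regression whose features are learned 'and' rules
    glrm = GLRMExplainer(LogisticRuleRegression(lambda0=1e-3, lambda1=1e-3))
    glrm.fit(X_train_bin, y_train)
    print(glrm.explain())          # table of rules and their weights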



PROFWEIGHT

Improve the accuracy of a directly interpretable model such as a decision tree
using the confidence profile of a neural network.
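
Conceptually, the transfer step boils down to re-weighting training samples by
how confidently the network's intermediate layers (via attached linear probes)
already classify them. A minimal conceptual sketch, not the toolkit's own API:

    from sklearn.tree import DecisionTreeClassifier

    def profweight_train(X, y, probe_confidences):
        # probe_confidences: numpy array of shape (n_probes, n_samples), each
        # row holding one intermediate-layer probe's predicted probability of
        # the true label for every training sample
        weights = probe_confidences.mean(axis=0)   # the confidence profile
        tree = DecisionTreeClassifier(max_depth=5)
        tree.fit(X, y, sample_weight=weights)      # confident samples count more
        return tree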



TEACHING AI TO EXPLAIN ITS DECISIONS

Predict both labels and explanations with a model whose training set contains
features, labels, and explanations.
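
In its simplest (Cartesian) form, this folds label and explanation into one
combined class so any off-the-shelf classifier can learn them jointly. A
conceptual sketch, not the toolkit's exact API:

    from sklearn.ensemble import RandomForestClassifier

    def ted_fit(X, y, e, n_explanations):
        # Encode each (label, explanation) pair as a single class id
        combined = y * n_explanations + e
        return RandomForestClassifier().fit(X, combined)

    def ted_predict(clf, X, n_explanations):
        combined = clf.predict(X)
        # Decode back into a label and an explanation per sample
        return combined // n_explanations, combined % n_explanations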



CONTRASTIVE EXPLANATIONS METHOD

Generate justifications for neural network classifications by highlighting
minimally sufficient features, and minimally and critically absent features.
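
The real method solves an elastic-net-regularized optimization over the input;
the brute-force caricature below only conveys the "pertinent negative" idea,
finding a minimal single-feature change that flips the prediction (the model
and candidate-value grid are placeholders):

    def pertinent_negative_1d(model, x, candidate_values):
        # Smallest one-feature edit that changes the predicted class: a
        # brute-force stand-in for CEM's gradient-based optimization.
        original = model.predict(x.reshape(1, -1))[0]
        for i in range(len(x)):
            for v in candidate_values[i]:      # candidate values for feature i
                x_pert = x.copy()
                x_pert[i] = v
                if model.predict(x_pert.reshape(1, -1))[0] != original:
                    return i, v                # the change that flips the label
        return None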



CONTRASTIVE EXPLANATIONS METHOD WITH MONOTONIC ATTRIBUTE FUNCTIONS

Generate contrastive explanations for color images or images with rich
structure.



DISENTANGLED INFERRED PRIOR VAE

Learn disentangled representations for interpreting unlabeled data.
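
The "disentangled inferred prior" comes from regularizing the covariance of the
encoder's latent means toward the identity matrix. A sketch of that penalty
(the DIP-VAE-I variant, with assumed hyperparameter values), computed on a
batch of inferred means:

    import numpy as np

    def dip_vae_i_penalty(mu, lam_offdiag=10.0, lam_diag=5.0):
        # mu: (batch, latent_dim) posterior means from the VAE encoder
        cov = np.cov(mu, rowvar=False)
        off = cov - np.diag(np.diag(cov))
        # Push off-diagonal covariances to 0 and variances to 1
        return lam_offdiag * np.sum(off ** 2) + lam_diag * np.sum((np.diag(cov) - 1.0) ** 2)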



PROTODASH

Select prototypical examples from a dataset.
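
A sketch of calling it in the toolkit; the explain signature and return values
here are an assumption based on the project's examples, and X is a placeholder
numpy array of samples:

    from aix360.algorithms.protodash import ProtodashExplainer

    explainer = ProtodashExplainer()
    # Select 5 prototypes from X that best summarize X itself;
    # W are importance weights, S the indices of the chosen rows.
    W, S, _ = explainer.explain(X, X, m=5)
    prototypes = X[S]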



ALTHOUGH IT IS ULTIMATELY THE CONSUMER WHO DETERMINES THE QUALITY OF AN
EXPLANATION, THE RESEARCH COMMUNITY HAS PROPOSED QUANTITATIVE METRICS AS PROXIES
FOR EXPLAINABILITY.


FAITHFULNESS

Correlation between the feature importance assigned by the interpretability
algorithm and the effect of features on model accuracy.
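
A self-contained sketch of that definition: perturb each feature to a baseline
value, record the drop in the predicted class probability, and correlate the
drops with the assigned importances (the toolkit provides a comparable metric
in aix360.metrics):

    import numpy as np

    def faithfulness(model, x, importances, baseline):
        # Assumes integer class labels 0..K-1 and an sklearn-style model
        cls = model.predict(x.reshape(1, -1))[0]
        p0 = model.predict_proba(x.reshape(1, -1))[0][cls]
        drops = []
        for i in range(len(x)):
            x_pert = x.copy()
            x_pert[i] = baseline[i]    # remove feature i's information
            drops.append(p0 - model.predict_proba(x_pert.reshape(1, -1))[0][cls])
        return np.corrcoef(importances, drops)[0, 1]   # higher is more faithful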



MONOTONICITY

Test whether model accuracy increases as features are added in order of their
importance.
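
Sketched the same way: start from the baseline, restore features in order of
increasing importance, and check that the predicted class probability never
decreases:

    import numpy as np

    def monotonicity(model, x, importances, baseline):
        # Assumes integer class labels 0..K-1 and an sklearn-style model
        cls = model.predict(x.reshape(1, -1))[0]
        x_cur = np.array(baseline, dtype=float)
        probs = []
        for i in np.argsort(importances):      # least to most important
            x_cur[i] = x[i]
            probs.append(model.predict_proba(x_cur.reshape(1, -1))[0][cls])
        return bool(np.all(np.diff(probs) >= 0))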


About this site

AI Explainability 360 was created by IBM Research and donated by IBM to the
Linux Foundation AI & Data.

Additional research sites that advance other aspects of Trusted AI include:

AI Fairness 360
AI Privacy 360
Adversarial Robustness 360
Uncertainty Quantification 360
AI FactSheets 360



