
Practical Explainable AI Using Python: Artificial Intelligence Model Explanations Using Python-based Libraries, Extensions, and Frameworks
Pradeepta Mishra
Summary
You'll begin with an introduction to the basics of model explainability and interpretability, ethical considerations, and biases in predictions generated by AI models. Next, you'll look at methods and systems to interpret linear, non-linear, and time-series models used in AI. The book also covers topics ranging from interpreting models to understanding how an AI algorithm makes its decisions.
Further, you will learn about explainability and interpretability for the most complex ensemble models, using frameworks such as LIME, SHAP, Skater, and ELI5. Moving forward, you will be introduced to model explainability for unstructured data, classification problems, and natural language processing tasks. Additionally, the book looks at counterfactual explanations for AI models. Practical Explainable AI Using Python shines a light on deep learning models, rule-based expert systems, and computer vision tasks using various XAI frameworks.
What You'll Learn
- Review the different ways of making an AI model interpretable and explainable
- Examine bias and good ethical practices in AI models
- Quantify, visualize, and estimate the reliability of AI models
- Design frameworks to unbox the black-box models
- Assess the fairness of AI models
- Understand the building blocks of trust in AI models
- Increase the level of AI adoption
Who This Book Is For
AI engineers, data scientists, and software developers involved in driving AI projects and AI products.
Chapter 2: AI Ethics, Bias, and Reliability
Chapter Goal: This chapter covers frameworks that use XAI Python libraries to control bias, apply the principles of reliability, and maintain ethics while generating predictions.
Chapter 3: Model Explainability for Linear Models Using XAI Components
Chapter Goal: This chapter explains the use of LIME, Skater, SHAP, and other libraries to explain the decisions made by linear models for supervised learning tasks on structured data.
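The chapter itself works with libraries such as SHAP, but the core idea for linear models can be sketched in plain Python: each feature's contribution to a single prediction is its coefficient times how far the feature value sits from a baseline (e.g. the dataset mean), which is essentially what a linear SHAP explanation computes. The feature names and numbers below are purely illustrative.

```python
# Local explanation for a linear model: per-feature contributions
# relative to a baseline input. Illustrative sketch, not library code.

def explain_linear(coefs, intercept, x, baseline):
    """Return the prediction and each feature's contribution vs. the baseline."""
    contributions = {
        name: coefs[name] * (x[name] - baseline[name])
        for name in coefs
    }
    prediction = intercept + sum(coefs[n] * x[n] for n in coefs)
    return prediction, contributions

coefs = {"age": 0.5, "income": 2.0}      # hypothetical fitted coefficients
baseline = {"age": 40, "income": 3.0}    # e.g. dataset means
x = {"age": 50, "income": 4.0}           # the instance to explain

pred, contrib = explain_linear(coefs, intercept=1.0, x=x, baseline=baseline)
# pred = 34.0; contributions: age +5.0, income +2.0
# The contributions sum to (prediction - baseline prediction).
```

By construction the contributions are additive, which is why linear models are the easiest case for explainability and a natural starting point before the non-linear chapters.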
Chapter 4: Model Explainability for Non-Linear Models Using XAI Components
Chapter Goal: This chapter explains the use of LIME, Skater, SHAP, and other libraries to explain the decisions made by non-linear models, such as tree-based models, for supervised learning tasks on structured data.
Chapter 5: Model Explainability for Ensemble Models Using XAI Components
Chapter Goal: This chapter explains the use of LIME, Skater, SHAP, and other libraries to explain the decisions made by ensemble models, such as tree-based ensembles, for supervised learning tasks on structured data.
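One model-agnostic idea that underlies global explanations of ensembles is permutation importance: shuffle a single feature column and measure how much the model's accuracy drops. The toy "model" and data below are illustrative stand-ins for a fitted ensemble.

```python
import random

# Model-agnostic permutation importance: a large accuracy drop after
# shuffling a feature means the model relies on it. Illustrative sketch.

def permutation_importance(predict, X, y, col, seed=0):
    base_acc = sum(predict(row) == t for row, t in zip(X, y)) / len(y)
    rng = random.Random(seed)
    shuffled = [row[col] for row in X]
    rng.shuffle(shuffled)                 # break the feature-target link
    X_perm = [row[:col] + [v] + row[col + 1:] for row, v in zip(X, shuffled)]
    perm_acc = sum(predict(row) == t for row, t in zip(X_perm, y)) / len(y)
    return base_acc - perm_acc            # importance of feature `col`

# Toy classifier: the label depends only on feature 0.
predict = lambda row: int(row[0] > 0.5)
X = [[0.9, 0.1], [0.2, 0.8], [0.7, 0.3], [0.1, 0.9]]
y = [1, 0, 1, 0]

imp0 = permutation_importance(predict, X, y, col=0)
imp1 = permutation_importance(predict, X, y, col=1)  # 0.0: feature 1 is ignored
```

In practice one averages the drop over many shuffles; libraries such as Skater and scikit-learn provide tested implementations of this technique.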
Chapter 6: Model Explainability for Time-Series Models Using XAI Components
Chapter Goal: This chapter explains the use of LIME, Skater, SHAP, and other libraries to explain the decisions made by time-series models on structured data, covering both univariate and multivariate time-series models.
Chapter 7: Model Explainability for Natural Language Processing Using XAI Components
Chapter Goal: This chapter explains the use of LIME, Skater, SHAP, and other libraries to explain the decisions made by models for text classification, summarization, and sentiment classification.
Chapter 8: AI Model Fairness Using a What-If Scenario
Chapter Goal: This chapter explains the use of Google's What-If Tool (WIT) and custom libraries to explain the fairness of an AI model.
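One of the simplest fairness checks a tool like WIT can surface is demographic parity: whether the model's positive-prediction rate is similar across groups. A minimal plain-Python sketch, with made-up predictions and group labels:

```python
# Demographic parity gap: the difference in positive-prediction rate
# between groups. A gap near 0 suggests parity; a large gap flags
# potential unfairness. Data below is illustrative.

def demographic_parity_gap(predictions, groups):
    """Largest minus smallest positive rate across the groups."""
    rates = {}
    for g in set(groups):
        preds = [p for p, gi in zip(predictions, groups) if gi == g]
        rates[g] = sum(preds) / len(preds)
    values = sorted(rates.values())
    return values[-1] - values[0]

preds  = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
gap = demographic_parity_gap(preds, groups)  # 0.75 - 0.25 = 0.5
```

Demographic parity is only one of several fairness criteria (equalized odds and equal opportunity are others); the chapter's what-if analysis lets you probe these interactively rather than with a single number.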
Chapter 9: Model Explainability for Deep Neural Network Models
Chapter Goal: This chapter explains the use of Python libraries to interpret neural network and deep learning models, such as LSTM and CNN models, using techniques such as SmoothGrad and DeepLIFT.
Chapter 10: Counterfactual Explanations for XAI Models
Chapter Goal: This chapter provides counterfactual explanations for the predictions of individual instances. The "event" is the predicted outcome of an instance, and the "causes" are the particular feature values of that instance that were input to the model and "caused" a certain prediction.
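A counterfactual answers "what is the smallest change to this instance that flips the prediction?" The greedy search below is a deliberately simplified sketch of that idea (real counterfactual libraries optimize for proximity and plausibility); the loan-style model and feature names are hypothetical.

```python
# Greedy counterfactual search: nudge one feature at a time until the
# model's decision flips. Illustrative sketch with a toy model.

def find_counterfactual(predict, x, step=1.0, max_iters=100):
    """Return a modified copy of x whose prediction flips, or None."""
    target = 1 - predict(x)
    current = dict(x)
    for _ in range(max_iters):
        if predict(current) == target:
            return current
        # try nudging each feature up or down; keep the first flip found
        for name in current:
            for delta in (step, -step):
                trial = dict(current, **{name: current[name] + delta})
                if predict(trial) == target:
                    return trial
        # no single nudge flips it: move the first feature and keep going
        first = next(iter(current))
        current[first] += step
    return None

# Toy model: approve (1) when income - debt > 5.
predict = lambda f: int(f["income"] - f["debt"] > 5)
x = {"income": 4.0, "debt": 1.0}      # currently rejected (prediction 0)
cf = find_counterfactual(predict, x)  # a nearby instance that gets approved
```

The returned counterfactual is the explanation itself: "had your income been this much higher (or debt lower), the decision would have been approve."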
Chapter 11: Contrastive Explanation for Machine Learning
Chapter Goal: In this chapter we use foil trees, a model-agnostic approach to extracting explanations that finds the set of rules that causes the actual outcome (fact) to be predicted instead of the other (foil).
Chapter 12: Model-Agnostic Explanations by Identifying Prediction Invariance
Chapter Goal: In this chapter we use anchor-LIME (a-LIME), a model-agnostic technique that produces high-precision rule-based explanations with clear coverage boundaries.
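An anchor-style explanation is an if-then rule scored by two numbers: precision (how often instances matching the rule get the same prediction as the explained instance) and coverage (what fraction of the data the rule applies to). Computing those two scores for a candidate rule is straightforward; the rule, model, and data below are illustrative.

```python
# Precision and coverage of a candidate rule-based explanation.
# Illustrative sketch of the scoring step behind anchor methods.

def rule_precision_coverage(rule, predict, X, anchor_prediction):
    """Score a rule against a dataset and a fixed anchored prediction."""
    covered = [row for row in X if rule(row)]
    coverage = len(covered) / len(X)
    if not covered:
        return 0.0, 0.0
    precision = sum(predict(r) == anchor_prediction
                    for r in covered) / len(covered)
    return precision, coverage

rule = lambda row: row["age"] > 30                           # candidate rule
predict = lambda row: int(row["age"] > 30 and row["income"] > 2)
X = [{"age": 35, "income": 3}, {"age": 35, "income": 1},
     {"age": 25, "income": 3}, {"age": 40, "income": 4}]

prec, cov = rule_precision_coverage(rule, predict, X, anchor_prediction=1)
# rule covers 3 of 4 rows (coverage 0.75); 2 of those predict 1 (precision 2/3)
```

An anchor method searches for the shortest rule whose precision clears a high threshold, which is what gives these explanations their clear coverage boundaries.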
Chapter 13: Model Explainability for Rule-Based Expert Systems
Chapter Goal: In this chapter we use anchor-LIME (a-LIME), a model-agnostic technique that produces high-precision rule-based explanations with clear coverage boundaries.
Chapter 14: Model Explainability for Computer Vision
Chapter Goal: In this chapter we use Python libraries to explain computer vision tasks such as object detection and image classification models.
Technical specifications
Format | Print |
Publisher | Apress |
Author | Pradeepta Mishra |
Publication date | 14/12/2021 |
Pages | 344 |
EAN13 | 9781484271575 |