Google AutoML: Vertex Explainable AI — Automatically Trained Models That Provide Feature Importance Information
Recently, I googled “Explainable AI”, and one of the top results was Google Cloud Explainable AI. It is part of their AutoML service. To be honest, I had never heard of AutoML before. It appeared in 2018, 4 years ago. I clearly need to find a platform to keep myself up to date on what is happening. This service aims to standardize and simplify some routine and tedious data science processes, allowing people with no expertise in data engineering to use neural-network-related techniques. I’d like to know what it is and why Google claims it is explainable.
- What is Google AutoML
- How to use AutoML
- Can I use the Explain Function standalone?
1. What is Google AutoML — Vertex Explainable AI
According to their documentation, “Vertex Explainable AI integrates feature attributions into Vertex AI”. In other words, it adds one more step on top of the original Vertex AI / AutoML process. Currently, these explanations are available for classification and regression tasks only. The explainability here is…