Responsible AI 101: The Role of Error Rate and Coverage in Building Trustworthy Models
In the world of AI and machine learning, especially when we're talking about Responsible AI (RAI), one of the biggest concerns is how fairly and accurately an AI system performs across different groups of people. Two important metrics that help us understand this are Error Rate and Error Coverage.
They sound similar—but they measure very different things. Let’s break them down with simple language and examples.
💡 What is Error Rate?
Error Rate tells you how often the AI makes mistakes within a specific group. Think of it as measuring how accurately the model handles people in that group; comparing error rates across groups is what reveals unfair treatment.
Example:
Say we have an AI that predicts whether people should get approved for a loan.
100 women apply for loans. The AI wrongly denies loans to 10 of them.
Error Rate for women = 10 errors / 100 women = 10%
This means the model is wrong for 1 in every 10 women. If men have an error rate of only 2%, this might suggest bias against women.
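Here's a minimal sketch of that calculation in Python; the data, group sizes, and column names are made up to mirror the example above:

```python
import pandas as pd

# Hypothetical loan-decision outcomes matching the example:
# 10 errors among 100 women, 2 errors among 100 men.
df = pd.DataFrame({
    "group":   ["women"] * 100 + ["men"] * 100,
    "correct": [False] * 10 + [True] * 90    # 10 errors for women
             + [False] * 2  + [True] * 98,   # 2 errors for men
})

# Error rate per group = errors in that group / people in that group
error_rate = 1 - df.groupby("group")["correct"].mean()
print(error_rate)  # men: 0.02, women: 0.10
```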
🔍 What is Error Coverage?
Error Coverage tells you how much of the AI’s total mistakes happen within a specific group. It shows how much that group contributes to all the errors made by the model.
Continuing the example:
Across all users, the AI makes 50 mistakes.
Of those, 10 were mistakes involving women.
Error Coverage for women = 10 errors / 50 total errors = 20%
So, women are involved in 20% of all mistakes made by the model.
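The same toy setup extends naturally to coverage. Here we assume the remaining 40 errors come from everyone else, lumped into a hypothetical "other" bucket:

```python
import pandas as pd

# Hypothetical outcomes: 50 total errors, 10 of them involving women.
df = pd.DataFrame({
    "group":   ["women"] * 100 + ["other"] * 400,
    "correct": [False] * 10 + [True] * 90
             + [False] * 40 + [True] * 360,
})

errors = df[~df["correct"]]

# Error coverage per group = errors in that group / total errors overall
error_coverage = errors.groupby("group").size() / len(errors)
print(error_coverage)  # other: 0.8, women: 0.2
```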

Error Rate vs Error Coverage in RAI Dashboard

Consider a dashboard view of the "Global cohort: All data (default)", which covers the full dataset.
It visualizes how errors are distributed across subgroups using a tree-like structure (typically derived from a decision tree or feature-based slicing).
📊 Key Metrics
Error Coverage: 100%
This indicates that the visualization accounts for all of the model’s mistakes in the dataset (i.e., 100% of total errors are shown in the branches of the tree).
Error Rate: 2.38%
This is the overall rate of errors in the dataset — meaning the model made incorrect predictions in about 2.38% of all cases.
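If you want to reproduce this kind of view, the open-source Responsible AI Toolbox exposes it through the responsibleai and raiwidgets packages. The sketch below is a minimal, assumption-laden example: the toy data, feature names, and label are placeholders, and exact APIs may vary by package version.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from responsibleai import RAIInsights
from raiwidgets import ResponsibleAIDashboard

# Toy loan data; every column here is a placeholder.
df = pd.DataFrame({
    "income":   [30, 55, 80, 42, 65, 90, 28, 50] * 25,
    "age":      [25, 40, 52, 33, 47, 60, 22, 38] * 25,
    "approved": [0, 1, 1, 0, 1, 1, 0, 1] * 25,
})
train_df, test_df = train_test_split(df, test_size=0.3, random_state=0)

model = RandomForestClassifier(random_state=0)
model.fit(train_df.drop(columns="approved"), train_df["approved"])

rai_insights = RAIInsights(
    model, train_df, test_df,
    target_column="approved",
    task_type="classification",
)
rai_insights.error_analysis.add()  # enables the error tree view
rai_insights.compute()

# Serves the interactive dashboard, including error rate and
# error coverage per tree node.
ResponsibleAIDashboard(rai_insights)
```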
🧠 Why Both Metrics Matter
Let’s say you only looked at Error Coverage. If a group had low coverage, it might seem like there’s no problem. But a small group can have a high error rate while contributing only a handful of the total errors, so coverage alone can hide unfair treatment.
On the flip side, a group might have high error coverage just because there are a lot of people in that group—even if the model treats them fairly.
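A quick back-of-the-envelope check of the first case, with invented numbers: a group of 10 people can have a sky-high error rate while barely registering in coverage.

```python
# Hypothetical numbers: a small group inside a larger population.
group_size, group_errors = 10, 5
total_errors = 50

error_rate = group_errors / group_size        # 0.5 -> half the group is misclassified
error_coverage = group_errors / total_errors  # 0.1 -> only 10% of all errors

print(f"error rate: {error_rate:.0%}, error coverage: {error_coverage:.0%}")
```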
👥 Real-Life Example: The Job Interview AI
Imagine an AI system that screens job candidates.
Group A = 100 candidates
Group B = 50 candidates
Suppose:
The AI wrongly rejects 20 people from Group A.
It wrongly rejects 10 people from Group B.
Total mistakes = 30
Now:
| Group | Error Rate | Error Coverage |
| --- | --- | --- |
| Group A | 20/100 = 20% | 20/30 = 66.7% |
| Group B | 10/50 = 20% | 10/30 = 33.3% |
Even though both groups have the same error rate, Group A shows up more in total errors simply because it’s bigger.
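You can verify the table with a few lines of Python; the group sizes and wrongful rejections come straight from the example above:

```python
# Group sizes and wrongful rejections from the example above.
groups = {"A": {"size": 100, "errors": 20},
          "B": {"size": 50,  "errors": 10}}
total_errors = sum(g["errors"] for g in groups.values())  # 30

for name, g in groups.items():
    rate = g["errors"] / g["size"]
    coverage = g["errors"] / total_errors
    print(f"Group {name}: error rate {rate:.1%}, error coverage {coverage:.1%}")

# Group A: error rate 20.0%, error coverage 66.7%
# Group B: error rate 20.0%, error coverage 33.3%
```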
📊 Takeaway
Error Rate helps spot disparities in treatment between groups.
Error Coverage helps show where the model is making most of its mistakes.
When building or auditing an AI system, it's important to look at both metrics to get a complete picture of fairness and reliability.
✅ Conclusion
Understanding the difference between Error Rate and Error Coverage is key to building and auditing fair AI systems. While error rate shows how often mistakes happen within a group, error coverage reveals how much that group contributes to the model’s total errors. By using both metrics together, we get a fuller, more accurate picture of how inclusive and responsible an AI system really is.