Bias: In the context of machine learning models, bias refers to systematic errors in a model's predictions, which often arise when the training data encodes unfair or discriminatory patterns.
Protected attributes: These are characteristics such as race, gender, age, or religion that are legally protected from being used as a basis for discrimination. Fairness metrics often focus on evaluating how well a model avoids making decisions based on these attributes.
Disparate impact: This term describes situations where an algorithm's decisions disproportionately affect certain groups relative to others, even when protected attributes are not used as explicit inputs. Fairness metrics help identify and mitigate disparate impact in machine learning models.
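A common way to quantify disparate impact is the ratio of favorable-outcome rates between groups, often checked against the "four-fifths" heuristic (a ratio below 0.8 is flagged for review). The sketch below is a minimal, hypothetical illustration: the function name, the group labels, and the example data are all assumptions, not part of any specific library.

```python
def disparate_impact_ratio(predictions, groups, favorable=1):
    """Ratio of the lowest to the highest favorable-outcome rate across groups.

    A value close to 1.0 means similar outcome rates; a common heuristic
    flags values below 0.8 as potential disparate impact.
    """
    rates = {}
    for g in set(groups):
        member_preds = [p for p, grp in zip(predictions, groups) if grp == g]
        rates[g] = sum(1 for p in member_preds if p == favorable) / len(member_preds)
    return min(rates.values()) / max(rates.values())

# Hypothetical model outputs for two groups, "A" and "B"
preds  = [1, 1, 0, 1, 1, 0, 0, 1]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

ratio = disparate_impact_ratio(preds, groups)
# Group A's favorable rate is 0.75, group B's is 0.50, so the ratio is ~0.67,
# which falls below the 0.8 threshold and would warrant closer inspection.
```

Note that this ratio measures only one notion of fairness (demographic parity in outcome rates); other metrics, such as equalized odds, compare error rates instead and can disagree with it.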