Responsible AI for Designing ML Systems with AWS
Ref: https://www.udemy.com/course/aws-certified-machine-learning-engineer-associate-mla-c01/learn/lecture/45287085
Core Dimensions of Responsible AI
- Fairness
  - Is there bias in the AI model?
- Explainability
  - Can you explain how the AI model produced a given output?
- Privacy and Security
  - Are you training the AI model on sensitive data? Could it leak that data?
- Safety
  - Do people feel safe when using the AI application?
- Controllability
  - How can you control the AI model's output? Can you tune it with dials and knobs?
- Veracity and Robustness
  - Does the AI model tell the truth? How robust is it against hallucinations?
- Governance
  - How do you monitor the AI model? How can you ensure it complies with regulations?
- Transparency
  - Are the capabilities, limitations, and risks of the AI model published? What does the system do?
AWS Tools for Responsible AI
- Amazon Bedrock
  - Model evaluation tools
  - Bedrock Guardrails
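A guardrail is defined as a policy document attached to model invocations. Below is a minimal sketch of what such a definition looks like, assuming boto3's `bedrock` control-plane client and its `create_guardrail` call; the guardrail name, denied topic, and messages are made-up placeholders.

```python
# Sketch: a Bedrock guardrail definition assembled as a request payload.
# Field names follow boto3's bedrock `create_guardrail` call; the guardrail
# name, denied topic, and blocked messages are made-up placeholders.
guardrail_request = {
    "name": "support-bot-guardrail",  # hypothetical name
    "description": "Blocks investment advice and filters harmful content",
    "topicPolicyConfig": {
        "topicsConfig": [{
            "name": "InvestmentAdvice",  # example denied topic
            "definition": "Recommendations about financial investments",
            "type": "DENY",
        }]
    },
    "contentPolicyConfig": {
        "filtersConfig": [
            {"type": "HATE", "inputStrength": "HIGH", "outputStrength": "HIGH"},
            {"type": "VIOLENCE", "inputStrength": "MEDIUM", "outputStrength": "MEDIUM"},
        ]
    },
    "blockedInputMessaging": "Sorry, I can't help with that request.",
    "blockedOutputsMessaging": "Sorry, I can't provide that response.",
}

# With AWS credentials configured, the guardrail could then be created:
# import boto3
# bedrock = boto3.client("bedrock")
# bedrock.create_guardrail(**guardrail_request)
```

The actual API call is left commented out so the payload can be inspected without an AWS account.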
- SageMaker Clarify
  - Bias detection (can also help mitigate bias)
  - Model evaluation (one-off or continuous)
  - Explainability (SHAP values - how the model's output shifts when a feature is eliminated)
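The idea behind SHAP can be illustrated with an exact Shapley-value computation on a toy model: each feature's attribution is its average marginal contribution over all coalitions of the other features, where "eliminating" a feature means resetting it to a baseline value. This is a self-contained sketch of the concept, not SageMaker Clarify's API; the model and feature names are invented.

```python
from itertools import combinations
from math import factorial

# Toy "model": a linear score over three invented features
WEIGHTS = {"income": 2.0, "age": 0.5, "debt": -1.5}

def model(features):
    return sum(WEIGHTS[f] * v for f, v in features.items())

def shapley_values(instance, baseline):
    # Exact Shapley values: average a feature's marginal contribution over
    # every coalition of the remaining features. Features absent from a
    # coalition are "eliminated" by resetting them to their baseline value.
    names = list(instance)
    n = len(names)
    phi = {}
    for i in names:
        others = [f for f in names if f != i]
        total = 0.0
        for k in range(n):
            for subset in combinations(others, k):
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                present = set(subset)
                with_i = {f: instance[f] if f in present or f == i else baseline[f]
                          for f in names}
                without_i = {f: instance[f] if f in present else baseline[f]
                             for f in names}
                total += weight * (model(with_i) - model(without_i))
        phi[i] = total
    return phi

phi = shapley_values({"income": 3, "age": 4, "debt": 2},
                     {"income": 0, "age": 0, "debt": 0})
# The attributions always sum to f(instance) - f(baseline)
```

For a linear model the attribution of each feature reduces to its weight times its deviation from baseline, which makes the toy output easy to verify by hand.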
- SageMaker Model Monitor
  - Get alerts when model quality deviates (e.g. inaccurate responses, data drift)
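Conceptually, Model Monitor compares live traffic against a baseline captured at training time and raises an alert when the distributions deviate. A toy sketch of that idea follows (this is not the Model Monitor API; the z-score threshold and data are illustrative):

```python
import statistics

def drift_alert(baseline, live, z_threshold=3.0):
    # Alert if the mean of the live window drifts more than z_threshold
    # baseline standard deviations away from the baseline mean.
    mu = statistics.mean(baseline)
    sigma = statistics.stdev(baseline)
    z = abs(statistics.mean(live) - mu) / sigma
    return z > z_threshold

# Baseline captured at training time vs. live windows (toy data)
baseline = [10, 11, 9, 10, 12, 10, 11, 9]
print(drift_alert(baseline, [25, 27, 24, 26]))  # drifted feature -> True
print(drift_alert(baseline, [10, 11, 10, 9]))   # stable feature  -> False
```

The real service runs scheduled monitoring jobs against captured endpoint traffic and emits CloudWatch metrics, but the alerting principle is the same.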
- Amazon Augmented AI (A2I)
  - Insert responsible human oversight into the loop to help correct results (e.g. RLHF)
  - ❗ Human oversight itself can be irresponsible! Are you paying low wages to workers in developing countries? With A2I you can bring in a review team with guarantees for responsible AI.
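The core human-in-the-loop pattern behind A2I is simple: confident predictions pass through automatically, while low-confidence ones are queued for human review. A sketch of the routing logic, where the function name and threshold are illustrative rather than the A2I API:

```python
# Sketch of the human-in-the-loop routing idea behind A2I: confident
# predictions pass through, low-confidence ones go to a review queue.
# The function and threshold are illustrative, not the A2I API.
def route_prediction(prediction, confidence, threshold=0.8):
    if confidence >= threshold:
        return {"decision": prediction, "reviewed_by": "model"}
    # In a real A2I setup this branch would start a human review loop
    return {"decision": None, "reviewed_by": "human", "queued": prediction}

print(route_prediction("approve", 0.95)["reviewed_by"])  # model
print(route_prediction("approve", 0.40)["reviewed_by"])  # human
```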
- ML governance in SageMaker
  - SageMaker Role Manager (define permissions for groups of people)
  - SageMaker Model Cards (document a model's intended use, risks, and training details)
  - SageMaker Model Dashboard (unified view of deployed models and their monitoring status)
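A model card is essentially structured documentation that travels with the model. The sketch below shows the kind of content a SageMaker Model Card captures; the field names loosely mirror the model-card content schema, and the model details and metrics are invented for illustration.

```python
import json

# Sketch of the information a SageMaker Model Card captures. Field names
# loosely mirror the model-card content schema; all details are invented.
model_card = {
    "model_overview": {
        "model_name": "churn-predictor",  # hypothetical model
        "problem_type": "Binary classification",
    },
    "intended_uses": {
        "purpose_of_model": "Flag customers likely to churn for retention offers",
        "risk_rating": "Medium",
    },
    "training_details": {
        "training_data": "2023 customer activity logs (PII removed)",
        "objective_function": "Binary cross-entropy",
    },
    "evaluation_details": {
        "dataset": "Held-out Q4 2023 sample",
        "metrics": {"auc": 0.91, "f1": 0.78},  # illustrative numbers
    },
}

# Model cards serialize to JSON, which is how governance tooling consumes them
card_json = json.dumps(model_card, indent=2)
```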