Unit 11 Part 1 - Model Performance Measurement
This unit examined the impact of parameter changes on performance metrics such as AUC, RMSE, MAE, and R². The analysis demonstrated how adjustments to model settings shift these evaluation metrics, providing insight into effective machine learning model tuning.
Key Learning Outcomes
- Legal, Social, and Ethical Issues: Emphasized the need for ethical evaluation of models to avoid biased or misleading results. Highlighted the responsibility of machine learning professionals to ensure model transparency and explainability.
- Model Evaluation Techniques:
  - AUC (Area Under the ROC Curve): Assessed how well a classifier ranks positive cases above negative ones.
  - R² (coefficient of determination): Measured the proportion of target variance explained by a regression model.
  - RMSE and MAE: Quantified average prediction error in the target's units, with RMSE penalising large errors more heavily.
- Demonstrated how parameter adjustments affect metric values and overall model performance.
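The metrics above can be sketched in plain NumPy. This is a minimal illustration with toy values, not the unit's actual code; the function names and sample data are my own:

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root mean squared error: penalises large errors quadratically."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def mae(y_true, y_pred):
    """Mean absolute error: average error magnitude in the target's units."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.mean(np.abs(y_true - y_pred)))

def r2(y_true, y_pred):
    """Coefficient of determination: share of variance explained (1 is a perfect fit)."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return float(1.0 - ss_res / ss_tot)

def auc(y_true, scores):
    """ROC AUC via the rank-based Mann-Whitney formula (assumes no tied scores)."""
    y_true, scores = np.asarray(y_true), np.asarray(scores, float)
    ranks = np.empty(len(scores))
    ranks[scores.argsort()] = np.arange(1, len(scores) + 1)
    n_pos = int((y_true == 1).sum())
    n_neg = len(y_true) - n_pos
    return float((ranks[y_true == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg))

# Toy regression predictions and classifier scores.
print(rmse([3, -0.5, 2, 7], [2.5, 0.0, 2, 8]))   # ~0.612
print(r2([3, -0.5, 2, 7], [2.5, 0.0, 2, 8]))     # ~0.949
print(auc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))  # 0.75
```

In practice a library such as scikit-learn (`roc_auc_score`, `r2_score`, `mean_squared_error`) would be used; spelling the formulas out makes clear what each metric actually measures.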
Key Artefacts
- Code Implementation: Python-based analysis of performance metrics using sample data.
- Metric Observations: Adjusted parameters (e.g., iteration counts, decision thresholds) to study their impact on AUC and R².
- Discussions and Feedback: Explored the trade-offs between overfitting and underfitting when tuning models.
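The overfitting/underfitting trade-off discussed above can be illustrated with a small capacity sweep. This is a hypothetical sketch using a polynomial-degree parameter on synthetic data, not the unit's own experiment:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 40)
y = np.sin(2 * np.pi * x) + rng.normal(0.0, 0.2, x.size)  # noisy sine wave

# Simple hold-out split: even indices train, odd indices test.
x_tr, y_tr, x_te, y_te = x[::2], y[::2], x[1::2], y[1::2]

def rmse(a, b):
    return float(np.sqrt(np.mean((a - b) ** 2)))

# Sweep model capacity (polynomial degree) and record train/test RMSE.
results = {}
for degree in (1, 3, 9):
    coeffs = np.polyfit(x_tr, y_tr, degree)
    results[degree] = (rmse(y_tr, np.polyval(coeffs, x_tr)),   # train error
                       rmse(y_te, np.polyval(coeffs, x_te)))   # test error

for degree, (tr, te) in results.items():
    print(f"degree={degree}  train RMSE={tr:.3f}  test RMSE={te:.3f}")
```

Training error falls monotonically as capacity grows, but held-out error tells the real story: degree 1 underfits (both errors high), while very high degrees start chasing noise, so test error stops improving or worsens. Comparing the two curves is the basic diagnostic for this trade-off.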
Self-Reflection
- Strengths: Developed a comprehensive understanding of performance metrics and their role in evaluating machine learning models.
- Improvements: Enhanced skills in parameter tuning and in interpreting evaluation results to build more robust models.