
Unit 11 Part 1 - Model Performance Measurement

This unit examined the impact of parameter changes on performance metrics such as AUC, RMSE, and the R² score. The analysis demonstrated how adjustments to model settings shift these evaluation metrics, providing insight into effective machine learning model tuning.

Key Learning Outcomes

  • Legal, Social, and Ethical Issues: Emphasized the need for ethical evaluation of models to avoid biased or misleading results. Highlighted the responsibility of machine learning professionals to ensure model transparency and explainability.
  • Model Evaluation Techniques:
    • AUC (Area Under the ROC Curve): Assessed how well a classification model separates the positive and negative classes across all decision thresholds.
    • R² Score: Measured how much of the variance in the target a regression model explains.
    • RMSE and MAE: Quantified the typical size of prediction errors, with RMSE penalising large errors more heavily than MAE.
  • Demonstrated how parameter adjustments affect metric values and overall model performance.
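A small sketch of why both RMSE and MAE are worth reporting (the numbers here are made up for illustration): because RMSE squares the residuals, a single large miss inflates it far more than it inflates MAE.

```python
import numpy as np
from sklearn.metrics import mean_absolute_error, mean_squared_error

y_true = [3.0, -0.5, 2.0, 7.0]
y_close = [2.5, 0.0, 2.0, 8.0]     # all predictions within 1.0 of the target
y_outlier = [2.5, 0.0, 2.0, 12.0]  # same, except one prediction misses by 5.0

for label, y_pred in [("close", y_close), ("one outlier", y_outlier)]:
    mae = mean_absolute_error(y_true, y_pred)
    # np.sqrt of the MSE gives the RMSE and works across scikit-learn versions
    rmse = np.sqrt(mean_squared_error(y_true, y_pred))
    print(f"{label}: MAE={mae:.3f}, RMSE={rmse:.3f}")
```

The single outlier triples the MAE (0.5 to 1.5) but roughly quadruples the RMSE (0.612 to 2.525), which is why RMSE is the more sensitive of the two to occasional large errors.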

Key Artefacts

  • Code Implementation: Python-based analysis of performance metrics using sample data.
  • Metric Observations: Adjusted parameters (e.g., training iterations, decision thresholds) to study their impact on AUC and the R² score.
  • Discussions and Feedback: Explored the trade-offs between overfitting and underfitting when tuning models.
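The overfitting/underfitting trade-off discussed above can be sketched with a toy experiment (the sine-wave data and polynomial degrees here are illustrative, not from the unit): as model capacity grows, the training R² keeps rising, while the test R² stops improving and then degrades.

```python
import numpy as np
from sklearn.metrics import r2_score

# Noisy samples of a sine wave (illustrative data)
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 30)
y = np.sin(2 * np.pi * x) + rng.normal(0.0, 0.2, size=x.shape)

# Interleaved train/test split
x_train, y_train = x[::2], y[::2]
x_test, y_test = x[1::2], y[1::2]

for degree in (1, 3, 10):
    coefs = np.polyfit(x_train, y_train, degree)  # least-squares polynomial fit
    r2_train = r2_score(y_train, np.polyval(coefs, x_train))
    r2_test = r2_score(y_test, np.polyval(coefs, x_test))
    print(f"degree {degree:2d}: train R^2={r2_train:.3f}, test R^2={r2_test:.3f}")
```

Degree 1 underfits (both scores low), degree 3 captures the underlying curve, and degree 10 chases the noise: its training R² climbs towards 1 while its held-out R² falls behind.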

Self-Reflection

  • Strengths: Developed a comprehensive understanding of performance metrics and their role in evaluating machine learning models.
  • Improvements: Enhanced skills in parameter tuning and interpreting evaluation results to ensure robust models.

Code Showcase

# Calculating R^2 score
from sklearn.metrics import r2_score

# Example data
y_true = [3, -0.5, 2, 7]
y_pred = [2.5, 0.0, 2, 8]

# Calculate R^2
r2 = r2_score(y_true, y_pred)
print("R^2 Score:", r2)

# AUC calculation for classification
from sklearn.metrics import roc_auc_score

y_true = [0, 0, 1, 1]
y_scores = [0.1, 0.4, 0.35, 0.8]

auc = roc_auc_score(y_true, y_scores)
print("AUC Score:", auc)
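Extending the showcase, here is a hedged sketch of the kind of parameter experiment described in the artefacts: sweeping logistic regression's inverse regularisation strength C and observing the held-out AUC. The synthetic dataset and the chosen C values are illustrative assumptions, not the unit's sample data.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic binary classification problem (illustrative)
X, y = make_classification(n_samples=500, n_features=20,
                           n_informative=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Sweep the inverse regularisation strength C and score on held-out data
for C in (0.001, 1.0, 1000.0):
    clf = LogisticRegression(C=C, max_iter=1000).fit(X_train, y_train)
    auc = roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1])
    print(f"C={C}: AUC={auc:.3f}")
```

Scoring on a held-out split rather than the training data is what makes the parameter sweep meaningful: a setting that only improves the training-set metric is a sign of overfitting rather than a better model.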