Unit 7 - Perceptron Activities
This unit explored perceptron-based models, including a simple perceptron, a perceptron for a logic operator, and a multi-layer perceptron (MLP). The tasks focused on implementing and training these models and on examining where perceptrons can be applied to machine learning problems and where they run into difficulty.
Key Learning Outcomes
- Applicability: Demonstrated the effectiveness of perceptrons on linearly separable tasks (e.g., the AND operator) and analyzed their limitations on problems that are not linearly separable.
- Challenges: Encountered matrix shape mismatches during MLP backpropagation, highlighting the importance of setting up layer dimensions correctly and of systematic debugging in neural network design.
- Skills: Gained hands-on experience with weight updates, learning rates, and activation functions and with how each affects model performance (a minimal update-rule sketch follows this list).
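As an illustration of those mechanics, the sketch below shows the standard perceptron learning rule: a step activation produces the prediction, and the weights and bias are nudged by the learning rate times the error. The function and variable names (`step`, `update_weights`, `learning_rate`) are illustrative rather than the exact identifiers used in the unit's notebook.

```python
import numpy as np

def step(z):
    """Step activation: output 1 if the weighted sum reaches the threshold, else 0."""
    return 1 if z >= 0 else 0

def update_weights(weights, bias, x, target, learning_rate=0.1):
    """Apply one perceptron learning-rule update for a single training example."""
    prediction = step(np.dot(weights, x) + bias)
    error = target - prediction                      # 0 if correct, +1 or -1 if wrong
    weights = weights + learning_rate * error * x    # nudge the decision boundary
    bias = bias + learning_rate * error
    return weights, bias
```

The learning rate only scales the size of each correction; a larger value moves the decision boundary in bigger steps.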
Key Artefacts
- Simple Perceptron: Implemented to classify binary data, converging after five iterations as the weights were updated in response to prediction errors.
- AND Operator Perceptron: Modeled the binary AND operation, correctly predicting all four input combinations once the weights had been adjusted (a training sketch follows this list).
- Multi-Layer Perceptron: Attempted to solve non-linearly separable tasks; backpropagation proved challenging, but working through it gave useful insights into neural network debugging and design (see the shape-annotated backpropagation sketch after this list).
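To make the simple and AND-operator artefacts concrete, the following is a minimal training loop for a perceptron fitted to the AND truth table. It is a sketch under assumed settings (zero-initialised weights, a learning rate of 1.0, ten epochs), not the notebook's exact code.

```python
import numpy as np

def step(z):
    """Step activation applied element-wise."""
    return np.where(z >= 0, 1, 0)

def train_perceptron(inputs, targets, learning_rate=1.0, epochs=10):
    """Fit a single-layer perceptron with the error-driven update rule."""
    weights = np.zeros(inputs.shape[1])
    bias = 0.0
    for _ in range(epochs):
        for x, target in zip(inputs, targets):
            error = target - step(np.dot(weights, x) + bias)
            weights = weights + learning_rate * error * x   # correction scaled by the learning rate
            bias = bias + learning_rate * error
    return weights, bias

# AND truth table: the output is 1 only when both inputs are 1.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])

w, b = train_perceptron(X, y)
print(step(X @ w + b))   # converges to [0 0 0 1] with these settings
```

Because AND is linearly separable, the update rule settles on a separating line within a handful of epochs, consistent with the quick convergence noted above.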
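For the multi-layer perceptron, shape bookkeeping is exactly where alignment issues tend to arise: every matrix product in the forward pass must chain, and every gradient must match the shape of the parameter it updates. The sketch below annotates those shapes on a tiny network (2 inputs, 3 hidden units, 1 output), using XOR as a stand-in non-linearly separable task; the architecture, loss, and hyperparameters are assumptions rather than the unit's actual setup.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(42)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)   # inputs, shape (4, 2)
y = np.array([[0], [1], [1], [0]], dtype=float)               # XOR targets, shape (4, 1)

W1 = rng.normal(size=(2, 3))    # input -> hidden weights, shape (n_in, n_hidden)
b1 = np.zeros((1, 3))
W2 = rng.normal(size=(3, 1))    # hidden -> output weights, shape (n_hidden, n_out)
b2 = np.zeros((1, 1))
lr = 0.5

for _ in range(10_000):
    # Forward pass: shapes must chain as (4,2)@(2,3) -> (4,3)@(3,1) -> (4,1).
    h = sigmoid(X @ W1 + b1)       # hidden activations, shape (4, 3)
    out = sigmoid(h @ W2 + b2)     # predictions, shape (4, 1)

    # Backward pass (squared-error loss): each gradient must match its parameter's shape.
    d_out = (out - y) * out * (1 - out)        # (4, 1)
    dW2 = h.T @ d_out                          # (3, 4) @ (4, 1) -> (3, 1), matches W2
    db2 = d_out.sum(axis=0, keepdims=True)     # (1, 1)
    d_h = (d_out @ W2.T) * h * (1 - h)         # (4, 1) @ (1, 3) -> (4, 3)
    dW1 = X.T @ d_h                            # (2, 4) @ (4, 3) -> (2, 3), matches W1
    db1 = d_h.sum(axis=0, keepdims=True)       # (1, 3)

    # Gradient-descent step.
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print(out.round(2))   # should drift toward [[0], [1], [1], [0]] when training succeeds
```

Writing the expected shape beside each line, as above, is the simplest way to catch the transposition and ordering mistakes behind most alignment errors.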
Self-Reflection
- Strengths: Applied perceptron fundamentals successfully to linearly separable problems and built a clearer understanding of how perceptrons fit into machine learning workflows.
- Improvements: Developed a deeper understanding of network design and backpropagation, the techniques needed to handle multi-layer architectures more effectively.