
Unit 7 - Perceptron Activities

This unit explored perceptron-based models, including simple perceptrons, logic-operator perceptrons, and multi-layer perceptrons (MLPs). Tasks focused on implementing and training perceptrons and on examining where they are applicable, and where they break down, in machine learning problems.

Key Learning Outcomes

  • Applicability: Demonstrated the effectiveness of perceptrons for linearly separable tasks (e.g., AND operator) and analyzed their limitations for non-linear problems.
  • Challenges: Encountered matrix alignment issues during MLP backpropagation, highlighting the importance of correct dimensional setup and debugging in neural network design.
  • Skills: Gained hands-on experience with weight updates, learning-rate tuning, and activation functions, and with how each affects model performance.

Key Artefacts

  • Simple Perceptron: Implemented to classify binary data, converging after five iterations through iterative weight updates (see the Code Showcase below).
  • AND Operator Perceptron: Modeled the binary AND operation, correctly predicting all four input combinations after training the weights; a usage sketch follows the Code Showcase.
  • Multi-Layer Perceptron: Attempted to solve non-linear tasks; backpropagation proved challenging, but the debugging involved yielded critical insights into neural network design (a minimal MLP sketch also follows the Code Showcase).

Self-Reflection

  • Strengths: Applied perceptron fundamentals successfully to linearly separable problems and gained a clearer picture of where perceptrons fit in machine learning workflows.
  • Improvements: Developing a deeper understanding of network design and backpropagation techniques would make multi-layer architectures easier to handle; the MLP matrix-alignment errors exposed this gap.

Code Showcase

# Simple perceptron implementation
import numpy as np

# Step activation: outputs 1 when the weighted sum is non-negative, else 0
def step_function(x):
    return 1 if x >= 0 else 0

# Perceptron training: one weight per input feature plus a bias term
def perceptron_train(X, y, learning_rate, epochs):
    weights = np.zeros(X.shape[1])
    bias = 0
    for _ in range(epochs):
        for i in range(len(X)):
            # Forward pass: weighted sum plus bias through the step activation
            prediction = step_function(np.dot(X[i], weights) + bias)
            # Perceptron learning rule: nudge weights and bias by the error
            weights += learning_rate * (y[i] - prediction) * X[i]
            bias += learning_rate * (y[i] - prediction)
    return weights, bias
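
The AND operator artefact can be reproduced with the routine above as a short usage sketch. The truth-table arrays are fixed by the AND operation itself, but the learning rate and epoch count below are illustrative choices rather than the exact values used in the unit.

# Usage sketch: training on the AND truth table (illustrative hyperparameters)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])

weights, bias = perceptron_train(X, y, learning_rate=0.1, epochs=10)

# Check every input combination against the learned decision boundary
for x_i in X:
    print(x_i, step_function(np.dot(x_i, weights) + bias))

Because AND is linearly separable, the weights settle within these ten epochs on a boundary that fires only for the input [1, 1].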
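
The MLP artefact itself is not reproduced here, but the matrix-alignment issues noted above come down to keeping array shapes consistent across the forward and backward passes. The following is a minimal sketch under assumed choices, not the unit's actual implementation: the sigmoid activation, XOR data, layer sizes, learning rate, and squared-error gradient are all picked for illustration.

# Minimal MLP sketch: 2 inputs -> 4 hidden units -> 1 output (assumed sizes)
rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# XOR: the classic task a single perceptron cannot solve (illustrative data)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 4))  # (n_inputs, n_hidden)
b1 = np.zeros((1, 4))
W2 = rng.normal(size=(4, 1))  # (n_hidden, n_outputs)
b2 = np.zeros((1, 1))

lr = 0.5
for _ in range(10000):
    # Forward pass: (4, 2) @ (2, 4) -> (4, 4), then (4, 4) @ (4, 1) -> (4, 1)
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: every gradient must match the shape of its parameter
    d_out = (out - y) * out * (1 - out)   # (4, 1)
    d_h = (d_out @ W2.T) * h * (1 - h)    # (4, 1) @ (1, 4) -> (4, 4)

    W2 -= lr * h.T @ d_out                # (4, 4) @ (4, 1) -> (4, 1)
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_h                  # (2, 4) @ (4, 4) -> (2, 4)
    b1 -= lr * d_h.sum(axis=0, keepdims=True)

# Outputs should move toward [[0], [1], [1], [0]]; exact convergence
# depends on the random initialization
print(out.round(2))

Annotating each line with its expected shape, as above, is one habit that makes this kind of alignment error much easier to catch.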