This project demonstrates a deep learning model for classifying the Iris dataset, which contains three species of Iris flowers: Setosa, Versicolor, and Virginica. The dataset includes features such as sepal length, sepal width, petal length, and petal width for each species.
- Data Preprocessing 🧹: Clean the dataset and apply feature scaling to improve model performance.
- Model Architecture 🏗️: Build a neural network using Keras for multi-class classification.
- Training & Evaluation 📊: Train the model and evaluate its accuracy in classifying the Iris species.
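The three steps above can be sketched end to end. This is an illustrative pipeline, not the project's exact notebook code: the layer sizes and training settings are assumptions, using scikit-learn for loading/scaling and a small Keras network.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from tensorflow import keras

# Data preprocessing: load, split, and scale the four Iris features
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0
)
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)

# Model architecture: a small multi-class classifier (sizes are illustrative)
model = keras.Sequential([
    keras.layers.Input(shape=(4,)),
    keras.layers.Dense(16, activation="relu"),
    keras.layers.Dense(3, activation="softmax"),  # one output per species
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Training & evaluation
model.fit(X_train, y_train, epochs=50, verbose=0)
loss, acc = model.evaluate(X_test, y_test, verbose=0)
```

Scaling matters here: the four features are on different ranges, and standardizing them speeds up convergence of the optimizer.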
- Setosa's Distinct Sepal Length 📏: Setosa typically has shorter sepal lengths, which are clearly visible in the distribution plot.
- Overlap Between Versicolor and Virginica 🤝: These two species show some overlap in sepal length, but Virginica generally has longer sepals.
- Petal Length Distribution 🌺: Setosa has a narrow range of petal lengths, while Versicolor and Virginica have broader distributions. Virginica generally has longer petals.
- Pairplot Overview 🔠: The pairplot shows that Setosa is easily distinguishable from Versicolor and Virginica, especially in terms of petal length and width, while Versicolor and Virginica overlap slightly.
This project demonstrates how to build a linear regression model from scratch using the Ames Housing Dataset 🏘️. It includes:
- Implementing the Gradient Descent algorithm for optimizing model parameters.
- Analyzing the data to gain insights and visualize trends.
- Evaluating the model's performance using metrics like RMSE.
- Visualizing results such as learning curves and feature impacts.
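The core of the project, batch gradient descent for linear regression, can be sketched as follows. The synthetic data here stands in for the (scaled) housing features; the learning rate and epoch count are illustrative assumptions:

```python
import numpy as np

def gradient_descent(X, y, lr=0.01, epochs=1000):
    """Fit weights w and bias b by minimizing MSE with batch gradient descent."""
    n, d = X.shape
    w = np.zeros(d)
    b = 0.0
    history = []  # RMSE per epoch, for plotting the learning curve
    for _ in range(epochs):
        err = X @ w + b - y            # residuals of current fit
        w -= lr * (2 / n) * (X.T @ err)  # gradient of MSE w.r.t. w
        b -= lr * (2 / n) * err.sum()    # gradient of MSE w.r.t. b
        history.append(np.sqrt(np.mean(err ** 2)))
    return w, b, history

# Synthetic demo: known true weights, so convergence is easy to check
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = X @ np.array([2.0, -1.0, 0.5]) + 3.0
w, b, hist = gradient_descent(X, y, lr=0.05, epochs=500)
```

Tracking RMSE each epoch gives the learning curve directly, which is one of the visualizations the notebook produces.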
The project is organized as follows:
- Main Notebook: All analysis and code are consolidated in the `linear-regression-with-gd.ipynb` file.
- Dataset: Located in the `data` directory as `Ames_Housing.csv`.
- Visualizations: Plots and images are stored in the `visualizations` directory, showcasing learning curves and insights.
Feel free to check out the directory structure, dive into the notebook, and explore how linear regression works with Gradient Descent! 🚀
Classify 🐱 vs. 🐾 (non-cats) using Logistic Regression implemented from scratch. Understand core concepts like forward propagation, backpropagation, and optimization.
- `datasets/`: Training & testing images.
- `Logistic_Regression_with_Neural_Network.ipynb`: Main notebook.

Dependencies: `numpy`, `matplotlib`, `PIL`, `scikit-learn`
- Data Preprocessing: Flatten & normalize images.
- Training: Update weights using gradient descent.
- Evaluation: Analyze accuracy & confusion matrix.
Evaluate performance with metrics like accuracy and visualize results.
Build a simple yet effective neural network to classify cats while learning foundational ML concepts!
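The preprocessing–training–evaluation loop above can be sketched with plain numpy. The toy data stands in for flattened, normalized image vectors, and the learning rate and iteration count are illustrative assumptions:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_logistic(X, y, lr=0.1, iters=2000):
    """X: (n_features, m) flattened inputs; y: (1, m) labels in {0, 1}."""
    n, m = X.shape
    w = np.zeros((n, 1))
    b = 0.0
    for _ in range(iters):
        A = sigmoid(w.T @ X + b)      # forward propagation
        dZ = A - y                    # gradient of cross-entropy w.r.t. pre-activation
        w -= lr * (X @ dZ.T) / m      # backpropagation: weight update
        b -= lr * dZ.sum() / m        # bias update
    return w, b

def predict(w, b, X):
    return (sigmoid(w.T @ X + b) > 0.5).astype(int)

# Toy stand-in for flattened, normalized images: linearly separable labels
rng = np.random.default_rng(1)
X = rng.normal(size=(20, 100))
true_w = rng.normal(size=(20, 1))
y = (true_w.T @ X > 0).astype(int)

w, b = train_logistic(X, y)
acc = (predict(w, b, X) == y).mean()
```

The same `w` and predictions feed directly into an accuracy score and a confusion matrix for the evaluation step.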
This project demonstrates the power of Multi-Layer Perceptron (MLP) in classifying planar data, showcasing how neural networks can solve problems involving non-linearly separable datasets. With the help of gradient descent optimization, the MLP learns to create complex decision boundaries to classify the data points effectively.
- Key Features ✨:
  - Planar Data Classification using MLP 🤖: A hands-on approach to solving non-linearly separable classification tasks.
  - Gradient Descent Optimization 🔄: The model learns by minimizing the binary cross-entropy loss function.
  - Intuitive Visualizations 📊: Visualize the training process with plots like the decision boundary, loss curve, and accuracy progression, stored in the `Visualizations/` directory.
  - Step-by-Step Implementation 📝: Detailed notebook with clear code comments for an educational understanding of MLP training.
- Technical Insights ⚙️:
  - Activation Function: Sigmoid 🟢
  - Loss Function: Binary Cross-Entropy 📉
  - Optimizer: Gradient Descent 🚴‍♂️
  - Metrics: Accuracy 📈 and visualized decision boundaries for model evaluation.
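Putting those pieces together, a one-hidden-layer MLP with sigmoid activations, binary cross-entropy, and gradient descent can be sketched as below. The XOR-quadrant toy data, hidden size, and learning rate are assumptions for illustration, not the notebook's exact setup:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)

# Planar toy data: label depends on the sign of x1*x2 (not linearly separable)
m = 200
X = rng.normal(size=(2, m))
y = ((X[0] * X[1]) > 0).astype(float).reshape(1, m)

# One hidden layer of 8 units (illustrative size)
n_h = 8
W1 = rng.normal(scale=0.5, size=(n_h, 2)); b1 = np.zeros((n_h, 1))
W2 = rng.normal(scale=0.5, size=(1, n_h)); b2 = np.zeros((1, 1))

lr = 1.0
for _ in range(5000):
    # Forward pass
    A1 = sigmoid(W1 @ X + b1)
    A2 = sigmoid(W2 @ A1 + b2)
    # Backward pass (binary cross-entropy gradients)
    dZ2 = A2 - y
    dW2 = dZ2 @ A1.T / m; db2 = dZ2.mean(axis=1, keepdims=True)
    dZ1 = (W2.T @ dZ2) * A1 * (1 - A1)   # sigmoid derivative
    dW1 = dZ1 @ X.T / m; db1 = dZ1.mean(axis=1, keepdims=True)
    # Gradient descent updates
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

acc = ((A2 > 0.5).astype(float) == y).mean()
```

The hidden layer is what lets the model bend its decision boundary around the four quadrants; a single logistic unit could not separate this data.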
- Directory Structure 📂:
  - Main Notebook: `MLP-Planar-Data-Classification.ipynb` 📝, where all the implementation takes place.
  - Visualizations Directory: Contains key plots to track model performance, such as:
    - Decision Boundary 🔵🟠
    - Loss Curve 📉
    - Accuracy Progression 📈
- Contributing 🤝: Contributions are encouraged! Fork the repo, submit issues, or create pull requests for improvements and enhancements.
- Contact 📧: For any questions or feedback, feel free to reach out!