add new project #221
SimranShaikh20 committed Nov 10, 2024
1 parent 89c5053 commit 0807466
Showing 12 changed files with 75,329 additions and 0 deletions.
35,378 changes: 35,378 additions & 0 deletions models/Phishing Website URL Detection by ML/Dataset/1.Benign_list_big_final.csv


14,859 changes: 14,859 additions & 0 deletions models/Phishing Website URL Detection by ML/Dataset/2.online-valid.csv


5,001 changes: 5,001 additions & 0 deletions models/Phishing Website URL Detection by ML/Dataset/3.legitimate.csv


5,001 changes: 5,001 additions & 0 deletions models/Phishing Website URL Detection by ML/Dataset/4.phishing.csv


10,001 changes: 10,001 additions & 0 deletions models/Phishing Website URL Detection by ML/Dataset/5.urldata.csv


13 changes: 13 additions & 0 deletions models/Phishing Website URL Detection by ML/Dataset/README.md
@@ -0,0 +1,13 @@
# Data Files

This folder contains the raw & extracted data files of this project. The description of each file is as follows:

* [1.Benign_list_big_final.csv:](https://github.com/shreyagopal/Phishing-Website-Detection-by-Machine-Learning-Techniques/blob/master/DataFiles/1.Benign_list_big_final.csv) This file contains a list of legitimate URLs; the total count is 35,300. The source of the dataset is the University of New Brunswick: https://www.unb.ca/cic/datasets/url-2016.html.

* [2.online-valid.csv:](https://github.com/shreyagopal/Phishing-Website-Detection-by-Machine-Learning-Techniques/blob/master/DataFiles/2.online-valid.csv) This file is downloaded from the open-source service PhishTank. The service provides a set of phishing URLs in multiple formats (CSV, JSON, etc.) that is updated hourly. The latest data can be downloaded from https://www.phishtank.com/developer_info.php.

* [3.legitimate.csv:](https://github.com/shreyagopal/Phishing-Website-Detection-by-Machine-Learning-Techniques/blob/master/DataFiles/3.legitimate.csv) This file contains the extracted features of 5,000 legitimate URLs randomly selected from the '1.Benign_list_big_final.csv' file.

* [4.phishing.csv:](https://github.com/shreyagopal/Phishing-Website-Detection-by-Machine-Learning-Techniques/blob/master/DataFiles/4.phishing.csv) This file contains the extracted features of 5,000 phishing URLs randomly selected from the '2.online-valid.csv' file.

* [5.urldata.csv:](https://github.com/shreyagopal/Phishing-Website-Detection-by-Machine-Learning-Techniques/blob/master/DataFiles/5.urldata.csv) This file is a combination of the two files above: it contains the extracted features of 10,000 URLs, both legitimate & phishing. A minimal combining sketch follows the list.
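
A minimal sketch of how the combined file could be produced, assuming '3.legitimate.csv' and '4.phishing.csv' share the same feature columns plus a `Label` column (these assumptions are noted in the comments; the project's actual script may differ):

```python
# Sketch: rebuild 5.urldata.csv from the two feature files.
# Assumes both files share identical feature columns plus a 'Label' column
# (0 = legitimate, 1 = phishing).
import pandas as pd

legitimate = pd.read_csv("3.legitimate.csv")
phishing = pd.read_csv("4.phishing.csv")

# Stack the two feature sets and shuffle so the classes are interleaved.
urldata = pd.concat([legitimate, phishing], ignore_index=True)
urldata = urldata.sample(frac=1, random_state=42).reset_index(drop=True)

urldata.to_csv("5.urldata.csv", index=False)
```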
31 changes: 31 additions & 0 deletions models/Phishing Website URL Detection by ML/README.md
@@ -0,0 +1,31 @@
# Phishing URL Detection by Machine Learning Techniques

## Objective
A phishing URL is a common social engineering lure that mimics trustworthy uniform resource locators (URLs) and webpages. The objective of this project is to train machine learning models and deep neural networks on the created dataset to predict phishing websites. Both phishing and benign website URLs are gathered to form a dataset, from which the required URL- and website-content-based features are extracted. The performance of each model is measured and compared.

## Data Collection
The set of phishing URLs is collected from the open-source service **PhishTank**. This service provides a set of phishing URLs in multiple formats (CSV, JSON, etc.) that is updated hourly. From this source, 5,000 random phishing URLs are collected to train the ML models.

The legitimate URLs are taken from a dataset collected by the University of New Brunswick, which contains benign, spam, phishing, malware & defacement URLs. Out of all these types, only the benign URLs are considered for this project. From this dataset, 5,000 random legitimate URLs are collected to train the ML models. A minimal sampling sketch follows.
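
A short sketch of the sampling step, assuming the raw URL lists have been downloaded into the `Dataset` folder; the column handling shown here is an assumption about the raw file layouts:

```python
# Sampling sketch: draw 5,000 random URLs from each raw source.
# File paths and column handling are assumptions about the raw layouts.
import pandas as pd

phish_raw = pd.read_csv("Dataset/2.online-valid.csv")            # PhishTank dump (CSV)
benign_raw = pd.read_csv("Dataset/1.Benign_list_big_final.csv",
                         header=None, names=["url"])             # plain list of benign URLs

# Balanced random samples for training.
phish_sample = phish_raw.sample(n=5000, random_state=12).reset_index(drop=True)
benign_sample = benign_raw.sample(n=5000, random_state=12).reset_index(drop=True)
```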

## Feature Extraction
The following categories of features are extracted from the URL data; a short extraction sketch follows the list:

1. Address Bar based Features <br>
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;In this category 9 features are extracted.
2. Domain based Features<br>
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;In this category 4 features are extracted.
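
A minimal sketch of a few address-bar-based features. The feature names and the length threshold used below are illustrative assumptions, not the project's exact definitions:

```python
# Illustrative address-bar-based features; names and thresholds are assumptions.
import re
from urllib.parse import urlparse

def have_at_sign(url: str) -> int:
    """1 if the URL contains '@' (a common phishing trick), else 0."""
    return 1 if "@" in url else 0

def long_url(url: str) -> int:
    """1 if the URL is unusually long (here: 54+ characters), else 0."""
    return 1 if len(url) >= 54 else 0

def uses_ip_address(url: str) -> int:
    """1 if the host part is a bare IPv4 address instead of a domain name, else 0."""
    host = urlparse(url).netloc
    return 1 if re.fullmatch(r"\d{1,3}(?:\.\d{1,3}){3}", host) else 0

# Example: compute the three indicators for one URL.
print([f("http://192.168.0.1/login@secure") for f in (have_at_sign, long_url, uses_ip_address)])
```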

## Models & Training

Before starting the ML model training, the data is split 80-20, i.e., 8,000 training samples & 2,000 testing samples. From the dataset, it is clear that this is a supervised machine learning task. There are two major types of supervised machine learning problems: classification and regression.

This dataset falls under the classification problem, as each input URL is classified as phishing (1) or legitimate (0). The supervised (classification) machine learning models considered in this project are listed below, followed by a short training sketch:

* Decision Tree
* Random Forest
* Multilayer Perceptrons
* XGBoost
* Autoencoder Neural Network
* Support Vector Machines
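
A minimal sketch of how several of these models might be trained and compared on the same split, using scikit-learn only (XGBoost and the autoencoder require additional libraries and are omitted here). The file path, the numeric-only feature columns, and the `Label` column name are assumptions about the dataset layout:

```python
# Model-comparison sketch; assumes Dataset/5.urldata.csv holds numeric features
# plus a 'Label' column (1 = phishing, 0 = legitimate).
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

data = pd.read_csv("Dataset/5.urldata.csv")
X, y = data.drop(columns=["Label"]), data["Label"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

models = {
    "Decision Tree": DecisionTreeClassifier(max_depth=5, random_state=42),
    "Random Forest": RandomForestClassifier(n_estimators=100, random_state=42),
    "Multilayer Perceptron": MLPClassifier(max_iter=500, random_state=42),
    "Support Vector Machine": SVC(kernel="rbf", random_state=42),
}

for name, model in models.items():
    model.fit(X_train, y_train)
    acc = accuracy_score(y_test, model.predict(X_test))
    print(f"{name}: test accuracy = {acc:.3f}")
```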
37 changes: 37 additions & 0 deletions models/Phishing Website URL Detection by ML/model.py
@@ -0,0 +1,37 @@
# model.py

import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report, accuracy_score
import joblib

def train_model(data_path):
    # Load the dataset
    data = pd.read_csv(data_path)

    # Split the data into features and target
    X = data.drop(columns=['Label'])  # Assuming 'Label' is the target column
    y = data['Label']

    # Split the dataset into training and testing sets
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

    # Initialize the model
    model = RandomForestClassifier(n_estimators=100, random_state=42)

    # Train the model
    model.fit(X_train, y_train)

    # Make predictions
    y_pred = model.predict(X_test)

    # Evaluate the model
    print("Accuracy:", accuracy_score(y_test, y_pred))
    print(classification_report(y_test, y_pred))

    # Save the model as a .pkl file
    joblib.dump(model, 'phishing_model.pkl')

if __name__ == "__main__":
    train_model('data.csv')  # Replace with your dataset path
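
A minimal usage sketch of the saved model, assuming an input row with the same feature columns used during training (the all-zeros row below is a placeholder):

```python
# Inference sketch: load the model saved by train_model() and classify one row.
# The placeholder row is illustrative; real input comes from the feature-extraction step.
# feature_names_in_ is available because the model was fit on a DataFrame (scikit-learn >= 1.0).
import joblib
import pandas as pd

model = joblib.load('phishing_model.pkl')

sample = pd.DataFrame([{col: 0 for col in model.feature_names_in_}])
label = model.predict(sample)[0]
print("Phishing" if label == 1 else "Legitimate")
```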

0 comments on commit 0807466
