Merge pull request #1153 from venkat-2811/Weapon-Detection-System
Weapon-Detection-System
Niketkumardheeryan authored Oct 29, 2024
2 parents 03e5a9a + 43d964e commit d9e797d
Showing 6 changed files with 1,036 additions and 0 deletions.
2 changes: 2 additions & 0 deletions WEAPON-DETECTION-SYSTEM/.gitattributes
@@ -0,0 +1,2 @@
# Auto detect text files and perform LF normalization
* text=auto
97 changes: 97 additions & 0 deletions WEAPON-DETECTION-SYSTEM/README.md
@@ -0,0 +1,97 @@
Weapon Detection Using YOLOv3
=============================

Project Overview
----------------

This project utilizes the YOLOv3 (You Only Look Once) algorithm for real-time weapon detection. The code supports two modes of operation:

1. **Webcam-based weapon detection**: Uses the webcam to detect weapons in real-time.
2. **Video file-based weapon detection**: Uses a pre-recorded video file to detect weapons.

Features
--------

- Real-time detection of weapons using either a webcam or a video file.
- Display of bounding boxes and labels for detected objects.
- Customizable detection thresholds.

Technologies and Libraries Used
-------------------------------

- **Python (version 3.x)**
- **OpenCV**: Library for computer vision tasks.
- **NumPy**: Library for numerical computations.
- **YOLOv3**: Pre-trained deep learning model for object detection.

Getting Started
---------------

### Prerequisites

Make sure you have the following installed:

- Python 3.x
- OpenCV
- NumPy

You can install the required libraries using pip:

`pip install opencv-python numpy`
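
A quick way to confirm the installation worked (purely illustrative, not part of the repository) is to import both libraries and print their versions:

```python
# Environment check: both imports should succeed without errors.
import cv2
import numpy as np

print("OpenCV version:", cv2.__version__)
print("NumPy version:", np.__version__)
```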



### Setup

1. **Clone the repository and navigate to the project directory**:

   ```bash
   git clone <repository_url>
   cd <repository_name>
   ```

2. **Download YOLOv3 weights and configuration files**:

   Download the YOLOv3 weights and configuration files from the official YOLO website or another reliable source, and place them in the project directory. The scripts in this repository load `yolov3_training_2000.weights` and `yolov3_testing.cfg`, so the downloaded files should use those names; a quick way to verify they are in place is sketched below.
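
A minimal check (illustrative, not a file in the repository) that the model files the scripts expect are present:

```python
# Sanity check: both scripts call
# cv2.dnn.readNet("yolov3_training_2000.weights", "yolov3_testing.cfg"),
# so both files must sit in the project directory.
import os

for required_file in ("yolov3_training_2000.weights", "yolov3_testing.cfg"):
    status = "found" if os.path.isfile(required_file) else "MISSING"
    print(f"{required_file}: {status}")
```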

### Running the Code

Depending on your use case, run the appropriate script:

1. **Webcam-based weapon detection**:

   ```bash
   python weapon_detection1.py
   ```

2. **Video file-based weapon detection**:

   ```bash
   python weapon_detection.py
   ```

### Sample Output

The following image demonstrates the output of the YOLOv3 model, showing bounding boxes and labels for detected objects:

![341933578-7d7e3f96-2c43-4b0e-a6f4-9a0ec38037ae](https://github.com/venkat-2811/WEAPON-DETECTION-SYSTEM/assets/147246984/aa55d2f7-d279-4972-85e9-3c16637d7fc0)


By following the steps outlined above, you can execute the code and perform real-time weapon detection using either a webcam or a video file.

Customization
-------------

You can customize the detection thresholds and other parameters within the script to suit your specific requirements. Detailed comments within the code will guide you through making these adjustments.
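
For example, both scripts keep detections whose confidence exceeds 0.5 and then apply non-maximum suppression with an overlap threshold of 0.4. The sketch below shows where those values sit; the constant names and helper function are illustrative, while the numbers match the scripts:

```python
import cv2

# Values used by weapon_detection.py and weapon_detection1.py: raising
# CONF_THRESHOLD yields fewer but more confident detections, while lowering
# NMS_THRESHOLD merges overlapping boxes more aggressively.
CONF_THRESHOLD = 0.5
NMS_THRESHOLD = 0.4

def filter_boxes(boxes, confidences):
    """Apply non-maximum suppression with the same thresholds as the scripts."""
    indexes = cv2.dnn.NMSBoxes(boxes, confidences, CONF_THRESHOLD, NMS_THRESHOLD)
    return [boxes[int(i)] for i in (indexes.flatten() if len(indexes) > 0 else [])]
```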

Contact
-------

For further assistance or inquiries, please reach out via the repository's contact information.
Description
-----------

This project is a real-time weapon detection system built on OpenCV and the YOLO (You Only Look Once) object-detection framework. It is designed to detect weapons such as knives, guns, and bombs using a custom-trained YOLOv3 model, exposed as a single "Weapon" class in the provided scripts. The implementation loads the YOLO model from custom-trained weights and a configuration file, captures a live video feed from the camera (or reads a video file), and processes each frame to detect and highlight weapons. Detection results are displayed on screen as bounding boxes and labels around the detected weapons. The tool is potentially useful for enhancing security in public spaces by automating weapon detection.
![341661359-02732cda-5175-47bc-86ef-31f15307e2f0](https://github.com/venkat-2811/WEAPON-DETECTION-SYSTEM/assets/147246984/159229ec-db56-416a-a665-87417d69a477)
A demonstration video of weapon detection using YOLO is available here:
https://github.com/charann29/cmr_opensource/assets/130299325/32dcf337-e16e-4c4a-aedd-026f7dbea1b6
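
For readers who want the shape of the pipeline without reading the full files, here is a condensed sketch (not a file in the repository) of the per-frame logic shared by `weapon_detection.py` and `weapon_detection1.py`; labels and error messages are trimmed for brevity:

```python
import cv2
import numpy as np

# Load the custom-trained YOLOv3 model.
net = cv2.dnn.readNet("yolov3_training_2000.weights", "yolov3_testing.cfg")
layer_names = net.getLayerNames()
output_layers = [layer_names[i - 1] for i in net.getUnconnectedOutLayers()]

cap = cv2.VideoCapture(0)  # or cv2.VideoCapture("pistol1.mp4") for a video file
while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break
    height, width = frame.shape[:2]

    # Scale pixels by 1/255 and resize to YOLO's 416x416 input.
    blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416), (0, 0, 0), True, crop=False)
    net.setInput(blob)

    boxes, confidences = [], []
    for out in net.forward(output_layers):
        for det in out:  # det = [cx, cy, w, h, objectness, class scores...]
            confidence = float(det[5:].max())
            if confidence > 0.5:
                w, h = int(det[2] * width), int(det[3] * height)
                x, y = int(det[0] * width - w / 2), int(det[1] * height - h / 2)
                boxes.append([x, y, w, h])
                confidences.append(confidence)

    # Non-maximum suppression, then draw the surviving boxes.
    indexes = cv2.dnn.NMSBoxes(boxes, confidences, 0.5, 0.4)
    for i in (np.array(indexes).flatten() if len(indexes) > 0 else []):
        x, y, w, h = boxes[int(i)]
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 0, 255), 2)

    cv2.imshow("Weapon detection", frame)
    if cv2.waitKey(1) == 27:  # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```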
Binary file added WEAPON-DETECTION-SYSTEM/req.txt
Binary file not shown.
74 changes: 74 additions & 0 deletions WEAPON-DETECTION-SYSTEM/weapon_detection.py
@@ -0,0 +1,74 @@
import cv2
import numpy as np

# Load Yolo
net = cv2.dnn.readNet("yolov3_training_2000.weights", "yolov3_testing.cfg")
net.setPreferableBackend(cv2.dnn.DNN_BACKEND_DEFAULT)
net.setPreferableTarget(cv2.dnn.DNN_TARGET_CPU)
classes = ["Weapon"]

# Open the video file
cap = cv2.VideoCapture("pistol1.mp4")

if not cap.isOpened():
    print("Error: Could not open video file.")
    exit()

# The output layer names and box colors do not change between frames,
# so compute them once before the loop.
layer_names = net.getLayerNames()
output_layers = [layer_names[i - 1] for i in net.getUnconnectedOutLayers()]
colors = np.random.uniform(0, 255, size=(len(classes), 3))

while True:
    ret, img = cap.read()

    if not ret:
        print("Error: Could not read frame from video.")
        break

    height, width, channels = img.shape

    # Detecting objects: scale pixels by 1/255 (0.00392) and resize to YOLO's 416x416 input
    blob = cv2.dnn.blobFromImage(img, 0.00392, (416, 416), (0, 0, 0), True, crop=False)
    net.setInput(blob)
    outs = net.forward(output_layers)

    # Collect boxes whose best class score exceeds the 0.5 confidence threshold
    class_ids = []
    confidences = []
    boxes = []
    for out in outs:
        for detection in out:
            scores = detection[5:]
            class_id = np.argmax(scores)
            confidence = scores[class_id]
            if confidence > 0.5:
                # Object detected: convert normalized center/size to pixel coordinates
                center_x = int(detection[0] * width)
                center_y = int(detection[1] * height)
                w = int(detection[2] * width)
                h = int(detection[3] * height)

                # Rectangle coordinates (top-left corner)
                x = int(center_x - w / 2)
                y = int(center_y - h / 2)

                boxes.append([x, y, w, h])
                confidences.append(float(confidence))
                class_ids.append(class_id)

    # Non-maximum suppression: confidence threshold 0.5, NMS threshold 0.4
    indexes = cv2.dnn.NMSBoxes(boxes, confidences, 0.5, 0.4)
    if len(indexes) > 0:
        for i in indexes.flatten():
            x, y, w, h = boxes[i]
            label = str(classes[class_ids[i]])
            color = colors[class_ids[i]]
            cv2.rectangle(img, (x, y), (x + w, y + h), color, 2)
            cv2.putText(img, label, (x, y + 30), cv2.FONT_HERSHEY_PLAIN, 3, color, 3)

    cv2.imshow("Image", img)
    key = cv2.waitKey(1)
    if key == 27:  # Press 'Esc' key to break the loop
        break

cap.release()
cv2.destroyAllWindows()
74 changes: 74 additions & 0 deletions WEAPON-DETECTION-SYSTEM/weapon_detection1.py
@@ -0,0 +1,74 @@
import cv2
import numpy as np

# Load Yolo
net = cv2.dnn.readNet("yolov3_training_2000.weights", "yolov3_testing.cfg")
net.setPreferableBackend(cv2.dnn.DNN_BACKEND_DEFAULT)
net.setPreferableTarget(cv2.dnn.DNN_TARGET_CPU)
classes = ["Weapon"]

# Open the camera
cap = cv2.VideoCapture(0)

if not cap.isOpened():
    print("Error: Could not open camera.")
    exit()

# The output layer names and box colors do not change between frames,
# so compute them once before the loop.
layer_names = net.getLayerNames()
output_layers = [layer_names[i - 1] for i in net.getUnconnectedOutLayers()]
colors = np.random.uniform(0, 255, size=(len(classes), 3))

while True:
    ret, img = cap.read()

    if not ret:
        print("Error: Could not read frame from camera.")
        break

    height, width, channels = img.shape

    # Detecting objects: scale pixels by 1/255 (0.00392) and resize to YOLO's 416x416 input
    blob = cv2.dnn.blobFromImage(img, 0.00392, (416, 416), (0, 0, 0), True, crop=False)
    net.setInput(blob)
    outs = net.forward(output_layers)

    # Collect boxes whose best class score exceeds the 0.5 confidence threshold
    class_ids = []
    confidences = []
    boxes = []
    for out in outs:
        for detection in out:
            scores = detection[5:]
            class_id = np.argmax(scores)
            confidence = scores[class_id]
            if confidence > 0.5:
                # Object detected: convert normalized center/size to pixel coordinates
                center_x = int(detection[0] * width)
                center_y = int(detection[1] * height)
                w = int(detection[2] * width)
                h = int(detection[3] * height)

                # Rectangle coordinates (top-left corner)
                x = int(center_x - w / 2)
                y = int(center_y - h / 2)

                boxes.append([x, y, w, h])
                confidences.append(float(confidence))
                class_ids.append(class_id)

    # Non-maximum suppression: confidence threshold 0.5, NMS threshold 0.4
    indexes = cv2.dnn.NMSBoxes(boxes, confidences, 0.5, 0.4)
    if len(indexes) > 0:
        for i in indexes.flatten():
            x, y, w, h = boxes[i]
            label = str(classes[class_ids[i]])
            color = colors[class_ids[i]]
            cv2.rectangle(img, (x, y), (x + w, y + h), color, 2)
            cv2.putText(img, label, (x, y + 30), cv2.FONT_HERSHEY_PLAIN, 3, color, 3)

    cv2.imshow("Image", img)
    key = cv2.waitKey(1)
    if key == 27:  # Press 'Esc' key to break the loop
        break

cap.release()
cv2.destroyAllWindows()