Add Deepstack/CodeProject-AI.Server detector plugin #6143

Merged
merged 8 commits into from
May 4, 2023
21 changes: 21 additions & 0 deletions docs/docs/configuration/detectors.md
@@ -256,3 +256,24 @@ model:
width: 416
height: 416
```


## Deepstack / CodeProject AI.Server Detector
The Deepstack/CodeProject.AI Server detector plugin for Frigate allows you to integrate DeepStack or CodeProject.AI object detection capabilities into the Frigate video surveillance system. DeepStack and CodeProject.AI are open-source AI platforms that can run on various devices, such as the Raspberry Pi, Nvidia Jetson, and other compatible hardware. It is important to note that the integration is performed over the network, so inference times may not be as fast as those of native Frigate detectors, but it still provides an efficient and reliable solution for object detection and tracking.

### Setup
To get started with CodeProject.AI, visit their official website at [www.codeproject.com](https://www.codeproject.com/Articles/5322557/CodeProject-AI-Server-AI-the-easy-way) and follow the instructions to download and install the AI server on your preferred device. Detailed setup instructions for CodeProject.AI are outside the scope of the Frigate documentation.

To integrate CodeProject.AI with Frigate, you'll need to make the following changes to your Frigate configuration file:

```yaml
detectors:
deepstack:
api_url: http://<your_codeproject_ai_server_ip>:<port>/v1/vision/detection
type: deepstack
api_timeout: 0.1 # seconds
```

Replace `<your_codeproject_ai_server_ip>` and `<port>` with the IP address and port of your CodeProject.AI server.
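For example, if your CodeProject.AI server were reachable at `192.168.1.50` on its default port `32168` (both values here are placeholders; check your own installation), the detector section would look like:

```yaml
detectors:
  deepstack:
    api_url: http://192.168.1.50:32168/v1/vision/detection
    type: deepstack
    api_timeout: 0.1 # seconds
```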

To verify that the integration is working correctly, start Frigate and observe the logs for any error messages related to CodeProject.AI. Additionally, you can check the Frigate web interface to see if the objects detected by CodeProject.AI are being displayed and tracked properly.
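To understand what the plugin consumes, the sketch below normalizes a DeepStack-style `/v1/vision/detection` response into the `[label_idx, confidence, y_min, x_min, y_max, x_max]` rows that Frigate expects. The sample response and labelmap are illustrative, not captured from a real server:

```python
# Illustrative sample of a DeepStack-style detection response (not real output).
sample_response = {
    "predictions": [
        {"label": "car", "confidence": 0.91,
         "x_min": 104, "y_min": 52, "x_max": 208, "y_max": 156},
    ]
}

MODEL_W, MODEL_H = 416, 416  # must match the model width/height in the config


def normalize(response, labelmap):
    """Convert API predictions into Frigate-style normalized detection rows."""
    rows = []
    for det in response["predictions"]:
        label = det["label"].lower()
        if label == "truck":  # the plugin aliases truck -> car
            label = "car"
        idx = next((i for i, v in labelmap.items() if v == label), -1)
        rows.append([
            idx,
            det["confidence"],
            det["y_min"] / MODEL_H,
            det["x_min"] / MODEL_W,
            det["y_max"] / MODEL_H,
            det["x_max"] / MODEL_W,
        ])
    return rows


rows = normalize(sample_response, {0: "person", 1: "car"})
print(rows)  # one row per prediction, coordinates scaled to 0..1
```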
78 changes: 78 additions & 0 deletions frigate/detectors/plugins/deepstack.py
@@ -0,0 +1,78 @@
import logging
import numpy as np
import requests
import io

from frigate.detectors.detection_api import DetectionApi
from frigate.detectors.detector_config import BaseDetectorConfig
from typing import Literal
from pydantic import Field
from PIL import Image


logger = logging.getLogger(__name__)

DETECTOR_KEY = "deepstack"


class DeepstackDetectorConfig(BaseDetectorConfig):
    type: Literal[DETECTOR_KEY]
    api_url: str = Field(
        default="http://localhost:80/v1/vision/detection", title="DeepStack API URL"
    )
    api_timeout: float = Field(default=0.1, title="DeepStack API timeout (in seconds)")
    api_key: str = Field(default="", title="DeepStack API key (if required)")


class DeepStack(DetectionApi):
    type_key = DETECTOR_KEY

    def __init__(self, detector_config: DeepstackDetectorConfig):
        self.api_url = detector_config.api_url
        self.api_timeout = detector_config.api_timeout
        self.api_key = detector_config.api_key
        self.labels = detector_config.model.merged_labelmap

        self.h = detector_config.model.height
        self.w = detector_config.model.width

    def get_label_index(self, label_value):
        # DeepStack reports trucks as a separate label; Frigate tracks them as cars.
        if label_value.lower() == "truck":
            label_value = "car"
        for index, value in self.labels.items():
            if value == label_value.lower():
                return index
        return -1

    def detect_raw(self, tensor_input):
        # Re-encode the raw frame as a JPEG for the HTTP API.
        image_data = np.squeeze(tensor_input).astype(np.uint8)
        image = Image.fromarray(image_data)
        with io.BytesIO() as output:
            image.save(output, format="JPEG")
            image_bytes = output.getvalue()
        response = requests.post(
            self.api_url,
            data={"api_key": self.api_key},  # send the API key, if one is configured
            files={"image": image_bytes},
            timeout=self.api_timeout,
        )
        response_json = response.json()
        detections = np.zeros((20, 6), np.float32)

        for i, detection in enumerate(response_json["predictions"]):
            logger.debug(f"Response: {detection}")
            if detection["confidence"] < 0.4:
                logger.debug("Break due to confidence < 0.4")
                break
            label = self.get_label_index(detection["label"])
            if label < 0:
                logger.debug("Break due to unknown label")
                break
            detections[i] = [
                label,
                float(detection["confidence"]),
                detection["y_min"] / self.h,
                detection["x_min"] / self.w,
                detection["y_max"] / self.h,
                detection["x_max"] / self.w,
            ]

        return detections
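The plugin returns a fixed-size array: up to 20 rows of `[label_idx, score, y_min, x_min, y_max, x_max]`, zero-padded when fewer detections arrive. A standalone sketch of that output contract (not Frigate code; the detection row is fabricated for illustration):

```python
import numpy as np

# The detector's output contract: a zero-padded (20, 6) float32 array of
# [label_idx, score, y_min, x_min, y_max, x_max] rows.
detections = np.zeros((20, 6), np.float32)
detections[0] = [1, 0.91, 0.125, 0.25, 0.375, 0.5]  # one fabricated detection

# Rows with a zero score are padding and carry no detection.
valid = detections[detections[:, 1] > 0]
print(valid.shape)  # (1, 6)
```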