In this section we explain how to build the project from source for exploration, testing, or development.
The preferred development environment is Linux with 30 GB+ of free disk space. A CUDA-capable GPU is required for running the machine learning services, such as image annotation and image retrieval.
Before starting, fetch the repository from its source:
$ git clone [email protected]:Dogacel/Kalas-Iris.git
$ cd Kalas-Iris && git pull --recurse-submodules
$ sudo add-apt-repository ppa:deadsnakes/ppa
$ sudo apt update
$ sudo apt install python3.8
$ pip install virtualenv # Highly Suggested
Please install the NVIDIA CUDA 11.0 drivers on your machine.
$ curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.37.2/install.sh | bash # Install node version manager (nvm)
$ nvm install 14.4 # Install the required Node.js version
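As a quick sanity check, a small script like the one below can report which of the prerequisites are still missing. The tool list is an assumption drawn from the steps above; adjust it to your setup.

```shell
# Preflight sketch: report which prerequisite tools are still missing.
# The tool list is an assumption based on the steps above.
missing=""
for tool in git python3 node npm; do
    command -v "$tool" >/dev/null 2>&1 || missing="$missing $tool"
done
if [ -n "$missing" ]; then
    echo "Missing tools:$missing"
else
    echo "All required tools found"
fi
```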
Install Flutter from source and follow the instructions given.
Spin up an Android emulator or connect your Android device and enable USB debugging.
Run the application using the following command:
$ cd mobile
$ flutter pub get
$ flutter run
Install and set up Ngrok by following the instructions. After installing it and connecting your account, use the following command to expose the port where the Flask app is running (5000 by default).
$ ./ngrok http 5000
You can also install and use the Visual Studio Code Extension of Ngrok. Follow the listed instructions and make sure to give the correct port number for the Flask App.
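Once Ngrok is running, its local inspection API at http://127.0.0.1:4040/api/tunnels reports the public URL. A rough sketch of extracting it is shown below; the sample JSON is an assumption, trimmed to the relevant fields, so check the actual output of your Ngrok version.

```shell
# Sample tunnels payload, trimmed to the fields used below (an assumption;
# your ngrok version may return more fields).
sample='{"tunnels":[{"public_url":"https://abc123.ngrok.io","proto":"https"}]}'
# With ngrok running, fetch it live instead:
#   sample=$(curl -s http://127.0.0.1:4040/api/tunnels)
public_url=$(echo "$sample" | grep -o '"public_url":"[^"]*"' | head -n 1 | cut -d'"' -f4)
echo "$public_url"
```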
$ cd api # Move into the api/ folder
$ python -m virtualenv venv # Create a virtual env for python
$ . venv/bin/activate # Activate the virtual env (run 'deactivate' to exit it)
$ source venv/bin/activate # macOS users should use this form to activate the virtual env
$ pip install -r requirements.txt # Install python dependencies
Repeat the steps above for the mmfashion folder as well.
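These repeated steps can be wrapped in a small helper. The sketch below uses the stdlib venv module as a stand-in for virtualenv, and calls pip through the environment's interpreter so no activate/deactivate juggling is needed; folder names come from this guide.

```shell
# Sketch: create a virtual env and install dependencies for one folder.
# Uses python3 -m venv (stdlib) as a stand-in for virtualenv.
setup_env() {
    dir="$1"
    python3 -m venv "$dir/venv"
    # Calling pip via the env's own interpreter installs into that env.
    "$dir/venv/bin/python" -m pip install -r "$dir/requirements.txt"
}
# setup_env api
# setup_env mmfashion
```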
The MMFashion setup is fairly involved; consult the MMFashion Docs whenever you need.
First, visit the Model Zoo Page and download the following models:
- VGG-16 Backbone => checkpoint/vgg16.pth
- Attribute Prediction Coarse / ResNet-50 Global Pooling => checkpoint/resnet_coarse_global.pth
- Category Attribute Prediction Fine / VGG-16 Global Pooling => checkpoint/vgg16_fine_global.pth
- In-Shop Clothes Retrieval / VGG-16 Global Pooling => checkpoint/Retrieve/vgg/global/epoch_100.pth
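A quick check like the following can confirm the downloads landed in the expected places; the paths are copied from the list above.

```shell
# Check that each checkpoint from the list above exists where the configs expect it.
status=0
for f in \
    checkpoint/vgg16.pth \
    checkpoint/resnet_coarse_global.pth \
    checkpoint/vgg16_fine_global.pth \
    checkpoint/Retrieve/vgg/global/epoch_100.pth
do
    if [ -f "$f" ]; then
        echo "ok: $f"
    else
        echo "MISSING: $f"
        status=1
    fi
done
```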
MMFashion Data Preparation Docs
Download the DeepFashion dataset and put it under mmfashion/data.
Set up your file structure as follows:
In-shop
├── Anno
│   ├── segmentation
│   │   ├── DeepFashion_segmentation_train.json
│   │   ├── DeepFashion_segmentation_query.json
│   │   └── DeepFashion_segmentation_gallery.json
│   ├── list_bbox_inshop.txt
│   ├── list_description_inshop.json
│   ├── list_item_inshop.txt
│   └── list_landmarks_inshop.txt
├── Eval
│   └── list_eval_partition.txt
└── Img
    ├── img
    │   └── XXX.jpg
    └── img_highres
        └── XXX.jpg
Then run python prepare_in_shop.py to arrange the dataset. For more information, check the MMFashion Dataset Docs.
Finally, update the backend IP address to point to your local machine in web/src/api/api.js and api/flaskr/routes/image.py.
Create a MongoDB instance locally or on the MongoDB Atlas cloud, and update your server paths in api/flaskr/db.py. After creating a database, create a .env file containing your username and password in the following format:
DATABASE_USERNAME="yourusername"
DATABASE_PASSWORD="yourpassword"
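The file can be written in one step with a heredoc. The variable names come from this guide; the values are placeholders to replace with your own credentials.

```shell
# Write the .env file the backend reads its database credentials from.
# "yourusername" and "yourpassword" are placeholders.
cat > .env <<'EOF'
DATABASE_USERNAME="yourusername"
DATABASE_PASSWORD="yourpassword"
EOF
```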
$ cd api
$ sh run.sh # Or ./run.sh
$ cd web
$ npm start # This might take a while on the first run. It will install dependencies
$ cd mmfashion
$ sudo python app.py
The website can be accessed at http://localhost:5000 in your browser.
- Click on the Image Annotation tab to access the image annotation page.
- Click Upload image to upload a new image. The uploaded image will be automatically annotated. You can also hover over the images to delete or preview them.
- Drag your mouse around the picture to draw a bounding box around the clothing item you want to annotate.
- Click the annotate cropped image button to re-annotate the image. This time only the cropped image will be annotated.
- The suggested Attributes, Categories and Colors appear in these columns. Select the correct annotations via the corresponding checkboxes. The selected annotations can be saved.
- Click the button to save the suggested annotations for the image. These annotations are stored in our backend services to improve the performance of the model.
- Click on the 'Past Reviews' tab to see which suggestions have been made for the automatically annotated products.
- Type the index of the item or paginate using the left and right arrows to see other suggested annotations.
- The suggested annotations are shown on the top row.
- The image of the annotation is shown here.
- Click on the 'Image Retrieval' tab to search for similar items by a given image.
- Click Upload image to upload a new image. The uploaded image will automatically be used to find similar products. This process might take a while depending on your gallery size. You can also hover over the images to delete or preview them.
- Drag your mouse around the picture to draw a bounding box around the clothing item you want to search for.
- Click the retrieve similar products for crop button to re-run the search. This time only the cropped image will be searched.
- This slider shows the retrieved similar items. You can use the arrow keys to go right or left; the slider also changes images automatically at intervals.
- Click on the "Login" tab.
- Click "Signup Now". It will redirect you to the signup page.
- Fill in the user credentials.
- Click "Signup". Upon success, you will be redirected to the login screen.
- Enter your username and password.
- Click login to enter the system.
Refer to the WooCommerce Documentation to setup your WooCommerce API.
- Click on the "Integrations" tab.
- Click "WooCommerce".
- Enter the URL of your website, and the consumer key and consumer secret you received from WooCommerce.
- Click submit to add your integration information to Kalas-Iris.
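Before entering the keys, it can help to verify them directly against the WooCommerce REST API. In the sketch below, the store URL and keys are placeholders; the v3 products endpoint with HTTP basic auth is the standard WooCommerce REST API shape.

```shell
# Placeholders; substitute your own store URL and the keys WooCommerce issued.
STORE_URL="https://example-store.com"
CONSUMER_KEY="ck_xxxxxxxxxxxxxxxx"
CONSUMER_SECRET="cs_xxxxxxxxxxxxxxxx"
# Products endpoint of the WooCommerce REST API (v3):
endpoint="$STORE_URL/wp-json/wc/v3/products"
echo "$endpoint"
# Against a live store, a 200 response confirms the credentials:
#   curl -s -o /dev/null -w '%{http_code}' -u "$CONSUMER_KEY:$CONSUMER_SECRET" "$endpoint"
```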
- Create a Webhook on WooCommerce by following these steps.
- Launch Ngrok with the port number used by the Flask App. Ngrok will then give you the exposed URL for the back-end.
- Make sure you choose "Product created" for Topic, enter http://$LINK_FROM_NGROK/newProductCreated as the "Delivery URL", and set Status to "Active".
- Congrats: now when you create a product with its name and image, it will be annotated automatically.
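To smoke-test the webhook route without WooCommerce, you can build the Delivery URL and POST a minimal payload to it yourself. The host below is a placeholder for the one Ngrok prints, and the payload fields are assumptions; match them to what your Flask route actually reads.

```shell
# Build the Delivery URL from the host ngrok printed (placeholder host).
NGROK_HOST="abc123.ngrok.io"
delivery_url="http://$NGROK_HOST/newProductCreated"
echo "$delivery_url"
# Simulated "Product created" payload; field names are an assumption:
#   curl -s -X POST "$delivery_url" \
#     -H 'Content-Type: application/json' \
#     -d '{"name": "Blue T-Shirt", "images": [{"src": "https://example.com/shirt.jpg"}]}'
```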
Head over to http://kalas-iris.com/app.apk to download the APK for your device.
Once you open the app, you will be welcomed with a camera view.
- Select whether to use the front-facing or the back-facing camera.
- Click the camera icon to take a picture. Taking a picture automatically forwards you to the cropping view.
- The last taken picture appears under this gallery view. Click on the picture to open the image gallery view.
After you take a picture, you will see an image editor.
- Select the aspect ratio, cropped area and rotation using the tools provided on the bottom bar.
- Click the approval icon to continue with the cropped image.
- Click the cancel icon to use the uncropped image.
Once you crop your image, or tap the gallery view button on your camera view, you will see the image information page. On this page you can preview your image or use one of our services with the previewed image.
- Click the Annotate Image button to annotate the image.
- Click the Retrieve Similar Products button to search for similar products.
After you annotate your image you will see the annotation result page.
- Detected colors can be seen on the top row slider.
- Predicted categories can be seen in the left column.
- Predicted attributes can be seen in the right column.
- Click to go back to the image preview view.
- A sliding widget shows you the similar images the service found.
- Click to go back to the image preview view.