updated 9 PyTorch Android demo apps to PyTorch 1.10 (pytorch#210)
* initial commit

* Revert "initial commit"

This reverts commit 5a65775.

* main readme and helloworld/demo app readme updates

* updated 9 Android demo apps to PyTorch 1.10

* vit4mnist model update
jeffxtang authored Dec 8, 2021
1 parent bb5b9f0 commit 76ba0e0
Showing 27 changed files with 128 additions and 98 deletions.
15 changes: 8 additions & 7 deletions ASLRecognition/README.md
@@ -6,9 +6,9 @@

## Prerequisites

-* PyTorch 1.9.0 and torchvision 0.10.0 (Optional)
+* PyTorch 1.10.0 and torchvision 0.11.1 (Optional)
* Python 3.8 or above (Optional)
-* Android Pytorch library pytorch_android_lite:1.9.0, pytorch_android_torchvision:1.9.0
+* Android Pytorch library pytorch_android_lite:1.10.0, pytorch_android_torchvision_lite:1.10.0
* Android Studio 4.0.1 or later

## Quick Start
@@ -17,9 +17,9 @@ To Test Run the ASL recognition Android App, follow the steps below:

### 1. Train and Prepare the Model

-If you don't have PyTorch 1.9.0 and torchvision 0.10.0 installed, or if you don't want to install them, you can skip this step. The trained, scripted and optimized model is already included in the repo, located at `ASLRecognition/app/src/main/assets`.
+If you don't have PyTorch 1.10.0 and torchvision 0.11.1 installed, or if you don't want to install them, you can skip this step. The trained, scripted and optimized model is already included in the repo, located at `ASLRecognition/app/src/main/assets`.

-Otherwise, open a terminal window, make sure you have torch 1.9.0 and torchvision 0.10.0 installed (check with a command like `pip list|grep torch`, or install them with `pip install torch torchvision`), then run the following commands:
+Otherwise, open a terminal window, make sure you have torch 1.10.0 and torchvision 0.11.1 installed (check with a command like `pip list|grep torch`, or install them with `pip install torch torchvision`), then run the following commands:

```
git clone https://github.com/pytorch/android-demo-app
cd android-demo-app/ASLRecognition/scripts
```
Download the ASL alphabet dataset [here](https://www.kaggle.com/grassknoted/asl-alphabet) and unzip it into the `ASLRecognition/scripts` folder. Then run the scripts below, which are based on this [tutorial](https://debuggercafe.com/american-sign-language-detection-using-deep-learning/), to pre-process the training images, train the model, and convert and optimize the trained model to the mobile interpreter format:

```
+pip install opencv-python pandas sklearn imutils matplotlib
python preprocess_image.py
python create_csv.py
python train.py --epochs 5 # on a machine without GPU this can take hours
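```

The conversion at the end of this pipeline follows the standard PyTorch mobile-interpreter export pattern. A minimal sketch, assuming a trained model saved by `train.py` (the checkpoint file name is illustrative; the actual conversion script may differ in its details):

```
import torch
from torch.utils.mobile_optimizer import optimize_for_mobile

# Illustrative checkpoint name; train.py's actual output name may differ
model = torch.load('asl_trained.pth', map_location='cpu')
model.eval()

# Script the model, optimize it for mobile, and save it for the lite interpreter
scripted = torch.jit.script(model)
optimized = optimize_for_mobile(scripted)
optimized._save_for_lite_interpreter('asl.ptl')
```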
@@ -51,8 +52,8 @@ For more information on how to use a test script like the above to find out the
Open the ASLRecognition project using Android Studio. Note the app's `build.gradle` file has the following lines:

```
-implementation 'org.pytorch:pytorch_android_lite:1.9.0'
-implementation 'org.pytorch:pytorch_android_torchvision:1.9.0'
+implementation 'org.pytorch:pytorch_android_lite:1.10.0'
+implementation 'org.pytorch:pytorch_android_torchvision_lite:1.10.0'
```

and in MainActivity.java, the code below loads the model:
@@ -68,7 +69,7 @@ Select an Android emulator or device and build and run the app. Some of the 26 t
![](screenshot2.png)

To test the live ASL alphabet gesture recognition, after you get familiar with the 26 ASL signs by tapping Next and Recognize, select the LIVE button and make some ASL gestures in front of the camera. A screencast of the app running is available [here](https://drive.google.com/file/d/1NxehGHlU-RiYP_JU9qkpCEcQR2hG-vyv/view?usp=sharing).

### 4. What's Next
With a different sign language dataset such as the RWTH-PHOENIX-Weather 2014 MS [Public Hand Shape Dataset](https://www-i6.informatik.rwth-aachen.de/~koller/1miohands-data/) or the [Continuous Sign Language Recognition Dataset](https://www-i6.informatik.rwth-aachen.de/~koller/RWTH-PHOENIX/) and a state-of-the-art [sign language transformer](https://arxiv.org/pdf/2003.13830v1.pdf) based model, a more powerful sign language recognition Android app can be developed based on the app here.
4 changes: 2 additions & 2 deletions ASLRecognition/app/build.gradle
@@ -40,6 +40,6 @@ dependencies {
implementation "androidx.camera:camera-core:$camerax_version"
implementation "androidx.camera:camera-camera2:$camerax_version"

-implementation 'org.pytorch:pytorch_android_lite:1.9.0'
-implementation 'org.pytorch:pytorch_android_torchvision:1.9.0'
+implementation 'org.pytorch:pytorch_android_lite:1.10.0'
+implementation 'org.pytorch:pytorch_android_torchvision_lite:1.10.0'
}
Binary file modified ASLRecognition/app/src/main/assets/asl.ptl
14 changes: 7 additions & 7 deletions ASLRecognition/scripts/test.py
@@ -8,18 +8,20 @@
import numpy as np
import cv2
import argparse
-import albumentations
+import torchvision.transforms as transforms
import torch.nn.functional as F
import time
import cnn_models
+from PIL import Image

# construct the argument parser and parse the arguments
parser = argparse.ArgumentParser()
parser.add_argument('-i', '--img', default='../app/src/main/assets/C1.jpg', type=str,
help='path for the image to test on')
args = vars(parser.parse_args())

-aug = albumentations.Compose([
-    albumentations.Resize(224, 224, always_apply=True),
+aug = transforms.Compose([
+    transforms.Resize((224, 224)),
])

# load label binarizer
@@ -29,10 +31,8 @@
print(model)
print('Model loaded')

-image = cv2.imread(f"{args['img']}")
-image_copy = image.copy()
-
-image = aug(image=np.array(image))['image']
+image = Image.open(f"{args['img']}")
+image = aug(image)
image = np.transpose(image, (2, 0, 1)).astype(np.float32)
image = torch.tensor(image, dtype=torch.float)
image = image.unsqueeze(0)
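Note that torchvision's `Resize` operates on PIL images (or tensors), which is why `cv2.imread` gives way to `Image.open` above; keep in mind that PIL loads images as RGB while OpenCV loads them as BGR.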
8 changes: 4 additions & 4 deletions D2Go/ObjectDetection/app/build.gradle
@@ -45,7 +45,7 @@ android {


packagingOptions {
-doNotStrip '**.so'
+pickFirst "**"
}
}

@@ -64,7 +64,7 @@ dependencies {
implementation "androidx.camera:camera-core:$camerax_version"
implementation "androidx.camera:camera-camera2:$camerax_version"

-implementation 'org.pytorch:pytorch_android:1.8.0'
-implementation 'org.pytorch:pytorch_android_torchvision:1.8.0'
-implementation 'org.pytorch:torchvision_ops:0.9.0'
+implementation 'org.pytorch:pytorch_android_lite:1.10.0'
+implementation 'org.pytorch:pytorch_android_torchvision_lite:1.10.0'
+implementation 'org.pytorch:torchvision_ops:0.10.0'
}
12 changes: 6 additions & 6 deletions D2Go/README.md
@@ -8,16 +8,16 @@ This D2Go Android demo app shows how to prepare and use the D2Go model on Androi

## Prerequisites

-* PyTorch 1.8.0 and torchvision 0.9.0 (Optional)
+* PyTorch 1.10.0 and torchvision 0.11.1 (Optional)
* Python 3.8 or above (Optional)
-* Android Pytorch library 1.8.0, torchvision library 1.8.0, torchvision_ops library 0.9.0
+* Android Pytorch library pytorch_android_lite 1.10.0, pytorch_android_torchvision_lite 1.10.0, torchvision_ops library 0.10.0
* Android Studio 4.0.1 or later

## Quick Start

This section shows how to create and use the D2Go model and the pre-built torchvision-ops library in a completed Android app. To just build and run the app without creating the D2Go model yourself, go directly to Step 4.

-1. Install PyTorch 1.8.0 and torchvision 0.9.0, for example:
+1. Install PyTorch 1.10.0 and torchvision 0.11.1, for example:

```
conda create -n d2go python=3.8.5
```

@@ -54,9 +54,9 @@ In Android Studio, open `android-demo-app/D2Go` (not `android-demo-app/D2Go/Obje

The main changes needed to use the D2Go model and the pre-built torchvision-ops library are adding
```
-implementation 'org.pytorch:pytorch_android:1.8.0'
-implementation 'org.pytorch:pytorch_android_torchvision:1.8.0'
-implementation 'org.pytorch:torchvision_ops:0.9.0'
+implementation 'org.pytorch:pytorch_android_lite:1.10.0'
+implementation 'org.pytorch:pytorch_android_torchvision_lite:1.10.0'
+implementation 'org.pytorch:torchvision_ops:0.10.0'
```
in the build.gradle file and
```
```
2 changes: 1 addition & 1 deletion D2Go/build.gradle
@@ -24,7 +24,7 @@ allprojects {
}

dependencies {
-classpath 'com.android.tools.build:gradle:3.3.2'
+classpath 'com.android.tools.build:gradle:4.0.1'
}
}

12 changes: 6 additions & 6 deletions ImageSegmentation/README.md
@@ -6,9 +6,9 @@ This repo offers a Python script that converts the [PyTorch DeepLabV3 model](htt

## Prerequisites

-* PyTorch 1.9.0 and torchvision 0.10.0 (Optional)
+* PyTorch 1.10.0 and torchvision 0.11.1 (Optional)
* Python 3.8 or above (Optional)
-* Android Pytorch library pytorch_android_lite:1.9.0, pytorch_android_torchvision:1.9.0
+* Android Pytorch library pytorch_android_lite:1.10.0, pytorch_android_torchvision_lite:1.10.0
* Android Studio 4.0.1 or later

## Quick Start
Expand All @@ -17,9 +17,9 @@ To Test Run the Image Segmentation Android App, follow the steps below:

### 1. Prepare the Model

-If you don't have the PyTorch 1.9.0 environment set up, you can download the optimized-for-mobile Mobile Interpreter version of the model file to the `android-demo-app/ImageSegmentation/app/src/main/assets` folder using the link [here](https://drive.google.com/file/d/1FCm-pHsLiPiiXBsJwookAa0VFS2zTgv-/view?usp=sharing).
+If you don't have the PyTorch 1.10.0 environment set up, you can download the optimized-for-mobile Mobile Interpreter version of the model file to the `android-demo-app/ImageSegmentation/app/src/main/assets` folder using the link [here](https://pytorch-mobile-demo-apps.s3.us-east-2.amazonaws.com/deeplabv3_scripted.pt).

-Otherwise, open a terminal window, first install PyTorch 1.9.0 and torchvision 0.10.0 with a command like `pip install torch torchvision`, then run the following commands:
+Otherwise, open a terminal window, first install PyTorch 1.10.0 and torchvision 0.11.1 with a command like `pip install torch torchvision`, then run the following commands:

```
git clone https://github.com/pytorch/android-demo-app
```

@@ -34,8 +34,8 @@ The Python script `deeplabv3.py` is used to generate the TorchScript-formatted m
Open the ImageSegmentation project using Android Studio. Note the app's `build.gradle` file has the following lines:

```
-implementation 'org.pytorch:pytorch_android_lite:1.9.0'
-implementation 'org.pytorch:pytorch_android_torchvision:1.9.0'
+implementation 'org.pytorch:pytorch_android_lite:1.10.0'
+implementation 'org.pytorch:pytorch_android_torchvision_lite:1.10.0'
```

and in MainActivity.java, the code below loads the model:
4 changes: 2 additions & 2 deletions ImageSegmentation/app/build.gradle
@@ -35,6 +35,6 @@ dependencies {
androidTestImplementation 'androidx.test.ext:junit:1.1.2'
androidTestImplementation 'androidx.test.espresso:espresso-core:3.3.0'

-implementation 'org.pytorch:pytorch_android_lite:1.9.0'
-implementation 'org.pytorch:pytorch_android_torchvision:1.9.0'
+implementation 'org.pytorch:pytorch_android_lite:1.10.0'
+implementation 'org.pytorch:pytorch_android_torchvision_lite:1.10.0'
}
2 changes: 1 addition & 1 deletion ImageSegmentation/deeplabv3.py
@@ -1,7 +1,7 @@
import torch
from torch.utils.mobile_optimizer import optimize_for_mobile

-model = torch.hub.load('pytorch/vision:v0.9.0', 'deeplabv3_resnet50', pretrained=True)
+model = torch.hub.load('pytorch/vision:v0.11.0', 'deeplabv3_resnet50', pretrained=True)
model.eval()

scripted_module = torch.jit.script(model)
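The tail of the script is collapsed in this diff view; it presumably finishes with the usual lite-interpreter export steps, along these lines:

```
# Presumed remainder of deeplabv3.py: optimize the scripted module
# and save it for the lite interpreter (output file name assumed)
optimized_model = optimize_for_mobile(scripted_module)
optimized_model._save_for_lite_interpreter("deeplabv3_scripted.ptl")
```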
15 changes: 9 additions & 6 deletions ObjectDetection/README.md
@@ -8,9 +8,9 @@

## Prerequisites

-* PyTorch 1.9.0 or later (Optional)
+* PyTorch 1.10.0 and torchvision 0.11.1 (Optional)
* Python 3.8 (Optional)
-* Android Pytorch library pytorch_android_lite:1.9.0 and pytorch_android_torchvision:1.9.0
+* Android Pytorch library pytorch_android_lite:1.10.0, pytorch_android_torchvision_lite:1.10.0
* Android Studio 4.0.1 or later

## Quick Start
Expand All @@ -19,9 +19,7 @@ To Test Run the Object Detection Android App, follow the steps below:

### 1. Prepare the model

-If you don't have the PyTorch environment set up to run the script, you can download the model file `yolov5s.torchscript.ptl` [here](https://drive.google.com/u/1/uc?id=1_MF7NVi9Csm1lizoSCp1wCtUUUpuhwet&export=download) to the `android-demo-app/ObjectDetection/app/src/main/assets` folder, then skip the rest of this step and go to step 2 directly.
-
-Be aware that the downloadable model file was created with PyTorch 1.9.0, matching the PyTorch Android library 1.9.0 specified in the project's `build.gradle` file as `implementation 'org.pytorch:pytorch_android_lite:1.9.0'`. If you use a different version of PyTorch to create your model by following the instructions below, make sure you specify the same PyTorch Android library version in the `build.gradle` file to avoid possible errors caused by the version mismatch. Furthermore, if you want to use the latest PyTorch master code to create the model, follow the steps at [Building PyTorch Android from Source](https://pytorch.org/mobile/android/#building-pytorch-android-from-source) and [Using the PyTorch Android Libraries Built](https://pytorch.org/mobile/android/#using-the-pytorch-android-libraries-built-from-source-or-nightly) on how to use the model in Android.
+If you don't have the PyTorch environment set up to run the script, you can download the model file `yolov5s.torchscript.ptl` [here](https://pytorch-mobile-demo-apps.s3.us-east-2.amazonaws.com/yolov5s.torchscript.ptl) to the `android-demo-app/ObjectDetection/app/src/main/assets` folder, then skip the rest of this step and go to step 2 directly.

The Python script `export.py` in the `models` folder of the [YOLOv5 repo](https://github.com/ultralytics/yolov5) is used to generate a TorchScript-formatted YOLOv5 model named `yolov5s.torchscript.pt` for mobile apps.
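Before using the exported file in the app, it can be sanity-checked from Python. A minimal sketch, assuming the default 640x640 export resolution (the file name and input shape here are assumptions, not part of this commit):

```
import torch

# Load the exported TorchScript model (the regular JIT file, not the .ptl lite file)
model = torch.jit.load('yolov5s.torchscript.pt')
model.eval()

# Dummy forward pass at the assumed 640x640 input resolution
dummy = torch.rand(1, 3, 640, 640)
with torch.no_grad():
    out = model(dummy)
print(type(out))  # inspect the output structure
```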

@@ -51,7 +49,12 @@ Note that small sized version of the YOLOv5 model, which runs faster but with le

### 2. Build with Android Studio

-Start Android Studio, then open the project located in `android-demo-app/ObjectDetection`
+Start Android Studio, then open the project located in `android-demo-app/ObjectDetection`. Note the app's `build.gradle` file has the following lines:

```
+implementation 'org.pytorch:pytorch_android_lite:1.10.0'
+implementation 'org.pytorch:pytorch_android_torchvision_lite:1.10.0'
```

### 3. Run the app

4 changes: 2 additions & 2 deletions ObjectDetection/app/build.gradle
@@ -37,6 +37,6 @@ dependencies {
implementation "androidx.camera:camera-core:$camerax_version"
implementation "androidx.camera:camera-camera2:$camerax_version"

-implementation 'org.pytorch:pytorch_android_lite:1.9.0'
-implementation 'org.pytorch:pytorch_android_torchvision:1.9.0'
+implementation 'org.pytorch:pytorch_android_lite:1.10.0'
+implementation 'org.pytorch:pytorch_android_torchvision_lite:1.10.0'
}
13 changes: 5 additions & 8 deletions QuestionAnswering/README.md
@@ -8,28 +8,25 @@ In this demo app, written in Kotlin, we'll show how to quantize and convert the

## Prerequisites

-* PyTorch 1.9.0 or later (Optional)
+* PyTorch 1.10.0 or later (Optional)
* Python 3.8 (Optional)
-* Android Pytorch library org.pytorch:pytorch_android_lite:1.9.0
+* Android Pytorch library org.pytorch:pytorch_android_lite:1.10.0
* Android Studio 4.0.1 or later

## Quick Start

-To Test Run the Android QA App, run the following commands on a Terminal:
+To Test Run the Android QA App, follow the steps below:

### 1. Prepare the Model

-If you don't have PyTorch installed or want to have a quick try of the demo app, you can download the scripted QA model `qa360_quantized.ptl` [here](https://drive.google.com/file/d/1PgD3pAEf0riUiT3BfwHOm6UEGk8FfJzI/view?usp=sharing) and save it to the `QuestionAnswering/app/src/main/assets` folder, then continue to Step 2.
+If you don't have PyTorch installed or want to have a quick try of the demo app, you can download the scripted QA model `qa360_quantized.ptl` [here](https://pytorch-mobile-demo-apps.s3.us-east-2.amazonaws.com/qa360_quantized.ptl) and save it to the `QuestionAnswering/app/src/main/assets` folder, then continue to Step 2.

-Be aware that the downloadable model file was created with PyTorch 1.9.0, matching the PyTorch Android library 1.9.0 specified in the project's `build.gradle` file as `implementation 'org.pytorch:pytorch_android:1.9.0'`. If you use a different version of PyTorch to create your model by following the instructions below, make sure you specify the same PyTorch Android library version in the `build.gradle` file to avoid possible errors caused by the version mismatch. Furthermore, if you want to use the latest PyTorch master code to create the model, follow the steps at [Building PyTorch Android from Source](https://pytorch.org/mobile/android/#building-pytorch-android-from-source) and [Using the PyTorch Android Libraries Built](https://pytorch.org/mobile/android/#using-the-pytorch-android-libraries-built-from-source-or-nightly) on how to use the model in Android.

-With PyTorch 1.9.0 installed, first install the Huggingface `transformers` by running `pip install transformers`, then run `python convert_distilbert_qa.py`.
+With PyTorch 1.10.0 installed, first install the Huggingface `transformers` by running `pip install transformers`, then run `python convert_distilbert_qa.py`.

Note that `convert_distilbert_qa.py` uses a pre-defined question and text whose combined token size is 360, and 360 is also the maximum token size for the user's text and question in the app. If the combined token size of the question and text is less than 360, the input needs to be padded for the model to work correctly.
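A minimal sketch of that padding step (the token ids below are illustrative; real ids come from the DistilBERT tokenizer, and the pad id is assumed to be 0):

```
import torch

MAX_LENGTH = 360  # fixed token size the model was converted with

def pad_ids(token_ids, pad_id=0):
    # Right-pad the combined question+text token ids to MAX_LENGTH
    assert len(token_ids) <= MAX_LENGTH
    return token_ids + [pad_id] * (MAX_LENGTH - len(token_ids))

ids = [101, 2054, 2003, 102]  # illustrative "[CLS] what is [SEP]" ids
input_tensor = torch.tensor([pad_ids(ids)])  # shape: (1, 360)
```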

After the script completes, copy the model file `qa360_quantized.ptl` to the Android app's assets folder.


### 2. Build and run with Android Studio

Start Android Studio, open the project located in `android-demo-app/QuestionAnswering`, and run on your AVD or real Android device. See this [video](https://drive.google.com/file/d/10hwGNFo5tylalKwut_CWFPJmV7JRdDKF/view?usp=sharing) for a screencast of the app running. Some example question answering results are:
5 changes: 2 additions & 3 deletions QuestionAnswering/app/build.gradle
@@ -32,11 +32,10 @@ dependencies {
implementation 'androidx.appcompat:appcompat:1.2.0'
implementation 'androidx.constraintlayout:constraintlayout:2.0.4'

-implementation 'org.pytorch:pytorch_android_lite:1.9.0'
-implementation "androidx.core:core-ktx:+"
+implementation "androidx.core:core-ktx:1.6.0"
implementation "org.jetbrains.kotlin:kotlin-stdlib-jdk7:$kotlin_version"


+implementation 'org.pytorch:pytorch_android_lite:1.10.0'
}
repositories {
mavenCentral()
6 changes: 3 additions & 3 deletions Seq2SeqNMT/README.md
@@ -12,9 +12,9 @@ This Android demo app shows:

## Prerequisites

-* PyTorch 1.9.0 (Optional)
+* PyTorch 1.10.0 (Optional)
* Python 3.8 (Optional)
-* Android Pytorch library org.pytorch:pytorch_android_lite:1.9.0
+* Android Pytorch library org.pytorch:pytorch_android_lite:1.10.0
* Android Studio 4.0.1 or later

## Quick Start
@@ -25,7 +25,7 @@ To Test Run the Object Detection Android App, follow the steps below:

If you don't have the PyTorch environment set up to run the script, you can download the PyTorch trained and optimized NMT encoder and decoder models compressed in a zip [here](https://drive.google.com/file/d/1S75cWNEp43U6nCp2MOBR-jE-ZnlHz1PI/view?usp=sharing), unzip it, copy them to the Android app's assets folder, and skip the rest of this step and go to step 2 directly.

-If you have a good GPU and want to train your model from scratch, run `python seq2seq2_nmt.py` to go through the whole process of training, saving, loading, optimizing and saving the final mobile-ready models `optimized_encoder_150k.ptl` and `optimized_decoder_150k.ptl`. Copy the two model files to the Android app's assets folder.
+If you have a good GPU and want to train your model from scratch, run `python seq2seq_nmt.py` to go through the whole process of training, saving, loading, optimizing and saving the final mobile-ready models `optimized_encoder_150k.ptl` and `optimized_decoder_150k.ptl`. Copy the two model files to the Android app's assets folder.
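The encoder and decoder are exported as two separate mobile-ready files. A sketch of the final optimize-and-save step, using placeholder GRU modules to stand in for the trained ones produced by `seq2seq_nmt.py`:

```
import torch
import torch.nn as nn
from torch.utils.mobile_optimizer import optimize_for_mobile

# Placeholder modules for illustration only; the real encoder and
# decoder are the trained modules from seq2seq_nmt.py
encoder = nn.GRU(input_size=256, hidden_size=256)
decoder = nn.GRU(input_size=256, hidden_size=256)

for name, module in [('encoder', encoder), ('decoder', decoder)]:
    scripted = torch.jit.script(module)
    optimized = optimize_for_mobile(scripted)
    optimized._save_for_lite_interpreter(f'optimized_{name}_150k.ptl')
```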

### 2. Build and run with Android Studio

2 changes: 1 addition & 1 deletion Seq2SeqNMT/app/build.gradle
@@ -33,6 +33,6 @@ dependencies {
androidTestImplementation 'androidx.test.ext:junit:1.1.1'
androidTestImplementation 'androidx.test.espresso:espresso-core:3.2.0'

-implementation 'org.pytorch:pytorch_android_lite:1.9.0'
+implementation 'org.pytorch:pytorch_android_lite:1.10.0'

}
