This repository has been archived by the owner on Dec 2, 2021. It is now read-only.

Fixed bug in follower.py resulting in the detection of non-target people.

Fixed bug with marker box generation

Updating phrasing.
bkinman committed Sep 28, 2017
1 parent 6676c7e commit 2ec8478
Showing 3 changed files with 16 additions and 11 deletions.
4 changes: 2 additions & 2 deletions README.md
@@ -55,9 +55,9 @@ If for some reason you choose not to use Anaconda, you must install the following
5. Once you are comfortable with performance on the training dataset, see how it performs in live simulation!

## Collecting Training Data ##
A simple training dataset has been provided above in this repository. This dataset will allow you to verify that you're segmentation network is semi-functional. However, if you're interested in improving your score, you may be interested in collecting additional training data. To do, please see the following steps.
A simple training dataset has been provided in this project's repository. This dataset will allow you to verify that your segmentation network is semi-functional. However, if you're interested in improving your score, you may want to collect additional training data. To do so, please see the following steps.

The data directory is organized as follows:
The data directory is organized as follows:
```
data/runs - contains the results of prediction runs
data/train/images - contains images for the training set
11 changes: 8 additions & 3 deletions code/follower.py
@@ -57,6 +57,10 @@

import time

import signal
import sys


# Create socketio server and Flask app
sio = socketio.Server()
app = Flask(__name__)
@@ -107,6 +111,7 @@ def __init__(self, image_hw, model, pred_viz_enabled = False, queue=None):
self.pred_viz_enabled = pred_viz_enabled
self.target_found = False


def on_sensor_frame(self, data):
rgb_image = Image.open(BytesIO(base64.b64decode(data['rgb_image'])))
rgb_image = np.asarray(rgb_image)
@@ -124,7 +129,7 @@ def on_sensor_frame(self, data):
if self.pred_viz_enabled:
self.queue.put([rgb_image, pred])

target_mask = pred[:, :, 1] > 0.5
target_mask = pred[:, :, 2] > 0.5
# reduce the number of false positives by requiring more pixels to be identified as containing the target
if target_mask.sum() > 10:
centroid = scoring_utils.get_centroid_largest_blob(target_mask)
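The false-positive filter in this hunk can be sketched outside the simulator. `get_centroid_largest_blob` lives in the repo's `scoring_utils`, so the connected-component version below (using `scipy.ndimage`) is only an assumed stand-in for it, not the project's actual implementation:

```python
import numpy as np
from scipy import ndimage

def centroid_largest_blob(mask):
    # Label connected regions of the boolean mask
    labels, n = ndimage.label(mask)
    if n == 0:
        return None
    # Pick the label with the most pixels (background label 0 is excluded)
    sizes = ndimage.sum(mask, labels, range(1, n + 1))
    largest = int(np.argmax(sizes)) + 1
    # Centroid as (row, col) of the largest blob
    return ndimage.center_of_mass(mask, labels, largest)

pred = np.zeros((8, 8, 3))
pred[2:5, 2:6, 2] = 0.9           # a 3x4 "target" blob in channel 2
target_mask = pred[:, :, 2] > 0.5
if target_mask.sum() > 10:         # require >10 pixels to suppress false positives
    centroid = centroid_largest_blob(target_mask)
```

The pixel-count threshold is what the commit relies on: a few stray high-confidence pixels no longer trigger a detection, only a blob large enough to plausibly be the target.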
@@ -136,7 +141,7 @@
depth_img = get_depth_image(data['depth_image'])

# Get XYZ coordinates for specific pixel
pixel_depth = depth_img[centroid[0]][centroid[1]][0]*100/255.0
pixel_depth = depth_img[centroid[0]][centroid[1]][0]*50/255.0
point_3d = get_xyz_from_image(centroid[0], centroid[1], pixel_depth, self.image_hw)
point_3d.append(1)
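The `*50/255.0` factor reads as decoding an 8-bit depth value against an assumed maximum sensor range of 50 m (the previous `*100` would correspond to 100 m). A hedged sketch of that decoding, paired with a hypothetical pinhole back-projection — the repo's `get_xyz_from_image` may use different axis conventions and field of view:

```python
import numpy as np

MAX_DEPTH_M = 50.0  # assumed sensor range implied by the 50/255.0 scaling

def decode_depth(byte_val):
    # Map an 8-bit depth value linearly onto metres
    return byte_val * MAX_DEPTH_M / 255.0

def pixel_to_xyz(row, col, depth, image_hw, fov_deg=90.0):
    # Hypothetical pinhole back-projection; fov_deg is an assumption
    h, w = image_hw
    f = (w / 2.0) / np.tan(np.radians(fov_deg) / 2.0)  # focal length in pixels
    x = (col - w / 2.0) * depth / f
    y = (row - h / 2.0) * depth / f
    return [x, y, depth]

depth = decode_depth(255)                      # farthest encodable depth
point = pixel_to_xyz(128, 128, depth, (256, 256))
```

A pixel at the image centre back-projects straight down the optical axis, so only its depth component is non-zero.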

@@ -217,5 +222,5 @@ def sio_server():

follower = Follower(image_hw, model, args.pred_viz, queue)
# start eventlet server

sio_server()
12 changes: 6 additions & 6 deletions code/model_training.ipynb
@@ -122,18 +122,18 @@
"metadata": {},
"source": [
"## Build the Model <a id='build'></a>\n",
"In the following cells, you will build an FCN to train a model to detect the hero target and location within an image. The steps are:\n",
"In the following cells, you will build an FCN to train a model to detect and locate the hero target within an image. The steps are:\n",
"- Create an `encoder_block`\n",
"- Create a `decoder_block`\n",
"- Build the FCN consiting of encoder block(s), a 1x1 convolution, and decoder block(s). This step requires experimentation with different numbers of layers and filter sizes to build your model."
"- Build the FCN consisting of encoder block(s), a 1x1 convolution, and decoder block(s). This step requires experimentation with different numbers of layers and filter sizes to build your model."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Encoder Block\n",
"Create an encoder block that includes a separable convolution layer using the separable_conv2d_batchnorm() function. The `filters` parameter defines the size or depth of the output layer. For example, 32 or 64. "
"Create an encoder block that includes a separable convolution layer using the `separable_conv2d_batchnorm()` function. The `filters` parameter defines the size or depth of the output layer. For example, 32 or 64. "
]
},
{
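As background for the encoder block: a separable convolution factors a k×k convolution into a depthwise step and a 1×1 pointwise step, which is why it needs far fewer parameters than a standard convolution. A quick parameter-count sketch (bias terms ignored; these are textbook counts, not values pulled from the notebook):

```python
def conv_params(k, c_in, c_out):
    # Standard convolution: one k x k x c_in kernel per output channel
    return k * k * c_in * c_out

def separable_conv_params(k, c_in, c_out):
    # Depthwise step: one k x k kernel per input channel,
    # then a 1x1 pointwise convolution mixing channels
    return k * k * c_in + c_in * c_out

standard = conv_params(3, 32, 64)             # 3*3*32*64
separable = separable_conv_params(3, 32, 64)  # 3*3*32 + 32*64
```

For a 3×3 layer going from 32 to 64 channels, the separable version uses roughly an eighth of the parameters, which is the efficiency argument behind `separable_conv2d_batchnorm()`.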
@@ -156,7 +156,7 @@
"metadata": {},
"source": [
"### Decoder Block\n",
"The decoder block, as covered in the Classroom, comprises of three steps:\n",
"The decoder block comprises three parts:\n",
"- A bilinear upsampling layer using the upsample_bilinear() function. The current recommended factor for upsampling is set to 2.\n",
"- A layer concatenation step. This step is similar to skip connections. You will concatenate the upsampled small_ip_layer and the large_ip_layer.\n",
"- Some (one or two) additional separable convolution layers to extract some more spatial information from prior layers."
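The three decoder steps can be sketched shape-wise in plain NumPy. This is an illustration only: nearest-neighbour repetition stands in for the bilinear upsampling of `upsample_bilinear()`, and the trailing separable convolutions are omitted since only the tensor shapes are at issue:

```python
import numpy as np

def decoder_block_sketch(small_ip_layer, large_ip_layer):
    # Step 1: upsample the smaller feature map by a factor of 2
    # (nearest-neighbour repetition as a stand-in for bilinear upsampling)
    upsampled = small_ip_layer.repeat(2, axis=0).repeat(2, axis=1)
    # Step 2: concatenate with the larger layer along the channel axis,
    # similar to a skip connection
    merged = np.concatenate([upsampled, large_ip_layer], axis=-1)
    # Step 3 (one or two separable convolutions) omitted in this sketch
    return merged

small = np.zeros((8, 8, 64))    # deep, low-resolution features
large = np.zeros((16, 16, 32))  # shallow, high-resolution features
out = decoder_block_sketch(small, large)
```

The concatenation is why the output channel count is the sum of the two inputs' channels (64 + 32 here), which the subsequent separable convolutions then compress.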
@@ -529,7 +529,7 @@
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"display_name": "Python [default]",
"language": "python",
"name": "python3"
},
Expand All @@ -543,7 +543,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.4.1"
"version": "3.5.2"
},
"widgets": {
"state": {},
