Real-time Object Detection API using Amazon SageMaker and Amazon API Gateway

By using SageMaker’s built-in algorithms, we can deploy a trained model with a single line of code.

In this post, I will use SageMaker to deploy a car classification model and invoke the model endpoint using API Gateway and AWS Lambda.

Create an Amazon SageMaker Notebook Instance

We use a notebook instance to create and manage Jupyter notebooks, which let us prepare and process data and train and deploy machine learning models. For more details, check out the docs here.
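If you prefer the command line to the console, a notebook instance can also be created with the AWS CLI. The instance name and role ARN below are placeholders, not values from this project:

$ aws sagemaker create-notebook-instance \
    --notebook-instance-name car-detection \
    --instance-type ml.t2.medium \
    --role-arn arn:aws:iam::YOUR_ACCOUNT_ID:role/SageMakerExecutionRole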

Prepare Training Data

You can use Amazon SageMaker Ground Truth to label your own datasets. In this example, I use the Cars Dataset from Stanford.

The Cars dataset contains 16,185 images of 196 classes of cars. The data is split into 8,144 training images and 8,041 testing images, where each class has been split roughly in a 50–50 split. Classes are typically at the level of Make, Model, Year, e.g. 2012 Tesla Model S or 2012 BMW M3 coupe.

Download data

Let’s go to the Jupyter notebook instance, create a new notebook, and download the dataset:

import os
import urllib.request

def download(url):
    filename = url.split("/")[-1]
    if not os.path.exists(filename):
        urllib.request.urlretrieve(url, filename)

download('http://imagenet.stanford.edu/internal/car196/cars_train.tgz')
download('https://ai.stanford.edu/~jkrause/cars/car_devkit.tgz')

…and unpack them.

%%bash
tar -xzf car_devkit.tgz
tar -xzf cars_train.tgz

Note that the annotations are in “.mat” format (MATLAB files). We have to convert them into an array with the following values: picture name, picture category ID, and train/validation label.

import scipy.io as sio

def readClasses(matFile):
    # class_names is a 1 x N array of class name strings
    content = sio.loadmat(matFile)
    classes = [(_[0]) for _ in content['class_names'][0]]
    return classes

def readAnnotations(matFile):
    content = sio.loadmat(matFile)
    return content['annotations'][0]

Prepare annotation data

The Amazon SageMaker Object Detection algorithm supports both the RecordIO and image & JSON input formats. I use the script below to convert the annotation array into JSON files:

import json
from imageio import imread

categories = readClasses("devkit/cars_meta.mat")
annotations = readAnnotations("devkit/cars_train_annos.mat")
# file names of the training images
images = sorted(os.listdir('cars_train'))

for img in images:
    shape = imread('cars_train/{}'.format(img)).shape
    jsonFile = img.split('.')[0] + '.json'

    line = {}
    line['file'] = img
    line['image_size'] = [{
        'width': int(shape[1]),
        'height': int(shape[0]),
        'depth': 3
    }]

    line['annotations'] = []
    line['categories'] = []
    for anno in annotations:
        if anno[5][0] == img:
            # annotation fields: x1, y1, x2, y2, class, fname;
            # fix_index_mapping (defined elsewhere in the notebook) shifts the
            # 1-based class IDs to the 0-based indices the algorithm expects
            line['annotations'].append({
                'class_id': int(fix_index_mapping(anno[4][0][0])),
                'top': int(anno[1][0][0]),
                'left': int(anno[0][0][0]),
                'width': abs(int(anno[2][0][0]) - int(anno[0][0][0])),
                'height': abs(int(anno[3][0][0]) - int(anno[1][0][0])),
            })
            class_name = ''
            for ind, cat in enumerate(categories, start=1):
                if int(anno[4][0][0]) == ind:
                    class_name = str(cat)
            assert class_name != ''
            line['categories'].append({
                'class_id': int(anno[4][0][0]),
                'name': class_name
            })

    if line['annotations']:
        with open(os.path.join('car-generated', jsonFile), 'w') as p:
            json.dump(line, p)

The following is an example of a .json file.

{"file": "00001.jpg", "image_size": [{"width": 600, "height": 400, "depth": 3}], "annotations": [{"class_id": 13, "top": 116, "left": 39, "width": 530, "height": 259}], "categories": [{"class_id": 14, "name": "Audi TTS Coupe 2012"}]}

Upload to S3

Amazon SageMaker expects the dataset to be available in an S3 bucket, so we need to upload the images and annotation JSON files there.

%%time
import sagemaker

# assumes a SageMaker session and bucket; adjust to your own setup
sess = sagemaker.Session()
bucket = sess.default_bucket()

prefix = 'car-Detection'
train_channel = prefix + '/car-train'
validation_channel = prefix + '/car-validation'
train_annotation_channel = prefix + '/train_annotation'
validation_annotation_channel = prefix + '/validation_annotation'

sess.upload_data(path='car-train', bucket=bucket, key_prefix=train_channel)
sess.upload_data(path='car-validation', bucket=bucket, key_prefix=validation_channel)
sess.upload_data(path='car-train_annotation', bucket=bucket, key_prefix=train_annotation_channel)
sess.upload_data(path='car-validation_annotation', bucket=bucket, key_prefix=validation_annotation_channel)

s3_train_data = 's3://{}/{}'.format(bucket, train_channel)
s3_validation_data = 's3://{}/{}'.format(bucket, validation_channel)
s3_train_annotation = 's3://{}/{}'.format(bucket, train_annotation_channel)
s3_validation_annotation = 's3://{}/{}'.format(bucket, validation_annotation_channel)
s3_output_location = 's3://{}/{}/output'.format(bucket, prefix)

Now that we have labeled data that training jobs can use, we are ready to proceed with the next step.

Train and Build the Model

In this example, we use the built-in Object Detection algorithm to train our model. You can see the whole notebook in this GitHub repository. The relevant training code is as follows:
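The full training cell lives in the notebook; the sketch below shows its general shape using the SageMaker Python SDK (v1-era API), assuming role and sess were defined earlier. The instance type and hyperparameter values are illustrative, not the exact values from the original notebook:

from sagemaker.amazon.amazon_estimator import get_image_uri

# built-in Object Detection training image for the current region
training_image = get_image_uri(sess.boto_region_name, 'object-detection', repo_version='latest')

od_model = sagemaker.estimator.Estimator(training_image,
                                         role,
                                         train_instance_count=1,
                                         train_instance_type='ml.p3.2xlarge',
                                         train_volume_size=50,
                                         input_mode='File',
                                         output_path=s3_output_location,
                                         sagemaker_session=sess)

# illustrative hyperparameters; 196 classes matches the Cars dataset
od_model.set_hyperparameters(base_network='resnet-50',
                             use_pretrained_model=1,
                             num_classes=196,
                             epochs=30,
                             mini_batch_size=16,
                             num_training_samples=6500)

train_data = sagemaker.session.s3_input(s3_train_data, distribution='FullyReplicated',
                                        content_type='image/jpeg', s3_data_type='S3Prefix')
validation_data = sagemaker.session.s3_input(s3_validation_data, distribution='FullyReplicated',
                                             content_type='image/jpeg', s3_data_type='S3Prefix')
train_annotation = sagemaker.session.s3_input(s3_train_annotation, distribution='FullyReplicated',
                                              content_type='image/jpeg', s3_data_type='S3Prefix')
validation_annotation = sagemaker.session.s3_input(s3_validation_annotation, distribution='FullyReplicated',
                                                   content_type='image/jpeg', s3_data_type='S3Prefix')

data_channels = {'train': train_data,
                 'validation': validation_data,
                 'train_annotation': train_annotation,
                 'validation_annotation': validation_annotation}

od_model.fit(inputs=data_channels, logs=True)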

If the training process goes well, the resulting model is uploaded to the output S3 bucket. The model can be seen in the AWS Console or listed from the AWS command line:

$ aws sagemaker list-training-jobs --region ap-southeast-2

Deploy the Model

Once the training is done, we can deploy the trained model as an Amazon SageMaker real-time hosted endpoint.

object_detector = od_model.deploy(initial_instance_count=1,
                                  instance_type='ml.m4.xlarge')

You can check the endpoint configuration and status by navigating to the “Endpoints” tab in the Amazon SageMaker console.
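Alternatively, the endpoint status can be queried from the CLI (the endpoint name below is a placeholder):

$ aws sagemaker describe-endpoint --endpoint-name YOUR_ENDPOINT_NAME --region ap-southeast-2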

Create a Serverless REST API

Once the SageMaker endpoint is created, you can use it for inference from the notebook. The AWS team provides a sample script to easily visualize the detection outputs: it filters out low-confidence detections and draws a bounding box for each high-confidence prediction, as in the script below.
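The script is not reproduced in this post; the version below is adapted from the AWS object detection sample notebooks, with an illustrative default threshold:

def visualize_detection(img_file, dets, classes=[], thresh=0.6):
    """Draw bounding boxes for detections above the confidence threshold."""
    import random
    import matplotlib.pyplot as plt
    import matplotlib.image as mpimg

    img = mpimg.imread(img_file)
    plt.imshow(img)
    height, width = img.shape[0], img.shape[1]
    colors = dict()
    for det in dets:
        (klass, score, x0, y0, x1, y1) = det
        if score < thresh:
            continue  # skip low-confidence detections
        cls_id = int(klass)
        if cls_id not in colors:
            colors[cls_id] = (random.random(), random.random(), random.random())
        # the model returns coordinates relative to the image size
        xmin, ymin = int(x0 * width), int(y0 * height)
        xmax, ymax = int(x1 * width), int(y1 * height)
        rect = plt.Rectangle((xmin, ymin), xmax - xmin, ymax - ymin,
                             fill=False, edgecolor=colors[cls_id], linewidth=3.5)
        plt.gca().add_patch(rect)
        class_name = classes[cls_id] if len(classes) > cls_id else str(cls_id)
        plt.gca().text(xmin, ymin - 2, '{:s} {:.3f}'.format(class_name, score),
                       bbox=dict(facecolor=colors[cls_id], alpha=0.5),
                       fontsize=12, color='white')
    plt.show()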

Great, this is working! I want to make it available to the outside world, so we’ll have to create an API. This can easily be achieved using the Serverless Framework.

Get started with Serverless Framework

First, you need to install the Serverless Framework.
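The framework is distributed via npm; this install command is the standard one, though it is not shown in the original post:

$ npm install -g serverless

Then scaffold a new service from the aws-python3 template: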

$ sls create --template aws-python3 --path car-classification

The directory that is created includes two files: handler.py, which contains the Lambda function, and serverless.yml, which configures how our application behaves:
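The original serverless.yml is not reproduced here; a minimal sketch along these lines (service name, runtime, and paths are illustrative) would be:

service: car-classification

provider:
  name: aws
  runtime: python3.7
  region: ap-southeast-2
  iamRoleStatements:
    - Effect: Allow
      Action:
        - sagemaker:InvokeEndpoint
      # the endpoint ARN is read from an SSM parameter (see below)
      Resource: ${ssm:sagemakerarn}

functions:
  sagemaker:
    handler: handler.lambda_handler
    events:
      - http:
          path: sagemaker
          method: post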

Note that the Allow policy resource is an AWS Systems Manager (SSM) parameter. To store a value in the SSM Parameter Store, I need to run the following command:

$ aws ssm put-parameter --name sagemakerarn --type String --value arn:aws:sagemaker:ap-southeast-2:YOUR_ACCOUNT_ID:endpoint/object-detection-2019-06-01-04-13-54-575 --region ap-southeast-2

Add Lambda function

Now, let’s update handler.py to invoke the SageMaker endpoint. This is how the handler.py file looks:
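The original handler is not embedded here; a minimal sketch might look like the following. The endpoint name is a placeholder, and the img_url request field is an assumption based on the curl example later in the post:

import json
import urllib.request
import boto3

# placeholder: replace with your SageMaker endpoint name
ENDPOINT_NAME = 'object-detection-2019-06-01-04-13-54-575'

runtime = boto3.client('sagemaker-runtime')

def lambda_handler(event, context):
    body = json.loads(event['body'])
    # fetch the image the caller points us at
    img = urllib.request.urlopen(body['img_url']).read()

    response = runtime.invoke_endpoint(EndpointName=ENDPOINT_NAME,
                                       ContentType='image/jpeg',
                                       Body=img)
    result = json.loads(response['Body'].read())

    return {'statusCode': 200,
            'body': json.dumps(result)}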

Deploy API

To deploy your API, run the following:

$ serverless deploy -v

Test the API

We are at the end of our journey! We can now invoke the deployed serverless API endpoint from curl or integrate it with other clients. Let’s use curl:

$ curl -d '{"img_url":"https://bit.ly/2IbFF70"}' -H "Content-Type: application/json" -X POST https://xxxx.execute-api.ap-southeast-2.amazonaws.com/dev/sagemaker

That’s about it! I hope you have found this article useful. You can find the complete project in my GitHub repo.
