If you've developed a deep neural network model that takes an image and outputs labels, bounding boxes, or some other piece of information, you may have wondered how to make it available as a service. If that question has crossed your mind, this post may provide some answers.

The building blocks discussed in this post are:

  1. Docker to containerize the web application.
  2. ResNet50 as the pre-trained deep neural network that labels an image.
  3. CherryPy as the Python web framework used to develop the web application.
  4. Google Cloud Platform's Cloud Run as the service used to deploy the containerized web application to the cloud.


Create a directory in your workspace and cd into it.

mkdir resnet50service
cd resnet50service

Create a Dockerfile.

touch Dockerfile

Copy the following into the Dockerfile.

FROM gw000/keras:2.1.4-py2-tf-cpu

# install dependencies from debian packages
RUN apt-get update -qq \
 && apt-get install --no-install-recommends -y \
    python-matplotlib \
    python-pillow

# install dependencies from python packages
RUN pip --no-cache-dir install \
    cherrypy \
    simplejson

COPY resnet50_service.py /
COPY resnet50_weights_tf_dim_ordering_tf_kernels.h5 /
ENTRYPOINT ["python", "resnet50_service.py"]

The Dockerfile is the "recipe" for creating a suitable environment for your application. The base image already ships with the Keras library; a few additional packages are installed on top of it, including cherrypy, which is used to develop the web application.

resnet50_service.py is the Python program that creates the image labeling service, and resnet50_weights_tf_dim_ordering_tf_kernels.h5 holds the pre-trained model weights used to predict labels for an image.
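Before wiring the model into a web service, it helps to see the input shape ResNet50 expects: a 224x224 RGB image with a leading batch axis. Below is a minimal sketch of that preprocessing step, using a zero-filled stand-in for a real image:

```python
import numpy as np

# Stand-in for an image already resized to the 224x224 target size (3 RGB channels).
img_arr = np.zeros((224, 224, 3), dtype='float32')

# model.predict expects a batch of images, so a leading batch axis is added.
batch = np.expand_dims(img_arr, axis=0)
print(batch.shape)  # (1, 224, 224, 3)
```

This mirrors the expand_dims call you will see in the service code below.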


Copy the Python code below into a file named resnet50_service.py and save it in the resnet50service directory.

from keras.applications.resnet50 import ResNet50
from keras.preprocessing import image
from keras.applications.resnet50 import preprocess_input, decode_predictions
import numpy as np
import os
import cherrypy
import tensorflow as tf
import uuid
import simplejson

print("ResNet50 service starting..")

# Initialize model
model = ResNet50(weights='resnet50_weights_tf_dim_ordering_tf_kernels.h5')
graph = tf.get_default_graph() # See https://github.com/keras-team/keras/issues/2397#issuecomment-254919212 explaining need to save the tf graph

def classify_with_resnet50(img_file):
    label_results = []
    img = image.load_img(img_file, target_size=(224, 224))
    # Convert to array
    img_arr = image.img_to_array(img)
    img_arr = np.expand_dims(img_arr, axis=0)
    img_arr = preprocess_input(img_arr)
    # Make prediction and extract top 3 predicted labels
    # see https://github.com/keras-team/keras/issues/2397#issuecomment-254919212 for additional details on using global graph
    global graph
    with graph.as_default():
        predictions = model.predict(img_arr)
    predictions = decode_predictions(predictions, top=3)[0]
    for each_pred in predictions:
        label_results.append({'label': each_pred[1], 'prob': str(each_pred[2])})
    return simplejson.dumps(label_results)

class ResNet50Service(object):
    @cherrypy.expose
    def index(self):
        return """
            <script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.4.0/jquery.js"></script>
            <script>
            // ref https://codepen.io/mobifreaks/pen/LIbca
            function readURL(input) {
                if (input.files && input.files[0]) {
                    var reader = new FileReader();
                    reader.onload = function (e) {
                        $('#img_upload').attr('src', e.target.result);
                    };
                    reader.readAsDataURL(input.files[0]);
                }
            }
            </script>
            <form method="post" action="/classify" enctype="multipart/form-data">
                <input type="file" name="img_file" onchange="readURL(this);"/>
                <input type="submit"/>
            </form>
            <img id="img_upload" src=""/>
        """

    @cherrypy.expose
    def classify(self, img_file):
        # Save the uploaded image to a uniquely named file in chunks
        upload_path = os.path.dirname(__file__)
        upload_filename = str(uuid.uuid4())
        upload_file = os.path.normpath(os.path.join(upload_path, upload_filename))
        with open(upload_file, 'wb') as out:
            while True:
                data = img_file.file.read(8192)
                if not data:
                    break
                out.write(data)
        return classify_with_resnet50(upload_file)

if __name__ == '__main__':
    # Listen on all interfaces so the service is reachable from outside the container
    cherrypy.server.socket_host = '0.0.0.0'
    cherrypy.server.socket_port = 8080
    cherrypy.quickstart(ResNet50Service())
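The classify handler streams the upload to disk in 8 KB chunks rather than reading the whole file into memory at once. The same read-until-empty loop can be sketched in isolation with an in-memory file object (the file name here is illustrative):

```python
import io
import os
import tempfile

# Stand-in for the file object CherryPy attaches to an uploaded part.
src = io.BytesIO(b'x' * 20000)

# Write it out in 8192-byte chunks, exactly as the classify handler does.
dst_path = os.path.join(tempfile.mkdtemp(), 'upload.bin')
with open(dst_path, 'wb') as out:
    while True:
        data = src.read(8192)
        if not data:
            break
        out.write(data)

print(os.path.getsize(dst_path))  # 20000
```

The loop terminates when read() returns an empty bytes object, so uploads of any size are handled with constant memory.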

Two endpoints are defined, and both respond to an image by returning the top 3 predicted labels with their probabilities. The index endpoint provides a simple UI, accessible from a web browser, for uploading an image to the service; the /classify endpoint accepts image files programmatically.


Download the ResNet50 pre-trained weights file into the resnet50service directory.

wget https://github.com/fchollet/deep-learning-models/releases/download/v0.2/resnet50_weights_tf_dim_ordering_tf_kernels.h5

You should now have three files in your resnet50service directory:

  1. Dockerfile

  2. resnet50_service.py

  3. resnet50_weights_tf_dim_ordering_tf_kernels.h5

You are now ready to build the container that runs the image labeling service.


docker build -t resnet50labelservice:v1 . will build the image.

docker run -it -p 8080:8080 resnet50labelservice:v1 will run the image labeling service inside the container.


Open your web browser and type localhost:8080 into the address bar. You will see a simple form for uploading an image. Choose an image file you would like labeled and submit the form.

The following Python snippet demonstrates how to submit images to the /classify endpoint programmatically.

import requests

labeling_service_url = 'http://localhost:8080/classify'
# Replace below with an image file of your choice
img_file = {'img_file': open('maxresdefault.jpg', 'rb')}
resp = requests.post(labeling_service_url, files=img_file)
print(resp.text)
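The /classify endpoint returns a JSON array of label/probability pairs. The snippet below decodes a sample response with the standard json module; the labels shown are illustrative, not actual model output:

```python
import json

# Example of the shape returned by classify_with_resnet50; labels are made up.
sample_resp = ('[{"label": "tabby", "prob": "0.54"}, '
               '{"label": "tiger_cat", "prob": "0.21"}, '
               '{"label": "Egyptian_cat", "prob": "0.11"}]')

predictions = json.loads(sample_resp)
# Pick the highest-probability label (probabilities arrive as strings).
top = max(predictions, key=lambda p: float(p['prob']))
print(top['label'])  # tabby
```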


You have successfully created a containerized image labeling service that uses a deep neural network to predict image labels. At this point, you can deploy this container on an internal server or to a cloud service of your choice. The rest of this post describes how to deploy this containerized application to Google Cloud Platform's Cloud Run service. The steps described below will work for any containerized application.


Cloud Run is a managed compute platform that automatically scales your stateless containers.

Do the following before you start building and deploying your container:

  • Install the Google Cloud SDK.
  • Create a project in the Google Cloud console.
  • Enable the Cloud Run API and Cloud Build API for the newly created project.

You are now ready to follow these instructions to build and deploy your container to Google Cloud Run.
The instructions first help you build your container image and submit it to Google Cloud's container registry, after which you run the container to create a service.

Run the following commands while you are in the resnet50service directory.

The gcloud builds submit command below will build your container image and submit it to Google Cloud's container registry. Replace mltest-202903 in the commands below with your own project ID.

gcloud builds submit --tag gcr.io/mltest-202903/resnet50classify

Now that you have built and submitted your container image, you are ready to deploy the container as a service.

The gcloud beta run deploy command below will create a revision of your resnet50classify service and deploy it. The --memory 1Gi parameter is necessary; without it the deployment fails, because the container requires more than the default memory allocation.

Once you invoke the command below, you will be prompted to choose a region (select us-central1) and a service name (leave the default). For testing purposes you can allow unauthenticated invocations, but remember to delete the service after you are done testing.

After the command succeeds you will be given a URL which you can paste into your browser to load the image labeling page. Give it 3 to 4 seconds to load.

gcloud beta run deploy --image gcr.io/mltest-202903/resnet50classify  --memory 1Gi --platform managed

After deploying successfully, I received the URL https://resnet50classify-ntnpecitvq-uc.a.run.app.


There are very helpful tutorials here and here on file upload in CherryPy.

Using a TensorFlow model in a web framework can cause inference to happen in a different thread than the one where the model was loaded, leading to ValueError: Tensor ... is not an element of this graph. I faced this issue and used the solution provided here.

See here for ways to further customize the gcloud beta run deploy command.