Instructions
Deep learning networks are built on multiple frameworks, including PyTorch, TensorFlow, Keras, Caffe, and others. Each framework requires its own environment, system settings, and detailed configuration.
In order to test models based on different frameworks on the same platform, we choose Docker as the solution. Docker packs the entire environment into a single file, which eliminates most of these complications. Even so, a Docker image built on one computer may occasionally fail to run on another, for example because of differences in CPU architecture.
For example, in our system the pre-loaded image classification models are Flask applications deployed in an Ubuntu system. The ResNet-34 model is based on PyTorch, and the MobileNet model is based on TensorFlow.
Standard settings for the Docker files:
The name of the Docker image must be the same as the Docker file name. For example, if the Docker image name is resnet34, then the Docker file name must be resnet34.tar.
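For reference, an archive that follows this convention can be produced from a locally built image, and later restored on the server, with Docker's standard save and load commands, for example:
docker save -o resnet34.tar resnet34
docker load -i resnet34.tar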
The Docker container must provide an API that receives an image matrix and returns the classification results of the image together with the back-propagated gradients of the top 5 predicted labels. All variables should be JSON-encoded strings. The API receives an image as a 3*224*224 array whose values lie between 0 and 255, transmitted as a request argument named "imageArray". The response must contain two arguments: 'predictions', a 1*1000 vector of label prediction probabilities over the ImageNet labels, and 'grads', the gradient of the top 5 labels at every pixel, a 5*1*3*224*224 float array.
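As a reference, below is a minimal sketch of such an API, assuming a PyTorch ResNet-34 backend. The route path "/predict", the use of softmax probabilities as the quantity being back-propagated, and the omission of ImageNet mean/std normalization are simplifications made in this sketch; only the argument names "imageArray", "predictions", and "grads" and the array shapes come from the specification above.

import json

import torch
import torchvision.models as models
from flask import Flask, request

app = Flask(__name__)
model = models.resnet34(pretrained=True)
model.eval()


@app.route("/predict", methods=["POST"])
def predict():
    # "imageArray" is a JSON string holding a 3*224*224 array with values in 0-255.
    image = json.loads(request.values["imageArray"])
    x = torch.tensor(image, dtype=torch.float32).unsqueeze(0) / 255.0  # 1*3*224*224
    x.requires_grad_(True)

    logits = model(x)
    probs = torch.softmax(logits, dim=1)   # 1*1000 prediction probabilities
    top5 = probs[0].topk(5).indices

    # Back-propagate each top-5 score to obtain its gradient at every input pixel.
    grads = []
    for label in top5:
        if x.grad is not None:
            x.grad.zero_()
        probs[0, label].backward(retain_graph=True)
        grads.append(x.grad.detach().clone().tolist())  # each entry is 1*3*224*224

    return json.dumps({
        "predictions": probs.detach().tolist(),  # 1*1000
        "grads": grads,                          # 5*1*3*224*224
    })


if __name__ == "__main__":
    # Port 274 matches the container-side port in the docker run example below.
    app.run(host="0.0.0.0", port=274)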
An example launch command looks like this:
docker run -d -p 127.0.0.1:{}:274 mobilenet /root/miniconda3/envs/api/bin/python /root/api/run.py
This command launches the Docker image on the server. The example container is a Python Flask application.
The Docker image contains an Ubuntu system.
The name of the Docker image is mobilenet.
The Python interpreter is at /root/miniconda3/envs/api/bin/python.
The application program is /root/api/run.py.
Note that when the application is launched from the root path, all paths in the program should be absolute paths; otherwise run.py may not be able to find other files in the same folder.
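One common way to achieve this, shown as a sketch below, is to build paths from the script's own location instead of the working directory; the file name "labels.json" is only a hypothetical example.

import os

# Resolve paths relative to run.py's own location (/root/api when deployed),
# so the script works no matter which directory it is launched from.
BASE_DIR = os.path.dirname(os.path.abspath(__file__))
LABELS_PATH = os.path.join(BASE_DIR, "labels.json")  # "labels.json" is a hypothetical file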
The {} part is reserved for an allocated port on the server.
When we launch the API in the Docker container, the '{}' is replaced by '274', which means the container provides an API at 127.0.0.1:274 on the host system.
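A client could then call the API as in the following sketch, assuming the host port is 274 and the endpoint path is "/predict" as in the server sketch above; the argument and field names come from the specification.

import json

import numpy as np
import requests

# Build a dummy 3*224*224 image with values in 0-255, as required by the API.
image = np.random.randint(0, 256, size=(3, 224, 224)).tolist()

resp = requests.post(
    "http://127.0.0.1:274/predict",
    data={"imageArray": json.dumps(image)},
)
result = json.loads(resp.text)
print(len(result["predictions"][0]))  # 1000 ImageNet class probabilities
print(len(result["grads"]))           # 5 gradient arrays, one per top-5 label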
If all these requirements are strictly fulfilled, we will be able to test the image classification model on the platform.