add Docker Container example #94

Open · wants to merge 3 commits into base: dev
Changes from 2 commits
20 changes: 13 additions & 7 deletions examples/docker_submission/README.md
@@ -4,24 +4,30 @@ TODO: Add a description of the submission process here.


## Launching the submission container
TODO: Create a docker-compose file

First we have to build the container
```bash
cd ./http_submission
-docker build -t sample_pysaliency .
+docker build -t sample_pysaliency docker
```

Then we can start it
```bash
docker run --rm -it -p 4000:4000 sample_pysaliency
```
The above command will launch the image as an interactive container in the foreground
and expose port `4000` to the host machine.
If you prefer to run it in the background, use
```bash
docker run --name sample_pysaliency -dp 4000:4000 sample_pysaliency
```
-The above command will launch a container named `sample_pysaliency` and expose the port `4000` to the host machine. The container will be running in the background.
+which will launch a container named `sample_pysaliency`. The container will be running in the background.

To test the model server, run the sample_evaluation script (make sure to have the `pysaliency` package installed):
```bash
-python ./http_evaluation/sample_evaluation.py
+python ./sample_evaluation.py
```


-To delete the container, run the following command:
+To delete the background container, run the following command:
```bash
docker stop sample_pysaliency && docker rm sample_pysaliency
```
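For orientation, a minimal sketch of such an evaluation client, using plain `requests` against the `/conditional_log_density` endpoint defined in `model_server.py` below (the `json_data` form field and `stimulus` file field come from that handler; the image path and fixation history here are made up for illustration):

```python
import json

import requests  # assumed to be installed in the evaluation environment

# Hypothetical scanpath history; the server parses these keys out of the
# 'json_data' form field (see model_server.py).
payload = {
    'x_hist': [320.0, 100.0],
    'y_hist': [240.0, 200.0],
    't_hist': [0.0, 0.2],
    'attributes': {},
}

with open('stimulus.png', 'rb') as f:  # hypothetical test image
    response = requests.post(
        'http://localhost:4000/conditional_log_density',
        data={'json_data': json.dumps(payload)},
        files={'stimulus': f},
    )

log_density = response.json()['log_density']  # height x width nested list
```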
examples/docker_submission/docker/Dockerfile
@@ -8,16 +8,19 @@ WORKDIR /app
ENV HTTP_PORT=4000

RUN apt-get update \
-    && apt-get -y install gcc
+    && apt-get -y install gcc \
+    && apt-get clean \
+    && rm -rf /var/lib/apt/lists/* /var/cache/apt/*

COPY ./requirements.txt ./
-RUN python -m pip install -U pip \
-    && python -m pip install -r requirements.txt
+RUN python -m pip install --no-cache-dir -U pip \
+    && python -m pip install --no-cache-dir -r requirements.txt

-COPY . ./
+COPY ./model_server.py ./
+COPY ./sample_submission.py ./

# This is needed for Singularity builds.
EXPOSE $HTTP_PORT

# The entrypoint for the container.
-CMD ["gunicorn", "-w", "1", "-b", "0.0.0.0:4000", "--pythonpath", ".", "model_server:app"]
+CMD ["gunicorn", "-w", "1", "-b", "0.0.0.0:4000", "--pythonpath", ".", "--access-logfile", "-", "model_server:app"]
examples/docker_submission/docker/model_server.py
@@ -1,20 +1,22 @@
from flask import Flask, request, jsonify
from flask_orjson import OrjsonProvider
import numpy as np
import json
from PIL import Image
from io import BytesIO
# import pickle
import orjson


# Import your model here
from sample_submission import MySimpleScanpathModel

app = Flask("saliency-model-server")
app.json_provider = OrjsonProvider(app)
Owner comment: Are we even using the OrjsonProvider? I guess not, since we're manually calling orjson. In this case I would remove the dependency. Alternatively, you can try to keep it, use jsonify, and set the config option to prevent pretty printing.
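A minimal sketch of that suggested alternative, assuming Flask ≥ 2.2, where the default JSON provider exposes a `compact` flag (whether `OrjsonProvider` honors the same option is an assumption to verify):

```python
# Sketch of the reviewer's suggestion: keep jsonify and disable pretty
# printing instead of calling orjson by hand.
from flask import Flask, jsonify

app = Flask("saliency-model-server")
app.json.compact = True  # Flask >= 2.2: emit compact JSON without indentation

@app.route('/type', methods=['GET'])
def type():
    # jsonify now produces compact output and sets the JSON content type
    return jsonify({'type': 'ScanpathModel', 'version': 'v1.0.0'})
```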

app.logger.setLevel("DEBUG")

# TODO - replace this with your model
model = MySimpleScanpathModel()


@app.route('/conditional_log_density', methods=['POST'])
def conditional_log_density():
    data = json.loads(request.form['json_data'])
@@ -28,14 +30,16 @@ def conditional_log_density():
    stimulus = np.array(image)

    log_density = model.conditional_log_density(stimulus, x_hist, y_hist, t_hist, attributes)
-    return jsonify({'log_density': log_density.tolist()})
+    log_density_list = log_density.tolist()
+    response = orjson.dumps({'log_density': log_density_list})
+    return response


@app.route('/type', methods=['GET'])
def type():
    type = "ScanpathModel"
    version = "v1.0.0"
-    return jsonify({'type': type, 'version': version})
+    return orjson.dumps({'type': type, 'version': version})


def main():
10 changes: 10 additions & 0 deletions examples/docker_submission/docker/requirements.txt
@@ -0,0 +1,10 @@
cython
flask
gunicorn
numpy

# Add additional dependencies here
pysaliency
scipy
torch
flask_orjson
79 changes: 79 additions & 0 deletions examples/docker_submission/docker/sample_submission.py
@@ -0,0 +1,79 @@
import numpy as np
import sys
from typing import Union
from scipy.ndimage import gaussian_filter
import pysaliency


class LocalContrastModel(pysaliency.Model):
    def __init__(self, bandwidth=0.05, **kwargs):
        super().__init__(**kwargs)
        self.bandwidth = bandwidth

    def _log_density(self, stimulus: Union[pysaliency.datasets.Stimulus, np.ndarray]):

        # _log_density can either take pysaliency Stimulus objects, or, for convenience, simply numpy arrays
        # `as_stimulus` ensures that we have a Stimulus object
        stimulus_object = pysaliency.datasets.as_stimulus(stimulus)

        # grayscale image
        gray_stimulus = np.mean(stimulus_object.stimulus_data, axis=2)

        # size contains the height and width of the image, but not potential color channels
        height, width = stimulus_object.size

        # define kernel size based on image size
        kernel_size = np.round(self.bandwidth * max(width, height)).astype(int)
        sigma = (kernel_size - 1) / 6

        # apply Gaussian blur and calculate squared difference between blurred and original image
        blurred_stimulus = gaussian_filter(gray_stimulus, sigma)

        prediction = gaussian_filter((gray_stimulus - blurred_stimulus)**2, sigma)

        # normalize to [1, 255]
        prediction = (254 * (prediction / prediction.max())).astype(int) + 1

        density = prediction / prediction.sum()

        return np.log(density)


class MySimpleScanpathModel(pysaliency.ScanpathModel):
    def __init__(self, spatial_model_bandwidth: float = 0.05, saccade_width: float = 0.1):
        self.spatial_model_bandwidth = spatial_model_bandwidth
        self.saccade_width = saccade_width
        self.spatial_model = LocalContrastModel(spatial_model_bandwidth)
        # self.spatial_model = pysaliency.UniformModel()

    def conditional_log_density(self, stimulus, x_hist, y_hist, t_hist, attributes=None, out=None):
        stimulus_object = pysaliency.datasets.as_stimulus(stimulus)

        # size contains the height and width of the image, but not potential color channels
        height, width = stimulus_object.size

        spatial_prior_log_density = self.spatial_model.log_density(stimulus)
        spatial_prior_density = np.exp(spatial_prior_log_density)

        # compute saccade bias
        last_x = x_hist[-1]
        last_y = y_hist[-1]

        xs = np.arange(width, dtype=float)
        ys = np.arange(height, dtype=float)
        XS, YS = np.meshgrid(xs, ys)

        XS -= last_x
        YS -= last_y

        # compute prior
        max_size = max(width, height)
        actual_kernel_size = self.saccade_width * max_size

        saccade_bias = np.exp(-0.5 * (XS ** 2 + YS ** 2) / actual_kernel_size ** 2)

        prediction = spatial_prior_density * saccade_bias

        density = prediction / prediction.sum()
        return np.log(density)
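
For orientation, a quick local sanity check of this model might look like the following (a hedged sketch, not part of the PR; the random stimulus and fixation history are made up, and the shape and normalization checks follow from the code above):

```python
# Hypothetical smoke test for MySimpleScanpathModel.
import numpy as np

from sample_submission import MySimpleScanpathModel

model = MySimpleScanpathModel()

# random RGB stimulus; LocalContrastModel averages over the color axis
stimulus = np.random.randint(0, 255, (480, 640, 3), dtype=np.uint8)

log_density = model.conditional_log_density(
    stimulus,
    x_hist=np.array([320.0, 100.0]),
    y_hist=np.array([240.0, 200.0]),
    t_hist=np.array([0.0, 0.2]),
)

assert log_density.shape == (480, 640)  # one value per pixel
np.testing.assert_allclose(np.exp(log_density).sum(), 1.0)  # a proper density
```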

26 changes: 26 additions & 0 deletions examples/docker_submission/docker_deepgaze3/Dockerfile
@@ -0,0 +1,26 @@
# Specify a base image depending on the project.
FROM bitnami/python:3.8
# For more complex examples, might need to use a different base image.
# FROM pytorch/pytorch:1.9.1-cuda11.1-cudnn8-runtime

WORKDIR /app

ENV HTTP_PORT=4000

RUN apt-get update \
    && apt-get -y install gcc \
    && apt-get clean \
    && rm -rf /var/lib/apt/lists/* /var/cache/apt/*

COPY ./requirements.txt ./
RUN python -m pip install --no-cache-dir -U pip \
    && python -m pip install --no-cache-dir -r requirements.txt

COPY ./model_server.py ./
COPY ./sample_submission.py ./

# This is needed for Singularity builds.
EXPOSE $HTTP_PORT

# The entrypoint for the container.
CMD ["gunicorn", "-w", "1", "-b", "0.0.0.0:4000", "--pythonpath", ".", "--access-logfile", "-", "model_server:app"]
76 changes: 76 additions & 0 deletions examples/docker_submission/docker_deepgaze3/model_server.py
@@ -0,0 +1,76 @@
from flask import Flask, request
# from flask_orjson import OrjsonProvider
import numpy as np
import json
from PIL import Image
from io import BytesIO
import orjson
from scipy.ndimage import zoom
from scipy.special import logsumexp
import torch

# Import your model here
import deepgaze_pytorch

# Flask server
app = Flask("saliency-model-server")
# app.json_provider = OrjsonProvider(app)
app.logger.setLevel("DEBUG")

# TODO - replace this with your model
model = deepgaze_pytorch.DeepGazeIII(pretrained=True)


@app.route('/conditional_log_density', methods=['POST'])
def conditional_log_density():
    # get data
    data = json.loads(request.form['json_data'])

    # extract scanpath history
    x_hist = np.array(data['x_hist'])
    y_hist = np.array(data['y_hist'])
    # t_hist = np.array(data['t_hist'])
    # attributes = data.get('attributes', {})

    # extract stimulus
    image_bytes = request.files['stimulus'].read()
    image = Image.open(BytesIO(image_bytes))
    stimulus = np.array(image)

    # uniform centerbias for the DeepGazeIII model, rescaled to the stimulus size
    centerbias_template = np.zeros((1024, 1024))
    centerbias = zoom(centerbias_template,
                      (stimulus.shape[0]/centerbias_template.shape[0],
                       stimulus.shape[1]/centerbias_template.shape[1]),
                      order=0,
                      mode='nearest')
    centerbias -= logsumexp(centerbias)

    # make tensors for the DeepGazeIII model
    image_tensor = torch.tensor([stimulus.transpose(2, 0, 1)])
    centerbias_tensor = torch.tensor([centerbias])
    x_hist_tensor = torch.tensor([x_hist[model.included_fixations]])
    y_hist_tensor = torch.tensor([y_hist[model.included_fixations]])

    # return model response
    log_density = model(image_tensor, centerbias_tensor, x_hist_tensor, y_hist_tensor)
    log_density_list = log_density.tolist()
    response = orjson.dumps({'log_density': log_density_list})
    return response


@app.route('/type', methods=['GET'])
def type():
    type = "ScanpathModel"
    version = "v1.0.0"
    return orjson.dumps({'type': type, 'version': version})


def main():
    app.run(host="localhost", port=4000, debug=True, threaded=True)


if __name__ == "__main__":
    main()
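
One practical caveat when testing this server: `x_hist[model.included_fixations]` indexes into the scanpath history, so a request must carry at least as many past fixations as the model conditions on (four in DeepGazeIII's default configuration, as an assumption to verify). A hedged sketch of a payload that satisfies this:

```python
import json

# Hypothetical request payload for the DeepGazeIII server above: four past
# fixations, so that x_hist[model.included_fixations] is well defined
# (assumes DeepGazeIII's default four-fixation history window).
payload = {
    'x_hist': [512.0, 300.0, 420.0, 350.0],
    'y_hist': [384.0, 200.0, 310.0, 280.0],
    't_hist': [0.0, 0.3, 0.6, 0.9],
}
json_data = json.dumps(payload)  # goes into the 'json_data' form field
```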
11 changes: 11 additions & 0 deletions examples/docker_submission/docker_deepgaze3/requirements.txt
@@ -0,0 +1,11 @@
cython
flask
gunicorn
numpy

# Add additional dependencies here
pysaliency
scipy
torch
flask_orjson
git+https://github.com/matthias-k/deepgaze
79 changes: 79 additions & 0 deletions examples/docker_submission/docker_deepgaze3/sample_submission.py
Owner comment: This file is not being used at all, right?
@@ -0,0 +1,79 @@
(Contents identical to examples/docker_submission/docker/sample_submission.py shown above.)
