Tutorial about serverless functions (#3124)
Co-authored-by: Roman Donchenko <roman.donchenko@intel.com>
Nikita Manovich and Roman Donchenko authored Jul 21, 2021
1 parent 330b8a8 commit 0baf794
Showing 73 changed files with 1,581 additions and 64 deletions.
1 change: 1 addition & 0 deletions CHANGELOG.md
@@ -13,6 +13,7 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
- Support of cloud storage without copying data into CVAT: server part (<https://github.com/openvinotoolkit/cvat/pull/2620>)
- Filter `is_active` for user list (<https://github.com/openvinotoolkit/cvat/pull/3235>)
- Ability to export/import tasks (<https://github.com/openvinotoolkit/cvat/pull/3056>)
- A tutorial for semi-automatic/automatic annotation (<https://github.com/openvinotoolkit/cvat/pull/3124>)
- Explicit "Done" button when drawing any polyshapes (<https://github.com/openvinotoolkit/cvat/pull/3417>)

### Changed
5 changes: 3 additions & 2 deletions README.md
@@ -91,6 +91,7 @@ For more information about supported formats look at the
| [Inside-Outside Guidance](/serverless/pytorch/shiyinzhang/iog/nuclio) | interactor | PyTorch | X | |
| [Faster RCNN](/serverless/tensorflow/faster_rcnn_inception_v2_coco/nuclio) | detector | TensorFlow | X | X |
| [Mask RCNN](/serverless/tensorflow/matterport/mask_rcnn/nuclio) | detector | TensorFlow | X | X |
| [RetinaNet](/serverless/pytorch/facebookresearch/detectron2/retinanet/nuclio) | detector | PyTorch | X | X |

<!--lint enable maximum-line-length-->

@@ -162,8 +163,8 @@ Other ways to ask questions and get our support:
- [DataIsKey](https://dataiskey.eu/annotation-tool/) uses CVAT as their prime data labeling tool
to offer annotation services for projects of any size.
- [Human Protocol](https://hmt.ai) uses CVAT as a way of adding annotation service to the human protocol.
<!-- prettier-ignore-start -->
<!-- Badges -->

[docker-server-pulls-img]: https://img.shields.io/docker/pulls/openvino/cvat_server.svg?style=flat-square&label=server%20pulls
[docker-server-image-url]: https://hub.docker.com/r/openvino/cvat_server
4 changes: 3 additions & 1 deletion serverless/deploy_cpu.sh
@@ -6,7 +6,9 @@ FUNCTIONS_DIR=${1:-$SCRIPT_DIR}

nuctl create project cvat

for func_config in $(find "$FUNCTIONS_DIR" -name "function.yaml")
shopt -s globstar

for func_config in "$FUNCTIONS_DIR"/**/function.yaml
do
func_root=$(dirname "$func_config")
echo "Deploying $(dirname "$func_root") function..."
25 changes: 10 additions & 15 deletions serverless/deploy_gpu.sh
@@ -2,24 +2,19 @@
# Sample commands to deploy nuclio functions on GPU

SCRIPT_DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
FUNCTIONS_DIR=${1:-$SCRIPT_DIR}

nuctl create project cvat

nuctl deploy --project-name cvat \
--path "$SCRIPT_DIR/tensorflow/faster_rcnn_inception_v2_coco/nuclio" \
--platform local --base-image tensorflow/tensorflow:2.1.1-gpu \
--desc "GPU based Faster RCNN from Tensorflow Object Detection API" \
--image cvat/tf.faster_rcnn_inception_v2_coco_gpu \
--triggers '{"myHttpTrigger": {"maxWorkers": 1}}' \
--resource-limit nvidia.com/gpu=1 --verbose

nuctl deploy --project-name cvat \
--path "$SCRIPT_DIR/tensorflow/matterport/mask_rcnn/nuclio" \
--platform local --base-image tensorflow/tensorflow:1.15.5-gpu-py3 \
--desc "GPU based implementation of Mask RCNN on Python 3, Keras, and TensorFlow." \
--image cvat/tf.matterport.mask_rcnn_gpu\
--triggers '{"myHttpTrigger": {"maxWorkers": 1}}' \
--resource-limit nvidia.com/gpu=1 --verbose
shopt -s globstar

for func_config in "$FUNCTIONS_DIR"/**/function-gpu.yaml
do
func_root=$(dirname "$func_config")
echo "Deploying $(dirname "$func_root") function..."
nuctl deploy --project-name cvat --path "$func_root" \
--volume "$SCRIPT_DIR/common:/opt/nuclio/common" \
--file "$func_config" --platform local
done

nuctl get function
4 changes: 2 additions & 2 deletions serverless/openvino/dextr/nuclio/main.py
@@ -8,15 +8,15 @@ def init_context(context):
context.logger.info("Init context... 0%")

model = ModelHandler()
setattr(context.user_data, 'model', model)
context.user_data.model = model

context.logger.info("Init context...100%")

def handler(context, event):
context.logger.info("call handler")
data = event.body
points = data["pos_points"]
buf = io.BytesIO(base64.b64decode(data["image"].encode('utf-8')))
buf = io.BytesIO(base64.b64decode(data["image"]))
image = Image.open(buf)

polygon = context.user_data.model.handle(image, points)
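
For context, handlers like this one receive their input through the function's nuclio HTTP trigger, so a deployed function can be exercised directly with a small client. The sketch below is not part of this commit; the URL, image file name, and click coordinates are placeholder assumptions (nuclio assigns each function its own port, reported by nuctl get function), and the payload fields mirror what the handler above reads from event.body.

import base64
import json
from urllib.request import Request, urlopen

# Hypothetical client for a locally deployed interactor function.
# The port is a placeholder: check the output of `nuctl get function`.
FUNCTION_URL = "http://localhost:32001"

with open("frame.jpg", "rb") as f:  # any test image
    image_b64 = base64.b64encode(f.read()).decode("ascii")

payload = {
    "image": image_b64,  # the handler base64-decodes this field
    "pos_points": [[10, 10], [300, 15], [15, 200], [310, 210]],  # example click points
}

request = Request(
    FUNCTION_URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
print(json.loads(urlopen(request).read()))  # polygon returned by the handler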
@@ -8,15 +8,15 @@ def init_context(context):
context.logger.info("Init context... 0%")

model = ModelHandler()
setattr(context.user_data, 'model', model)
context.user_data.model = model

context.logger.info("Init context...100%")

def handler(context, event):
context.logger.info("Run person-reidentification-retail-0300 model")
data = event.body
buf0 = io.BytesIO(base64.b64decode(data["image0"].encode('utf-8')))
buf1 = io.BytesIO(base64.b64decode(data["image1"].encode('utf-8')))
buf0 = io.BytesIO(base64.b64decode(data["image0"]))
buf1 = io.BytesIO(base64.b64decode(data["image1"]))
threshold = float(data.get("threshold", 0.5))
max_distance = float(data.get("max_distance", 50))
image0 = Image.open(buf0)
@@ -9,20 +9,22 @@ def init_context(context):
context.logger.info("Init context... 0%")

# Read labels
functionconfig = yaml.safe_load(open("/opt/nuclio/function.yaml"))
with open("/opt/nuclio/function.yaml", 'rb') as function_file:
functionconfig = yaml.safe_load(function_file)

labels_spec = functionconfig['metadata']['annotations']['spec']
labels = {item['id']: item['name'] for item in json.loads(labels_spec)}

# Read the DL model
model = ModelHandler(labels)
setattr(context.user_data, 'model', model)
context.user_data.model = model

context.logger.info("Init context...100%")

def handler(context, event):
context.logger.info("Run semantic-segmentation-adas-0001 model")
data = event.body
buf = io.BytesIO(base64.b64decode(data["image"].encode('utf-8')))
buf = io.BytesIO(base64.b64decode(data["image"]))
threshold = float(data.get("threshold", 0.5))
image = Image.open(buf)

@@ -9,20 +9,21 @@ def init_context(context):
context.logger.info("Init context... 0%")

# Read labels
functionconfig = yaml.safe_load(open("/opt/nuclio/function.yaml"))
with open("/opt/nuclio/function.yaml", 'rb') as function_file:
functionconfig = yaml.safe_load(function_file)
labels_spec = functionconfig['metadata']['annotations']['spec']
labels = {item['id']: item['name'] for item in json.loads(labels_spec)}

# Read the DL model
model = ModelHandler(labels)
setattr(context.user_data, 'model', model)
context.user_data.model = model

context.logger.info("Init context...100%")

def handler(context, event):
context.logger.info("Run text-detection-0004 model")
data = event.body
buf = io.BytesIO(base64.b64decode(data["image"].encode('utf-8')))
buf = io.BytesIO(base64.b64decode(data["image"]))
pixel_threshold = float(data.get("pixel_threshold", 0.8))
link_threshold = float(data.get("link_threshold", 0.8))
image = Image.open(buf)
@@ -9,20 +9,22 @@ def init_context(context):
context.logger.info("Init context... 0%")

# Read labels
functionconfig = yaml.safe_load(open("/opt/nuclio/function.yaml"))
with open("/opt/nuclio/function.yaml", 'rb') as function_file:
functionconfig = yaml.safe_load(function_file)

labels_spec = functionconfig['metadata']['annotations']['spec']
labels = {item['id']: item['name'] for item in json.loads(labels_spec)}

# Read the DL model
model = ModelHandler(labels)
setattr(context.user_data, 'model', model)
context.user_data.model = model

context.logger.info("Init context...100%")

def handler(context, event):
context.logger.info("Run faster_rcnn_inception_v2_coco model")
data = event.body
buf = io.BytesIO(base64.b64decode(data["image"].encode('utf-8')))
buf = io.BytesIO(base64.b64decode(data["image"]))
threshold = float(data.get("threshold", 0.5))
image = Image.open(buf)

@@ -9,20 +9,22 @@ def init_context(context):
context.logger.info("Init context... 0%")

# Read labels
functionconfig = yaml.safe_load(open("/opt/nuclio/function.yaml"))
with open("/opt/nuclio/function.yaml", 'rb') as function_file:
functionconfig = yaml.safe_load(function_file)

labels_spec = functionconfig['metadata']['annotations']['spec']
labels = {item['id']: item['name'] for item in json.loads(labels_spec)}

# Read the DL model
model = ModelHandler(labels)
setattr(context.user_data, 'model', model)
context.user_data.model = model

context.logger.info("Init context...100%")

def handler(context, event):
context.logger.info("Run mask_rcnn_inception_resnet_v2_atrous_coco model")
data = event.body
buf = io.BytesIO(base64.b64decode(data["image"].encode('utf-8')))
buf = io.BytesIO(base64.b64decode(data["image"]))
threshold = float(data.get("threshold", 0.2))
image = Image.open(buf)

8 changes: 5 additions & 3 deletions serverless/openvino/omz/public/yolo-v3-tf/nuclio/main.py
@@ -9,20 +9,22 @@ def init_context(context):
context.logger.info("Init context... 0%")

# Read labels
functionconfig = yaml.safe_load(open("/opt/nuclio/function.yaml"))
with open("/opt/nuclio/function.yaml", 'rb') as function_file:
functionconfig = yaml.safe_load(function_file)

labels_spec = functionconfig['metadata']['annotations']['spec']
labels = {item['id']: item['name'] for item in json.loads(labels_spec)}

# Read the DL model
model = ModelHandler(labels)
setattr(context.user_data, 'model', model)
context.user_data.model = model

context.logger.info("Init context...100%")

def handler(context, event):
context.logger.info("Run yolo-v3-tf model")
data = event.body
buf = io.BytesIO(base64.b64decode(data["image"].encode('utf-8')))
buf = io.BytesIO(base64.b64decode(data["image"]))
threshold = float(data.get("threshold", 0.5))
image = Image.open(buf)

@@ -0,0 +1,136 @@
metadata:
name: pth.facebookresearch.detectron2.retinanet_r101
namespace: cvat
annotations:
name: RetinaNet R101
type: detector
framework: pytorch
spec: |
[
{ "id": 1, "name": "person" },
{ "id": 2, "name": "bicycle" },
{ "id": 3, "name": "car" },
{ "id": 4, "name": "motorcycle" },
{ "id": 5, "name": "airplane" },
{ "id": 6, "name": "bus" },
{ "id": 7, "name": "train" },
{ "id": 8, "name": "truck" },
{ "id": 9, "name": "boat" },
{ "id":10, "name": "traffic_light" },
{ "id":11, "name": "fire_hydrant" },
{ "id":13, "name": "stop_sign" },
{ "id":14, "name": "parking_meter" },
{ "id":15, "name": "bench" },
{ "id":16, "name": "bird" },
{ "id":17, "name": "cat" },
{ "id":18, "name": "dog" },
{ "id":19, "name": "horse" },
{ "id":20, "name": "sheep" },
{ "id":21, "name": "cow" },
{ "id":22, "name": "elephant" },
{ "id":23, "name": "bear" },
{ "id":24, "name": "zebra" },
{ "id":25, "name": "giraffe" },
{ "id":27, "name": "backpack" },
{ "id":28, "name": "umbrella" },
{ "id":31, "name": "handbag" },
{ "id":32, "name": "tie" },
{ "id":33, "name": "suitcase" },
{ "id":34, "name": "frisbee" },
{ "id":35, "name": "skis" },
{ "id":36, "name": "snowboard" },
{ "id":37, "name": "sports_ball" },
{ "id":38, "name": "kite" },
{ "id":39, "name": "baseball_bat" },
{ "id":40, "name": "baseball_glove" },
{ "id":41, "name": "skateboard" },
{ "id":42, "name": "surfboard" },
{ "id":43, "name": "tennis_racket" },
{ "id":44, "name": "bottle" },
{ "id":46, "name": "wine_glass" },
{ "id":47, "name": "cup" },
{ "id":48, "name": "fork" },
{ "id":49, "name": "knife" },
{ "id":50, "name": "spoon" },
{ "id":51, "name": "bowl" },
{ "id":52, "name": "banana" },
{ "id":53, "name": "apple" },
{ "id":54, "name": "sandwich" },
{ "id":55, "name": "orange" },
{ "id":56, "name": "broccoli" },
{ "id":57, "name": "carrot" },
{ "id":58, "name": "hot_dog" },
{ "id":59, "name": "pizza" },
{ "id":60, "name": "donut" },
{ "id":61, "name": "cake" },
{ "id":62, "name": "chair" },
{ "id":63, "name": "couch" },
{ "id":64, "name": "potted_plant" },
{ "id":65, "name": "bed" },
{ "id":67, "name": "dining_table" },
{ "id":70, "name": "toilet" },
{ "id":72, "name": "tv" },
{ "id":73, "name": "laptop" },
{ "id":74, "name": "mouse" },
{ "id":75, "name": "remote" },
{ "id":76, "name": "keyboard" },
{ "id":77, "name": "cell_phone" },
{ "id":78, "name": "microwave" },
{ "id":79, "name": "oven" },
{ "id":80, "name": "toaster" },
{ "id":81, "name": "sink" },
{ "id":83, "name": "refrigerator" },
{ "id":84, "name": "book" },
{ "id":85, "name": "clock" },
{ "id":86, "name": "vase" },
{ "id":87, "name": "scissors" },
{ "id":88, "name": "teddy_bear" },
{ "id":89, "name": "hair_drier" },
{ "id":90, "name": "toothbrush" }
]
spec:
description: RetinaNet R101 from Detectron2 optimized for GPU
runtime: 'python:3.8'
handler: main:handler
eventTimeout: 30s

build:
image: cvat/pth.facebookresearch.detectron2.retinanet_r101
baseImage: ubuntu:20.04

directives:
preCopy:
- kind: ENV
value: DEBIAN_FRONTEND=noninteractive
- kind: RUN
value: apt-get update && apt-get -y install curl git python3 python3-pip
- kind: WORKDIR
value: /opt/nuclio
- kind: RUN
value: pip3 install torch==1.8.1+cu111 torchvision==0.9.1+cu111 torchaudio==0.8.1 -f https://download.pytorch.org/whl/torch_stable.html
- kind: RUN
value: pip3 install 'git+https://github.com/facebookresearch/detectron2@v0.4'
- kind: RUN
value: curl -O https://dl.fbaipublicfiles.com/detectron2/COCO-Detection/retinanet_R_101_FPN_3x/190397697/model_final_971ab9.pkl
- kind: RUN
value: ln -s /usr/bin/pip3 /usr/local/bin/pip

triggers:
myHttpTrigger:
maxWorkers: 1
kind: 'http'
workerAvailabilityTimeoutMilliseconds: 10000
attributes:
maxRequestBodySize: 33554432 # 32MB

resources:
limits:
nvidia.com/gpu: 1

platform:
attributes:
restartPolicy:
name: always
maximumRetryCount: 3
mountMode: volume
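
The config above only wires the model into nuclio: a Python 3.8 runtime, a main:handler entry point, one GPU, and a 32 MB request body limit; the handler itself lives in a main.py that is not shown in this excerpt. As a rough orientation, a Detectron2-based handler following the same init_context/handler pattern as the other functions in this commit could look like the sketch below. The model zoo config name, weights path, and response format used here are illustrative assumptions, not the shipped code.

# Illustrative sketch only, not the main.py shipped with this commit.
import base64
import io
import json

import numpy as np
import yaml
from PIL import Image

from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.engine import DefaultPredictor

def init_context(context):
    context.logger.info("Init context... 0%")

    # Labels come from the function config, as in the other handlers above.
    with open("/opt/nuclio/function.yaml", 'rb') as function_file:
        functionconfig = yaml.safe_load(function_file)
    labels_spec = functionconfig['metadata']['annotations']['spec']
    # Detectron2 predicts contiguous class indices (0..79) that follow the
    # order of the 80 COCO labels listed in the spec.
    class_names = [item['name'] for item in json.loads(labels_spec)]

    cfg = get_cfg()
    cfg.merge_from_file(model_zoo.get_config_file(
        "COCO-Detection/retinanet_R_101_FPN_3x.yaml"))
    cfg.MODEL.WEIGHTS = "/opt/nuclio/model_final_971ab9.pkl"  # downloaded by the build directives
    cfg.MODEL.DEVICE = "cuda"  # the function requests one GPU
    context.user_data.class_names = class_names
    context.user_data.predictor = DefaultPredictor(cfg)

    context.logger.info("Init context...100%")

def handler(context, event):
    context.logger.info("Run retinanet_R_101 model")
    data = event.body
    buf = io.BytesIO(base64.b64decode(data["image"]))
    threshold = float(data.get("threshold", 0.5))
    # DefaultPredictor expects a BGR numpy array by default.
    image = np.array(Image.open(buf).convert("RGB"))[:, :, ::-1]

    instances = context.user_data.predictor(image)["instances"].to("cpu")

    results = []
    for box, score, label in zip(instances.pred_boxes.tensor.numpy(),
                                 instances.scores.numpy(),
                                 instances.pred_classes.numpy()):
        if score >= threshold:
            results.append({
                "confidence": str(float(score)),
                "label": context.user_data.class_names[int(label)],
                "points": [float(v) for v in box],  # [x_min, y_min, x_max, y_max]
                "type": "rectangle",
            })

    return context.Response(body=json.dumps(results), headers={},
        content_type='application/json', status_code=200)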