
OpenCV

This example provides the steps to deploy a simple OpenCV solution from scratch using Katonic Deploy.

Introduction

OpenCV is a free, open-source computer vision library written in C++ with Python bindings. It provides a broad set of image-processing and machine-learning building blocks, designed to make it easy to build systems for tasks such as face detection, object tracking, and image transformation.

By the end of this guide you'll have an API endpoint that can handle any scale of traffic by running inference on serverless CPUs/GPUs.

Deploy an OpenCV custom model

Katonic simplifies the deployment of OpenCV models, offering a user-friendly interface and streamlined workflow. Deploy your models as an API service with ease, and make them available for use in various applications.

Step 1. Prepare a model

In the first phase of the tutorial, we need to prepare a model for deployment. To do this correctly, you can follow the checklist below:

1. Train your model

Before you start model preparation, you need to train your model. For the purposes of this tutorial, we will use one that is already trained.

2. Create a Git repository

Your main model files need to be placed in a Git repository. However, if you have large files (e.g. model weights), you can store them in separate storage or a drive and write code to download them in the loadmodel method (see the sketch below).

As far as our face blurrer is concerned, all files are placed in the Git repository, since the model is a small SSD-ResNet model (used only to detect faces; the blurring is done with OpenCV).
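
If your weights were stored externally instead, the download could live at the top of loadmodel. The following is a minimal sketch, not part of this example's repository; the URL is a placeholder:

import os
import urllib.request

import cv2

def loadmodel(logger):
    """Sketch: fetch externally stored weights, then load the network."""
    prototxtPath = os.path.sep.join(["face_detector", "deploy.prototxt"])
    weightsPath = "res10_300x300_ssd_iter_140000.caffemodel"
    if not os.path.exists(weightsPath):
        logger.info("Downloading model weights...")
        # Placeholder URL: substitute the real location of your weights.
        urllib.request.urlretrieve("https://example.com/weights.caffemodel",
                                   weightsPath)
    return cv2.dnn.readNet(prototxtPath, weightsPath)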

Prepare for the deployment 

In order to deploy a model with Katonic Deploy, you need to add the following prerequisite files to the Git repository:

requirements.txt - A text file containing all your required packages along with their versions.

opencv-python-headless==4.7.0.72
imageio==2.9.0
pandas
Pillow==7.2.0 

schema.py - This file will contain the schema of the input that will be expected by your endpoint API. You can modify the code below to match your input schema. In the code below the schema is a List.

from typing import List, Any, Dict, Union
from pydantic import BaseModel

# sample Predict Schema
# Make sure the key is "data"; your data can be of any type
class PredictSchema(BaseModel):
    data: List[Any]
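
With this schema the endpoint accepts a JSON body whose data key holds a list. For this face blurrer the list carries a single base64-encoded image string; a hypothetical payload looks like this:

# Hypothetical request body matching PredictSchema; the string is a placeholder.
payload = {"data": ["<base64-encoded image>"]}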

launch.py - This is the most important file, containing the loadmodel, preprocessing and predict functions. The template for the file is shown below:

Note: Please don't change the method names. If you are not using the preprocessing method, return False from it.

Note: You can replace the code inside each function below with your own.

loadmodel: Any model you have on GitHub or elsewhere on the internet should be loaded here and returned at the end.

preprocessing: If any preprocessing is required before calling predict, write the code for it here and return the features at the end.

predict: The final prediction on the data is performed here; return the result in the desired format at the end.

Note: Don't call any of the three methods inside the file.

loadmodel takes a logger object; please do not define your own logging object. preprocessing takes the data and the logger object, and predict takes the preprocessed data, the model, and the logger object.

# import necessary packages
import os
from typing import Any, Union, Dict, List
import numpy as np
import io
import base64

import cv2
from PIL import Image
from imageio import imread

def loadmodel(logger):
    """Get model from cloud object storage."""
    logger.info("Loading face detector model...")
    prototxtPath = os.path.sep.join(["face_detector", "deploy.prototxt"])
    weightsPath = os.path.sep.join(["face_detector",
                                    "res10_300x300_ssd_iter_140000.caffemodel"])
    net = cv2.dnn.readNet(prototxtPath, weightsPath)
    logger.info("Face detector model loaded")
    return net

def preprocessing(df: np.ndarray, logger):
    """Applies preprocessing techniques to the raw data."""
    # No preprocessing is required for this model, so simply return False.
    return False

def predict(features: List, net: Any, logger):
    """Predicts the results for the given inputs."""
    try:
        logger.info("Blurring the faces in the image.")
        args = {
            "method": "simple",
            "blocks": 20,
            "confidence": 0.5
        }

        def anonymize_face_simple(image, factor=3.0):
            # automatically determine the size of the blurring kernel based
            # on the spatial dimensions of the input image
            (h, w) = image.shape[:2]
            kW = int(w / factor)
            kH = int(h / factor)

            # ensure the width of the kernel is odd
            if kW % 2 == 0:
                kW -= 1

            # ensure the height of the kernel is odd
            if kH % 2 == 0:
                kH -= 1

            # apply a Gaussian blur to the input image using our computed
            # kernel size
            return cv2.GaussianBlur(image, (kW, kH), 0)

        def anonymize_face_pixelate(image, blocks=3):
            # divide the input image into NxN blocks
            (h, w) = image.shape[:2]
            xSteps = np.linspace(0, w, blocks + 1, dtype="int")
            ySteps = np.linspace(0, h, blocks + 1, dtype="int")

            # loop over the blocks in both the x and y direction
            for i in range(1, len(ySteps)):
                for j in range(1, len(xSteps)):
                    # compute the starting and ending (x, y)-coordinates
                    # for the current block
                    startX = xSteps[j - 1]
                    startY = ySteps[i - 1]
                    endX = xSteps[j]
                    endY = ySteps[i]

                    # extract the ROI using NumPy array slicing, compute the
                    # mean of the ROI, and then draw a rectangle with the
                    # mean RGB values over the ROI in the original image
                    roi = image[startY:endY, startX:endX]
                    (B, G, R) = [int(x) for x in cv2.mean(roi)[:3]]
                    cv2.rectangle(image, (startX, startY), (endX, endY),
                                  (B, G, R), -1)

            # return the pixelated blurred image
            return image

        # decode the base64 input into a NumPy array of shape (height, width, 3)
        img = imread(io.BytesIO(base64.b64decode(features[0])))
        image = cv2.cvtColor(img, cv2.COLOR_RGB2BGR)
        orig = image.copy()
        (h, w) = image.shape[:2]

        # construct a blob from the image
        blob = cv2.dnn.blobFromImage(image, 1.0, (300, 300), (104.0, 177.0, 123.0))

        # pass the blob through the network and obtain the face detections
        logger.info("computing face detections...")
        net.setInput(blob)
        detections = net.forward()

        # loop over the detections
        for i in range(0, detections.shape[2]):
            # extract the confidence (i.e., probability) associated with the
            # detection
            confidence = detections[0, 0, i, 2]

            # filter out weak detections by ensuring the confidence is greater
            # than the minimum confidence
            if confidence > args["confidence"]:
                # compute the (x, y)-coordinates of the bounding box for the
                # object
                box = detections[0, 0, i, 3:7] * np.array([w, h, w, h])
                (startX, startY, endX, endY) = box.astype("int")

                # extract the face ROI
                face = image[startY:endY, startX:endX]

                # check to see if we are applying the "simple" face blurring
                # method
                if args["method"] == "simple":
                    face = anonymize_face_simple(face, factor=3.0)

                # otherwise, we must be applying the "pixelated" face
                # anonymization method
                else:
                    face = anonymize_face_pixelate(face, blocks=args["blocks"])

                # store the blurred face in the output image
                image[startY:endY, startX:endX] = face

        # convert back to RGB and re-encode the result as a base64 PNG string
        image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
        img = Image.fromarray(image)

        im_file = io.BytesIO()
        img.save(im_file, format="PNG")
        im_bytes = base64.b64encode(im_file.getvalue()).decode("utf-8")

    except Exception as e:
        logger.info(e)
        return e
    return {"image": im_bytes}
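
Before deploying, you may want to sanity-check launch.py locally. The sketch below is a hypothetical local_test.py, run outside launch.py (as the notes above require); face.jpg and blurred.png are placeholder file names:

import base64
import logging

from launch import loadmodel, predict

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("local-test")

net = loadmodel(logger)

# "face.jpg" is a placeholder; use any image that contains a face.
with open("face.jpg", "rb") as f:
    encoded = base64.b64encode(f.read()).decode("utf-8")

result = predict([encoded], net, logger)

# Decode the returned base64 PNG and save it for inspection.
with open("blurred.png", "wb") as out:
    out.write(base64.b64decode(result["image"]))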

Deploy the Model

To deploy the model, go to the Deploy section of the platform and follow the steps below:

Note: Make sure your model files are in the GitHub repository before starting the deployment process.

  1. Navigate to the Deploy section from the sidebar on the platform.

  2. Click on Model Deployment.

  3. Fill in the model details in the dialog box.
  • Provide a Name for the deployment, for example opencv_model or image-classification-model.

  • Select Custom Model as deployment type.

  • Select Model Type as Image Classification.

  • Provide the GitHub token.

  • Your username will appear once the token is passed.

  • Select the Account Type.

  • Select the Organization Name, if account type is Organization.

  • Select the Repository Name.

  • Select the Revision Type.

  • Select the Branch Name, if revision type is Branch.


Note: Your GitHub repository must contain the OpenCV model files, requirements.txt, schema.py and launch.py, whose templates are discussed above.

  • Select Python Version.

  • Select Resources.

  • Enable or Disable Autoscaling.

  • Select Pods Range, if Autoscaling is enabled.

  • Click on Environment Variables to add environment variables (if any).

  • Click on Deploy.

  4. Once your Custom Model API is created, you will be able to view it in the Deploy section, where it will initially be in the "Processing" state. Click on Refresh to update the status.

  5. You can also check out the logs to see the progress of the current deployment using the Logs option.

  6. Once your Model API is in the Running state, you can check the consumption of hardware resources from the Usage option.

  7. You can access the API endpoints by clicking on API.

There are two APIs under API URLs:

  • Model Prediction API endpoint: This API is for generating predictions from the deployed model. Here is the code snippet to use the predict API:

import requests

MODEL_API_ENDPOINT = "Prediction API URL"
SECURE_TOKEN = "Token"
data = {"data": "Define the value format as per the schema file"}
result = requests.post(MODEL_API_ENDPOINT, json=data, verify=False, headers={"Authorization": SECURE_TOKEN})
print(result.text)
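
Since this model expects a base64-encoded image (see schema.py) and returns one, an end-to-end call might look like the sketch below; input.jpg and blurred.png are placeholder file names, and it assumes the endpoint returns predict's output unchanged:

import base64
import requests

MODEL_API_ENDPOINT = "Prediction API URL"
SECURE_TOKEN = "Token"

# Encode a local image (placeholder file name) for the "data" field.
with open("input.jpg", "rb") as f:
    encoded = base64.b64encode(f.read()).decode("utf-8")

result = requests.post(MODEL_API_ENDPOINT, json={"data": [encoded]},
                       verify=False, headers={"Authorization": SECURE_TOKEN})

# Assumes the response is predict's output as-is; adjust the key if your
# deployment wraps the response differently.
with open("blurred.png", "wb") as out:
    out.write(base64.b64decode(result.json()["image"]))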
  • Model Feedback API endpoint: This API is for monitoring model performance once the true labels are available for the data. The predicted labels can be saved at the destination sources, and once the true labels are available they can be passed to the feedback URL to monitor the model continuously. Here is the code snippet to use the feedback API:

import requests

MODEL_FEEDBACK_ENDPOINT = "Feedback API URL"
SECURE_TOKEN = "Token"
true = "Pass the list of true labels"
pred = "Pass the list of predicted labels"
data = {"true_label": true, "predicted_label": pred}
result = requests.post(MODEL_FEEDBACK_ENDPOINT, json=data, verify=False, headers={"Authorization": SECURE_TOKEN})
print(result.text)
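
For instance, for a classification-style model the label lists might look like this (values are purely illustrative):

# Purely illustrative label lists for a classification-style model.
true = [1, 0, 1, 1]
pred = [1, 0, 0, 1]
data = {"true_label": true, "predicted_label": pred}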
  • Click on Create API token to generate a new token in order to access the API.

    • Give a name to the token.

    • Select the Expiration Type.

    • Set the Token Expiry Date.

    • Click on Create Token and generate your API Token from the pop-up dialog box.

Note: A maximum of 10 tokens can be generated for a model. Copy the API token that was created; as it is only shown once, be sure to save it.

  • Under the Existing API token section you can manage the generated tokens and delete those that are no longer needed.

  • The API usage docs brief you on how to use the APIs and even give you the flexibility to conduct API testing.

  • To learn more about the usage of the generated API, follow the steps below:

    • This is a guide on how to use the endpoint API. Here you can test the API with different inputs to check that the model works. In order to test the API, you first need to authorize yourself by adding the token as shown below. Click on Authorize and close the pop-up.

    • Once it is authorized, you can click on the Predict_Endpoint bar and scroll down to Try it out.

    • If you click on the Try it out button, the Request body panel becomes available for editing. Put in some input values for testing; the number of values/features in a record must be equal to the number of features used while training the model.

    • If you click on Execute, you will be able to see the prediction results at the end. If there are any errors, you can go back to the model card and check the error logs for further investigation.

  8. You can also modify the resources, version, and minimum & maximum pods of your deployed model by clicking the Edit option and saving the updated configuration.

  9. Click on Monitoring, and a dashboard will open in a new tab. This will help you monitor the effectiveness and efficiency of your deployed model. Refer to the Model Monitoring section in the documentation to learn more about the metrics that are monitored.

  10. To delete unused models, use the Delete button.