Version: 4.4

Deploy a Custom Classification Model

This document explains how to deploy a custom classification model.

Before starting the deployment process, make sure every file follows the formats below.

Your custom model must contain the following prerequisite files:

  • requirements.txt - Text file listing all your required packages along with their versions.

    scikit-learn==x.x.x
    matplotlib==x.x.x
  • schema.py - This file defines the type of input expected by your endpoint API. The template for the file is shown below.

from typing import List, Any
from pydantic import BaseModel

# Sample predict schema.
# Make sure the key is "data"; the data itself can be of any type.
class PredictSchema(BaseModel):
    data: List[List[Any]]
  • launch.py - This is the most important file; it contains the model-loading, preprocessing, and prediction functions. The template for the file is shown below:

Note: the loadmodel and predict functions are compulsory. The preprocessing function is optional, depending on the data you are passing to the system; by default it returns False.

loadmodel takes a logger object; please do not define your own logging object. preprocessing takes the data and the logger object. predict takes the preprocessed data, the model, and the logger object.

# import necessary packages

import os
from typing import Any, Union, Dict
import numpy as np
import pandas as pd
import pickle
from sklearn.preprocessing import StandardScaler

def loadmodel(logger):
    """Get model from cloud object storage."""
    logger.info("loading model")
    # if you reference any specific files in the code, pass the file name directly
    TRAINED_MODEL_FILEPATH = "model_pkl"
    with open(TRAINED_MODEL_FILEPATH, 'rb') as f:
        clfdt = pickle.load(f)
    return clfdt

# df can have any data type of your choice. Here we use np.ndarray.
def preprocessing(df: np.ndarray, logger):
    """Applies preprocessing techniques to the raw data."""
    # Apply any preprocessing your input data requires here and return the
    # transformed data, as below.
    logger.info("applying standard scaler")
    scaler = StandardScaler()
    data_df = scaler.fit_transform(df)
    logger.info("applied scaling successfully")
    return data_df
    # If no preprocessing is required, just return False instead:
    # return False

def predict(features: np.ndarray, model: Any, logger):
    """Predicts the results for the given inputs."""
    try:
        logger.info("model prediction")
        prediction = model.predict(features)
        probabilities = model.predict_proba(features)[0]  # class probabilities, if needed
    except Exception as e:
        logger.error(e)
        return e
    return prediction
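Before deploying, you can sanity-check the launch.py contract locally. The sketch below is a minimal smoke test, not part of the platform: DummyModel is a hypothetical stand-in for your real pickled classifier, and the three functions mirror the template's signatures with the optional preprocessing step returning False.

```python
# Minimal local smoke test for the launch.py contract (a sketch, not platform code).
import logging
import pickle

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("launch-smoke-test")

class DummyModel:
    """Stub with the sklearn-style API the template expects."""
    def predict(self, features):
        return [0 for _ in features]
    def predict_proba(self, features):
        return [[0.7, 0.3] for _ in features]

# Write a pickle so loadmodel() has a file to read, mirroring "model_pkl" above.
with open("model_pkl", "wb") as f:
    f_obj = DummyModel()
    pickle.dump(f_obj, f)

def loadmodel(logger):
    logger.info("loading model")
    with open("model_pkl", "rb") as f:
        return pickle.load(f)

def preprocessing(df, logger):
    # The stub needs no preprocessing; the template returns False in that case.
    return False

def predict(features, model, logger):
    logger.info("model prediction")
    return model.predict(features)

model = loadmodel(logger)
features = [[5.1, 3.5, 1.4, 0.2]]
processed = preprocessing(features, logger)
prediction = predict(features if processed is False else processed, model, logger)
print(prediction)  # -> [0]
```

If this wiring runs end to end with the stub, swapping in your real pickled model should exercise the same code path the platform calls.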

Once you have prepared the required files, you can proceed with the deployment.

Note: Before deployment, the prepared files must be inside a GitHub repository.
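A quick way to catch a missing file before you push to GitHub is a small pre-flight check. The helper below is a sketch (the name missing_files is ours, not part of the platform); it only checks for the three file names this guide requires at the repository root.

```python
# Sketch: verify the repository root contains the files this guide requires.
from pathlib import Path
import tempfile

REQUIRED_FILES = ["requirements.txt", "schema.py", "launch.py"]

def missing_files(repo_root):
    """Return the required deployment files absent from repo_root."""
    root = Path(repo_root)
    return [name for name in REQUIRED_FILES if not (root / name).is_file()]

# Usage: check a sample repository layout with launch.py not yet added.
with tempfile.TemporaryDirectory() as repo:
    (Path(repo) / "requirements.txt").write_text("scikit-learn==1.3.0\n")
    (Path(repo) / "schema.py").touch()
    print(missing_files(repo))  # -> ['launch.py']
```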

How to deploy your binary classification model using Custom Deployment

  1. Navigate to the Deploy section from the sidebar on the platform.

  2. Select the Model Deployment option at the bottom.

  3. Fill in the model details in the dialog box.
  • Give a Name to your deployment, for example audio-to-speech, and proceed to the next field.

  • Select Custom Model under the Model Source option.

  • Select the Model type; in this case it is Binary Classification.

  • Provide the GitHub token.

  • Your username will appear once the token is passed.

  • Select the Account type.

  • Select the Organization Name, if the account type is Organization.

  • Select the Repository.

  • Select the Revision Type.

  • Select the Branch Name, if the revision type is Branch.

Note: your GitHub repository must contain the requirements.txt, schema.py and launch.py files whose templates are discussed above.

  • Select the Python version.

  • Select Resources - CPU/GPU.

  • Enable or Disable Autoscaling.

  • Select the Pods range if Autoscaling is enabled.

  • Select +Add Environment Variables if your model depends on any OS-level parameters.

Note: after adding a variable name and value, don't forget to click the [+] button beside it; that adds the variable to your deployment.

  • Click on Deploy.

  4. Once your Custom Model API is created, you will be able to view it in the Deploy section, where it will initially be in the "Processing" state. Click on Refresh to update the status.

  5. You can also check the logs to see the progress of the current deployment using the Logs option.

  6. Once your Model API is in the Running state, you can check the consumption of hardware resources from the Usage option.

  7. You can access the API endpoints by clicking on API.

  • There are two APIs under API URLs:

  • Model Prediction API endpoint: This API generates predictions from the deployed model. Here is a code snippet that uses the predict API:

import requests

MODEL_API_ENDPOINT = "Prediction API URL"
SECURE_TOKEN = "Token"
data = {"data": "Define the value format as per the schema file"}
result = requests.post(f"{MODEL_API_ENDPOINT}", json=data, verify=False, headers={"Authorization": SECURE_TOKEN})
print(result.text)

  • Model Feedback API endpoint: This API monitors model performance once the true labels are available for the data. The predicted labels can be saved at the destination sources, and once the true labels are available they can be passed to the feedback URL to monitor the model continuously. Here is a code snippet that uses the feedback API:

import requests

MODEL_FEEDBACK_ENDPOINT = "Feedback API URL"
SECURE_TOKEN = "Token"
true = "Pass the list of true labels"
pred = "Pass the list of predicted labels"
data = {"true_label": true, "predicted_label": pred}
result = requests.post(f"{MODEL_FEEDBACK_ENDPOINT}", json=data, verify=False, headers={"Authorization": SECURE_TOKEN})
print(result.text)
  • Click on Create API token to generate a new token in order to access the API.

  • Give a name to the token.

  • Select the Expiration Type.

  • Set the Token Expiry Date.

  • Click on Create Token and generate your API Token from the pop-up dialog box.

Note: A maximum of 10 tokens can be generated for a model. Copy the API Token that was created; as it is only shown once, be sure to save it.

  • Under the Existing API tokens section you can manage the generated tokens and delete those that are no longer needed.

  • API usage docs brief you on how to use the APIs and even give you the flexibility to conduct API testing.

  • To learn more about using the generated API, follow the steps below:
    • This is a guide on how to use the endpoint API. Here you can test the API with different inputs to check the working model.
    • In order to test the API, you first need to authorize yourself by adding the token as shown below. Click on Authorize and close the pop-up.
    • Once authorized, you can click on the Predict_Endpoint bar and scroll down to Try it out.
    • If you click the Try it out button, the Request body panel becomes editable. Enter some input values for testing; the number of values/features in a record must equal the number of features used while training the model.
    • If you click Execute, you will see the prediction results at the end. If there are any errors, go back to the model card and check the error logs for further investigation.
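When testing, a malformed request body is the most common source of errors. The sketch below builds and sanity-checks a predict payload matching the PredictSchema template (the key must be "data", and each record must have the training feature count); build_payload and N_FEATURES are our names for illustration, not part of the platform.

```python
# Sketch: build and validate a predict-API request body per the schema template.
import json

N_FEATURES = 4  # hypothetical; set to the number of features your model was trained on

def build_payload(records):
    """Check record shape and return the JSON body for the predict API."""
    for row in records:
        if len(row) != N_FEATURES:
            raise ValueError(f"expected {N_FEATURES} features, got {len(row)}")
    return json.dumps({"data": records})

body = build_payload([[5.1, 3.5, 1.4, 0.2]])
print(body)  # -> {"data": [[5.1, 3.5, 1.4, 0.2]]}
```

The resulting string can be pasted into the Request body panel in the API usage docs, or passed as the json argument of the predict snippet above.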
  8. You can also modify the resources, version, and minimum & maximum pods of your deployed model by clicking the Edit option and saving the updated configuration.

  9. Click on Monitoring, and a dashboard will open in a new tab. This helps you monitor the effectiveness and efficiency of your deployed model. Refer to the Model Monitoring section in the documentation to learn more about the metrics that are monitored.

  10. To delete unused models, use the Delete button.