Model Deployment

Once the pipeline is created, we have everything needed to deploy a model: the machine learning life cycle pipeline, the best model, and the artifacts related to that model. We can now deploy the model we chose earlier and move its tag from the None stage to the Production stage.
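If the Katonic Registry you are using is MLflow-compatible (an assumption; the platform may also expose its own SDK for this), the stage transition could be done programmatically along the lines of the sketch below. The model name and version are placeholders.

```python
# Minimal sketch, assuming an MLflow-compatible model registry.
# The model name and version are placeholders for your own values.
from mlflow.tracking import MlflowClient

client = MlflowClient()
client.transition_model_version_stage(
    name="my-model",     # name used when the model was registered
    version=1,           # version to promote
    stage="Production",  # move the tag from "None" to "Production"
)
```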

Once we deploy the model, we get an API endpoint, which can be used in several ways, such as for inference or in web applications.

To deploy the model, go to the Deploy section of the Katonic platform, click Create a Deployment, and choose Model API, since we only want to deploy the model and get an API endpoint.

[Screenshot: deploy model]

Once you click the Model API option, a window opens where you choose which model to deploy and set its configuration.

You need to provide the following details to configure the deployment:

  • Name: A name for the model that is going to be deployed.

  • Model Source: Choose Katonic Registry as the model source.

  • Model: The model name that was given when registering it.

  • Model Version: The version of the model you want to deploy.

  • Model Type: The type of model, such as Classification, Regression, or another type like time series.

  • Hardware Type: Choose CPU or GPU.

  • Resources: The amount of resources you want to allocate to the model, based on your requirements.

  • Autoscaling: Choose to enable or disable it. Once enabled, select the minimum and maximum pod range.

Then click Deploy to deploy your model.

[Screenshot: configure deploy model]

It will take a few minutes to deploy the model, because the platform configures the environment to run the model continuously and allocates the requested resources. It also assigns a Monitoring Dashboard to help you detect model drift or deterioration, and you get Swagger documentation under the API option to test your model APIs.

Once the model is deployed and running, its status shows as RUNNING. From there you get an option to access the API endpoint.

[Screenshot: model API]

You can use the above API for inference and in web applications. However, to use it, you need an API token for authorization.

The token secures your API endpoint: no one can access it without the token. To obtain one, go to the API section and click the Create API Token tab. Provide a name, an expiry type, and an expiry date for the API token, then click Create Token. This gives you a string value that you can use.
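As a rough illustration, calling the deployed endpoint with the token might look like the sketch below. The URL, token, header scheme, and input payload are placeholders; the exact request schema and authorization format are shown in the deployment's Swagger documentation mentioned above.

```python
# Minimal sketch of calling the deployed model API for inference.
# The URL, token, and payload are placeholders; check the deployment's
# Swagger documentation for the exact request schema and auth scheme.
import requests

API_URL = "https://<your-katonic-domain>/<model-endpoint>/predict"  # placeholder
API_TOKEN = "<your-api-token>"  # string obtained from Create API Token

headers = {
    "Authorization": f"Bearer {API_TOKEN}",  # token used for authorization
    "Content-Type": "application/json",
}
payload = {"data": [[5.1, 3.5, 1.4, 0.2]]}  # example input; depends on your model

response = requests.post(API_URL, headers=headers, json=payload)
response.raise_for_status()
print(response.json())  # model prediction
```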