Taking a machine learning pipeline from conceptualisation to production can be a long and complicated process. From developing production-ready code to incorporating models into existing data and code frameworks, deploying a machine learning pipeline can take significant time and effort.
To streamline and expedite this deployment process as much as possible, evoML offers users three deployment options. This article covers the three deployment methods and their advantages.
Figure 1: evoML deployment options
With code-based deployment, evoML provides a Python code snippet of the trained machine learning model. Users can embed this snippet in their existing architecture and use the model directly to generate predictions: the model code takes appropriate inputs, processes them, and outputs the result. This method allows maximum flexibility, as users can customise the code to their needs. Advantages of code-based deployment include:
- Ability to customise model code as necessary
- Easy and direct integration into code of existing systems and pipelines
- Higher explainability, especially useful for highly regulated industries
- Easy to reproduce
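As a rough illustration, an exported model snippet can be dropped into application code and called like any other Python object. The class name, parameters, and preprocessing below are invented for this sketch and do not reflect evoML's actual generated code:

```python
# Hypothetical sketch of embedding an exported model snippet in existing code.
# The class, weights, and preprocessing are illustrative, not evoML output.

class ExportedModel:
    """Stands in for the Python snippet generated for a trained model."""

    # illustrative learned parameters of a simple linear classifier
    WEIGHTS = [0.4, 1.2, -0.7]
    BIAS = 0.05

    def preprocess(self, features):
        # placeholder for preprocessing steps baked into the snippet
        return [float(x) for x in features]

    def predict(self, features):
        # linear score followed by a threshold at zero
        x = self.preprocess(features)
        score = sum(w * v for w, v in zip(self.WEIGHTS, x)) + self.BIAS
        return 1 if score > 0 else 0


# Embedding the model in an existing pipeline is then a plain function call:
model = ExportedModel()
print(model.predict([1.0, 0.5, 2.0]))  # prints 0
```

Because the snippet is ordinary Python, it can be versioned, code-reviewed, and customised like the rest of the codebase, which is what makes this option the most flexible of the three.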
evoML also gives users the option to access models via a RESTful API, for both on-premise and cloud-based deployment. User queries are sent to the API as web requests, and the API returns model predictions as responses. For cloud-based deployment, in addition to proprietary TurinTech servers, users can host their models on managed services from popular cloud providers such as AWS SageMaker, Azure ML, and Google Vertex AI. Advantages of API-based deployment include:
- Provides a unified, standardised interface for interacting with the model, with the streamlined data exchange that RESTful APIs offer
- Convenience in connecting with existing services
- Easy deployment from evoML
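In practice, querying such an endpoint typically means sending a JSON POST request. The endpoint URL and request/response schema below are assumptions made for illustration, not evoML's documented API contract:

```python
# Hedged sketch of querying a deployed model over a RESTful API.
# The URL and JSON schema are hypothetical placeholders.
import json
import urllib.request

ENDPOINT = "https://models.example.com/v1/predict"  # hypothetical endpoint


def build_request(features):
    """Package a feature vector as a JSON prediction request."""
    payload = json.dumps({"instances": [features]}).encode("utf-8")
    return urllib.request.Request(
        ENDPOINT,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )


def predict(features):
    """Send the request and parse the prediction from the JSON response."""
    req = build_request(features)
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["predictions"][0]
```

The same client code works whether the model is hosted on-premise or on a managed cloud service, which is the practical benefit of the unified interface described above.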
evoML integrates with databases such as Starburst, Exasol, and MindsDB, so users can feed data from the database into the model, generate predictions, and store the predictions back in the database. Database integration is particularly useful where the model has to interact with large data infrastructures, such as in ETL (extract, transform, and load) processes or batch-processing tasks. Advantages of database integration include:
- Easy input and manipulation of data, especially for large datasets
- Automated update of models with integrated databases, leading to better model predictions over time
- Higher data security, since data can remain within the existing database infrastructure
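The batch-scoring pattern described above can be sketched as a read-predict-write loop. The snippet uses Python's built-in sqlite3 as a stand-in for an integrated store such as Starburst or Exasol, and the table names and model are invented for illustration:

```python
# Illustrative read-predict-write loop against a database.
# sqlite3 stands in for an integrated store; schema and model are hypothetical.
import sqlite3


def score_table(conn, model_fn):
    """Read unscored rows, run the model, and write predictions back."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS predictions (id INTEGER, label INTEGER)"
    )
    rows = conn.execute("SELECT id, f1, f2 FROM inputs").fetchall()
    preds = [(row_id, model_fn([f1, f2])) for row_id, f1, f2 in rows]
    conn.executemany("INSERT INTO predictions VALUES (?, ?)", preds)
    conn.commit()
    return len(preds)


# Setup: an in-memory database with a few unscored rows.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE inputs (id INTEGER, f1 REAL, f2 REAL)")
conn.executemany(
    "INSERT INTO inputs VALUES (?, ?, ?)",
    [(1, 0.2, 0.9), (2, 0.1, 0.3)],
)

# A stand-in model: positive if the feature sum exceeds 1.
threshold_model = lambda x: 1 if sum(x) > 1.0 else 0
score_table(conn, threshold_model)
```

Running the whole loop inside the database environment is what makes this option suited to ETL and batch workloads: the data never has to be exported to a separate serving layer.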