Accelerate ML Code for Your Hardware

Bring your own model code or build a new one with evoML. Our platform will optimise its speed and memory usage for the targeted hardware while reducing deployment costs. Deploy anywhere.

How evolutionary code optimisation works on ML model code

Inspired by Darwin’s theory of evolution, TurinTech uses evolutionary algorithms, meta-learning, and search-based software engineering to automatically find and tune the most suitable version of your model code for:

  • lower latency and higher throughput
  • less memory and energy usage
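As a rough illustration of the evolutionary idea (a minimal sketch, not evoML's actual implementation), the loop below evolves a population of candidate "code variants" toward lower latency. The variants and the `latency` fitness function are entirely made up for demonstration; in practice each candidate would be a real code configuration benchmarked on the target hardware.

```python
import random

# Hypothetical fitness: lower simulated latency is better. Variant 7 is the
# "fastest" configuration in this toy search space.
def latency(variant):
    return (variant - 7) ** 2 + 1

def evolve(population_size=20, generations=30, seed=0):
    rng = random.Random(seed)
    population = [rng.randint(0, 100) for _ in range(population_size)]
    for _ in range(generations):
        # Selection: keep the faster half of the population (elitism).
        population.sort(key=latency)
        survivors = population[: population_size // 2]
        # Mutation: each survivor spawns a slightly perturbed child.
        children = [max(0, s + rng.randint(-3, 3)) for s in survivors]
        population = survivors + children
    return min(population, key=latency)

best = evolve()
```

Because the faster half of each generation always survives, the best candidate never regresses, and mutation lets the search drift toward the optimum.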

ML-based Model Code Acceleration

A data science workflow is a data-intensive process that includes many steps, such as data transformation and model code usage. Each step of the workflow can be optimised to better utilise the targeted hardware. We use proprietary code optimisation techniques combined with the latest open-source acceleration libraries (such as Apache TVM) to accelerate model code during training and prediction.

evoML is the only platform that can optimise the whole end-to-end data science pipeline.

Model Code Input

Analyze Code

Optimise Code

Model Code Output

Step 1

Optimise Your Data Transformation

Code-level acceleration

Identify the most expensive lines of code and automatically optimise them with more efficient code alternatives.
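To make this concrete, here is a hedged sketch (not evoML's internals) of what a line-level rewrite looks like: a hand-rolled accumulation loop is replaced by an equivalent built-in, and the rewrite is only accepted if it is behaviour-preserving.

```python
import timeit

data = list(range(10_000))

# Original "expensive line": accumulates one element at a time in pure Python.
def slow_sum_of_squares(xs):
    total = 0
    for x in xs:
        total = total + x * x
    return total

# Candidate alternative an optimiser might propose: push the loop into the
# C-implemented built-in sum() via a generator expression.
def fast_sum_of_squares(xs):
    return sum(x * x for x in xs)

# Any rewrite must produce identical results before it is accepted.
assert slow_sum_of_squares(data) == fast_sum_of_squares(data)

# Benchmark both candidates; the faster, equivalent version wins.
slow_t = timeit.timeit(lambda: slow_sum_of_squares(data), number=50)
fast_t = timeit.timeit(lambda: fast_sum_of_squares(data), number=50)
```

The key point is the equivalence check: speed alone is never enough to accept a candidate rewrite.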

Model code-level acceleration

Optimise the speed of your autoencoders or transformers by automatically converting them to more efficient formats (such as PyTorch, ONNX, etc.) for your hardware (such as GPUs).

ML-based acceleration

evoML continuously learns from previous experiments to predict the most efficient data transformations. Use AI to optimise your AI.

Step 2

Optimise Your Model Training and Prediction Time

Code-level acceleration

Identify the most expensive lines of code that affect the speed of your model code during training and prediction.
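Finding those expensive lines starts with profiling. A minimal sketch using Python's standard profiler (the function names here are made up for illustration): a deliberately slow transformation is flagged at the top of the cumulative-time report.

```python
import cProfile
import io
import pstats

def expensive_transform(n):
    # Deliberately costly step the profiler should surface.
    return [str(i) for i in range(n)]

def train_step(n):
    features = expensive_transform(n)
    return len(features)

# Profile one training step and rank functions by cumulative time.
profiler = cProfile.Profile()
profiler.enable()
train_step(50_000)
profiler.disable()

stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(5)
report = stream.getvalue()
```

The report pinpoints `expensive_transform` as the hotspot, which is exactly the kind of target a code optimiser then rewrites.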

Model code-level acceleration

Optimise the prediction speed of your model code by leveraging more efficient runtimes (such as PyTorch, ONNX, etc.) for your hardware (such as GPUs).

Distributed Computation

Leverage the distributed architecture of evoML to optimise speed by parallelising the most performance-critical components.
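The parallelisation idea can be sketched with Python's standard library (a simplified stand-in, not evoML's distributed architecture): split independent work into chunks, fan them out across workers, and combine the partial results.

```python
from concurrent.futures import ThreadPoolExecutor

def score_chunk(chunk):
    # Stand-in for an independent pipeline component, e.g. scoring one
    # partition of a dataset.
    return sum(x * x for x in chunk)

data = list(range(100_000))
chunks = [data[i:i + 25_000] for i in range(0, len(data), 25_000)]

# Fan the independent chunks out across workers, then combine the results.
# A thread pool keeps the sketch simple; CPU-bound work would normally use
# process pools or a distributed cluster instead.
with ThreadPoolExecutor(max_workers=4) as pool:
    partial_sums = list(pool.map(score_chunk, chunks))

total = sum(partial_sums)
assert total == score_chunk(data)  # parallel result matches the serial one
```

The components must be independent for this to be safe, which is why identifying parallelisable pipeline stages matters as much as the parallel execution itself.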

Sign up for your free trial!

Our team will guide you through a demo of how you can achieve optimal model code and accelerate implementation with evoML.

Why TurinTech?

Cutting-edge Research Built into Product

TurinTech is a research-driven company with over 10 years of experience in code optimisation, backed by leading investors.

Trusting Partners from Day One

Our expert researchers, data scientists, and software engineers will work closely with your team, building the roadmap to AI success for your business.

Future-proof your Business

We are future-oriented, constantly developing technology for today and tomorrow. Working with us helps you future-proof your business.
