From a $10 Optimisation to $400K Savings: Artemis AI Cuts QuantLib Runtime by 32.72% in Five Clicks

Unlock Cost Savings in Your Code

Imagine saving up to $400,000 annually in compute costs with just a $10 investment in code optimisation. With Artemis AI, our GenAI-powered code optimisation platform, entire codebases can be optimised quickly for as little as $10, delivering significant savings by cutting the cost of running inefficient code and sparing developers the time needed to spot and fix inefficiencies themselves.

In this blog article, we walk you through a recent project on the QuantLib C++ library, where one of our engineers used Artemis AI to achieve a 32.72% faster runtime. Our pull request was accepted and merged, so every financial firm leveraging QuantLib will benefit from this optimisation.

For example, a bank spending $100,000 per month on cloud computing resources for QuantLib-based financial applications could save up to $32,720 per month, or $392,640 annually, with a mere $10 spent on achieving a 32.72% runtime improvement.
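As a quick sanity check on that arithmetic, here is a minimal sketch in Python, assuming cloud spend scales roughly linearly with runtime; the $100,000 monthly figure is the illustrative spend from the example above, not a customer's actual bill.

```python
# Illustrative savings estimate, assuming cloud cost scales linearly with runtime.
monthly_cloud_spend = 100_000      # USD spent monthly on QuantLib-based workloads (example figure)
runtime_improvement = 0.3272       # 32.72% faster runtime after optimisation

monthly_savings = monthly_cloud_spend * runtime_improvement
annual_savings = monthly_savings * 12

print(f"Monthly savings: ${monthly_savings:,.0f}")   # $32,720
print(f"Annual savings:  ${annual_savings:,.0f}")    # $392,640
```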

Project Overview

QuantLib is an open-source library extensively used by financial institutions for quantitative finance tasks such as modelling, trading, and risk management. It powers a wide range of applications, from commercial financial software platforms and research tools to custom systems built in-house by individual firms.

The Challenge

Slow and inefficient code in critical libraries like QuantLib can significantly impact the speed and efficiency of financial applications, reducing profitability and competitiveness for businesses.

A single codebase like QuantLib can have hundreds of thousands of lines of code. Identifying inefficiencies in such codebases is a time-consuming task for engineers. Even experienced performance engineers can take days or weeks to write and validate improved code versions, making code optimisation a cumbersome process.

Solution

With Artemis AI, one of our engineers optimised the performance of QuantLib in just five clicks and three easy steps:

  1. Code Analysis: Artemis AI’s automated code analysis combined large language models (LLMs), static analysis, and profilers such as Intel VTune to identify multiple performance bottlenecks in the codebase. Our engineer completed this analysis in just two minutes.
  2. LLM-based Code Recommendations: Our engineer selected several LLMs (e.g. ArtemisLLM, GPT-4 Turbo, and Claude Opus) on the Artemis AI platform to generate over a hundred code recommendations that could potentially boost the performance of QuantLib. Artemis AI automatically scored and validated each recommendation, helping our engineer decide on the most effective and secure code changes.
  3. Code Optimisation: Artemis AI identified the optimal combination of code changes from 700 candidate options. The platform also provided performance metrics (e.g. runtime, CPU usage, memory usage) so our engineer could make an informed decision on which changes to implement.

The figures below compare the runtime of the original code with the version optimised by Artemis AI.

*The 32.72% runtime improvement was calculated by averaging the results of 20 unit test runs before and after optimisation by Artemis AI.
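For readers who want to reproduce this kind of measurement on their own codebase, here is a minimal sketch of the averaging approach described in the footnote above. The timing values are placeholders, not our benchmark data, and the script is not part of the Artemis AI platform.

```python
from statistics import mean

# Placeholder timings (seconds) for 20 runs of a unit-test suite before and
# after an optimisation; substitute your own measured values.
baseline_runs  = [12.0 + 0.05 * i for i in range(20)]
optimised_runs = [8.1 + 0.05 * i for i in range(20)]

# Relative runtime improvement: how much faster the optimised version runs,
# comparing the average of the "after" runs against the average of the "before" runs.
improvement = (mean(baseline_runs) - mean(optimised_runs)) / mean(baseline_runs)
print(f"Average runtime improvement: {improvement:.2%}")
```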

Benefits

  • Performance Improvements: Achieved a 32.72% runtime acceleration with one pull request.
  • Developer Productivity: Developers are freed from weeks of manual optimisation, allowing them to focus on more strategic tasks.
  • Business Impact: Faster analysis and responsiveness to financial market changes, substantial cloud cost savings, and reduced carbon emissions.

By leveraging Artemis AI, businesses can operate faster, greener, and more efficiently. Our platform’s ability to quickly optimise codebases allows your developers to focus on innovation, boosting productivity and enhancing your competitive edge.

For more insights, check out our previous blog on how financial services can leverage Artemis AI for code upgrades and refactoring to achieve significant performance improvements and cost savings.