Problem
Our client was heavily involved in developing and deploying AI applications across several sectors. As their AI models grew more complex, the cost of inference (running models to generate results) became a critical issue for the company. The client, highly experienced with AI models, sought to reduce these costs by optimising their use of graphics processing units (GPUs). In particular, they wanted to understand how different GPU architectures affect performance, from clock speeds to specialised features such as ray tracing.
The client was particularly concerned about the efficiency of their machine learning models. They were running multiple models across a wide range of discrete GPU architectures. Each type of GPU had different strengths and weaknesses, and the client needed to allocate their resources strategically. They wanted a way to predict the inference performance of various models on different GPU architectures to reduce running costs without sacrificing performance.
Solution
Our task was to model the performance of various AI operations on different GPU architectures and provide the client with clear insights into the performance implications of each. We needed to examine popular AI model operations, such as convolutions, which are central to tasks like image recognition and video analysis. Our approach involved recreating several of these operations and modelling their execution on the GPU at a low level.
We used Python and OpenCL for this task. Python provided flexibility for coding and testing, while OpenCL let us work close to the underlying GPU hardware. This allowed us to model in detail how a GPU behaves as it executes complex machine learning tasks.
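To give a flavour of this setup, the sketch below runs a toy OpenCL kernel from Python with pyopencl and times it using device-side profiling events. The kernel, buffer names, and sizes are illustrative stand-ins, not the client's actual code.

```python
# A minimal sketch: launch a small OpenCL kernel from Python and time it
# with profiling events. Names and sizes here are illustrative only.
import numpy as np
import pyopencl as cl

ctx = cl.create_some_context()
queue = cl.CommandQueue(
    ctx, properties=cl.command_queue_properties.PROFILING_ENABLE)

# A toy element-wise kernel standing in for one step of an AI operation.
src = """
__kernel void scale(__global const float *x,
                    __global float *y,
                    const float a) {
    int i = get_global_id(0);
    y[i] = a * x[i];
}
"""
prg = cl.Program(ctx, src).build()

n = 1 << 20
x = np.random.rand(n).astype(np.float32)
mf = cl.mem_flags
x_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=x)
y_buf = cl.Buffer(ctx, mf.WRITE_ONLY, x.nbytes)

# Launch and wait; the event carries device-side timestamps in nanoseconds.
evt = prg.scale(queue, (n,), None, x_buf, y_buf, np.float32(2.0))
evt.wait()
elapsed_s = (evt.profile.end - evt.profile.start) * 1e-9
print(f"kernel time: {elapsed_s * 1e3:.3f} ms")
```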
The core of our solution was a performance model that could predict how well a given GPU architecture would perform with different types of AI workloads (a simplified sketch follows the list below). This model took into account various GPU parameters, such as:
- Clock speed: higher clock speeds typically lead to faster processing, but they can also increase power consumption and heat generation.
- Memory bandwidth: this determines how quickly data can be transferred between the GPU's cores and its memory.
- Parallel processing: many AI models, particularly deep learning models, require large amounts of data to be processed simultaneously. GPUs excel at this because they can handle many calculations in parallel.
- Compute units: the individual processing units inside the GPU, which determine how many tasks it can handle at once.
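To make the model's core idea concrete, the sketch below applies a simple roofline-style estimate: a workload's runtime is bounded by the slower of its compute demand and its memory traffic, given the parameters above. The class names and device figures are illustrative assumptions, not the client's actual model.

```python
# A simplified, roofline-style sketch of the performance model.
# All device numbers here are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class GpuSpec:
    peak_gflops: float      # peak compute throughput (GFLOP/s)
    bandwidth_gbps: float   # peak memory bandwidth (GB/s)

@dataclass
class Workload:
    gflops: float           # total compute in the operation (GFLOP)
    gbytes: float           # total data moved (GB)

def predict_runtime_s(gpu: GpuSpec, work: Workload) -> float:
    """Runtime is bounded by the slower of compute and memory traffic."""
    compute_time = work.gflops / gpu.peak_gflops
    memory_time = work.gbytes / gpu.bandwidth_gbps
    return max(compute_time, memory_time)

# Example: a convolution-like workload on a hypothetical mid-range GPU.
gpu = GpuSpec(peak_gflops=10_000.0, bandwidth_gbps=450.0)
work = Workload(gflops=50.0, gbytes=2.0)
print(f"predicted runtime: {predict_runtime_s(gpu, work) * 1e3:.2f} ms")
```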
We also designed a tool to measure the characteristics of any OpenCL-capable GPU the client was using. This tool could analyse the GPU’s performance on specific tasks and provide detailed feedback on how it would handle different AI models.
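As an illustration of what such a tool queries, OpenCL exposes most of these static characteristics directly through its device API. The following sketch (not the client's tool itself) lists them for every visible GPU using pyopencl.

```python
# Sketch: query the static characteristics of every OpenCL-capable GPU.
import pyopencl as cl

for platform in cl.get_platforms():
    for dev in platform.get_devices(device_type=cl.device_type.GPU):
        print(f"device:          {dev.name}")
        print(f"  compute units: {dev.max_compute_units}")
        print(f"  max clock:     {dev.max_clock_frequency} MHz")
        print(f"  global memory: {dev.global_mem_size / 2**30:.1f} GiB")
        print(f"  local memory:  {dev.local_mem_size / 2**10:.0f} KiB")
        print(f"  max work group size: {dev.max_work_group_size}")
```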
Performance Modelling in GPUs
Performance modelling of GPUs is an important part of optimising AI systems. Modern GPUs are highly specialised hardware designed to handle tasks like 3D graphics, virtual reality, and machine learning. They are far more efficient at these tasks than central processing units (CPUs) because they have hundreds or even thousands of cores that can process data simultaneously.
In this case, we focused on discrete GPUs, which are separate from the system’s main CPU and memory. These dedicated graphics cards have their own memory and processing power, making them ideal for high-intensity tasks like AI inference. However, discrete GPUs vary in their ability to handle different AI models, and understanding which GPU was best suited for the client’s needs was critical to optimising their system.
For instance, the client had a variety of video cards at their disposal, including models that supported advanced features like ray tracing for 3D graphics. However, these features, while useful in areas like virtual reality, didn’t always provide a performance boost for their specific AI tasks. Our model allowed the client to identify which features were essential for their work and which were unnecessary, saving them valuable resources.
Predicting GPU Performance for AI Models
The predictive aspect of the performance model was key to helping the client reduce costs. By analysing a GPU's characteristics (clock speed, memory bandwidth, and parallel processing capability), the client could predict how efficiently it would run their AI models.
For example, the client often used machine learning algorithms that involved multiple layers of convolution and matrix multiplication. These operations are highly parallelisable, meaning they run best on GPUs with a large number of cores and high memory bandwidth. On the other hand, certain types of tasks, such as training models with very large datasets, may require GPUs with high memory capacity rather than just raw processing power.
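To show how this reasoning works in practice, the sketch below compares the arithmetic intensity of a square matrix multiplication (FLOPs per byte moved) against a hypothetical GPU's compute-to-bandwidth ratio to decide whether it is compute-bound or memory-bound. All device figures are assumed values for illustration.

```python
# Sketch: estimate whether a square matrix multiplication is compute- or
# memory-bound on a given GPU. Device figures are illustrative assumptions.
def matmul_arithmetic_intensity(n: int, bytes_per_elem: int = 4) -> float:
    flops = 2.0 * n**3                         # n^3 multiply-adds
    bytes_moved = 3.0 * n**2 * bytes_per_elem  # read A and B, write C
    return flops / bytes_moved

# Hypothetical device: 10 TFLOP/s peak, 450 GB/s memory bandwidth.
machine_balance = 10_000e9 / 450e9  # FLOPs the GPU can do per byte moved

for n in (64, 256, 1024, 4096):
    ai = matmul_arithmetic_intensity(n)
    bound = "compute-bound" if ai > machine_balance else "memory-bound"
    print(f"n={n:5d}  intensity={ai:7.1f} FLOP/byte  -> {bound}")
```

Small matrices end up memory-bound (the GPU starves waiting for data), while large ones become compute-bound; this is exactly the kind of crossover the model exposed for the client's workloads.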
With the model we developed, the client was able to forecast how different AI models would perform on various GPU architectures. This allowed them to choose the most cost-effective GPU for each specific task, significantly reducing their inference costs. Additionally, by knowing which features were essential for their work, they could avoid purchasing more expensive GPUs with unnecessary capabilities.
Results
The final result of our work was a detailed performance model that not only helped the client predict how well their AI models would perform on different GPU architectures, but also gave them valuable insight into how their graphics cards work at a low level. This knowledge was crucial for their development team, enabling them to optimise their use of GPUs in the long term.
The model we provided was sophisticated enough to predict performance across a wide range of GPU architectures. The client could now test various AI models on GPUs with different configurations, identifying the best possible setup for their needs.
The tools we developed also helped the client measure the performance of their discrete GPUs. By analysing the clock speeds, memory usage, and other parameters, the client was able to make informed decisions about which GPU to use for different types of tasks.
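Measured, as opposed to datasheet, memory bandwidth is one of the most useful numbers such a tool can report. A device-to-device copy benchmark using OpenCL profiling events might look like the sketch below; the buffer size is an illustrative assumption.

```python
# Sketch: measure achieved device memory bandwidth with a buffer copy.
import pyopencl as cl

ctx = cl.create_some_context()
queue = cl.CommandQueue(
    ctx, properties=cl.command_queue_properties.PROFILING_ENABLE)

nbytes = 256 * 2**20  # 256 MiB per buffer; adjust to fit device memory
src = cl.Buffer(ctx, cl.mem_flags.READ_WRITE, nbytes)
dst = cl.Buffer(ctx, cl.mem_flags.READ_WRITE, nbytes)

# Warm up once, then time a device-to-device copy.
cl.enqueue_copy(queue, dst, src).wait()
evt = cl.enqueue_copy(queue, dst, src)
evt.wait()

elapsed_s = (evt.profile.end - evt.profile.start) * 1e-9
# The copy reads and writes each byte once, so it moves 2 * nbytes in total.
print(f"achieved bandwidth: {2 * nbytes / elapsed_s / 1e9:.1f} GB/s")
```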
The most significant benefit, however, was the cost savings. By optimising their use of GPU resources, the client reduced the amount of time and money they spent on AI inference. This not only improved the performance of their models but also allowed them to reallocate resources to other areas of their business.
Educational Value
One of the key takeaways from this project was its educational value for the client’s team. While the performance model was primarily designed to optimise their AI systems, the insights it provided were invaluable for understanding how GPUs functioned at a fundamental level.
Through our reports and workshops, the client's development team gained a deeper understanding of how their GPUs worked, enabling them to better utilise these powerful tools in future projects. The client valued this internal knowledge transfer, which helped them enhance their AI capabilities over time.
Conclusion
Our performance modelling project helped the client tackle the growing costs associated with AI inference by optimising their use of GPUs. By building a model that could predict the performance of various AI models on different GPU architectures, we enabled the client to make better-informed decisions and save on GPU resources.
In the long run, the performance model proved to be not just a tool for improving efficiency, but also a valuable educational resource for the client’s team. This project highlighted the importance of understanding the intricate relationship between AI workloads and GPU performance, enabling the client to build more cost-effective, high-performance systems for the future.
At TechnoLynx, we specialise in helping businesses optimise their AI workflows. Whether you’re looking to improve your GPU performance, reduce costs, or develop new AI solutions, our team can provide the tools and expertise you need to succeed.