Automated Generation of Optimized Code Implementing SVM Models on GPUs

Authors

  • Oscar Jesus Castro, Universidad Autónoma de Sinaloa
  • Ines Fernando Vega, Universidad Autónoma de Sinaloa

Keywords

GPU, code implementation, SVM

Abstract

The deployment of Support Vector Machine (SVM) models is a challenging task. These models contain complex mathematics and logic, and they can be composed of thousands of floating-point values. Consequently, when performed by humans, deployment is slow and error-prone. We argue that the process of translating a machine-learned predictive model into source code is deterministic and can therefore be automated. In this paper, we present guidelines for the automatic source code generation and efficient execution of SVM models. These guidelines cover code generation for specialized architectures such as Graphics Processing Units (GPUs) using the Compute Unified Device Architecture (CUDA) platform/language. We also provide experimental evidence showing that the resulting source code implements these models efficiently. In addition, we give a detailed description of the generated source code, from a sequential version up to an optimized parallel version. We conducted experiments with a large data set of up to 9 GB in size to show both the feasibility and the scalability of the proposal. The experiments show an average speed-up of 112.71 times with respect to the sequential, CPU-based execution of SVMs, and a speed-up of 6.2 times with respect to other SVM modeling tools that run on a GPU.
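
To illustrate the kind of code such a generator produces, the following is a minimal sketch (not the authors' generated code) of a CUDA kernel that evaluates an RBF-kernel SVM decision function for many input samples in parallel, one thread per sample. All identifiers here (svm_decision_kernel, dual_coefs, gamma, bias) are illustrative assumptions.

// Minimal sketch: parallel evaluation of an RBF-kernel SVM decision function.
// One thread computes the decision value for one input sample.
#include <cuda_runtime.h>
#include <math.h>

__global__ void svm_decision_kernel(const float *samples,      // [n_samples * n_features]
                                    const float *sup_vectors,  // [n_sv * n_features]
                                    const float *dual_coefs,   // [n_sv], alpha_j * y_j
                                    float gamma, float bias,
                                    int n_samples, int n_sv, int n_features,
                                    float *decisions)          // [n_samples]
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n_samples) return;

    float acc = bias;
    for (int j = 0; j < n_sv; ++j) {
        // Squared Euclidean distance between sample i and support vector j.
        float dist2 = 0.0f;
        for (int k = 0; k < n_features; ++k) {
            float d = samples[i * n_features + k] - sup_vectors[j * n_features + k];
            dist2 += d * d;
        }
        acc += dual_coefs[j] * expf(-gamma * dist2);  // RBF kernel term
    }
    decisions[i] = acc;  // sign(acc) gives the predicted class
}

Because the trained model (support vectors, dual coefficients, gamma, bias) is fixed at generation time, an automated generator could specialize this skeleton further, for example by unrolling the feature loop or embedding model constants directly in the code, which is the kind of optimization the paper's optimized parallel versions target.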

Published

2020-11-04