Applying Parallelization Strategies for Inference Mechanisms Performance Improvement

Authors

  • Sandro José Rigo, UNISINOS

Keywords

RETE Algorithm, Inference, GPU, Parallel Computing, Threads.

Abstract

The use of semantic technologies in system development is increasing. Knowledge represented in ontologies can be used in many applications, ranging from knowledge-based recommendation systems to Semantic Web applications. The core component of semantic applications is the logical inference engine, which processes production rules and generates new facts in the knowledge base. The inference engine's performance is directly related to the size of the knowledge base, and the demands of today's knowledge bases have become a challenge. This paper presents a knowledge-base search algorithm for the RETE algorithm that exploits the parallel structures intrinsic to modern computers, improving the performance of the inference engine. We implemented a thread-based and a GPU-based search engine and compared their performance. The main contributions of this paper are the parallel system that implements the search engine and the algorithm for vectorization of the knowledge base.
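The paper's thread-based search partitions the knowledge base so that multiple workers match rule conditions against facts concurrently. As a rough illustration only (the fact representation, pattern format, and partitioning scheme below are assumptions, not the authors' implementation), a minimal sketch of this idea might look like:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical working memory: facts as (subject, predicate, object) triples.
facts = [
    ("socrates", "is_a", "human"),
    ("plato", "is_a", "human"),
    ("fido", "is_a", "dog"),
    ("human", "subclass_of", "mortal"),
]

def match_chunk(chunk, pattern):
    """Return the facts in one partition that match a pattern (None = wildcard)."""
    return [f for f in chunk
            if all(p is None or p == v for p, v in zip(pattern, f))]

def parallel_match(fact_base, pattern, workers=4):
    """Partition the fact base and match each partition in a worker thread."""
    size = max(1, len(fact_base) // workers)
    chunks = [fact_base[i:i + size] for i in range(0, len(fact_base), size)]
    results = []
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # pool.map preserves chunk order, so results come back deterministically.
        for matched in pool.map(lambda c: match_chunk(c, pattern), chunks):
            results.extend(matched)
    return results

# Find every fact asserting that something "is_a" human.
humans = parallel_match(facts, (None, "is_a", "human"))
```

A GPU-based variant would instead encode the fact base as numeric arrays (the vectorization step the abstract refers to) so that one kernel thread can test one fact per comparison.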

Published

2019-08-16

How to Cite

Rigo, S. J. (2019). Applying Parallelization Strategies for Inference Mechanisms Performance Improvement. IEEE Latin America Transactions, 16(12), 2881–2887. Retrieved from https://latamt.ieeer9.org/index.php/transactions/article/view/918