Applying Parallelization Strategies to Improve the Performance of Inference Mechanisms
Keywords:
RETE Algorithm, Inference, GPU, Parallel Computing, Threads.
Abstract
The use of semantic technologies for system development is increasing. Knowledge represented in ontologies can be used in many applications, ranging from knowledge-based recommendation systems to Semantic Web applications. The core component of these semantic applications is the logical inference engine, which processes production rules and generates new facts in the knowledge base. The inference engine's performance is directly related to the size of the knowledge base, and the demands of today's knowledge bases are becoming a challenge. This paper presents a knowledge base search algorithm for the RETE algorithm that exploits the parallel structures intrinsic to modern computers, improving the performance of the inference engine. We implemented a thread-based and a GPU-based search engine and compared their performance. The main contributions of this paper are the parallel system that implements the search engine and the algorithm for the vectorization of the knowledge base.
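As a rough illustration of the thread-based approach (a minimal sketch, not the paper's actual implementation), the example below assumes the knowledge base is "vectorized" into flat, aligned arrays of triples and shows a single rule condition being matched in parallel over those facts; the class, data layout, and fact values are hypothetical.

import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

// Minimal sketch of a thread-parallel fact scan. The knowledge base is
// flattened ("vectorized") into aligned arrays so each worker thread can
// test a contiguous range of facts against one rule condition.
public class ParallelAlphaMatch {

    // Hypothetical triple layout (subject, predicate, object), one fact per index.
    static final String[] SUBJECTS   = {"alice", "bob",   "carol", "alice"};
    static final String[] PREDICATES = {"knows", "knows", "likes", "likes"};
    static final String[] OBJECTS    = {"bob",   "carol", "alice", "carol"};

    // One alpha-node-style test: collect the indices of all facts whose
    // predicate matches the pattern. parallel() spreads the scan over the
    // JVM's common fork/join pool, i.e. the machine's hardware threads.
    static List<Integer> matchPredicate(String pattern) {
        return IntStream.range(0, PREDICATES.length)
                .parallel()
                .filter(i -> PREDICATES[i].equals(pattern))
                .boxed()
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        System.out.println(matchPredicate("knows")); // prints [0, 1]
    }
}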