Faster-BNI: Fast Parallel Exact Inference on Bayesian Networks

Jiantong Jiang, Zeyi Wen, Atif Mansoor, Ajmal Mian

Research output: Contribution to journal › Article › peer-review

Abstract

Bayesian networks (BNs) have recently attracted increasing attention because they are interpretable machine learning models and enable a direct representation of causal relations between variables. However, exact inference on BNs is time-consuming, especially for complex problems, which hinders the widespread adoption of BNs. To improve efficiency, we propose a fast BN exact inference method named Faster-BNI on multi-core CPUs. Faster-BNI enhances the efficiency of a well-known BN exact inference algorithm, namely the junction tree algorithm, through hybrid parallelism that tightly integrates coarse- and fine-grained parallelism. Moreover, we identify that the bottleneck of BN exact inference methods lies in recursively updating the potential tables of the network. To reduce the table update cost, Faster-BNI employs novel optimizations, including reducing potential tables and re-organizing potential table storage, to avoid unnecessary memory consumption and to simplify potential table operations. Comprehensive experiments on real-world BNs show that the sequential version of Faster-BNI outperforms the existing sequential implementation by 9 to 22 times, and the parallel version of Faster-BNI achieves up to 11 times faster inference than its parallel counterparts. The Faster-BNI source code is freely available at https://github.com/jjiantong/FastPGM.
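The hybrid-parallelism idea in the abstract can be illustrated with a minimal sketch. The following is a hypothetical C++/OpenMP example, not the authors' Faster-BNI implementation: the clique structure, flat table layout, and the per-entry update rule are all assumptions made for illustration. The outer loop distributes independent cliques of the junction tree across threads (coarse-grained), while the inner loop splits each clique's potential table entries among threads (fine-grained).

    #include <omp.h>
    #include <vector>

    // Hypothetical clique holding a flat potential table (layout assumed).
    struct Clique {
        std::vector<double> potential;
        double incoming_message;  // stand-in for a separator message factor
    };

    // Hybrid parallelism sketch: coarse-grained across independent cliques,
    // fine-grained across each clique's potential table entries.
    void update_cliques(std::vector<Clique>& cliques) {
        omp_set_max_active_levels(2);  // allow nested parallel regions
        #pragma omp parallel for schedule(dynamic)  // coarse: across cliques
        for (long c = 0; c < (long)cliques.size(); ++c) {
            std::vector<double>& table = cliques[c].potential;
            const double msg = cliques[c].incoming_message;
            #pragma omp parallel for  // fine: split table entries among threads
            for (long i = 0; i < (long)table.size(); ++i) {
                table[i] *= msg;  // placeholder for the real table update
            }
        }
    }

    int main() {
        // Eight cliques with 65536-entry tables, purely for demonstration.
        std::vector<Clique> cliques(8, Clique{std::vector<double>(1 << 16, 1.0), 0.5});
        update_cliques(cliques);
        return 0;
    }

Whether nesting the two levels pays off depends on how many independent cliques are available at each step and on table sizes; the paper's contribution is in integrating the two granularities tightly rather than in the naive nesting shown here.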

Original language: English
Pages (from-to): 1444-1455
Number of pages: 12
Journal: IEEE Transactions on Parallel and Distributed Systems
Volume: 35
Issue number: 8
Early online date: 2024
Publication status: Published - Aug 2024
