abstract: | A typical application of probabilistic inference involves constructing a probabilistic model, such as a Bayesian network, and then applying algorithms to the model to answer probabilistic queries. To compute an exact answer to a typical query, mainstream algorithms require time and space exponential in a measure known as the treewidth of the network. In recent years, networks of interest have grown in treewidth, leading many in the field to dismiss exact inference in favor of approximate techniques. In this thesis, we argue that exact inference can handle much larger networks than commonly believed. The key to attacking the treewidth barrier is a type of network structure called local structure, or parametric structure, which, unlike topological structure, is not effectively utilized by mainstream inference algorithms. More specifically, this thesis focuses on new techniques for compiling Bayesian networks and for encoding local structure into logic. On many networks already accessible to mainstream algorithms, the approach we define computes exact answers orders of magnitude more efficiently. More importantly, on many networks with treewidths in the hundreds, effectively employing local structure in this way allows us to compute exact answers for the first time.