Selected Publications


spn

Learning Deep Mixtures of Gaussian Process Experts Using Sum-Product Networks
While Gaussian processes (GPs) are the method of choice for regression tasks, they also come with practical difficulties, as inference cost scales cubically in time and quadratically in memory. In this paper, we introduce a natural and expressive way to tackle these problems by incorporating GPs into sum-product networks (SPNs), a recently proposed tractable probabilistic model allowing exact and efficient inference. In particular, by using GPs as leaves of an SPN we obtain a novel flexible prior over functions, which implicitly represents an exponentially large mixture of local GPs. Exact and efficient posterior inference in this model can be done through a natural interplay of the inference mechanisms in GPs and SPNs. Thereby, each GP is responsible only for a subset of data points, similarly to a mixture-of-experts approach, which effectively reduces inference cost in a divide-and-conquer fashion. We show that integrating GPs into the SPN framework leads to a promising probabilistic regression model which (1) is computationally and memory efficient, (2) allows efficient and exact posterior inference, (3) is flexible enough to mix different kernel functions, and (4) naturally accounts for non-stationarities in time series. In a variety of experiments, we show that the SPN-GP model can learn input-dependent parameters and hyper-parameters and is on par with or outperforms traditional GPs as well as state-of-the-art approximations on real-world data.

Martin Trapp, Robert Peharz, Carl E. Rasmussen, Franz Pernkopf
Presented at ICML Workshop on Tractable Probabilistic Models [arXiv] – 2018
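The divide-and-conquer idea behind the mixture-of-local-GP-experts view can be illustrated with a minimal NumPy sketch (this is not the paper's SPN-GP model; the two-expert split, kernel parameters, and the fixed sigmoid gate are illustrative assumptions): each local expert only solves the linear system for its own subset of points, so two 100-point solves replace one 200-point solve.

```python
import numpy as np

def rbf_kernel(x1, x2, lengthscale=0.2, variance=1.0):
    """Squared-exponential kernel between two sets of 1-D inputs."""
    d = x1[:, None] - x2[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

def gp_posterior_mean(x_train, y_train, x_test, noise=1e-2):
    """Exact GP posterior mean; cost is cubic in len(x_train)."""
    K = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf_kernel(x_test, x_train)
    return Ks @ np.linalg.solve(K, y_train)

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0.0, 2.0, 200))
y = np.sin(3.0 * x) + 0.1 * rng.standard_normal(200)

# Split the input space in two; each local GP ("expert") only sees its
# own subset, so each solve involves a ~100x100 system instead of 200x200.
split = x < 1.0
x_test = np.linspace(0.0, 2.0, 50)
mean_left = gp_posterior_mean(x[split], y[split], x_test)
mean_right = gp_posterior_mean(x[~split], y[~split], x_test)

# A fixed soft gate mixes the experts; in the SPN view the sum-node
# weights play this role, over exponentially many such partitions.
gate = 1.0 / (1.0 + np.exp((x_test - 1.0) / 0.05))
mixture_mean = gate * mean_left + (1.0 - gate) * mean_right
```

In the actual model the partition is not fixed in advance: the SPN structure represents a distribution over many possible splits, and posterior inference sums over them exactly.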

spn

Probabilistic Deep Learning using Random Sum-Product Networks
Probabilistic deep learning is currently receiving increased interest, as consistent treatment of uncertainty is one of the most important goals in machine learning and AI. Most current approaches, however, have severe limitations concerning inference. Sum-product networks (SPNs), although having excellent properties in that regard, have so far not been explored as serious deep learning models, likely due to their special structural requirements. In this paper, we make a drastic simplification and use a random structure which is trained in a "classical deep learning manner", using automatic differentiation, SGD, and GPU support. The resulting models, called RAT-SPNs, yield prediction results comparable to deep neural networks, but maintain well-calibrated uncertainty estimates, which makes them highly robust against missing data. Furthermore, they successfully capture uncertainty over their inputs in a convincing manner, yielding robust outlier and peculiarity detection.

Robert Peharz, Antonio Vergari, Karl Stelzner, Alejandro Molina, Martin Trapp, Kristian Kersting, and Zoubin Ghahramani
Presented at UAI Workshop on Uncertainty in Deep Learning [arXiv] – 2018
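The tractable-inference property that several of the papers above rely on can be shown in a tiny hand-built SPN (a generic sketch, not the RAT-SPN architecture; the weights and leaf parameters are made up): evaluating the network bottom-up in log-space gives the joint density, and marginalising a variable exactly just means setting its leaves to log 1 = 0.

```python
import numpy as np

def gauss_logpdf(x, mu, sigma):
    """Log-density of a univariate Gaussian."""
    return -0.5 * np.log(2 * np.pi * sigma**2) - 0.5 * ((x - mu) / sigma) ** 2

def spn_logdensity(x1, x2):
    """Tiny two-variable SPN: one sum node over two product nodes, each
    product pairing one Gaussian leaf per variable. Passing None for a
    variable marginalises it out exactly: its leaf contributes log 1 = 0."""
    def leaf(x, mu, sigma):
        return 0.0 if x is None else gauss_logpdf(x, mu, sigma)
    log_w = np.log([0.3, 0.7])
    prod1 = leaf(x1, -1.0, 0.5) + leaf(x2, -1.0, 0.5)
    prod2 = leaf(x1, 2.0, 1.0) + leaf(x2, 2.0, 1.0)
    # Log-sum-exp over the sum node's weighted children.
    return np.logaddexp(log_w[0] + prod1, log_w[1] + prod2)
```

Marginalising both variables yields probability one, and the single-variable marginal is exactly the induced two-component Gaussian mixture; this exact handling of missing inputs is what makes the robustness-to-missing-data claims testable in closed form.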

spn
Safe Semi-Supervised Learning of Sum-Product Networks
In several domains, obtaining class annotations is expensive while unlabelled data are abundant. While most semi-supervised approaches enforce restrictive assumptions on the data distribution, recent work has managed to learn semi-supervised models in a non-restrictive regime. However, so far such approaches have only been proposed for linear models. In this work, we introduce semi-supervised parameter learning for Sum-Product Networks (SPNs). SPNs are deep probabilistic models admitting inference in time linear in the number of network edges. Our approach has several advantages: it (1) allows generative and discriminative semi-supervised learning, (2) guarantees that adding unlabelled data can increase, but not degrade, performance (safety), and (3) is computationally efficient and does not enforce restrictive assumptions on the data distribution. We show on a variety of data sets that safe semi-supervised learning with SPNs is competitive with the state of the art and can lead to a better generative and discriminative objective value than a purely supervised approach.

Martin Trapp, Tamas Madl, Robert Peharz, Franz Pernkopf, and Robert Trappl

nlp

Retrieving Compositional Documents using Position-Sensitive Word Mover’s Distance

Retrieving similar compositional documents consisting of ranked sub-documents, such as threads of healthcare web fora containing community-voted comments, has become increasingly important. However, approaches for this task have so far not exploited the semantic relationships between words and therefore do not benefit from the effective generalization properties of semantic word embeddings. In this work, we propose an extension of the Word Mover's Distance to compositional documents consisting of ranked sub-documents. In particular, we derive a Position-sensitive Word Mover's Distance, which allows retrieving compositional documents based on the semantic properties of their sub-documents. Additionally, we introduce a novel benchmark dataset for this task to enable other researchers to work on this relevant problem. The results obtained on the novel dataset and on the well-known MovieLens dataset indicate that our approach is well suited for retrieving compositional documents. We conclude that incorporating semantic relations between words and sensitivity to position and presentation bias is crucial for effective retrieval of such documents.

Martin Trapp, Marcin Skowron, and Dietmar Schabus
International Conference on the Theory of Information Retrieval [paper] [bibtex] – 2017
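The plain Word Mover's Distance that the paper extends is an optimal-transport problem over word embeddings, and can be sketched as a small linear program (a generic sketch of standard WMD, not the position-sensitive variant contributed here; the toy 2-D "embeddings" are made up):

```python
import numpy as np
from scipy.optimize import linprog

def word_movers_distance(a, b, X_a, X_b):
    """Earth mover's distance between two bag-of-words histograms a, b
    whose words live at embedding points X_a, X_b (one row per word).
    Solved directly as the transport linear program."""
    n, m = len(a), len(b)
    # Cost of moving mass between word i of doc A and word j of doc B.
    C = np.linalg.norm(X_a[:, None, :] - X_b[None, :, :], axis=-1)
    # Equality constraints: row sums of the plan equal a, column sums equal b.
    A_eq = np.zeros((n + m, n * m))
    for i in range(n):
        A_eq[i, i * m:(i + 1) * m] = 1.0
    for j in range(m):
        A_eq[n + j, j::m] = 1.0
    b_eq = np.concatenate([a, b])
    res = linprog(C.ravel(), A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
    return res.fun

# Two tiny documents with uniform word weights; every word of the second
# document sits exactly distance 1 above its counterpart in the first.
X1 = np.array([[0.0, 0.0], [1.0, 0.0]])
X2 = np.array([[0.0, 1.0], [1.0, 1.0]])
w1 = np.array([0.5, 0.5])
w2 = np.array([0.5, 0.5])
```

The position-sensitive extension described in the abstract would additionally weight this transport by sub-document rank; that part is the paper's contribution and is not reproduced here.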

brain
Adaptive and Background-Aware GAL4 Expression Enhancement of Co-registered Confocal Microscopy Images
GAL4 gene expression imaging using confocal microscopy is a common and powerful technique for studying the nervous system of a model organism such as Drosophila melanogaster. Recent research projects have focused on high-throughput screenings of thousands of different driver lines, resulting in large image databases. The amount of data generated makes manual assessment tedious or even impossible. The first and most important step in any automatic image processing and data extraction pipeline is to enhance areas with a relevant signal. However, data acquired via high-throughput imaging tend to be less than ideal for this task, often showing high amounts of background signal. Furthermore, neuronal structures, and in particular thin and elongated projections with a weak staining signal, are easily lost. In this paper, we present a method for enhancing the relevant signal by utilizing a Hessian-based filter to augment thin and weak tube-like structures in the image. To obtain optimal results, we present a novel adaptive background-aware enhancement filter parameterized by the local background intensity, which is estimated based on a common background model. We also integrate recent research on adaptive image enhancement into our approach, allowing us to propose an effective solution for known problems in confocal microscopy images. We provide an evaluation based on annotated image data and compare our results against current state-of-the-art algorithms. The results show that our algorithm clearly outperforms existing solutions.

Martin Trapp, Florian Schulze, Alexey A. Novikov, Laslo Tirian, Barry J. Dickson and Katja Bühler
Neuroinformatics 14(2) [paper] [bibtex] – 2016.
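The core Hessian-based idea, detecting tube-like structures via the eigenvalues of the local second-derivative matrix, can be sketched in 2-D with SciPy (a toy Frangi-style ridge filter, not the paper's adaptive background-aware method; the Gaussian scale and the synthetic test image are assumptions):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def tubeness(image, sigma=2.0):
    """Toy Hessian-based line filter: a bright tubular structure has one
    strongly negative Hessian eigenvalue (across the tube) and one near
    zero (along the tube)."""
    # Second Gaussian derivatives; `order` is (axis-0, axis-1) derivative order.
    Hxx = gaussian_filter(image, sigma, order=(0, 2))
    Hyy = gaussian_filter(image, sigma, order=(2, 0))
    Hxy = gaussian_filter(image, sigma, order=(1, 1))
    # Eigenvalues of the symmetric 2x2 Hessian at every pixel.
    tmp = np.sqrt(((Hxx - Hyy) / 2.0) ** 2 + Hxy**2)
    lam1 = (Hxx + Hyy) / 2.0 + tmp   # larger eigenvalue (along the tube)
    lam2 = (Hxx + Hyy) / 2.0 - tmp   # smaller, most negative on bright ridges
    # Respond where the dominant curvature is negative, i.e. a bright ridge.
    return np.maximum(-lam2, 0.0) * (np.abs(lam1) < np.abs(lam2))

# Synthetic image: one thin horizontal line on a dark background.
img = np.zeros((64, 64))
img[32, 8:56] = 1.0
response = tubeness(img)
```

The paper's contribution then lies in making such a filter background-aware, i.e. modulating it with a locally estimated background intensity, which this sketch omits.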

spn
Structure Inference in Sum-Product Networks using Infinite Sum-Product Trees
Sum-Product Networks (SPNs) are a highly efficient type of deep probabilistic model that allows exact inference in time linear in the size of the network. In previous work, several heuristic structure learning approaches for SPNs have been developed, which are prone to overfitting compared to a purely Bayesian model. In this work, we propose a principled approach to structure learning in SPNs by introducing infinite Sum-Product Trees (SPTs). Our approach is the first correct and successful extension of SPNs to a Bayesian nonparametric model. We show that infinite SPTs can be used successfully to discover SPN structures and outperform infinite Gaussian mixture models in the task of density estimation.

Martin Trapp, Robert Peharz, Marcin Skowron, Tamas Madl, Franz Pernkopf, and Robert Trappl
Presented at NIPS Workshop on Practical Bayesian Nonparametrics [paper] – 2016.


bayes
BNP.jl: Bayesian nonparametrics in Julia
BNP.jl is a Julia package implementing state-of-the-art Bayesian nonparametric models for medium-sized unsupervised problems. The package brings Bayesian nonparametrics to non-specialists, enabling widespread use of Bayesian nonparametric models. Emphasis is placed on consistency, performance, and ease of use, providing easy access to Bayesian nonparametric models from within Julia.

Martin Trapp
Presented at NIPS Workshop on Bayesian Nonparametrics [paper] [software] – 2015
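The Dirichlet-process prior underlying packages like this (and the infinite SPT model above) is easiest to see through its Chinese restaurant process representation; the sketch below is a generic Python illustration, not BNP.jl's API (which is Julia), and the concentration value is arbitrary:

```python
import numpy as np

def sample_crp(n, alpha, rng):
    """Draw one random partition of n customers from a Chinese restaurant
    process: customer i joins an existing table with probability
    proportional to its size, or opens a new table with probability
    proportional to the concentration alpha."""
    counts = []       # current table sizes
    assignments = []  # table index of each customer
    for _ in range(n):
        probs = np.array(counts + [alpha], dtype=float)
        probs /= probs.sum()
        k = rng.choice(len(probs), p=probs)
        if k == len(counts):
            counts.append(1)   # a new table is opened
        else:
            counts[k] += 1
        assignments.append(k)
    return assignments, counts

rng = np.random.default_rng(1)
assignments, counts = sample_crp(100, alpha=2.0, rng=rng)
```

Because the number of tables is unbounded a priori and grows only logarithmically with n, the model complexity adapts to the data, which is the property that makes such priors attractive for unsupervised problems.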


brain
3D Object Retrieval in an Atlas of Neuronal Structures

Circuit neuroscience tries to answer one of the most challenging questions in biology: how does the brain work? An important step toward an answer is to gather detailed knowledge about the neuronal circuits of the model organism Drosophila melanogaster. Geometric representations of neuronal objects of the Drosophila are acquired using molecular genetic methods, confocal microscopy, non-rigid registration, and segmentation. These objects are integrated into a constantly growing common atlas. Comparing newly segmented neuronal objects to already known neuronal structures is a frequent task, which, with a growing amount of data, becomes a bottleneck of the knowledge discovery process. Thus, exploring the atlas by means of domain-specific similarity measures becomes a pressing need. To enable similarity-based retrieval of neuronal objects, we defined, together with domain experts, tailored dissimilarity measures for each of the three typical neuronal structures: cell body, projection, and arborization. Moreover, we defined a neuron-enhanced similarity for projections and arborizations. According to domain experts, the developed system offers substantial advantages for all tasks that involve extensive data exploration.

Martin Trapp, Florian Schulze, Katja Bühler, Tianxiao Liu and Barry J. Dickson
The Visual Computer 29(12) [paper] [bibtex] – 2013.
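Similarity-based retrieval over geometric neuronal objects boils down to ranking an atlas by a shape dissimilarity. The sketch below uses a generic symmetric average closest-point (Chamfer-style) measure as a stand-in for the structure-specific measures described above, which are not reproduced here; the atlas entries and query are synthetic:

```python
import numpy as np

def chamfer_dissimilarity(P, Q):
    """Symmetric average closest-point dissimilarity between two point
    clouds P and Q (one 3-D point per row): for each point, find its
    nearest neighbour in the other cloud and average both directions."""
    D = np.linalg.norm(P[:, None, :] - Q[None, :, :], axis=-1)
    return D.min(axis=1).mean() + D.min(axis=0).mean()

rng = np.random.default_rng(0)
# A toy "atlas" of two neuronal objects represented as point clouds.
atlas = {
    "proj_a": rng.normal(0.0, 1.0, (40, 3)),
    "proj_b": rng.normal(5.0, 1.0, (40, 3)),
}
# A query object: a slightly perturbed copy of the first atlas entry.
query = atlas["proj_a"] + 0.01 * rng.standard_normal((40, 3))
# Rank the atlas by dissimilarity to the query (smallest first).
ranked = sorted(atlas, key=lambda name: chamfer_dissimilarity(query, atlas[name]))
```

In the paper, domain-tailored measures for cell bodies, projections, and arborizations take the place of this generic geometric distance, but the retrieval loop, computing dissimilarities against the atlas and sorting, has the same shape.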