
Learning Deep Mixtures of Gaussian Process Experts Using Sum-Product Networks
While Gaussian processes (GPs) are the method of choice for regression tasks, they also come with practical difficulties, as inference cost scales cubically in time and quadratically in memory. In this paper, we introduce a natural and expressive way to tackle these problems, by incorporating GPs in sum-product networks (SPNs), a recently proposed tractable probabilistic model allowing exact and efficient inference. In particular, by using GPs as leaves of an SPN we obtain a novel flexible prior over functions, which implicitly represents an exponentially large mixture of local GPs. Exact and efficient posterior inference in this model can be done in a natural interplay of the inference mechanisms in GPs and SPNs. Thereby, similarly to a mixture-of-experts approach, each GP is responsible only for a subset of data points, which effectively reduces inference cost in a divide-and-conquer fashion. We show that integrating GPs into the SPN framework yields a promising probabilistic regression model which (1) is computationally and memory efficient, (2) allows efficient and exact posterior inference, (3) is flexible enough to mix different kernel functions, and (4) naturally accounts for non-stationarities in time series. In a variety of experiments, we show that the SPN-GP model can learn input-dependent parameters and hyper-parameters and is on par with, or outperforms, traditional GPs as well as state-of-the-art approximations on real-world data.
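
To make the construction concrete, here is a minimal toy sketch of the idea in Python/numpy: GP leaves fitted on disjoint regions of a 1-D input, combined by a sum node that mixes two alternative partitions. The RBF kernel, the fixed split points, and names such as GPLeaf and SumNode are illustrative assumptions, not the paper's implementation.

```python
# Toy sketch: sum node over two alternative partitions of the input space,
# each partition a set of disjoint local GP experts (the SPN "leaves").
import numpy as np

def rbf(a, b, ls=0.5, var=1.0):
    d = a[:, None] - b[None, :]
    return var * np.exp(-0.5 * (d / ls) ** 2)

class GPLeaf:
    """Exact GP regression on the subset of points falling in [lo, hi)."""
    def __init__(self, lo, hi, noise=1e-2):
        self.lo, self.hi, self.noise = lo, hi, noise
    def fit(self, x, y):
        m = (x >= self.lo) & (x < self.hi)
        self.x, self.y = x[m], y[m]
        K = rbf(self.x, self.x) + self.noise * np.eye(len(self.x))
        self.alpha = np.linalg.solve(K, self.y)
        self.K_inv = np.linalg.inv(K)
        return self
    def predict(self, xs):
        Ks = rbf(xs, self.x)
        mean = Ks @ self.alpha
        var = rbf(xs, xs).diagonal() - np.einsum('ij,jk,ik->i', Ks, self.K_inv, Ks)
        return mean, np.maximum(var, 1e-9)

class SumNode:
    """Mixes alternative partitions; each child is a list of disjoint GP
    leaves, i.e. an implicit product node over the input space."""
    def __init__(self, children, weights):
        self.children, self.weights = children, np.asarray(weights)
    def predict(self, xs):
        means = []
        for leaves in self.children:
            m = np.zeros_like(xs)
            for leaf in leaves:
                sel = (xs >= leaf.lo) & (xs < leaf.hi)
                m[sel] = leaf.predict(xs[sel])[0]
            means.append(m)
        return self.weights @ np.stack(means)  # mixture predictive mean

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 4, 200)); y = np.sin(3 * x) + 0.1 * rng.normal(size=200)
part_a = [GPLeaf(0, 2).fit(x, y), GPLeaf(2, 4).fit(x, y)]      # split at 2
part_b = [GPLeaf(0, 1.5).fit(x, y), GPLeaf(1.5, 4).fit(x, y)]  # split at 1.5
root = SumNode([part_a, part_b], weights=[0.6, 0.4])
print(root.predict(np.array([0.5, 2.5])))
```

Because each leaf only inverts the kernel matrix of its own subset, the cubic inference cost applies per region rather than to the full dataset, which is the divide-and-conquer effect described above.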

Probabilistic Deep Learning using Random Sum-Product Networks
Probabilistic deep learning is currently receiving increased interest, as a consistent treatment of uncertainty is one of the most important goals in machine learning and AI. Most current approaches, however, have severe limitations concerning inference. Sum-product networks (SPNs), although having excellent properties in that regard, have so far not been explored as serious deep learning models, likely due to their special structural requirements. In this paper, we make a drastic simplification and use a random structure which is trained in a “classical deep learning manner”, i.e., using automatic differentiation, SGD, and GPU support. The resulting models, called RAT-SPNs, yield prediction results comparable to those of deep neural networks, but maintain well-calibrated uncertainty estimates, which makes them highly robust against missing data. Furthermore, they capture uncertainty over their inputs in a convincing manner, yielding robust outlier and peculiarity detection.
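
The "train it like a neural network" recipe can be illustrated with a tiny sketch in PyTorch, assuming a fixed two-dimensional input, Gaussian leaves, and a single sum node over all pairwise product nodes; the actual RAT-SPN builds a much deeper random region graph.

```python
# Toy sketch of training an SPN with autodiff and SGD, assuming a tiny
# fixed structure: Gaussian leaves per dimension, product nodes over all
# component pairs, and one root sum node.
import torch

torch.manual_seed(0)
K = 4                                            # leaf components per dimension
mu = torch.randn(2, K, requires_grad=True)       # leaf means
log_sig = torch.zeros(2, K, requires_grad=True)  # leaf log-stddevs
logits = torch.zeros(K * K, requires_grad=True)  # root sum-node weight logits

def log_density(x):
    # Gaussian leaf log-densities per dimension: (N, 2, K)
    leaf = torch.distributions.Normal(mu, log_sig.exp()).log_prob(x[:, :, None])
    # Product nodes: one per component pair across the two dimensions -> (N, K*K)
    prod = (leaf[:, 0, :, None] + leaf[:, 1, None, :]).reshape(x.shape[0], -1)
    # Root sum node: log-mixture with softmax-normalized weights
    return torch.logsumexp(prod + torch.log_softmax(logits, 0), dim=1)

x = torch.cat([torch.randn(256, 2) - 2, torch.randn(256, 2) + 2])  # toy data
opt = torch.optim.SGD([mu, log_sig, logits], lr=0.1)
for step in range(200):
    opt.zero_grad()
    loss = -log_density(x).mean()        # negative log-likelihood
    loss.backward()
    opt.step()
print(float(loss))
```

All parameters (leaf means, scales, and sum weights) are plain tensors, so automatic differentiation, SGD-style optimizers, and GPU execution apply out of the box, which is the core point of the approach.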

Martin Trapp, Tamas Madl, Robert Peharz, Franz Pernkopf, and Robert Trappl
Retrieving similar compositional documents that consist of ranked sub-documents, such as threads of healthcare web fora containing community-voted comments, has become increasingly important. However, approaches for this task have so far not exploited the semantic relationships between words and therefore do not benefit from the effective generalization property of semantic word embeddings. In this work, we propose an extension of the Word Mover’s Distance for compositional documents consisting of ranked sub-documents. In particular, we derive a Position-sensitive Word Mover’s Distance, which allows retrieving compositional documents based on the semantic properties of their sub-documents (a toy sketch follows this entry below). Additionally, we introduce a novel benchmark dataset for this task, to enable other researchers to work on this relevant problem. The results obtained on the novel dataset and on the well-known MovieLens dataset indicate that our approach is well suited for retrieving compositional documents. We conclude that incorporating semantic relations between words and sensitivity to position and presentation bias is crucial for effective retrieval of such documents.
International Conference on the Theory of Information Retrieval [paper] [bibtex] – 2017.
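
As a rough illustration of the idea, the sketch below computes a rank-weighted Word Mover's Distance between two compositional documents. Everything here is an assumption for the sake of a runnable example: the embeddings are random stand-ins, sub-document mass is discounted by 1/log2(rank+1), and the transport problem is solved with scipy's generic LP solver rather than a dedicated EMD implementation; none of this is the paper's actual scheme.

```python
# Hedged sketch of a rank-weighted Word Mover's Distance between two
# compositional documents made of ranked sub-documents.
import numpy as np
from scipy.optimize import linprog

def wmd(emb_a, w_a, emb_b, w_b):
    """Earth mover's distance between weighted embedding clouds."""
    n, m = len(w_a), len(w_b)
    cost = np.linalg.norm(emb_a[:, None] - emb_b[None, :], axis=-1).ravel()
    # Row-sum constraints (source masses) and column-sum constraints (sinks)
    A_eq = np.zeros((n + m, n * m))
    for i in range(n):
        A_eq[i, i * m:(i + 1) * m] = 1
    for j in range(m):
        A_eq[n + j, j::m] = 1
    res = linprog(cost, A_eq=A_eq, b_eq=np.concatenate([w_a, w_b]),
                  bounds=(0, None), method="highs")
    return res.fun

def rank_weights(n_subdocs):
    """Discounted mass per ranked sub-document (illustrative choice)."""
    w = 1.0 / np.log2(np.arange(n_subdocs) + 2)
    return w / w.sum()

rng = np.random.default_rng(0)
# Two "documents", each a stack of sub-document embedding centroids
doc_a, doc_b = rng.normal(size=(5, 16)), rng.normal(size=(4, 16))
print(wmd(doc_a, rank_weights(5), doc_b, rank_weights(4)))
```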

Martin Trapp, Florian Schulze, Alexey A. Novikov, Laszlo Tirian, Barry J. Dickson, and Katja Bühler
Neuroinformatics 14(2) [paper] [bibtex] – 2016.

Martin Trapp, Robert Peharz, Marcin Skowron, Tamas Madl, Franz Pernkopf, and Robert Trappl
Presented at NIPS Workshop on Practical Bayesian Nonparametrics [paper] – 2016.

Martin Trapp
Presented at NIPS Workshop on Bayesian Nonparametrics [paper] [software] – 2015.
3D Object Retrieval in an Atlas of Neuronal Structures
Circuit neuroscience tries to answer one of the most challenging questions in biology: How does the brain work? An important step toward an answer to this question is to gather detailed knowledge about the neuronal circuits of the model organism Drosophila melanogaster. Geometric representations of neuronal objects of the Drosophila are acquired using molecular genetic methods, confocal microscopy, non-rigid registration, and segmentation. These objects are integrated into a constantly growing common atlas. Comparing newly segmented neuronal objects to already known neuronal structures is a frequent task, which, with a growing amount of data, becomes a bottleneck of the knowledge discovery process. Thus, exploring the atlas by means of domain-specific similarity measures becomes a pressing need. To enable similarity-based retrieval of neuronal objects, we defined, together with domain experts, tailored dissimilarity measures for each of the three typical neuronal structures: cell body, projection, and arborization. Moreover, we defined the neuron-enhanced similarity for projections and arborizations. According to domain experts, the developed system offers substantial advantages for all tasks that involve extensive data exploration.
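
As a loose illustration of what a geometric dissimilarity between segmented structures can look like, the sketch below computes a symmetric mean closest-point (Chamfer) distance between two 3-D point clouds. This is a generic stand-in only: the measures in the paper are tailored per structure type (cell body, projection, arborization) and were designed together with domain experts.

```python
# Generic stand-in for a shape dissimilarity between two segmented
# neuronal structures, represented here as 3-D point clouds.
import numpy as np
from scipy.spatial import cKDTree

def chamfer(points_a, points_b):
    """Symmetric mean nearest-neighbour distance between 3-D point sets."""
    d_ab = cKDTree(points_b).query(points_a)[0].mean()
    d_ba = cKDTree(points_a).query(points_b)[0].mean()
    return 0.5 * (d_ab + d_ba)

rng = np.random.default_rng(1)
neuron_a = rng.normal(size=(500, 3))          # stand-in segmented structure
neuron_b = rng.normal(size=(400, 3)) + 0.5    # shifted structure
print(chamfer(neuron_a, neuron_b))
```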