Abstract:
Reality is noisy and messy, and there is no grand simulator of things in sight. What is more, our models can only restrict their scope to increasingly smaller empirical nooks, solving, at best, tiny fractions of an infinite jigsaw puzzle. As we, researchers in need of cognition, go through the process of building new models and discarding old ones, we require the right tools to foster our epistemic achievements. At the time of writing, deep learning continues to enjoy a vibrant hype, refurbishing the methodological equipment of many quantitative sciences. The behavioral sciences, despite being more resistant to change than fellow disciplines, are also enjoying their fair share of this rapidly expanding trend. When it comes to model-based inference, deep learning innovations are currently transforming the way models are fit to data and employed for drawing substantive conclusions or deriving reliable forecasts. Moreover, uncertainty (an ancient concept) and its quantification are becoming more and more important in deep learning theory and practice. Bayesian methods, deeply rooted in probability theory, are currently viewed by many researchers as the gold standard for uncertainty-aware inference, but other approaches or generalizations might push through in the not-so-distant future. In a way, this thesis ventured into a discourse between deep learning and scientific modeling with a focus on cognitive science and mathematical psychology. It brought together ideas for dramatically accelerating Bayesian inference by using non-Bayesian neural networks designed to deal with the data types encountered by researchers working in various areas of knowledge. The general idea of using black-box estimators to learn white-box scientific models from computer simulations is certainly not new, but it is still largely underutilized in the behavioral and cognitive sciences. Most importantly, future research should further foster the discourse between deep learning (or artificial intelligence) and scientific modeling.