In this research work, we contribute a behaviour learning process for a hierarchical Bayesian framework for multimodal active perception, devised to be emergent, scalable and adaptive. This framework is composed of models built upon a common spatial configuration for encoding perception and action, which is naturally suited to integrating readings from multiple sensors, using a Bayesian approach devised in previous work. The proposed learning process is shown to reproduce goal-dependent, human-like active perception behaviours by learning model parameters (referred to as "attentional sets") for different free-viewing and active search tasks. Learning was performed by presenting several 3D audiovisual virtual scenarios through a head-mounted display while logging the spatial distribution of the subject's fixations (in 2D, on the left and right images, and in 3D space); these data were subsequently used as the training set for the framework. Using the attentional sets learned for each task, the hierarchical Bayesian framework implements high-level behaviour arising from the low-level interaction of simpler building blocks, and is able to switch between these attentional sets "on the fly," allowing the implementation of goal-dependent behaviours (i.e., top-down influences).
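To make the idea concrete, the following is a minimal sketch, not the authors' actual models: it assumes an attentional set can be approximated as a normalised distribution of fixation locations over a shared spatial grid, estimated from logged fixation data, and shows how such sets could be swapped at run time to impose task-dependent, top-down influence. All class and variable names (e.g., AttentionalSet, ActivePerceptionModel) are hypothetical illustrations.

```python
import numpy as np


class AttentionalSet:
    """Hypothetical container for task-specific parameters: a normalised
    distribution of expected fixation locations over a common spatial grid."""

    def __init__(self, grid_shape, weights):
        self.grid_shape = grid_shape
        self.weights = weights

    @classmethod
    def from_fixations(cls, fixations, grid_shape, smoothing=1.0):
        """Estimate the set from logged fixations already mapped to grid
        indices (N x 3 array), as a smoothed, normalised histogram."""
        counts = np.full(grid_shape, smoothing)      # additive smoothing
        for idx in fixations.astype(int):
            counts[tuple(idx)] += 1.0
        return cls(grid_shape, counts / counts.sum())


class ActivePerceptionModel:
    """Sketch of switching attentional sets 'on the fly': the same
    low-level machinery is reused, only task parameters change."""

    def __init__(self, attentional_sets):
        self.attentional_sets = attentional_sets     # task name -> AttentionalSet
        self.current = None

    def set_task(self, task):
        self.current = self.attentional_sets[task]

    def next_fixation(self, saliency):
        """Combine bottom-up saliency with the current attentional set and
        return the grid cell with the highest posterior weight."""
        posterior = saliency * self.current.weights
        posterior /= posterior.sum()
        return np.unravel_index(np.argmax(posterior), posterior.shape)


# Usage: learn one set per task from (placeholder) fixation logs,
# then switch between tasks at run time.
grid = (8, 8, 4)
free_viewing_fix = np.random.randint(0, 4, size=(500, 3))
active_search_fix = np.random.randint(0, 4, size=(500, 3))

model = ActivePerceptionModel({
    "free_viewing": AttentionalSet.from_fixations(free_viewing_fix, grid),
    "active_search": AttentionalSet.from_fixations(active_search_fix, grid),
})
model.set_task("active_search")
print(model.next_fixation(np.ones(grid)))
```

In the actual framework the low-level building blocks are Bayesian models over a common spatial configuration rather than a simple histogram, but the sketch illustrates the separation between fixed machinery and learned, swappable attentional sets.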