Visualization of inter-document similarities is widely used for the exploration of document collections and for interactive retrieval [1, 2]. However, similarity relationships between documents are multifaceted, and the distances measured by a given metric often do not match similarity as perceived by humans. Furthermore, the user's notion of similarity can change drastically with the exploration objective or the task at hand. This research therefore proposes to investigate online adjustments to the similarity model using feedback generated during exploration or exploratory search. To this end, rich visualizations and interactions will support users in giving valuable feedback. Based on this feedback, metric learning methods will be applied to adjust the similarity model and thereby improve the exploration experience. At the same time, the trained models are themselves valuable outcomes, whose benefits for similarity-based tasks such as query-by-example retrieval or classification will be evaluated.

The measurement of inter-document similarities has been studied extensively. Various distance metrics build on different representations, such as weighted term vectors (e.g., TF-IDF, BM25) [9], distributions from topic models [7], or distributed representations from pre-trained language models [5]; a minimal similarity computation over such vectors is sketched below. Learning a metric can yield improved similarity measures tailored to specific domain characteristics or to the requirements of the task at hand. In the IR community, learning to rank has attracted much research on this matter. These works, together with other findings on metric learning, form the groundwork for this research. Overall, highly diverse approaches can be found: linear projections of term vectors [10], pattern matching in sequences of word embeddings using convolutional neural networks [8], and word sequence learning using siamese recurrent neural networks [6], to name a few; a sketch of a triplet-based linear projection also follows below.

Approaches using online feedback are particularly relevant to this research. There, implicit feedback collected from result lists, such as clicks [3] or dwell times [4], is a common feedback modality. However, there is little research on metric learning that uses feedback from interactions with rich visualizations of inter-document similarities such as those proposed in [1]. We hypothesize that users can generate more valuable feedback while interacting with an explorable visualization than with a simple list of best hits. This can be argued from the more comprehensive understanding of the underlying similarity relationships that such visualizations convey, and from the greater range of possible feedback modalities. In a spatial visualization, for example, feedback could be given by correcting datapoint positions, drawing lines as borders around desired clusters, or rating the desirability of similarity relationships between result documents; a sketch of how corrected positions might be translated into training constraints is given below.

Following the above considerations, the research questions we intend to pursue are: (i) Which feedback modalities enable users to express the desired similarity measure, and how can interactive visualizations support users in generating feedback effectively? (ii) Which metric learning methodologies are applicable to improving a similarity model using the feedback from the proposed modalities? (iii) Can visual exploratory search using the outcomes of (i) and (ii) demonstrate measurable benefits over classic search using result list presentations?
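For concreteness, the following minimal sketch (Python, assuming scikit-learn is available) computes pairwise inter-document similarities from TF-IDF term vectors, one of the representation choices mentioned above; topic distributions or language-model embeddings could be substituted for the vector matrix. The toy documents are purely illustrative.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "metric learning for interactive document exploration",
    "visualizing inter-document similarities",
    "learning to rank with implicit click feedback",
]

# Represent each document as a weighted term vector (TF-IDF).
X = TfidfVectorizer().fit_transform(docs)

# Pairwise cosine similarities; entry [i, j] is the similarity
# between documents i and j under this fixed, task-agnostic metric.
S = cosine_similarity(X)
print(S.round(2))
```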
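The simplest of the metric learning approaches cited above is a linear projection of document vectors. The following is a hedged sketch in PyTorch, not the implementation of any cited work; sizes such as `embed_dim` are illustrative assumptions. A projection matrix is trained with a triplet margin loss so that, after projection, an anchor document lies closer to a document the user judged similar than to one judged dissimilar.

```python
import torch
import torch.nn as nn

embed_dim, proj_dim = 300, 64  # illustrative dimensions
proj = nn.Linear(embed_dim, proj_dim, bias=False)  # learned linear projection
loss_fn = nn.TripletMarginLoss(margin=1.0)
opt = torch.optim.Adam(proj.parameters(), lr=1e-3)

# anchor/pos/neg: batches of document vectors derived from feedback
# (e.g., "anchor should be nearer to pos than to neg"); random here.
anchor, pos, neg = (torch.randn(32, embed_dim) for _ in range(3))

for _ in range(100):  # online updates as feedback arrives
    opt.zero_grad()
    loss = loss_fn(proj(anchor), proj(pos), proj(neg))
    loss.backward()
    opt.step()
```

Siamese or convolutional architectures [6, 8] would replace the linear layer while keeping the same triplet-based training loop.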
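How feedback from a spatial visualization could feed such a learner is exactly what research question (i) asks; the hypothetical sketch below derives triplet constraints from corrected datapoint positions. If the user drags document a closer to p and away from n in the 2D layout, (a, p, n) becomes a training triplet. All names here are illustrative assumptions, not part of any cited system.

```python
import numpy as np

def triplets_from_corrections(old_pos, new_pos, moved, k=5):
    """Derive (anchor, positive, negative) index triplets from a user's
    corrections to a 2D similarity layout.

    old_pos, new_pos: (n, 2) arrays of datapoint positions before and
    after the interaction; moved: indices of points the user dragged.
    Documents a moved point was dragged towards become positives,
    documents it was dragged away from become negatives.
    """
    triplets = []
    for a in moved:
        d_old = np.linalg.norm(old_pos - old_pos[a], axis=1)
        d_new = np.linalg.norm(new_pos - new_pos[a], axis=1)
        delta = d_new - d_old  # negative: moved closer to a
        order = np.argsort(delta)
        closer = [i for i in order[:k] if i != a and delta[i] < 0]
        farther = [i for i in order[::-1][:k] if i != a and delta[i] > 0]
        for p in closer:
            for n in farther:
                triplets.append((a, p, n))
    return triplets
```

Analogous mappings could turn drawn cluster borders into must-link/cannot-link constraints, or similarity ratings into weighted pairs.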