During scientific explorations, scientists often hold multiple, conflicting objectives. Understanding how scientists prioritize and balance these objectives is crucial for developing cognitively compatible robotic teammates and fostering effective human-robot collaboration. In this study, we seek to improve the cognitive compatibility of robotic algorithms by modelling humans' decision-making processes under multiple objectives. Human decision data collected from 141 sampling steps indicate that the majority of scientists adopt one of the following objective-balancing strategies: (i) a Focus mode, where experts select sampling locations to optimize their primary objective; (ii) a Hierarchy mode, where experts first satisfy their primary objective and then, to a lesser extent, their secondary objective; and (iii) a Trade-off mode, where experts select sampling locations that satisfy all objectives, even if the location is not ideal for any single objective. To understand how experts choose among the different modes, we quantitatively characterize the three types of strategies by representing the decision data from each sampling step in an objective function space. Analysis of the strategy types reveals that experts' choice of multi-objective coordination strategy is primarily governed by two key decision factors: the current stage of sampling and the outstanding reward values. Based on this finding, we developed a highly simplified decision algorithm that connects experts' high-level objectives to desired sampling locations under multiple objectives. Deployment of this algorithm in a planetary-analogue field exploration mission on Mt. Hood demonstrates its potential to enable robots to participate in the decision-making process and suggest sampling plans that align well with scientists' high-level goals.
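To make the two reported decision factors and the three strategy modes concrete, the following is a minimal Python sketch of how such a rule could map the stage of sampling and the outstanding reward to a mode, and then to a suggested sampling location. The class names, thresholds, and scoring rules are illustrative assumptions only; this is not the algorithm the authors deployed on Mt. Hood.

```python
# Hypothetical sketch: choose a coordination mode from the sampling stage and
# the outstanding reward, then score candidate locations under that mode.
# All thresholds and weights below are assumptions made for illustration.
from dataclasses import dataclass
from enum import Enum, auto


class Mode(Enum):
    FOCUS = auto()      # optimize the primary objective only
    HIERARCHY = auto()  # primary objective first, secondary as tie-breaker
    TRADE_OFF = auto()  # jointly satisfy all objectives


@dataclass
class Candidate:
    name: str
    primary_reward: float    # expected reward w.r.t. the primary objective
    secondary_reward: float  # expected reward w.r.t. the secondary objective


def select_mode(stage_fraction: float, outstanding_primary_reward: float,
                reward_threshold: float = 0.5) -> Mode:
    """Pick a mode from the two decision factors named in the abstract
    (stage of sampling, outstanding reward); thresholds are assumed."""
    if stage_fraction < 0.3 and outstanding_primary_reward > reward_threshold:
        return Mode.FOCUS          # early on, chase the large primary reward
    if outstanding_primary_reward > reward_threshold:
        return Mode.HIERARCHY      # primary reward still dominant
    return Mode.TRADE_OFF          # late stage / little primary reward left


def score(candidate: Candidate, mode: Mode) -> tuple:
    """Return a sortable score for a candidate location under a given mode."""
    if mode is Mode.FOCUS:
        return (candidate.primary_reward,)
    if mode is Mode.HIERARCHY:
        # lexicographic: primary reward decides, secondary breaks ties
        return (candidate.primary_reward, candidate.secondary_reward)
    # trade-off: an equally weighted blend of both objectives
    return (0.5 * candidate.primary_reward + 0.5 * candidate.secondary_reward,)


def suggest_location(candidates: list, stage_fraction: float,
                     outstanding_primary_reward: float) -> Candidate:
    mode = select_mode(stage_fraction, outstanding_primary_reward)
    return max(candidates, key=lambda c: score(c, mode))


if __name__ == "__main__":
    sites = [
        Candidate("ridge", primary_reward=0.9, secondary_reward=0.1),
        Candidate("gully", primary_reward=0.6, secondary_reward=0.7),
        Candidate("moraine", primary_reward=0.4, secondary_reward=0.9),
    ]
    # Late in the campaign with little primary reward outstanding, the sketch
    # falls back to the Trade-off mode and favors a balanced site.
    print(suggest_location(sites, stage_fraction=0.8,
                           outstanding_primary_reward=0.2).name)
```

The lexicographic tuple comparison in the Hierarchy branch is one simple way to encode "primary objective first, secondary to a lesser extent"; the actual characterization in the paper is done in an objective function space rather than with hand-set weights.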