This paper is particularly concerned with enhancing human differentiation of information in an information-rich environment over time. It examines current time-based sonification techniques and their limitations, and proposes strategies for more accessible sonification design. Sonification is the process of representing non-auditory, non-speech data, whether visual or otherwise abstract, in an auditory or bi-modal (audio-visual) format to enhance our understanding of the information. Time-based data in particular, e.g. trends in meteorological data, stock-market trading or internet traffic flow, have great potential for auditory representation because sonification is dynamic and linear. While much research has focused on methodologies for representing data, on human perception and on earcons (brief, abstract sounds used for warnings, alerts and emergencies), a gap remains in current research. The gap lies in representing extremely large data sets, differentiating information and scaling it to better aid comprehension, and in continuous, time-based ambient information display that provides a contiguous stream of data-determined representation at the periphery of working contexts. This paper 1) proposes a sonification engine that uses a parallel mapping (representation) and scaling (differentiation) process to better represent and support understanding of large-scale sonification and the increasing scale of data assimilation required in modern society, and 2) addresses listenability and the clear differentiation and representation of information to facilitate rapid, intuitive comprehension of auditory display.
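To make the mapping-and-scaling idea concrete, the following is a minimal sketch of a parameter-mapping sonification pipeline with a separate scaling (differentiation) stage. It is an illustration under stated assumptions, not the paper's actual engine: all names (scale_series, render_tone, sonify), the pitch range and the tone duration are hypothetical choices, and a full engine would map several auditory parameters in parallel (pitch, loudness, timbre, pan) rather than pitch alone.

```python
# Hypothetical sketch of parameter-mapping sonification with an explicit
# scaling stage; not the paper's engine. Uses only the standard library.
import math
import struct
import wave

SAMPLE_RATE = 44100

def scale_series(values, lo=220.0, hi=880.0):
    """Differentiation step: normalize raw data into an audible pitch
    range (220-880 Hz here, an assumed choice) so small variations in
    the data remain distinguishable to the listener."""
    vmin, vmax = min(values), max(values)
    span = (vmax - vmin) or 1.0  # avoid division by zero on flat data
    return [lo + (v - vmin) / span * (hi - lo) for v in values]

def render_tone(freq, duration=0.25, amplitude=0.4):
    """Representation step: render one data point as a short sine tone."""
    n = int(SAMPLE_RATE * duration)
    return [amplitude * math.sin(2 * math.pi * freq * i / SAMPLE_RATE)
            for i in range(n)]

def sonify(values, path="sonification.wav"):
    """Map each value to a pitch and concatenate the tones into a WAV
    file, giving a linear, time-based auditory display of the series."""
    samples = []
    for freq in scale_series(values):
        samples.extend(render_tone(freq))
    with wave.open(path, "w") as f:
        f.setnchannels(1)
        f.setsampwidth(2)  # 16-bit PCM
        f.setframerate(SAMPLE_RATE)
        f.writeframes(b"".join(
            struct.pack("<h", int(s * 32767)) for s in samples))

# Example: sonify a small synthetic "stock price" trend.
sonify([100, 102, 101, 105, 110, 108, 115, 120])
```

Separating scaling from mapping, as above, is what lets the same representation stage serve data sets of very different magnitudes: the differentiation stage can be retuned (or made adaptive) for large-scale data without touching the audio rendering.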