What types of representations support our ability to integrate information acquired during one eye fixation with information acquired during the next? In Experiment 1, transsaccadic integration was explored by manipulating whether the relative position of a picture of an object was maintained across a saccade. In Experiment 2, the degree to which the visual details of a picture are coded in a position-specific representational system was explored by manipulating whether both the relative position and the left-right orientation of the picture were maintained across a saccade. Position-specific and position-nonspecific preview benefits were observed in both experiments. Only the position-specific benefits were influenced by the number of task-relevant pictures presented in the preview display (Experiment 1) and by the left-right orientation of the picture presented in the preview display (Experiment 2). The results support a model of transsaccadic integration based on two independent representational systems: one system codes abstract, prestored object types, and the other codes episodic tokens consisting of stimulus properties linked to scene- or configuration-based position markers.