Existing deep hashing algorithms fail to achieve satisfactory results on unseen data owing to the out-of-sample problem. Graph-embedding-based hashing methods alleviate this problem by learning the distances between samples. However, they focus on first-order proximity and neglect second-order proximity, which effectively preserves the global relationships between samples. We therefore integrate second-order proximity into a deep-hashing framework and propose a self-taught image-hashing approach using deep graph embedding (GE) for image retrieval, consisting of two stages: hash-label generation and hash-function learning. In the first stage, to improve how deep image hashing generalizes to unseen data in real large-scale scenes, we integrate a deep GE method into our model to learn both the first- and second-order proximities between samples. In the hash-function learning stage, using the hash labels, which encode the distance information from the previous stage, we learn the hash function with a convolutional neural network, yielding an end-to-end hashing model. We conducted representative experiments on the CIFAR-10, STL-10, and MS-COCO datasets. The results show that our method not only performs well on standard benchmarks but also achieves better retrieval results on unseen data than other deep-hashing methods, thereby alleviating the out-of-sample problem. © 2020 SPIE and IS&T.
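To illustrate the distinction the abstract draws between the two proximities (this is a toy sketch, not the paper's implementation), first-order proximity is commonly taken as the direct edge weight between two nodes, while second-order proximity compares their neighborhood structures, e.g., via cosine similarity of adjacency rows. In the toy graph below, nodes 0 and 1 share all their neighbors but have no direct edge, so their first-order proximity is zero while their second-order proximity is maximal:

```python
import numpy as np

def first_order(A, i, j):
    # First-order proximity: the direct edge weight between nodes i and j.
    return float(A[i, j])

def second_order(A, i, j):
    # Second-order proximity (one common definition): cosine similarity
    # of the two nodes' neighborhood vectors (rows of the adjacency matrix).
    ni, nj = A[i], A[j]
    return float(ni @ nj / (np.linalg.norm(ni) * np.linalg.norm(nj)))

# Toy undirected graph: nodes 0 and 1 are not linked directly,
# but both connect to nodes 2 and 3.
A = np.array([
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [1, 1, 0, 0],
    [1, 1, 0, 0],
], dtype=float)

print(first_order(A, 0, 1))   # 0.0 — no direct edge
print(second_order(A, 0, 1))  # 1.0 — identical neighborhoods
```

A method that preserves only first-order proximity would push nodes 0 and 1 apart in the embedding space despite their structural equivalence; preserving second-order proximity keeps such globally similar samples close, which is the property the abstract argues benefits retrieval on unseen data.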