In recent years, cross-modal retrieval has attracted extensive attention with the massive growth of multimedia data. However, most cross-modal hashing methods focus on retrieving seen classes while ignoring unseen classes, so traditional cross-modal hashing methods cannot achieve satisfactory performance in zero-shot retrieval. To mitigate this challenge, in this paper we propose a novel zero-shot cross-modal retrieval method called discrete asymmetric zero-shot hashing (DAZSH), which fully exploits the supervised knowledge of multimodal data. Specifically, it integrates pairwise similarity, class attributes, and semantic labels to guide zero-shot hashing learning. Moreover, DAZSH combines data features with class attributes to obtain a semantic category representation for each category, so the relationships between seen and unseen classes can be effectively captured by learning a category representation vector for each instance. In this way, supervised knowledge is transferred from the seen classes to the unseen classes. In addition, we develop an efficient discrete optimization strategy to solve the proposed model. Extensive experiments on three benchmark datasets show that our approach achieves promising results in cross-modal retrieval tasks. The source code is available at https://github.com/szq0816/DAZSH.
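To make the idea of fusing data features with class attributes concrete, the sketch below shows one minimal way such semantic category representations could be built. It is not the paper's actual formulation: the mean-feature prototypes, the concatenation-based fusion, and the `alpha` weighting are all illustrative assumptions; the key point is that unseen classes, which have no training features, are still represented through their attribute vectors, which is what lets knowledge transfer from seen to unseen classes.

```python
import numpy as np

def category_representations(features, labels, attributes, alpha=0.5):
    """Illustrative fusion of feature prototypes with class attributes.

    features:   (n, d) instance features of seen-class training data
    labels:     (n,)   integer class ids in [0, c)
    attributes: (c, k) class attribute vectors (seen and unseen classes)
    alpha:      hypothetical weight balancing the two knowledge sources
    Returns a (c, d + k) matrix, one representation per category.
    """
    c, d = attributes.shape[0], features.shape[1]
    prototypes = np.zeros((c, d))
    for cls in range(c):
        mask = labels == cls
        if mask.any():
            # Seen class: use the mean feature vector as its prototype.
            prototypes[cls] = features[mask].mean(axis=0)
        # Unseen class: no training features, so the prototype stays zero
        # and the attribute vector alone carries its semantics.
    # Weighted concatenation of the two modalities of class knowledge.
    return np.hstack([alpha * prototypes, (1 - alpha) * attributes])

# Toy usage: 2 seen classes (ids 0, 1) and 1 unseen class (id 2).
rng = np.random.default_rng(0)
feats = rng.normal(size=(6, 4))
labs = np.array([0, 0, 0, 1, 1, 1])
attrs = rng.normal(size=(3, 3))
reps = category_representations(feats, labs, attrs)
print(reps.shape)  # (3, 7): every class, seen or unseen, gets a vector
```

Under this simplification, an instance could then be compared (e.g., by cosine similarity) against all category representations, including unseen ones, which is the role the learned per-instance category representation vector plays in the abstract's description.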