Transfer learning has attracted considerable attention because it facilitates learning in a sparsely labeled or unlabeled target domain by transferring knowledge from a previously well-established source domain. Recent work on transfer learning builds deep architectures to better fight off cross-domain divergence by extracting more effective features. However, the generalizability of such deep features decreases greatly as the domain mismatch grows, particularly at the top layers. In this paper, we develop a novel deep transfer low-rank coding framework based on deep convolutional neural networks, in which we investigate multilayer low-rank coding at the top task-specific layers. Specifically, multilayer common dictionaries shared across the two domains are learned to bridge the domain gap, so that richer domain-invariant knowledge can be captured in a layerwise fashion. By enforcing rank minimization on the new codings, our model preserves the global structure across source and target, so similar samples from the two domains tend to cluster together, enabling effective knowledge transfer. Furthermore, domainwise and classwise adaptation terms are integrated to guide the coding optimization in a semisupervised manner, alleviating both the marginal and conditional disparities between the two domains. Experimental results on three visual domain adaptation benchmarks verify the effectiveness of our approach in boosting recognition performance for the target domain, compared with other state-of-the-art deep transfer learning methods.
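To make the layerwise low-rank coding concrete, the following is a minimal sketch of a per-layer objective under the usual convex relaxation of rank by the nuclear norm; the symbols $X_s$, $X_t$, $D$, $Z_s$, $Z_t$, $\lambda$, and $\gamma$ are illustrative placeholders rather than the paper's exact notation:

$$
\min_{D,\,Z_s,\,Z_t}\ \|X_s - DZ_s\|_F^2 + \|X_t - DZ_t\|_F^2 + \lambda\,\big\|[Z_s,\,Z_t]\big\|_* + \gamma\,\Omega(Z_s, Z_t),
$$

where $X_s$ and $X_t$ denote the source and target features at a given layer, $D$ is the common dictionary shared across the two domains, $\|\cdot\|_*$ is the nuclear norm serving as a convex surrogate for the rank of the stacked codings $[Z_s,\,Z_t]$ (which encourages samples from the two domains to lie in shared low-dimensional structures), and $\Omega$ stands for the domainwise and classwise adaptation terms that penalize the marginal and conditional discrepancies between the domains.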