Occluded and partial person re-identification (re-ID) have emerged as challenging research topics in computer vision. Existing part-based models, despite their complex designs, fail to tackle these problems properly, for two reasons. First, individual body-part appearances are not discriminative enough to distinguish between two similar-looking persons. Second, re-ID datasets typically lack detailed human body-part annotations. To address these challenges, we present a lightweight yet accurate solution for partial person re-ID. Our proposed approach consists of two key components: the design of a lightweight unary-binary projective dictionary learning (UBDL) model, and the construction of a similarity matrix for distilling knowledge from the deep omni-scale network (OSNet) into UBDL. The unary dictionary (UD) pair encodes patches horizontally, ignoring viewpoint. The binary dictionary (BD) pairs, in contrast, are learned between two views, giving more weight to less-occluded vertical patches to improve correspondence across views. We formulate convex objective functions for the unary and binary cases by incorporating the above knowledge similarity matrix, and obtain closed-form solutions for updating the UD and BD components. Final matching scores are computed by fusing the unary and binary matching scores with adaptive weighting of relevant cross-view patches. Extensive experiments and ablation studies on several occluded and partial re-ID datasets, namely Occluded-REID (O-REID), Partial-REID (P-REID), and Partial-iLIDS (P-iLIDS), clearly showcase the merits of our proposed solution. © 2020 IEEE.
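As a rough illustration of the closed-form alternating updates that make projective dictionary learning lightweight, the sketch below implements a generic projective dictionary pair (synthesis dictionary `D`, analysis dictionary `P`) under a simplified relaxed objective. This is an assumption-laden sketch, not the paper's method: the knowledge-distillation similarity matrix, the occlusion-aware patch weighting, and the unary/binary split of UBDL are all omitted, and every name and parameter value here is illustrative.

```python
import numpy as np

def ubdl_step(X, D, P, tau=1.0, gamma=1e-3):
    """One round of alternating closed-form updates for a projective
    dictionary pair, under the simplified (hypothetical) objective
        ||X - D A||^2 + tau ||P X - A||^2 + gamma (||D||^2 + ||P||^2).
    Each sub-update is a ridge-regression solve, so no iterative
    sparse-coding solver is needed -- the source of the 'closed form'
    efficiency claimed for models of this type."""
    k = D.shape[1]
    # Code update: minimize over A with D and P fixed.
    A = np.linalg.solve(D.T @ D + tau * np.eye(k), D.T @ X + tau * (P @ X))
    # Synthesis dictionary update: minimize over D with A fixed.
    D = X @ A.T @ np.linalg.inv(A @ A.T + gamma * np.eye(k))
    # Analysis dictionary update: minimize over P with A fixed.
    P = tau * A @ X.T @ np.linalg.inv(tau * X @ X.T + gamma * np.eye(X.shape[0]))
    return A, D, P

def objective(X, A, D, P, tau=1.0, gamma=1e-3):
    """Value of the simplified relaxed objective (for monitoring)."""
    return (np.linalg.norm(X - D @ A) ** 2
            + tau * np.linalg.norm(P @ X - A) ** 2
            + gamma * (np.linalg.norm(D) ** 2 + np.linalg.norm(P) ** 2))

# Toy demo: 32-dim patch features, 200 patches, 16 dictionary atoms.
rng = np.random.default_rng(0)
X = rng.standard_normal((32, 200))
D = rng.standard_normal((32, 16))
P = rng.standard_normal((16, 32))
A = P @ X
for _ in range(10):
    A, D, P = ubdl_step(X, D, P)
```

Because each sub-update exactly minimizes the relaxed objective in one block of variables, the objective value is non-increasing across rounds; this monotone alternating-minimization structure is what makes such dictionary-pair models cheap to train compared with deep part-based networks.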