Attention mechanisms have recently received enormous interest and have been extensively employed in the field of Person Re-Identification (RE-ID), as they achieve superior performance in learning discriminative feature representations. However, most off-the-shelf attention methods remain vulnerable to the cross-view inconsistency problem. Moreover, they exploit only imprecise channel attention and coarse-grained spatial attention at homogeneous scales, which is insufficient to capture the subtle differences between highly similar individuals. To address these problems, we propose a novel Attention-Aligned Network (AANet), in which a novel Omnibearing Foreground-aware Attention (OFA) module, an Attention Alignment Mechanism (AAM), and an improved triplet loss with hard mining are introduced to learn foreground-attentive features for RE-ID. Specifically, AANet first leverages the OFA module to exploit heterogeneous-scale spatial attention and foreground-aware channel attention. AANet then reduces the impact of background clutter and learns camera-invariant, background-invariant representations by means of AAM. Finally, an improved triplet loss with hard mining is introduced to strengthen feature learning by jointly minimizing the intra-class distance and maximizing the inter-class distance within each triplet unit. Extensive experiments demonstrate that the proposed method outperforms most current methods on three mainstream RE-ID benchmarks.
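To make the last component concrete, the sketch below shows a standard batch-hard formulation of a triplet loss with hard mining in PyTorch: for each anchor, the farthest positive and the closest negative in the mini-batch are selected before applying the margin hinge. The function name, the margin value, and the batch-hard mining strategy are illustrative assumptions and do not reproduce the specific improved loss proposed in the paper.

```python
import torch
import torch.nn.functional as F

def batch_hard_triplet_loss(embeddings: torch.Tensor,
                            labels: torch.Tensor,
                            margin: float = 0.3) -> torch.Tensor:
    """Batch-hard triplet loss (illustrative; margin=0.3 is a common Re-ID default).

    embeddings: (B, D) feature vectors for a mini-batch.
    labels:     (B,) identity labels.
    """
    # Pairwise Euclidean distances between all embeddings in the batch.
    dist = torch.cdist(embeddings, embeddings, p=2)  # (B, B)

    # Boolean mask of same-identity pairs (includes the diagonal).
    same = labels.unsqueeze(0) == labels.unsqueeze(1)

    # Hard positive: farthest sample with the same identity.
    # The diagonal (self-distance) is zero, so it never wins the max.
    hardest_pos = (dist * same.float()).max(dim=1).values

    # Hard negative: closest sample with a different identity.
    # Same-identity entries are pushed to a large value so the min skips them.
    hardest_neg = (dist + same.float() * 1e9).min(dim=1).values

    # Margin hinge: pull the hardest positive closer than the hardest
    # negative by at least `margin`, i.e. minimize intra-class distance
    # while maximizing inter-class distance in each triplet unit.
    return F.relu(hardest_pos - hardest_neg + margin).mean()
```

In a typical Re-ID training loop, such a loss would be computed on mini-batches sampled with a fixed number of images per identity (e.g., PK sampling) so that every anchor has valid positives and negatives within the batch.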