The problem of multi-view feature selection, a form of feature learning, has attracted considerable interest in the past decade. It is crucial for feature selection to preserve both the global structure and the locality of the original features. Most existing unsupervised feature selection methods preserve either the global or the local structure, and compute a sparse representation for each view individually. In addition, several methods introduce a predefined similarity matrix among the views and fix it during learning, which neglects the correlation between individual views. We therefore focus on multi-view feature selection and propose a new method, Multi-view Embedding with Adaptive Shared Output and Similarity (ME-ASOS). The method introduces embedding directly into multi-view learning: it maps the high-dimensional data to a shared subspace via view-wise multi-output regular projections, and learns a common similarity matrix through an improved algorithm to characterize structures across the different views. A regularization parameter largely eliminates the adverse effect of noisy and unfavorable features on the global structure, while a second regularization term on the local structure avoids the trivial solution and adds a uniform-distribution prior. Compared with 5 existing algorithms on 4 real-world datasets, the experimental results show that ME-ASOS captures more of the correlation between views, selects more discriminative features, and achieves superior accuracy and higher efficiency. (C) 2018 Elsevier B.V. All rights reserved.
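The abstract describes an alternating scheme: per-view projections into a shared subspace plus an adaptively learned common similarity matrix with a uniform-distribution prior. The following is a minimal, hypothetical sketch of such a loop, not the paper's actual algorithm: the ridge update, the Gaussian-kernel similarity update, and the spectral refresh of the shared output are all illustrative stand-ins, and every name and parameter (`lam`, `gamma`, `d_shared`) is an assumption.

```python
import numpy as np

def me_asos_sketch(views, d_shared=2, lam=0.1, gamma=0.1, n_iter=20, seed=0):
    """Hypothetical ME-ASOS-style alternating optimization (illustrative only).

    views : list of (n_samples, d_v) arrays, one per view.
    Returns per-view projections Ws and a common similarity matrix S.
    """
    rng = np.random.default_rng(seed)
    n = views[0].shape[0]
    # Shared low-dimensional output, initialized randomly.
    F = rng.standard_normal((n, d_shared))
    Ws = [rng.standard_normal((X.shape[1], d_shared)) for X in views]
    S = np.full((n, n), 1.0 / n)  # uniform prior on the similarity matrix
    for _ in range(n_iter):
        # (1) Update each view's projection by ridge regression onto F
        #     (an l2 stand-in for whatever sparsity term the paper uses).
        for v, X in enumerate(views):
            Ws[v] = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ F)
        # (2) Update the common similarity from pairwise distances in the
        #     shared subspace, averaged over views; row-normalizing keeps
        #     each row a probability distribution (the uniform-prior flavor).
        P = sum(X @ W for X, W in zip(views, Ws)) / len(views)
        D = np.square(P[:, None, :] - P[None, :, :]).sum(-1)
        S = np.exp(-D / gamma)
        S /= S.sum(axis=1, keepdims=True)
        # (3) Refresh the shared output from the top eigenvectors of the
        #     symmetrized similarity (a spectral-embedding style step).
        Ssym = (S + S.T) / 2
        _, vecs = np.linalg.eigh(Ssym)
        F = vecs[:, -d_shared:]
    return Ws, S
```

In this sketch, feature importance for view `v` could then be read off the row norms of `Ws[v]`, which is how row-sparse projections are commonly turned into a feature ranking.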