Digital materials have become a popular medium for information access and have attracted a diverse group of users, such as college students, who benefit from their low cost and portability. However, college students with disabilities may have difficulty accessing electronic materials that were not developed appropriately. Laws and standards provide guidance on making digital documents accessible, but these regulations have been implemented slowly; as a result, published materials on the market may have accessibility issues. Efforts have been made to develop evaluation methods for eBooks. For example, multiple studies have used automated tools to check aspects of accessibility, but automated tools alone are insufficient for evaluating electronic materials because many checkpoints are too complex to verify automatically; thus, human evaluators are also needed. This study assessed a newly developed accessibility evaluation methodology designed for e-textbooks and examined whether eBooks rated higher versus lower in accessibility produced differences in user experience and performance. In this study, 6 students with visual impairments and 6 students with normal or corrected-to-normal vision read and interacted with eBooks. User experience and performance were measured using subjective questionnaires, reading time, and accuracy on content-related questions. We found differences in user experience ratings between eBooks rated high and low in accessibility; however, we found no differences in users' task performance as a function of the accessibility level of the eBook.