Advanced Driver Assistance Systems (ADAS) have experienced major advances in the past few years. The main objectives of ADAS include keeping the vehicle aligned with the road direction and avoiding collisions with other vehicles or nearby obstacles. In this paper, we address the problem of estimating the heading direction that keeps the vehicle aligned with the road. This information can be used in precise localization, road and lane keeping, lane departure warning, and other applications. To enable this approach, a large-scale database (over 1 million images) was automatically acquired and annotated using publicly available platforms such as the Google Street View API and OpenStreetMap. After the acquisition of the database, a CNN model was trained to predict how much the heading direction of a car should change in order to align it with the road 4 meters ahead. To assess the performance of the model, experiments were performed using images from two different sources: a hidden test set of Google Street View (GSV) images and two datasets from our autonomous car (IARA). The model achieved a low mean absolute error of 2.359 degrees and 2.524 degrees on the GSV and IARA datasets, respectively, performing consistently across the different datasets. It is worth noting that the images from the IARA datasets differ considerably (camera, field of view, brightness, etc.) from those of the GSV dataset, which demonstrates the robustness of the model. In conclusion, the model was trained with little manual effort (using automatic processes) and showed promising results on real-world databases while running in real time (more than 75 frames per second).
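The heading-change target described above (how much to rotate the current heading to point at the road a few meters ahead) can be illustrated with a short geometric sketch. This is not the paper's actual annotation pipeline; the function name, the 0°-north clockwise heading convention, and the equirectangular approximation (adequate at a 4-meter range) are illustrative assumptions.

```python
import math

def heading_change(lat, lon, heading_deg, lat_ahead, lon_ahead):
    """Illustrative sketch: signed angle (degrees) by which the current
    heading should change to point at a road position a few meters ahead.
    Headings are assumed 0 = north, increasing clockwise; the flat-earth
    (equirectangular) approximation is acceptable at such short range."""
    d_lat = math.radians(lat_ahead - lat)
    d_lon = math.radians(lon_ahead - lon) * math.cos(math.radians(lat))
    # Bearing from the current position to the point ahead.
    bearing = math.degrees(math.atan2(d_lon, d_lat))
    # Difference wrapped to (-180, 180] so left/right turns keep their sign.
    return (bearing - heading_deg + 180.0) % 360.0 - 180.0
```

A label of this kind could be computed automatically for every Street View panorama given its pose and a nearby OpenStreetMap road point, which is consistent with the automatic annotation the abstract describes.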