Crop damage occurs due to natural calamities, irregular fertilization, improper treatment, etc. Estimating this damage is important in order to plan and execute corrective action strategies. To perform this estimation with high accuracy, both satellite and near-field images are needed. Satellite images assist in evaluating damage due to natural calamities, while near-field images assist in evaluating damage due to plant diseases. Separate models are typically designed for processing these image types, which limits their correlative analysis and thereby reduces the overall accuracy of damage detection. To address this drawback, this text proposes a deep convolutional network (DCN) design that integrates both near-field and far-field images in order to perform effective correlation. Moreover, the design of an integrated model would assist researchers in identifying and resolving dataset-specific performance gaps. The proposed model analyses different datasets and estimates the performance of context-sensitive classification methods. These are integrated to improve efficiency for multimodal augmented image classification deployments via correlative analysis. This analysis allows the system to predict crop damage with higher efficiency than individual models. The model is trained to detect areas affected by natural calamities, thereby assisting farm experts in undertaking corrective measures for specific areas. Results of the proposed model are compared with some recently developed state-of-the-art methods, and it is observed that the proposed model achieves 10% better accuracy, 8% better precision and 5% better recall. This evaluation is done on a large number of datasets, thereby assisting in model validation and estimating its applicability to multiple types of crops. This text also recommends future research directions that can be undertaken to improve the performance of the underlying model.
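To illustrate the kind of integration the abstract describes, the following is a minimal sketch of a two-branch deep convolutional network that fuses far-field (satellite) and near-field (plant-level) images before classification. It is not the authors' published architecture; the class name `DualFieldDCN`, all layer sizes, and the number of output classes are illustrative assumptions.

```python
# Hypothetical sketch of a dual-branch DCN fusing satellite and near-field
# imagery for crop-damage classification. Layer sizes are assumptions.
import torch
import torch.nn as nn

class DualFieldDCN(nn.Module):
    def __init__(self, num_classes: int = 4):
        super().__init__()
        # One small convolutional encoder per image modality.
        def encoder():
            return nn.Sequential(
                nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            )
        self.satellite_branch = encoder()   # far-field imagery
        self.near_field_branch = encoder()  # close-up plant imagery
        # Fusion head correlates the two feature vectors before classifying.
        self.classifier = nn.Sequential(
            nn.Linear(64, 32), nn.ReLU(),
            nn.Linear(32, num_classes),
        )

    def forward(self, satellite_img, near_field_img):
        sat_feat = self.satellite_branch(satellite_img)
        near_feat = self.near_field_branch(near_field_img)
        fused = torch.cat([sat_feat, near_feat], dim=1)  # correlative fusion
        return self.classifier(fused)

# Example forward pass with dummy batches of both modalities.
model = DualFieldDCN(num_classes=4)
logits = model(torch.randn(2, 3, 128, 128), torch.randn(2, 3, 128, 128))
print(logits.shape)  # torch.Size([2, 4])
```

Concatenation-based fusion is only one plausible way to realize the correlative analysis described above; attention-based or late-fusion variants could be substituted without changing the overall two-branch structure.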