The robustness of computer vision models against adversarial attacks is a critical concern in machine learning, yet it is often overlooked by researchers and developers. One contributing factor is the complexity of assessing model robustness. This paper introduces RobustCheck, a Python package for evaluating the adversarial robustness of computer vision models. Using black-box adversarial techniques, it assesses model resilience without requiring internal model access, reflecting the constraints of real-world deployments. RobustCheck is distinctive for its rapid integration into development workflows and its efficiency in robustness testing. The tool gives developers an essential resource for improving the security and reliability of computer vision systems.
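
To illustrate the black-box setting described above, the sketch below evaluates a classifier that is exposed only through a prediction function, using a simple query-only random-search attack. All names here (the toy model, random_search_attack, evaluate_robustness) and the attack parameters are illustrative assumptions for exposition; they are not RobustCheck's actual API.

```python
# Minimal sketch of black-box robustness evaluation: the model is accessed only
# through its predictions, so no gradients or internal parameters are needed.
# This is a hypothetical illustration, not the RobustCheck package interface.
import numpy as np


def toy_model(images: np.ndarray) -> np.ndarray:
    """Stand-in classifier: class scores from fixed random weights."""
    rng = np.random.default_rng(0)
    weights = rng.normal(size=(images[0].size, 10))
    return images.reshape(len(images), -1) @ weights


def random_search_attack(predict, image, label, eps=0.05, queries=200, seed=1):
    """Query-only attack: sample perturbations in an L-inf ball of radius eps
    and report whether any of them flips the predicted label."""
    rng = np.random.default_rng(seed)
    for _ in range(queries):
        noise = rng.uniform(-eps, eps, size=image.shape)
        adv = np.clip(image + noise, 0.0, 1.0)
        if predict(adv[None])[0].argmax() != label:
            return True  # attack succeeded
    return False


def evaluate_robustness(predict, images, labels, **attack_kwargs):
    """Fraction of inputs whose prediction survives the black-box attack."""
    survived = [
        not random_search_attack(predict, img, lbl, **attack_kwargs)
        for img, lbl in zip(images, labels)
    ]
    return float(np.mean(survived))


if __name__ == "__main__":
    rng = np.random.default_rng(42)
    images = rng.uniform(0.0, 1.0, size=(16, 8, 8, 3))   # toy 8x8 RGB batch
    labels = toy_model(images).argmax(axis=1)             # clean predictions
    print("robust accuracy:", evaluate_robustness(toy_model, images, labels))
```

Because the attack only queries the prediction function, the same evaluation loop applies to any deployed model, which is the practical constraint the package targets.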