Sign language is a visual language in which signs are formed using hand or body gestures. Although communication through signs is readily understood among hearing-impaired persons, its translation for general use by other persons remains a major challenge. Various techniques are available to convert sign languages into readable text. This paper specifically analyses vision-based techniques for Indian Sign Language Recognition (ISLR). Machine Learning (ML) and Deep Learning (DL) based techniques for ISLR are presented with the aim of achieving better accuracy. An experimental setup has been created to analyze the performance of various ML and DL techniques on three different static ISL datasets: the first one-handed with a uniform background, the second two-handed with a complex background, and the third a mixture of one-handed and two-handed signs with a uniform background. These three datasets have been used to compare accuracy. Among the ML techniques, the highest accuracy is achieved with the SVM classifier: 99.17%, 81.41%, and 99.96% on the three datasets respectively. It is further suggested that accuracy can be enhanced by building ensemble ML classifiers or vision-based transformers. Among the DL techniques, the highest accuracy on Dataset-I is 100% with the ResNet50 model, on Dataset-II it is 99.96% with MobileNetV2, and on Dataset-III 100% accuracy is achieved with ResNet50, MobileNetV2, and InceptionResNetV2.
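To make the two evaluation paths described above concrete, the sketch below trains a classical ML classifier (an SVM on flattened pixel features, via scikit-learn) and a transfer-learning DL model (a frozen ResNet50 backbone with a new softmax head, via Keras) on a folder of static sign images. The dataset path, image size, and training settings are illustrative assumptions, not the exact configuration or preprocessing used in the paper's experiments.

```python
# Minimal sketch of the two pipelines compared in the paper:
# (1) an ML classifier (SVM) on flattened images, and
# (2) a transfer-learning DL model (ResNet50).
# DATA_DIR, IMG_SIZE, and hyperparameters are illustrative assumptions.
import numpy as np
import tensorflow as tf
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

IMG_SIZE = (64, 64)          # assumed input resolution
DATA_DIR = "isl_dataset/"    # hypothetical folder: one subfolder per sign class

# Load static sign images from class-named subfolders.
ds = tf.keras.utils.image_dataset_from_directory(
    DATA_DIR, image_size=IMG_SIZE, batch_size=32)
num_classes = len(ds.class_names)

# Collect images and labels as NumPy arrays so both pipelines see the same split.
X = np.concatenate([x.numpy() for x, _ in ds])
y = np.concatenate([t.numpy() for _, t in ds])
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

# --- Pipeline 1: SVM on flattened, normalised pixel values ---
svm = SVC(kernel="rbf", C=10)
svm.fit(X_train.reshape(len(X_train), -1) / 255.0, y_train)
svm_pred = svm.predict(X_test.reshape(len(X_test), -1) / 255.0)
print("SVM accuracy:", accuracy_score(y_test, svm_pred))

# --- Pipeline 2: transfer learning with a frozen ResNet50 backbone ---
base = tf.keras.applications.ResNet50(
    include_top=False, weights="imagenet",
    input_shape=IMG_SIZE + (3,), pooling="avg")
base.trainable = False
model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255),   # simplified scaling for illustration
    base,
    tf.keras.layers.Dense(num_classes, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=5)
```

The same evaluation loop can be repeated with MobileNetV2 or InceptionResNetV2 by swapping the backbone class, which is how the DL models in the abstract would be compared on each dataset.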