AUGMENTED REALITY INTEGRATED ROBUST FUSION MODEL FOR SIGN LANGUAGE RECOGNITION USING COMPUTER VISION AND MACHINE LEARNING
DOI: https://doi.org/10.35631/JISTM.1040011

Keywords:
Computer Vision, Machine Learning, Augmented Reality, Sign Language Recognition

Abstract
Sign language recognition (SLR) translates sign language into text, bridging the communication gap between the deaf-mute community, who use sign language, and those who do not. Recent advances in computer vision, deep learning, and augmented reality have brought significant progress in motion and gesture recognition; however, large variations in hand actions, facial expressions, and body postures, together with the absence of region-specific datasets, still prevent universally accurate and effective sign language recognition. This research developed an efficient SLR model that combines an RGB-MHI attention module with the Faster R-CNN deep learning architecture and integrates the result with augmented reality. The proposed model was validated on two benchmark datasets, achieving 98.97% accuracy on the AUTSL dataset and 96.7% on the BosphorusSign22k dataset. Furthermore, the model was tested on a self-created Bangla Sign Language (BdSL) dataset named "Amar Vasha" to assess cross-domain adaptability. Experimental results demonstrate that the proposed model achieves state-of-the-art performance on all three benchmarks.