Talksign Launches AI Model Translating ASL to Speech in Under 100ms
Talksign, a Nigeria- and UK-based AI company, has launched Talksign-1, a sign language translation model that converts American Sign Language (ASL) into speech and text in under 100 milliseconds. The bidirectional system recognises 250 ASL signs, translating signed input via webcam into spoken or written language, and converting spoken or typed words into sign language video.
The model was trained on the WLASL2000 dataset and achieves 84.7% accuracy on single-sign recognition. It analyses about one second of signing to balance speed and accuracy but does not yet support continuous sentence-level translation or fingerspelling, limiting use to isolated signs.
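The article does not describe Talksign-1's internals, but a one-second analysis window suggests a familiar pattern for isolated-sign recognition: buffer per-frame landmarks, classify the full buffer, and emit a sign only above a confidence threshold. The sketch below illustrates that pattern in TypeScript; the Landmarks type, the classify() stub, the frame rate, and the threshold are all assumptions for illustration, not Talksign's implementation.

```typescript
// A minimal sketch (not Talksign's actual code) of sliding-window,
// isolated-sign recognition over roughly one second of input.

type Landmarks = Float32Array; // e.g. per-frame hand keypoints

interface Prediction {
  gloss: string;      // one of the 250 supported signs, e.g. "HELLO"
  confidence: number; // model probability in [0, 1]
}

// Stub classifier: a real system would run a trained network here.
function classify(frames: Landmarks[]): Prediction {
  return { gloss: "HELLO", confidence: 0.9 }; // placeholder output
}

const FPS = 30;
const WINDOW_FRAMES = FPS;        // ~1 second of frames at 30 fps
const CONFIDENCE_THRESHOLD = 0.7; // suppress low-confidence output

const buffer: Landmarks[] = [];

// Called once per webcam frame with landmarks extracted on-device.
function onFrame(landmarks: Landmarks): Prediction | null {
  buffer.push(landmarks);
  if (buffer.length > WINDOW_FRAMES) buffer.shift(); // keep the last ~1 s
  if (buffer.length < WINDOW_FRAMES) return null;    // not enough context yet
  const prediction = classify(buffer);
  return prediction.confidence >= CONFIDENCE_THRESHOLD ? prediction : null;
}
```

Because the buffer holds only the most recent second, latency stays bounded regardless of how long the user signs, which is consistent with the sub-100ms figure applying to isolated signs rather than full sentences.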
Founded in November 2025 by Edidiong Ekong and Kazi Mahathir Rahman, Talksign aims to address accessibility gaps faced by the deaf and hard-of-hearing community, particularly as many digital tools still assume users can hear and speak. The technology is designed to enable more direct communication between deaf and hearing individuals without always relying on human interpreters.
Potential applications include education, healthcare, workplaces, and public spaces such as transport systems, as well as emergency alerts and live broadcasts. The company worked with deaf educators, native ASL users, and accessibility advocates during development.
For privacy, gesture analysis is performed in the user’s browser, with only processed data sent to servers. Talksign notes the tool should not be used as the sole authority in medical, legal, or safety-critical situations.
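The split the article describes, analysis in the browser with only derived data sent onward, is a common pattern for privacy-sensitive vision applications. A minimal sketch of that pattern follows, assuming keypoints are extracted on-device and uploaded as JSON; the endpoint URL and payload shape are hypothetical, not Talksign's API.

```typescript
// Hypothetical sketch of the client/server split described above:
// the raw video frame never leaves the browser; only numbers do.

interface FramePayload {
  timestampMs: number;
  keypoints: number[]; // flattened landmark coordinates (no pixels)
}

async function sendProcessedData(payload: FramePayload): Promise<void> {
  await fetch("https://api.example.com/translate", { // placeholder endpoint
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(payload), // derived data only, never the frame
  });
}
```

Shipping keypoints instead of video also keeps upload bandwidth small, which helps a system targeting sub-100ms round trips.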
Currently limited to ASL, Talksign plans to expand support to other sign languages, increase vocabulary size, and add continuous signing and fingerspelling in future versions.

