MUSSLAP is an acronym derived from the English project name: Multimodal Human Speech and Sign Language Processing for Human-Machine Communication.
The aim of the project is to support speech recognition systems with visual recognition and to apply information retrieval methods to the results. Multimodal audio-visual recognition will also be applied to the task of sign language recognition and to the support of information retrieval in audio-visual recordings. Recognition comprises three main subtasks: acoustic speech recognition, visual speech recognition, and the combination of the acoustic and visual recognizers. In the case of sign language, the aims also include synthesis of sign language and translation of sign language into speech. The recognition methods are followed by information retrieval methods that identify the content of an utterance even from an inaccurately recognized word sequence. The translation task employs methods of computational linguistics.
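The combination of acoustic and visual recognizers described above is commonly realized as late (score-level) fusion, where each recognizer scores the candidate hypotheses independently and the scores are merged with stream weights. The following is a minimal sketch of that idea, not the project's actual method; the word candidates, log-likelihood values, and the 0.7 audio weight are illustrative assumptions.

```python
def late_fusion(acoustic_scores, visual_scores, audio_weight=0.7):
    """Combine per-word log-likelihoods from two recognizers.

    A weighted sum of log-scores (stream weights summing to 1) is a
    common late-fusion scheme; the specific weight is an assumption.
    Returns the best-scoring word hypothesis.
    """
    fused = {
        word: (audio_weight * acoustic_scores[word]
               + (1.0 - audio_weight) * visual_scores[word])
        for word in acoustic_scores
    }
    return max(fused, key=fused.get)

# Hypothetical log-likelihoods for two confusable candidate words.
acoustic = {"hello": -2.0, "yellow": -1.8}  # audio slightly prefers "yellow"
visual = {"hello": -1.0, "yellow": -3.5}    # lip shapes clearly favor "hello"

print(late_fusion(acoustic, visual))  # prints "hello"
```

In this toy case the visual stream resolves an ambiguity the acoustic stream alone would decide incorrectly, which is precisely the benefit the project seeks from visual support of speech recognition.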