Enable the voice

Deep learning algorithms for NLP and voice enhancement could help people with speech disabilities live a more fulfilled life. For example, someone with cerebral palsy (CP), multiple sclerosis (MS), ataxia, or another neurological disorder is often misunderstood. I believe AI could help here, and I would like to use NLP to this end. The trouble is, I don't know where to start.

I have heard of SpeechBrain's voice enhancement models, but they are not currently aimed at what the vocally disabled community needs. If this is the correct avenue to take, would anyone kindly help me build a new SpeechBrain enhancement model that more accurately captures the voices of the vocally disabled community, as a proof of concept?
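To make the question more concrete, here is a minimal NumPy sketch of the spectral-masking idea that underlies many speech enhancement models, including SpeechBrain's MetricGAN+ family. Everything below is my own toy illustration (the function names are mine, not SpeechBrain's API): it attenuates time-frequency bins dominated by a fixed noise estimate, whereas a real enhancement model would predict the mask with a trained neural network.

```python
import numpy as np

def stft(signal, n_fft=256, hop=128):
    """Short-time Fourier transform with a Hann window."""
    window = np.hanning(n_fft)
    frames = [signal[i:i + n_fft] * window
              for i in range(0, len(signal) - n_fft + 1, hop)]
    return np.array([np.fft.rfft(frame) for frame in frames])

def mask_enhance(noisy, noise_mag, n_fft=256, hop=128):
    """Suppress time-frequency bins dominated by the noise estimate."""
    spec = stft(noisy, n_fft, hop)
    mag, phase = np.abs(spec), np.angle(spec)
    # Soft mask in [0, 1]: bins well above the noise floor pass through,
    # bins at or below it are attenuated toward zero.
    mask = np.clip((mag - noise_mag) / np.maximum(mag, 1e-8), 0.0, 1.0)
    masked = mask * mag * np.exp(1j * phase)
    # Overlap-add resynthesis (no window normalization; fine for a demo).
    out = np.zeros(len(noisy))
    for i, frame_spec in enumerate(masked):
        out[i * hop:i * hop + n_fft] += np.fft.irfft(frame_spec, n_fft)
    return out

# Toy usage: a 440 Hz tone standing in for a voice, plus white noise.
rng = np.random.default_rng(0)
t = np.arange(16000) / 16000.0
clean = np.sin(2 * np.pi * 440 * t)
noisy = clean + 0.3 * rng.standard_normal(len(t))
# Assumes access to a noise-only recording to estimate the noise floor.
noise_mag = np.abs(stft(0.3 * rng.standard_normal(len(t)))).mean(axis=0)
enhanced = mask_enhance(noisy, noise_mag)
```

The interesting research question for our community is the reverse of the usual setup: instead of treating background noise as the thing to remove, a model would need to learn which parts of non-standard speech carry the intended content.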

Additionally, I wanted to share some related research being done by Google. Project Euphonia is a Google Research initiative focused on helping people with non-standard speech be better understood by AI assistants. The approach centers on analyzing speech recordings to better train speech recognition models.

This project aims to make it easier for the vocally disabled community to use Google Assistant or Google Home. Although this is speech recognition rather than voice enhancement, it is still interesting to see how others are improving AI voice technology for this community.