Deep learning for NLP and voice enhancement could help people with speech disabilities live more fulfilled lives. For example, someone with cerebral palsy (CP), multiple sclerosis (MS), ataxia, or another neurological disorder is often misunderstood. I feel AI could help, and I would like to use NLP to that end. The trouble is, I don’t know where to start.
I have heard of SpeechBrain's voice enhancement models, but they are not built for what the vocally disabled community needs. If this is the correct avenue to take, would anyone kindly be able to help me adapt SpeechBrain's enhancement technology into a new model that more accurately captures the voices of the vocally disabled community, as a proof of concept?
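From what I can tell from the SpeechBrain model cards, a pretrained enhancement model can be loaded and run in a few lines, which might at least give me a baseline to compare against. Here is a minimal sketch of what I mean (assuming the speechbrain/metricgan-plus-voicebank checkpoint, the SpectralMaskEnhancement interface, and a 16 kHz recording called "my_recording.wav" — those specifics are my assumptions, not something I have tested):

```python
import torch
import torchaudio
from speechbrain.pretrained import SpectralMaskEnhancement

# Download and load a pretrained enhancement model from the Hugging Face hub
enhancer = SpectralMaskEnhancement.from_hparams(
    source="speechbrain/metricgan-plus-voicebank",
    savedir="pretrained_models/metricgan-plus-voicebank",
)

# load_audio reads the file in the format the model expects;
# unsqueeze(0) turns the single recording into a batch of one
noisy = enhancer.load_audio("my_recording.wav").unsqueeze(0)

# lengths gives the relative length of each item in the batch (1.0 = full)
enhanced = enhancer.enhance_batch(noisy, lengths=torch.tensor([1.0]))

torchaudio.save("enhanced.wav", enhanced.cpu(), 16000)
```

My rough understanding is that models like this were trained on typical voices with added noise, so a proof of concept for our community would presumably mean fine-tuning such a model (or a similar SpeechBrain recipe) on recordings of dysarthric speech — but I honestly don't know whether that is the right direction, which is exactly where I would appreciate guidance.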