For disabled people who can’t type or talk and who use a computer via eye tracking: can I have a conversational AI that replies with options to choose from each time (with eye-gaze software)? Thx
I tried searching for it. It seems it doesn’t exist yet, or at least isn’t very well developed…
If there is a clear relationship between gaze and command, and a dataset that pairs them, I think quite a few models could be trained for this. The likely problem is speed: the smaller the model, the faster it can be expected to run.
There also seem to be several papers on the subject.
https://www.reddit.com/r/computervision/comments/1b6ih7f/eye_tracking_is_there_a_computer_vision_model_to/
Even a small VLM can recognize gaze in this way.
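To illustrate the speed point: mapping a few gaze features to commands doesn’t need a big model at all. A minimal sketch, assuming a hypothetical dataset of (gaze features, command) pairs; the features, labels, and numbers below are all made up:

```python
# Sketch: a small, fast classifier from gaze features to commands.
# The feature vector is [gaze_x, gaze_y, dwell_seconds]; all data here
# is hypothetical, standing in for a real paired gaze/command dataset.
import numpy as np
from sklearn.linear_model import LogisticRegression

X = np.array([
    [0.1, 0.9, 0.8],   # lower-left of screen, long dwell
    [0.9, 0.9, 0.7],   # lower-right, long dwell
    [0.5, 0.1, 0.9],   # top-center, long dwell
    [0.5, 0.5, 0.1],   # center, brief glance
])
y = ["option_1", "option_2", "open_menu", "no_command"]  # hypothetical labels

clf = LogisticRegression(max_iter=1000).fit(X, y)

# A single prediction is effectively instant, which is the advantage
# of a small model for a real-time gaze interface.
print(clf.predict([[0.12, 0.88, 0.75]]))  # -> ['option_1']
```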
I was thinking of a simple Vercel UI design where each prompt response comes with options to choose from (instead of ChatGPT’s usual open-ended question, “would you like me to…”). A sketch of the prompting pattern is below.
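Something like this could be a minimal sketch of that pattern, assuming an OpenAI-compatible chat endpoint (the model name and prompt wording here are placeholders): constrain the model to always return a short message plus a fixed list of options, which the UI then renders as large gaze-selectable buttons.

```python
# Sketch: force every reply to carry a message plus a few options,
# instead of ending in an open-ended question. Assumes an
# OpenAI-compatible chat endpoint; the model name is a placeholder.
import json
from openai import OpenAI

SYSTEM = (
    "You are an assistant for a user who answers by eye gaze. "
    'Reply ONLY with JSON of the form {"message": "...", "options": ["...", "...", "..."]}. '
    "Never ask an open-ended question; always offer 3 concrete options."
)

client = OpenAI()  # reads endpoint and API key from the environment

def ask(user_text: str) -> dict:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any chat model works
        response_format={"type": "json_object"},  # force parseable JSON
        messages=[
            {"role": "system", "content": SYSTEM},
            {"role": "user", "content": user_text},
        ],
    )
    return json.loads(resp.choices[0].message.content)

reply = ask("I want to plan my day")
print(reply["message"])
for i, opt in enumerate(reply["options"], 1):
    print(f"[{i}] {opt}")  # the UI would render these as large buttons
```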
I see, so before looking into large-scale or cutting-edge technology, you’re thinking of ways to make do with lightweight improvements.
If that’s the case, I think there are two ways you could go about it: either build your own GUI template that is even simpler than existing chatbots and includes assistive features, or consult the author of a GUI library and ask them to incorporate accessibility features.
I think there’s a good chance the latter would be considered if you raised an issue on GitHub.
The GUI library widely used on Hugging Face
Hugging Chat
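If the library meant here is Gradio (an assumption on my part), a rough sketch of the options-as-big-buttons layout could look like this; generate_reply is a stub standing in for a real model call, and the option labels are placeholders:

```python
# Sketch: a chat layout where every bot turn is answered by clicking
# one of a few large buttons instead of typing. Assumes Gradio with
# tuple-style Chatbot history; generate_reply is a stub.
import gradio as gr

def generate_reply(choice, history):
    # A real app would call a model here; we just echo the choice.
    return history + [(choice, f"You picked '{choice}'. What next?")]

with gr.Blocks() as demo:
    chat = gr.Chatbot(value=[(None, "What would you like to do?")])
    with gr.Row():
        # Large, fixed-position targets are easier to hit by dwell/gaze.
        buttons = [gr.Button(label, size="lg")
                   for label in ("Read news", "Send a message", "Play music")]
    for btn in buttons:
        # Passing the button as an input supplies its label as the value.
        btn.click(generate_reply, inputs=[btn, chat], outputs=chat)

demo.launch()
```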
Great thx.
I posted on HF/chat-ui.
Added an example built with V0 - https://gkql4cudipyu7tvz.vercel.app/
Happy to hear any thoughts or input. Ty