>
> Thanks! I think this is interesting:
>
> http://www-2.cs.cmu.edu/afs/cs/project/ai-repository/ai/areas/speech/systems/lotec/0.html
>
> A starting point could be a trained small dictionary (maybe 20 to 100
> words) and a set of predefined scripts to control the most common
> programs (implemented as state machines where each spoken command causes
> an action and leads to a new state).
>
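
Something like this, maybe (a rough Python sketch just to show the
state-machine shape; on the Newton it would be NewtonScript, and all the
states, words, and actions here are invented):

    # (current state, spoken word) -> (action to run, next state)
    TRANSITIONS = {
        ("idle",  "notes"): ("open_notes",  "notes"),
        ("notes", "new"):   ("new_note",    "editing"),
        ("notes", "down"):  ("scroll_down", "notes"),
    }

    def handle_command(state, word):
        # Run the action for a recognized word and move to the next state.
        action, next_state = TRANSITIONS.get((state, word), (None, state))
        if action:
            print("run:", action)   # stand-in for the real action dispatch
        return next_state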
I think that we could tell the application which words to expect based on
what's open and visible on the screen. So if getFrontApp (or getFrontkey)
returns Avi's Backdrop, it can expect words for a few of Avi's features or
some shortcuts to programs. If Notes is what's open, "new", "route" or
"down" would be expected, etc.
This approach would make it easy for the user to add new commands: you
open an application, read its children, and assign a voice command to those
you want. When that application is opened again, the recognizer looks them
up in a table of recorded commands.
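Recording could be as simple as this (sketch only; how the table would be
persisted on a real Newton is left open):

    # Table of user-recorded commands, keyed by application.
    RECORDED_COMMANDS = {}

    def assign_command(app, child, word):
        # Called while the user walks the open application's children.
        RECORDED_COMMANDS.setdefault(app, {})[word] = child

    def lookup_command(app, word):
        # When the app is opened again, map a recognized word to its target.
        return RECORDED_COMMANDS.get(app, {}).get(word)
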
I have sent emails to a couple of people who could give us some code.
I will start working with DragonDemo, using only numbers. That should
give us some ideas.
Daniel Padilla