I think, after using it on my iPhone 4S for a few weeks, that the answer is definitely YES. And this is especially true for CAD applications. All CAD apps require a complex series of user interactions, usually performed in a rigid order to move through the process. How nice would it be to just say what you want done and have the computer do the dirty work of interpreting the commands?
Just think. No more menus needed. No more searching multiple menu windows for the precise command. No more squinting at tiny command icons. No more worrying about getting the command sequence right.
On my iPhone I can just say, “Make an appointment with Bob for tomorrow at 2:00” — not too different from “Draw an infinite horizontal line tangent to circle A.” When Siri needs more information, it asks for it. For my query above it might say, “Do you mean Bob Albert, Bobby Jones, or Bob Smith?” How cool is that?
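The disambiguation step described above can be sketched in a few lines. This is purely illustrative — the function name, the matching rule (first name or substring over a contact list), and the wording are my own assumptions, not how Siri actually resolves names:

```python
def disambiguate(name, known_contacts):
    """Return ("ok", contact) for a unique match, or ("ask", question)
    when the assistant should pose a clarifying question instead of guessing."""
    matches = [c for c in known_contacts
               if c.split()[0].lower() == name.lower() or name.lower() in c.lower()]
    if len(matches) == 1:
        return ("ok", matches[0])
    if not matches:
        return ("ask", f"I don't know anyone named {name}.")
    # Several candidates: list them all, exactly as in the "Bob" example.
    options = ", ".join(matches[:-1]) + ", or " + matches[-1]
    return ("ask", f"Do you mean {options}?")

contacts = ["Bob Albert", "Bobby Jones", "Bob Smith", "Alice Lee"]
print(disambiguate("Bob", contacts))
# → ('ask', 'Do you mean Bob Albert, Bobby Jones, or Bob Smith?')
```

The same pattern would carry over to CAD: if “circle A” matched more than one entity, the assistant would list the candidates rather than pick one silently.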
I suggest vendors immediately get busy finding smart speech recognition software.
What do you think?
I like it! Can you imagine gracefully blending voice and touch input? Something like: “Trim this using this and this as boundaries.”
I can’t imagine trying to program it, but it sounds fantastic! Neat idea, Ray.
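A toy version of that blended interaction might not be so hard to program. In this sketch (my own assumption of how it could work, not any vendor's API), each touched entity fills the next “this” slot in the spoken command:

```python
def bind_command(spoken, touched):
    """Fill each 'this' in the spoken command with the next touched entity."""
    touches = iter(touched)
    words = []
    for w in spoken.split():
        if w.rstrip(",.").lower() == "this":
            words.append(next(touches))  # consume the next touch in order
        else:
            words.append(w)
    return " ".join(words)

print(bind_command("Trim this using this and this as boundaries",
                   ["Line7", "Circle2", "Arc5"]))
# → Trim Line7 using Circle2 and Arc5 as boundaries
```

The hard part in a real system would be timing — deciding which touch goes with which “this” when speech and gestures overlap — but the slot-filling idea itself is simple.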
Ray, I think it is more about a future PLM Virtual Assistant. Here is my idea (still love it :)) from 2009 –
What are your questions to PLM Virtual Assistant?