Re: [NTLK] BumpTop

From: Karel Jansens <>
Date: Sat Jun 24 2006 - 17:54:10 EDT

Jon Glass schreef:
> On 6/24/06, Joel M. Sciamma <> wrote:
>>I have played around with various 3D Finder replacements on the Mac
>>over the years but I am still not convinced they make it easier to
>>deal with large numbers of objects - the sorts of tasks computers are
>>generally most useful for.
> The problem of trying to convert a 3D real-world workspace into a
> non-physical computer-screen representation ought to be obvious to
> anyone. That's why they worked so hard to sustain the
> illusion--and it's a pretty good one, in fact the best I've seen--but
> I still don't see how it would work, and for me there is one simple
> reason: I got the computer to free me from the jumbled mess that is my
> desk! Why would I want to bring that jumble and mess to my computer
> desktop???!!! :-) Besides that, the most important plus of real paper,
> etc. is the tactile feedback and the genuine 3D cues. You can only
> hope to approximate those at best, and again, you are back to duplicating
> the non-structured mess that existed in the first place! IMO, the key
> is not how it looks, but how it runs under the hood. For example,
> Spotlight searching on the Mac OS, and Quicksilver. I can keep a
> fairly simple file structure--across multiple disks--and let these
> two utilities do the finding for me. Yes, the interface for such a tool
> is important, but the nuts and bolts that do the finding are
> essential: they need to find the right items and then present
> them in a way that makes it simple to drill down to the very file you
> need, out of the potentially thousands of results.

Personally, I'd like my computer to be a facsimile not of my desktop,
but of a smart slave that will execute my commands in an
intelligent fashion and maybe, on occasion, anticipate my
needs (but not too often: no-one likes a smart-ass slave!).

Which is why I'm pining for some form of NLUI, a Natural Language User
Interface. It doesn't have to be based on speech recognition (in fact,
in most cases I'd prefer it not to be speech-orientated), but the UI
should be able to understand and interpret commands spoken, written or
even typed in normal human language, and know how to execute them. The UI
should also be self-learning, gradually adapting to my whims.
It should, in short, relate to present-day UIs as the Newton handwriting
recognition engine relates to Graffiti.
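Just to make the idea concrete: here's a minimal, purely hypothetical sketch (in Python, obviously not Newton-era code) of the typed-command side of such an NLUI. The command patterns, the "teach" mechanism and all the names are invented for illustration; a real NLUI would need far richer parsing, context, and dialogue than a pile of regexes.

```python
# Toy typed-command NLUI sketch. All patterns and names are invented
# for illustration only; this is not any real system's design.
import re

# Each "intent" pairs a pattern over ordinary English with an action name.
INTENTS = [
    (re.compile(r"(?:open|show me)\s+(?:the\s+)?(\w+)"), "open"),
    (re.compile(r"(?:delete|get rid of)\s+(?:the\s+)?(\w+)"), "delete"),
    (re.compile(r"find\s+(?:my\s+)?(\w+)"), "find"),
]

def interpret(command: str):
    """Map a typed natural-language command to an (action, object) pair."""
    text = command.lower().strip()
    for pattern, action in INTENTS:
        match = pattern.search(text)
        if match:
            return (action, match.group(1))
    return (None, None)  # a real UI would ask for clarification here

def teach(phrase: str, action: str):
    """Crude 'self-learning': the user adds a new phrasing for an action."""
    INTENTS.append(
        (re.compile(re.escape(phrase.lower()) + r"\s+(?:the\s+)?(\w+)"), action)
    )

print(interpret("Please open the calendar"))  # ('open', 'calendar')
teach("toss out", "delete")
print(interpret("toss out the trash"))        # ('delete', 'trash')
```

The `teach` step is the (very crude) stand-in for the self-learning I'm after: the interpreter gradually accumulates the user's own phrasings instead of forcing the user to learn the machine's.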

I know, it's a "I have a dream" scenario, but hey...

Karel Jansens

Received on Sat Jun 24 17:54:52 2006

This archive was generated by hypermail 2.1.8 : Sat Jun 24 2006 - 18:30:00 EDT