>You mentioned Neural Nets, is this how most HWR's work???
What are the other methods?
- snakes: not particularly well adapted
- standard classification: a well-trained MLP gives better results and
is easier to set up
>and if so which
>Neural Net are best for this field of work....?
As I said, MLPs. They are really the standard today.
The activation (transfer) function is usually the logistic sigmoid,
1/(1 + exp(-x)) (the one used in Rosetta), or tanh.
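Just to nail down the two formulas, here is a minimal sketch of both activation functions (the function names are mine, not from Rosetta):

```python
import math

def logistic(x):
    # Logistic sigmoid: squashes any real input into (0, 1).
    return 1.0 / (1.0 + math.exp(-x))

def tanh(x):
    # Hyperbolic tangent: squashes into (-1, 1); its zero-centered
    # output often makes training converge faster.
    return math.tanh(x)

print(logistic(0.0))  # 0.5
print(tanh(0.0))      # 0.0
```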
Apple released papers about this (from the Rosetta project). Their
success is more practical than technical, but they build on the
technology advances of 1996. The real advance of the Newton OS is the
integration of HWR into the OS itself.
They (specifically Richard F. Lyon and Larry S. Yaeger) discuss the
advantages of MLPs over segment classification.
The real problem with MLPs is the structure: how many layers and
neurons? How many learning epochs? Which activation function?
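For concreteness, here is a bare-bones MLP forward pass; the layer sizes and random weights are purely illustrative assumptions (they are the very design choices in question), not anything from Rosetta:

```python
import math
import random

random.seed(0)  # deterministic toy weights

def logistic(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward(x, layers):
    # layers: list of (weights, biases); weights[i][j] connects input
    # j to neuron i. Depth, width, and the number of training epochs
    # are exactly the open structural choices mentioned above.
    for W, b in layers:
        x = [logistic(sum(w * xi for w, xi in zip(row, x)) + bi)
             for row, bi in zip(W, b)]
    return x

# Toy net: 2 inputs -> 3 hidden -> 1 output (illustrative sizes only).
layers = [
    ([[random.uniform(-1, 1) for _ in range(2)] for _ in range(3)],
     [0.0, 0.0, 0.0]),
    ([[random.uniform(-1, 1) for _ in range(3)]], [0.0]),
]
out = forward([0.5, -0.2], layers)
print(out)  # a single probability-like value in (0, 1)
```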
My own research is based on modified MLPs, aiming to improve the
quality of the answer while keeping the learning time reasonable.
There is also a very important question in HWR: how the networks
integrate into the overall recognizer, i.e. how many networks to use,
how to segment the signal, etc.
Rosetta's solution (in a few lines, because I feel I'm drifting
off-topic) is pen-up segmentation: hence printed-character
recognition. The recognizer feeds one, two, three, etc. consecutive
segments (consecutive geographically rather than chronologically,
unlike Paragraph) into the networks for all the forms. Each network
gives a single probability as its output.
So if you write a d in two strokes, Rosetta asks every recognizer
what it thinks of the c-shaped part of the d, and the c network gives
the biggest answer, then l. Then it asks about the c and l strokes
together (partly because they are close to each other), and d gives
the best answer. Finally it compares c & l against d, using other
parameters (as I said, the distance between the c and the l).
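The multi-segment lookup described above can be sketched roughly as follows. The per-network scores are canned stand-ins for the d-written-in-two-strokes example, and the exhaustive grouping search is my illustration, not Rosetta's actual implementation:

```python
scores = {
    # (form, segment span): stubbed "network" output in (0, 1)
    ("c", (0,)): 0.90,   # first stroke alone looks like a c
    ("l", (1,)): 0.80,   # second stroke alone looks like an l
    ("d", (0, 1)): 0.95, # both strokes together look like a d
}

def groupings(n, start=0):
    # All ways to split segments start..n-1 into consecutive runs,
    # e.g. 2 segments -> [(0,), (1,)] or [(0, 1)].
    if start == n:
        yield []
        return
    for end in range(start + 1, n + 1):
        for rest in groupings(n, end):
            yield [tuple(range(start, end))] + rest

def best_parse(n):
    # Ask every "network" about every run of segments and keep the
    # grouping whose product of probabilities is highest.
    best_score, best_chars = 0.0, None
    for runs in groupings(n):
        total, chars = 1.0, []
        for run in runs:
            form, s = max(((f, sc) for (f, sp), sc in scores.items()
                           if sp == run),
                          key=lambda t: t[1], default=(None, 0.0))
            total *= s
            chars.append(form)
        if total > best_score:
            best_score, best_chars = total, chars
    return best_chars, best_score

print(best_parse(2))  # (['d'], 0.95): 0.95 beats 0.90 * 0.80 = 0.72
```

The product rule for combining per-segment probabilities is an assumption for the sketch; the post only says each network returns one probability.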
If you write a c, it will also try to guess whether it is c or C.
In this process, it may use the dictionary ("clog" is not in it,
while "dog" is).
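A minimal sketch of that dictionary post-filter; the toy lexicon, the candidate scores, and the penalty weight are all assumptions for illustration, not Rosetta's actual values:

```python
DICTIONARY = {"dog", "cat", "do"}  # toy lexicon (assumption)

def rescore(candidates):
    # candidates: (text, shape_score) pairs from the networks.
    # Words found in the dictionary keep their score; unknown words
    # are penalized, a crude stand-in for Rosetta's dictionary check.
    PENALTY = 0.5  # assumed weight, not from the original post
    return sorted(((s if t in DICTIONARY else s * PENALTY, t)
                   for t, s in candidates), reverse=True)

print(rescore([("clog", 0.72), ("dog", 0.70)]))
# "dog" wins: 0.70 beats 0.72 * 0.5 = 0.36
```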
Incidentally, this explains why é gets worse results than e: é is
written in two segments. (Though I think accent treatment is
separate, with one recognition pass for the accent and one for the
base letter.)
The problem with this integration (one form per network) is the
learning process. (This is why Rosetta & Paragraph don't learn at the
network level.) That is, when training, you feed the networks data
and tell them what the proper answer should be. The question then is:
what should the a network be told when you train on a d? If you tell
it the proper answer is 0, it may then be less efficient at
recognizing an a that looks like a d with a shorter vertical segment.
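The target-assignment dilemma above can be made concrete. One common compromise is a soft negative target instead of a hard 0; both the helper and the value 0.1 are my illustration, not the author's or Apple's solution:

```python
def targets(label, forms, soft_negative=0.1):
    # With one network per form, a training sample of "d" must also
    # be shown to the "a" network. A hard 0.0 pushes "a" to reject
    # anything d-like, hurting a's that resemble a d; a soft negative
    # target (0.1 here, an assumed value) weakens that pressure.
    return {f: (1.0 if f == label else soft_negative) for f in forms}

print(targets("d", ["a", "c", "d", "l"]))
# {'a': 0.1, 'c': 0.1, 'd': 1.0, 'l': 0.1}
```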
Rosetta can be improved in three directions:
a/ preprocessing: it treats the data uniformly; it is left to the
network to make distinctions, for example between x and y coordinates
b/ learning: you can either change the set of forms or select some by
frequency (as in Paragraph)
c/ postprocessing: dictionary, language recognition, etc.
alt.rec does its best to improve post-processing & integration (well,
Newton OS integration is already nice). Now, I think there is a way
to register another recognizer, so there may, as promised, be a
complete alternative to Paragraph & Rosetta (& Graffiti, Free-Style)
in several months. (The first priority of the bowels project is the
ATA driver; development will start within a month.)
I can give you more online links and book references if you are
interested in HWR or neural networks.
--
P&M Consulting Newton Program http://www.pnm-consulting.com/newton/
This archive was generated by hypermail 2b29 : Sat Jul 01 2000 - 00:00:05 CDT