From: Larry Yaeger (lsynewtal_at_beanblossom.in.us)
Date: Sat Aug 27 2005 - 00:47:28 PDT
At 8:23 PM -0400 8/26/05, John Charlton wrote:
>I'd imagine (and I know just enough to be dangerous about Chinese and
>neural nets) that it wouldn't be much more difficult than Roman,
>simpler in some ways.
>
>Since each character is a neat block and there's typically less
>overlap than you might see in sloppy handwritten English...
>
>On top of stroke count, which should be pretty consistent...
These are the common conceptions. However, data gathered from native writers suggests a different picture. There's actually the equivalent of cursive, in which the pen is not lifted between strokes or radicals. And different people use different stroke orders, connect different strokes, connect different radicals, and, basically, present at least as much variability as you see in Roman-alphabet handwriting. I'm not saying the problem can't be solved, but it won't be as easy as some people believe.
>How much time and how many people does it take to train a neural net?
Depends on the size of the net. The rule of thumb is that you need at least as many training samples as you have weights in the network. I think our current network is around 100K weights, so we need at least 100K samples, more to provide maximum generalization. I think we have more like 250K to 500K individual character samples, and use special "stroke warping" techniques to synthesize more. (I haven't revisited these numbers in a while, but they're not *too* far off; you can get gory details about the recognition algorithms from the papers on my web site <http://pobox.com/~larryy/>.)
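
To make those numbers concrete, here's a rough sketch in Python. It's purely illustrative, not our actual code: the layer sizes and the warping parameters are invented for the example.

import numpy as np

# Rule of thumb: you want at least as many training samples as weights.
# A toy fully connected net: 14x14 input -> 72 hidden -> 95 outputs.
layer_sizes = [14 * 14, 72, 95]
n_weights = sum((n_in + 1) * n_out               # +1 for each unit's bias
                for n_in, n_out in zip(layer_sizes, layer_sizes[1:]))
print("trainable weights:", n_weights)           # ~21K for this toy net
print("minimum training samples:", n_weights)

# "Stroke warping" in spirit: perturb the pen points of a real stroke
# with a small random rotation/scale/skew, so every pass through the
# training set sees a slightly different version of each sample.
def warp_stroke(points, max_rot=0.1, max_scale=0.1, max_skew=0.1, rng=None):
    """points: (N, 2) array of (x, y) pen coordinates for one stroke."""
    if rng is None:
        rng = np.random.default_rng()
    theta = rng.uniform(-max_rot, max_rot)                 # radians
    sx, sy = 1.0 + rng.uniform(-max_scale, max_scale, 2)   # per-axis scale
    skew = rng.uniform(-max_skew, max_skew)
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    warp = rot @ np.array([[sx, skew * sx], [0.0, sy]])
    center = points.mean(axis=0)                           # warp about centroid
    return (points - center) @ warp.T + center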
>Is it even possible to train it to your own writing from scratch and
>have good recognition?
Yes. For a single individual you'd use a smaller net, thus requiring less data. You can also adapt a user-independent net to work better for a specific user with a fairly modest amount of data. I always intended to let individual users train the net on the fly, but that's another thing that never made it to the top of the heap.
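
That kind of adaptation is basically continued training: start from the writer-independent weights and take small gradient steps on the new user's samples, nudging the net toward one writer without erasing what it already knows. A minimal sketch, assuming a plain softmax output layer and numpy (the function and its parameters are hypothetical):

import numpy as np

def adapt_to_user(W, b, X, y, lr=0.01, epochs=20):
    """Fine-tune a pretrained softmax layer on one user's samples.

    W: (n_features, n_classes) pretrained weights
    b: (n_classes,) pretrained biases
    X: (n_samples, n_features) the user's feature vectors
    y: (n_samples,) integer class labels
    """
    n = len(y)
    for _ in range(epochs):
        logits = X @ W + b
        logits -= logits.max(axis=1, keepdims=True)   # numerical stability
        p = np.exp(logits)
        p /= p.sum(axis=1, keepdims=True)             # softmax probabilities
        p[np.arange(n), y] -= 1.0                     # dLoss/dlogits
        W -= lr * (X.T @ p) / n                       # small steps keep the
        b -= lr * p.mean(axis=0)                      # general net mostly intact
    return W, b

The small learning rate and few epochs are the whole trick: enough movement to fit the new writer, not enough to forget everyone else.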
>I imagine that to release a product that works from the get-go you
>need a large population's information.
Yep.
>Recruit the Chinese Newton Users Group?
Is there such a thing? In any event, you'd probably need more than that. Several hundred people at least, to be very general.
>I would have thought that some clever Japanese or Chinese would have
>tackled this already.
Oh, they have. There are Kanji recognizers. There are Hiragana and Katakana recognizers, and special translators that convert those results to Kanji. Problem is, none of them works terribly well, or so I'm told.
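
The two-stage idea itself is simple: recognize the phonetic kana first, then convert the kana string to Kanji with a dictionary and some context model, much as an input-method editor does. A toy sketch of the conversion stage (this little dictionary is invented; real converters use large lexica and a language model to rank candidates):

# Toy kana-to-Kanji conversion by dictionary lookup.
KANA_TO_KANJI = {
    "にほん": ["日本", "二本"],   # ambiguous without context
    "かんじ": ["漢字", "感じ"],
}

def convert(kana: str) -> str:
    candidates = KANA_TO_KANJI.get(kana)
    # Fall back to the raw kana when there's no entry; a real
    # converter would score candidates in context instead of
    # blindly taking the first one.
    return candidates[0] if candidates else kana

print(convert("にほん"))   # -> 日本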
There was an Apple Kanji recognizer that actually got ported to the Newton, but I don't think it ever saw the light of day. It was called many things, including Li Bai, and even Bubba (never did know why). It was done in conjunction with a Singapore office, and used HMM (Hidden Markov Model) technology at the core, rather than a neural net. Supposedly it worked quite well, compared to anything else that was available. If the Newton had persisted a bit longer, I'm reasonably sure it would have shipped on Japanese and possibly even Chinese versions. But not in this timeline, I'm afraid.
- larryy