Re: [NTLK] Ripping whole sites into Newton books.

From: Steve Weyer (weyer_at_kagi.com)
Date: Sat Apr 13 2002 - 09:47:38 EDT


> Date: Sat, 13 Apr 2002 18:37:49 +1000
> From: David Brunacci <dbrunacc_at_utas.edu.au>
>
> I was just wondering if there is functionality in Newtscape to download a
> webpage, with one or two levels of recursive links, and save it as a
> Newton book (with the links intact) to view later.

You can currently download a page and several levels of links via the
Schedule option; later, via the Process option, you can save each of these
as a book.

(There is an experimental version that will automatically save each as a
book package without the separate Process step, but there was a glitch where
login wasn't occurring automatically, which I'll have to track down before
releasing it.)

There is also a capability (described in a recent message here) for
combining multiple HTML docs or existing books into a single book package.
This involves some separate setup of a "master document" using LINK, and
while it does preserve and fix up interdocument links, it works only in
limited cases.
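
For illustration only, here is a minimal sketch of generating such a master
document, assuming the LINK usage follows the standard HTML 4 conventions
(rel="chapter" and so on); the exact attributes Newt's Cape expects are
covered in that earlier message, and the filenames here are placeholders:

  # A minimal sketch: emit a master document whose LINK elements list
  # the chapter docs to combine. The rel value and filenames are
  # assumptions, not Newt's Cape's documented format.
  chapters = ["intro.htm", "part1.htm", "part2.htm"]

  lines = ["<html><head>", "<title>Combined Book</title>"]
  for href in chapters:
      lines.append('<link rel="chapter" href="%s">' % href)
  lines.append("</head><body></body></html>")

  with open("master.htm", "w") as f:
      f.write("\n".join(lines))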

> Is there a Newton script which does this?
>
> If I can't use Newtscape, is there something else available, or do I need to
> rip on my Mac and then transfer the HTML to the Newt?

If you can combine what you want on the desktop into a single HTML doc, you
could feed that to Newt's Cape; or, if it's a .doc file, to Press.
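
Purely as a sketch of that desktop step, the following Python script grabs a
starting page plus a couple of levels of links and stitches everything into
one HTML file, rewriting links between the fetched pages into intra-document
anchors so they stay intact. The start URL, depth, and output filename are
placeholders, and the parsing is deliberately naive (a regex rather than a
real HTML parser):

  import re
  import urllib.request
  from urllib.parse import urljoin, urldefrag

  START_URL = "http://example.com/index.html"   # placeholder
  MAX_DEPTH = 2                                 # levels of links to follow

  pages = {}    # url -> raw html
  order = []    # urls in fetch order

  def fetch(url, depth):
      url, _ = urldefrag(url)
      if url in pages or depth > MAX_DEPTH:
          return
      try:
          with urllib.request.urlopen(url, timeout=10) as resp:
              html = resp.read().decode("latin-1", errors="replace")
      except OSError:
          return   # skip unreachable pages and unsupported schemes
      pages[url] = html
      order.append(url)
      # follow plain href links; images, forms, and scripts are ignored
      for href in re.findall(r'href\s*=\s*["\']([^"\']+)["\']', html, re.I):
          fetch(urljoin(url, href), depth + 1)

  fetch(START_URL, 0)

  # give each fetched page an anchor name, then rewrite links between
  # fetched pages to point at those anchors so they survive the merge
  anchors = {url: "page%d" % i for i, url in enumerate(order)}

  def rewrite_href(mo, base):
      target, _ = urldefrag(urljoin(base, mo.group(1)))
      if target in anchors:
          return 'href="#%s"' % anchors[target]
      return mo.group(0)   # leave external links alone

  parts = ["<html><body>"]
  for url in order:
      m = re.search(r"<body[^>]*>(.*)</body>", pages[url], re.I | re.S)
      body = m.group(1) if m else pages[url]
      body = re.sub(r'href\s*=\s*["\']([^"\']+)["\']',
                    lambda mo, base=url: rewrite_href(mo, base),
                    body, flags=re.I)
      parts.append('<a name="%s"></a>' % anchors[url])
      parts.append(body)
  parts.append("</body></html>")

  with open("ripped.html", "w", encoding="latin-1", errors="replace") as f:
      f.write("\n".join(parts))

The depth limit mirrors the "one or two levels" in the question; raising
MAX_DEPTH makes the page count grow quickly, so keep it small.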

-- 
Steve
  weyer_at_kagi.com
Newton apps/tools: Newt's Cape, newtVNC, NewtDevEnv, Sloup, Crypto,...
  http://www.kagi.com/weyer/
