[rescue] Mounting and Dumping
Sheldon T. Hall
shel at cmhcsys.com
Wed Jan 14 18:32:21 CST 2004
Jonathan C. Patschke writes ...
> On Wed, 14 Jan 2004, Sheldon T. Hall wrote:
> > In 1983 I wrote a backup program for CP/M, in BASIC, that would back up
> > any size of hard drive, to any number of floppy disks, and arrange the
> > files to fill the floppies as full as possible. If I could do that, why
> > can't the guys at [Sun|Microsoft|wherever] do something better twenty
> > years later?
> So, how did you solve the contention between multiple users ...
It was a single user machine.
> ... and processes
> of differing priority (possibly exceeding the backup
> tool's) ...
It was a single-tasking machine.
> reading from
> and writing to a journalized filesystem under CP/M?
It didn't have a journalized filesystem.
The program was written in BASIC.
This was also 20 years ago ... which was probably about the time dump was
written, come to think of it.
My point was not how clever _my_ program was, but how little progress Unix
has made in the backup arena in the past 20 years. It really ought to be
better by now.
> If you just want to copy files, tar works great.
I'd rather have "just" a copy of a file than nothing.
However, I've read that tar is not very good at detecting problems in its
output, and that any error on a tar tape renders the rest of the tape
unreadable. There also seem to be "issues" with path-name lengths (in Sun's
tar, at least) and with restoring to a directory other than the original.
I'd love to be disabused of these notions; I got them from O'Reilly's "Unix
Backup and Recovery," 1999 edition.
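One workaround for the restore-to-another-directory problem is to create the
archive with relative paths and simply cd before extracting; this avoids any
implementation-specific flags. A minimal sketch, with made-up paths (none of
these directories are from the original discussion):

```shell
# Hypothetical paths for illustration only.
mkdir -p /tmp/src /tmp/restore
echo "hello" > /tmp/src/notes.txt

# Archive with relative paths so the contents are relocatable:
( cd /tmp/src && tar cf /tmp/backup.tar . )

# Unpack into a different tree by changing directory first --
# portable across tar implementations, no GNU-specific options:
( cd /tmp/restore && tar xf /tmp/backup.tar )
```

Because the archive members are stored as `./notes.txt` rather than absolute
paths, they land wherever you happen to be standing when you extract.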
> If you want filesystem
> metadata, and you want to ensure integrity and consistency on at least a
> file-by-file basis (if not on a filesystem-wide basis), life gets harder.
> I dump online filesystems without a problem all the time. Sun
> doesn't -recommend- it, but it's really not a big deal if you run your
> backups during periods of low load.
Well, that's what I'm trying to do. I run ufsdump from cron at 3:00 AM, and
I've even gone through and re-done all the crontabs, to be sure no scheduled
activities take place during the hours the dump scripts get run. As far as
I can tell, the only files that would be changed at that hour are ones the
OS changes, like logs.
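A cron entry of that general shape might look like the sketch below; the dump
level, tape device, filesystem, and log path are all illustrative assumptions,
not the actual configuration being described:

```shell
# Hypothetical root crontab entry: level-0 ufsdump of /export/home at
# 3:00 AM every Sunday, updating /etc/dumpdates (u) and logging output.
0 3 * * 0 /usr/sbin/ufsdump 0uf /dev/rmt/0n /export/home >> /var/adm/dumplog 2>&1
```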
> Now, if you're backing up huge files that may change at any point in
> time (like, say, a database table file), you're SOL, which is why you
> typically use a SQL-dump utility or perform DB backups with the database
> offline.
Nah, nothing like that here. Just the regular OS stuff, and some user
files. I'm sure some of the user files are of the datafile+index variety,
and thus subject to synchronization failures if dumped out-of-phase, but
they aren't getting changed at that hour.