[rescue] Block size and the single DD - some test results

Joshua Boyd jdboyd at jdboyd.net
Thu Feb 12 10:44:28 CST 2004

On Thu, Feb 12, 2004 at 12:18:46AM -0500, Sheldon T. Hall wrote:

> Raw disk speeds
> ===============
> Executing dd to read 2048 64k blocks (134,217,728 bytes) from a raw disk
> partition and write them to /dev/null took ...
> 	rtfm		40 seconds
> 	blinky	20 seconds
> 	tandem	11 seconds
> Raw tape speed
> ==============
> Using the same dd command but directing the output directly to the
> appropriate local tape device:
> 	rtfm		1:35 - mostly streamed
> 	tandem	1:48 - can't tell about streaming, but it must not have

On RTFM, the DLT and the hard drive are on two different channels,
correct?  If not, that could explain the troubles, but I seem to recall
you saying that it was already set that way.

So dd if=somefile of=/dev/null bs=64k gives you a read speed of about
3.2 megs per second.  That isn't too bad.  It should be fast enough to
feed the DLT.  It would be a worthwhile exercise to see if different
block sizes can help.
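
If you want to try several block sizes in one go, a quick loop does it.
This is just a sketch; DISK is a placeholder for whatever your raw
partition actually is (it defaults to /dev/zero here purely so the loop
is harmless to dry-run):

```shell
# Sketch of a read-speed sweep over dd block sizes.
# DISK is an assumption -- point it at your raw partition; it falls
# back to /dev/zero so the loop can be tried safely anywhere.
DISK=${DISK:-/dev/zero}
for k in 32 64 128 256; do
    count=$((131072 / k))            # hold the total read at 128 megs
    echo "bs=${k}k (count=$count):"
    time dd if="$DISK" of=/dev/null bs="${k}k" count="$count"
done
```

Whichever bs wins on the read side is the natural candidate for ibs in
the disk-to-tape run.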

Going from disk to tape averages about 1.35 megs/second (128 megs in 95
seconds).

That you said it was mostly streamed makes me think that, with a bit of
tuning, you should be able to make it stream the whole time.  For some
reason I keep thinking you should give 128k, and maybe even 256k, a try
for ibs.  On my much faster Suns, 128k only gave me a 1-2% boost when
going to of=/dev/null, and 256k was actually slower; still, a 1-2% gain
would put you really close to 1.4 megs/second if it carried through when
going out to tape.

Also, for the local disk-to-tape test, you still might want to try
fiddling with the obs separately from the ibs.  The tape drive has got
to have a preferred block size, and if you aren't already using it,
switching to it might get you another few percent; at this point it
seems a few percent here and there is all you need to make it work
correctly.  Solstice Backup uses a tape block size of 32k.  The Solstice
documentation also mentions using conv=sync for doing speed tests.  I
don't know whether that makes it faster, but you might consider playing
with it.  Elsewhere, I've seen people recommend obs=32k as well.
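
A sketch of that experiment, with conv=sync thrown in; FILE and TAPE are
made-up placeholders (swap in a real file and your tape device, since
TAPE defaults to /dev/null here so this is safe to dry-run):

```shell
# Sketch: vary obs separately from ibs on a disk-to-tape style copy.
# FILE and TAPE are assumptions; TAPE defaults to /dev/null so the
# loop can be run without a drive attached.
FILE=${FILE:-/tmp/ddtest}
TAPE=${TAPE:-/dev/null}
dd if=/dev/zero of="$FILE" bs=64k count=256    # 16 meg scratch file
for obs in 16k 32k 64k; do
    echo "obs=$obs:"
    time dd if="$FILE" of="$TAPE" ibs=64k obs="$obs" conv=sync
done
rm -f "$FILE"
```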

BTW, you can benchmark just tape speed by doing something like:
dd if=/dev/zero of=$TAPE_DEVICE bs=$WHATEVER.  This will let you make
certain that you are getting enough bandwidth going to tape, and also
let you experiment more easily to find the best block size for the tape
drive.
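
A sketch of that tape-only test as a loop over candidate block sizes;
the TAPE path is an assumption (something like /dev/rmt/0n on Solaris),
and it defaults to /dev/null so the loop can be dry-run:

```shell
# Sketch: measure raw tape bandwidth with no disk in the path.
# TAPE is an assumption; /dev/null stands in for a real drive here.
TAPE=${TAPE:-/dev/null}
for k in 16 32 64 128; do
    count=$((65536 / k))             # hold the total write at 64 megs
    echo "bs=${k}k:"
    time dd if=/dev/zero of="$TAPE" bs="${k}k" count="$count"
done
```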

Also, there is a whole world of kernel configuration tweaks that could
be tried, and there may be ways to tune the file system as well.  There
is still a lot that can be done.

> Conclusion
> ==========
> As several have suggested, it looks like the SPARCclassic isn't fast enough
> to stream the DLT tape from its local filesystems, even with DD.

I don't know of any reason whatsoever that a MicroSparc shouldn't be
able to move 3 megs a second from one chain to another.  Sure, it isn't
the greatest chip in the world, but it should be plenty fast enough,
plus you should have DMA helping you.  It's not like you are trying to
stream 12 megs a second (where I could see you possibly having trouble).

Maybe I'm too stubborn about such things, but it seems you should be
able to make it work.

On the other hand, trading up to a faster Sun (SS20 or U1 in particular)
would probably mean it would "Just Work".

> All this was with a 10baseT network, and that looks like a limiting factor.
> I'll have to do some more experimenting to see if changing the networking to
> 100baseT will make enough difference with the remote backups, or if moving
> the backups to the Challenge will improve things.

Err, do you have a fast ethernet board for the Challenge?  Because the
Challenge doesn't have fast ethernet built in.
