[rescue] Small servers (was Re: WTT: 1.5G of PC2700 for 1G of PC100)
shannon at widomaker.com
Tue May 13 11:49:47 CDT 2008
On May 8, 2008, at 07:11 , Phil Stracchino wrote:
> Shannon Hendrix wrote:
>> One of the best performance charts I have ever seen is in
>> "Programming Pearls".
>> They show a TRS-80 and a DEC Alpha doing the same job, but the
>> TRS-80 code is better written.
>> The DEC is hundreds of times faster... until the problem size
>> increases to a point at which the DEC would take 400 years to do
>> what the TRS-80 does in fifteen minutes.
> My first reaction is that the code being run on the Alpha had to
> have been frighteningly, perhaps even implausibly, bad to produce
> that vast of a throughput difference between hardware so hugely
> different in speed and capability while the problem size was still
> in a range the TRS-80 could handle at all. (And I suspect the
> problem was very carefully chosen.)
Actually, I've seen far worse cases in everyday work, so the example
was not in any way out of line.
When I worked for Bank of America, it was not uncommon for me to
rewrite a simple C data filter and have it run 20 times faster.
In SQL, I frequently found queries so badly written that I once got a
program to run well over 1000 times faster with relatively simple changes.
It's actually pretty easy to make serious algorithm mistakes.
I've found them in my own code, particularly when working with high
level tools where misuse is pretty easy.
I had a shell script which sorted photos from my camera into
directories and renamed the files by date and sequence.
After using it for 6 months, I made a change in how I used exiftags
and sped the script up by about a factor of ten.
All I did was change from a loop to a pipe, and it made worlds of
difference.
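The original script isn't shown here, so this is only a minimal sketch of that loop-to-pipe change; extracting a date prefix from file names stands in for the real exiftags work. The point is the process count: the loop forks one `cut` per file, while the pipe version forks one `cut` for the whole batch.

```shell
#!/bin/sh
# Hypothetical stand-in data: file names with a date prefix,
# as a camera-sorting script might see them.
files="20080501_001.jpg
20080501_002.jpg
20080502_001.jpg"

# Slow shape: one short-lived cut process forked per file.
slow() {
    for f in $files; do
        printf '%s\n' "$f" | cut -d_ -f1
    done
}

# Fast shape: a single cut process handles the entire batch.
fast() {
    printf '%s\n' $files | cut -d_ -f1
}
```

Both produce identical output; the speedup comes purely from paying the fork/exec cost once instead of once per file, which is exactly the kind of "simple" change that can swamp raw hardware speed.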
"Where some they sell their dreams for small desires."