[rescue] Mozilla Firefox
Charles Shannon Hendrix
shannon at widomaker.com
Thu Apr 22 16:10:00 CDT 2004
Thu, 22 Apr 2004 @ 00:35 -0400, Dave McGuire said:
> But a big issue isn't OO itself, but how people are *taught* OO
Right. OO itself is not bad. It's also not a big revelation. You can
do OO programming in assembler.
C++ and Java mostly just automate the handling of objects so that
it looks syntactically pretty. It's not doing very much that is
fundamentally different from OO code in languages like C. It just looks
uglier in C, and is harder to write.
Although... some OO techniques are pretty easy in C. In fact, with just
a few changes, C could do about 90% of what is worthwhile in C++.
> OO is *too* high-level for most applications to perform well when
> written by 90% of the worlds [educated but inexperienced] programmers,
> in my opinion.
True. The OO approach, IMHO, doesn't teach code-reuse with an awareness
of what it means to the computer. I think Scheme does a far better job
of that myself. You build structures that do tasks, and call them
from others, allowing you to build a program from the ground up rather
easily. However, it never hides what is going on from you.
The way OO is taught, the price of the code is often taken out of the
teaching entirely, so students never see what their abstractions cost.
Worse, I've seen professors try to alleviate this by putting O(n)
notation on top of object code, which is blindly stupid. O(n) isn't
always accurate even at the level of C. All bets are off on the highly
overloaded reality of Java or C++.
> The same problem exists with Perl...
I might not agree with you there. Consider:
Perl doesn't hide the price you pay for each feature it has any more
than the C library does. Perl is basically a byte-code compiler
interface to a set of C routines. Generally speaking, Perl does
what the Perl code says it does.
Implicit functions are probably where a lot of people kill performance.
Either by using them inefficiently, or by not using them when they are
the best choice. I'm not sure that's Perl's fault though, as it is particularly
well documented, and that includes its problems.
I think a bigger problem is that programmers are taught to use building
blocks, which is a good thing, but not be aware of the cost, which is a
bad thing. They don't differentiate between a low level construct and a
function, for example. Or in Perl, between a coded loop and an implicit
one. Take a look at this:
rows = snoop_query();
In C, what is the cost of that? For that matter, what is the cost of
the same call in assembler? You can't know unless you know what
"snoop_query" does.
Too many programmers are never taught to look.
All languages can hide things, but OO languages overload even what are
basic functions and operators in other languages, and a language like
Java has so many layers below even its lowest-level functions. You are
either lost trying to determine the real costs, or you are aware of
them and they are very high no matter what you do (Java).
> resources. "C sucks because it doesn't have associative arrays!" some
> of these kids scream. They Just Don't Get It.
I do wish embedding functions into C structs were a bit easier. I
could get about 98% of what I want from OO in C, with just a few small
changes to the language.
> However...Gnome in particular, I've been told several times, is
> written in C...not C++. I've not verified this due to lack of
> motivation, but I have to wonder, assuming it's true...how on Earth do
> they manage to get C (not C++) code to be that big and that slow?
A lot of it is data. Each application uses a lot of data. The memory
use is somewhat misleading because a great deal of each app's memory use
is shared among all of them. Also, Gnome programs are rather I/O heavy.
Also on that note, I wish memory tools on UNIX systems were a little
more sophisticated in terms of really telling you where your
memory is going.
For example, Linux tools show shared memory for each process, but do not
show how this memory is shared, and with whom, or even if it is truly
shared. You cannot assume that if you see this:
rss vsz shared
55 100 34
That 34 MB is shared. This application might well have loaded 34MB
of library code or data, and isn't sharing with any other process.
Or maybe it is sharing it with 300 of them.
There is a huge difference between the following possible scenarios:
- it loaded a shared library, but is the only process using it, which
means in reality nothing is being shared
- it shares 30MB with X, 4MB with some other program
- it shares 34MB with 1000 other processes
- it shared 8, 4, and 4MB chunks with 3 other processes, and 8MB
is shared but only it uses it
...and so on. You could sit here and come up with virtually unlimited
combinations.
shannon "AT" widomaker.com -- ["There are nowadays professors of
philosophy, but not philosophers." ]