[geeks] Solaris resiliency to crashing w/full root partition?
velociraptor at gmail.com
Wed Sep 28 17:52:52 CDT 2005
On 9/28/05, Jonathan C. Patschke <jp at celestrion.net> wrote:
> On Wed, 28 Sep 2005, velociraptor wrote:
> > Anyone have any references to Solaris's resiliency to crashing due to
> > a full root partition? Googling is not being all that helpful. Even
> > Sun marketing foo would be a start.
> No, but Solaris will go to hell if /var gets full. Logins won't
> complete. Maybe 9 and 10 are better, but I had that bite me all the
> time under 7 and 8.
Well, "going to hell" and crashing are two different things in my
mind. As in, running slowly, can't fork any processes, etc. vs.
"look, it's at the ok prompt". Logins are (relatively) irrelevant
here--these are service-type machines.
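For what it's worth, a quick way to watch /var before it bites you--a
minimal sketch, not anyone's production monitoring; the 90% threshold
is purely illustrative:

```shell
# Warn when /var crosses an (illustrative) 90% threshold.
# Assumes a df whose 5th column is the capacity percentage, as on
# Solaris "df -k" and GNU df alike; long device names that wrap the
# first output line would throw the column count off.
usage=$(df -k /var | awk 'NR==2 { gsub(/%/, "", $5); print $5 }')
if [ "$usage" -ge 90 ]; then
    echo "WARNING: /var is ${usage}% full"
fi
```

Drop something like that in cron and logins quietly failing at 100%
stops being a surprise.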
<knocks wood> If the stupid sysadmin had bothered to educate
himself marginally more about Veritas Cluster (ex-coworker, not
missing that guy) the last time we had swap/memory exhausted
due to the evil that is Enterprise Tripwire, we wouldn't have had
an outage from that, even.
>On Wed, 28 Sep 2005, Michael Horton wrote:
>> sun recommends one large root partition and a swap partition!
and Jonathan's comment:
>Indeed. If one ever needed proof that Sun doesn't know how to run their
>own operating system....
If I'm stuck using 2-4GB disks, I would agree, but with 36+GB disks,
where root is 10GB and /var holds neither web logs nor any mail spools
of note, I just don't think it's necessary to split out /var. With 9 or
10, I'd say, "it depends".
With larger disks, you run into partition-use constraints if you are
using Solaris LVM (partition-based LVM, not disk-based like Veritas)
b/c you have a limit of 6 usable partitions: 8 slices, minus the
whole-disk "backup" slice (s2), minus one slice for your metadbs.
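The slice math, spelled out (the numbers come from the standard
Solaris VTOC label, nothing exotic):

```shell
# A standard Solaris VTOC label gives you 8 slices, s0 through s7.
total=8
backup=1    # s2 always maps the whole disk and can't hold data
metadb=1    # one slice set aside for the SVM state database replicas
echo "usable slices: $((total - backup - metadb))"
# prints: usable slices: 6
```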
As in, if you are going to try to hand the disks off to the node that
is working, you don't initiate the handoff from the node that is out of

Enterprise Tripwire and Oracle do not play well with each other.