[SunHELP] Solaris losing zpool when cloning a drive (and a fix)
shannon at widomaker.com
Thu Apr 24 10:11:03 CDT 2008
Yesterday I replaced the 80GB OS drive in my Solaris X86 server with a
new one. The new drive was basically an updated/better version of the
same model. I booted a live CD and used dd to clone the entire drive.
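For reference, a whole-disk clone from a live CD looks something like
this (the device names are hypothetical; verify yours with format(1M)
before running anything, since dd will happily overwrite the wrong disk):

```shell
# Hypothetical device names -- check with `format` first.
# On Solaris x86, p0 is the whole raw disk, including the partition
# table and the ZFS vdev labels at the start and end of each slice.
dd if=/dev/rdsk/c0d0p0 of=/dev/rdsk/c1d0p0 bs=1048576
```

A large block size (1 MB here) just speeds things up; dd copies the
bytes verbatim either way, ZFS labels and all.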
After putting the new drive in place I rebooted, and Solaris came up,
but for some reason the zpool on my OS drive was marked offline due to
a drive fault.
My OS drive has a 30GB root, but /opt, /local, /home, etc. are in a
zpool named sys.
zpool reported the root drive c0d0 as FAULTED.
The fix for this was to run two commands:
zpool export sys
zpool import -f sys
Does anyone know why cloning a drive would cause zpool to show it as
faulted after rebooting?
Why did the export/import fix it?
I was thinking that maybe the drive has a UUID of some kind that zpool
uses and it was changed, but there was nothing in the output to
indicate that that was the problem. In fact, zpool gave absolutely no
useful or accurate information about what the problem was.
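If you want to test the UUID theory, one way (a sketch, assuming the
ZFS tools are available and that the sys pool lives on a slice like
c0d0s7 -- that slice name is a guess, substitute your own) is to dump
the vdev labels, which store the pool and device GUIDs along with the
device path ZFS last saw:

```shell
# Hypothetical slice name -- use whichever slice holds the "sys" pool.
# ZFS keeps four copies of the label on each vdev; -l prints them.
zdb -l /dev/rdsk/c0d0s7 | egrep 'guid|path'
```

If the path or devid recorded in the label no longer matches what the
kernel now reports for the disk, that could plausibly explain the
FAULTED state, and an export/import (which forces a device rescan)
would clear it.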
No harm was done, I just don't understand why it happened and I'd like
to know if there was some procedure I was supposed to follow that
would avoid it.
"Where some they sell their dreams for small desires."