This past week, I needed to install OpenSolaris on a host using a USB thumb drive. To create a bootable USB drive, I first needed to grab the distribution constructor tools via Mercurial (I ran these commands from an OpenSolaris host):
$ pkg install SUNWmercurial
$ hg clone ssh://email@example.com/hg/caiman/slim_source
The caiman slim source Mercurial repository contains a script named usbcopy, which you can use to copy a USB image from the genunix site to your USB drive:
$ usbcopy /nfs/images/osol-0811.usb
Found the following USB devices:
0: /dev/rdsk/c9t0d0p0 7.6 GB Patriot Memory PMAP
Enter the number of your choice: 0
WARNING: All data on your USB storage will be lost.
Are you sure you want to install to
Patriot Memory PMAP, 7600 MB at /dev/rdsk/c9t0d0p0 ? (y/n) y
Copying and verifying image to USB device
Finished 824 MB in 336 seconds (2.4MB/s)
0 block(s) re-written due to verification failure
Installing grub to USB device /dev/rdsk/c9t0d0s0
Completed copy to USB
After the image was copied, I plugged the drive into my machine and it booted to the OpenSolaris desktop without issue. From there I ran the installer, and everything appears to be working flawlessly. Nice.
I attempted to upgrade my ISC DHCP installation to dhcp-4.1.1b1 this past weekend, and ran into the following configure error:
$ ./configure --prefix=/bits/software/dhcp-4.1.1b1
checking for a BSD-compatible install... /bin/ginstall -c
checking whether build environment is sane... yes
checking for gawk... gawk
checking whether make sets $(MAKE)... no
checking for gcc... gcc
checking for C compiler default output file name... a.out
checking whether the C compiler works... yes
checking whether we are cross compiling... no
checking for suffix of executables...
checking for suffix of object files... o
checking whether we are using the GNU C compiler... yes
checking whether gcc accepts -g... yes
checking for gcc option to accept ISO C89... unsupported
checking for style of include used by make... none
checking dependency style of gcc... none
checking how to run the C preprocessor... /lib/cpp
configure: error: C preprocessor "/lib/cpp" fails sanity check
See `config.log' for more details.
The config.log contained a number of errors similar to the following:
conftest.c:11:19: stdio.h: No such file or directory
conftest.c:12:23: sys/types.h: No such file or directory
conftest.c:13:22: sys/stat.h: No such file or directory
These errors are due to missing system headers. I reviewed the list of packages that were installed, and sure enough SUNWhea (the package that contains the system header files) was missing. I installed this package along with a number of others:
$ pkgadd -d . SUNWhea SUNWbinutils SUNWarc SUNWgcc SUNWgccruntime
$ pkgadd -d . SUNWlibsigsegv SUNWgm4 SUNWgnu-automake-110 SUNWaconf
And everything compiled and installed perfectly.
Being able to boot a machine from a SAN isn't exactly a new concept. Instead of having local hard drives in thousands of machines, each machine logs into the fabric and boots the O/S from a LUN exported over Fibre Channel on the SAN. This requires a little bit of configuration on the Fibre Channel HBA, but it has the advantage that you no longer have to deal with local disk failures.
iSCSI boot was incorporated into OpenSolaris Nevada build 104 on x86 platforms.
If you have a capable NIC, you can achieve the same "boot from SAN" results as Fibre Channel, but without the cost of an expensive fibre SAN network. Think of the possibilities here:
Implement a new Amber Road Sun Storage 7000 series NAS device like the 7410 exporting hundreds of iSCSI targets, one for each of your machines; implement ZFS volumes on the backend; and leverage ZFS snapshots, clones, etc. with the iSCSI root file system volumes for your machines. Even if your "client" machine mounts a UFS root filesystem over iSCSI, the backend would be a ZFS volume.
Want to provision 1000 machines in a day? Build one box, ZFS snapshot/clone the volume, and create 1000 iSCSI targets. Now the only work is configuring the OpenSolaris iSNS server with initiator/target pairings. Instant O/S provisioning from a centrally managed location.
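The snapshot/clone provisioning flow could be sketched roughly like this. This is a dry run that only echoes the zfs and iscsitadm commands it would issue, and the pool, volume, and target names are made up:

```shell
# Hypothetical provisioning sketch: clone one golden ZFS volume per host
# and create a matching iSCSI target for it. This dry run only prints
# the commands; drop the echos to run them for real on an OpenSolaris box.
GOLD=tank/gold-os             # golden image volume (made-up name)
for i in 1 2 3; do            # scale the loop up to 1000 for real use
  host=$(printf "host%03d" "$i")
  echo "zfs clone ${GOLD}@install tank/${host}"
  echo "iscsitadm create target -b /dev/zvol/rdsk/tank/${host} ${host}"
done
```

Since ZFS clones are copy-on-write, the thousand volumes initially consume almost no additional space beyond the golden image.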
Implement two Sun Storage 7410s with clustering, and now you have an HA solution for all the O/Ses running in your datacenter.
This is some pretty cool technology. Now you have only one place to replace failed disks, instead of thousands of machines, at a fraction of what it would cost to implement this on a Fibre Channel fabric. Once this technology works out the kinks and becomes stable, it could be the future of server provisioning and management.