Speeding up Solaris zone creation with cloning

One really neat feature that was recently added to Solaris is the ability to clone zones. Cloning allows you to create a new zone from an existing zone, which reduces provisioning time and ensures that all zones are created consistently (e.g., all zones that will act as web servers can be cloned from a zone that was set up to act as a web server). If the zone you are cloning from resides on a UFS or VxFS file system, the clone operation will copy all of the files from the source zone to the new zone. If the zone that you are cloning from lives on a ZFS file system, the clone operation will snapshot the source file system and create a writable ZFS clone of that snapshot, which is used as the backing store for the new zone. When zones are cloned with the ZFS method, the new zone is created almost instantaneously (it typically takes about half a second), and little to no additional storage is required to initially provision the zone.

To show you how this works, I created a Linux branded zone named “centostemplate.” The zone centostemplate lives on its own ZFS file system, as you can see from the output of the “zoneadm” and “zfs” utilities:

$ /usr/sbin/zoneadm list -vc

  ID NAME             STATUS         PATH                           BRAND     
   0 global           running        /                              native    
   - centostemplate   installed      /app/zones/centostemplate      lx        

$ zfs list

NAME                       USED  AVAIL  REFER  MOUNTPOINT
app                       3.05G  8.67G  27.5K  /app
app/zones                 1.05G  8.67G  26.5K  /app/zones
app/zones/centostemplate  1.05G  8.67G  1.05G  /app/zones/centostemplate

To create a new zone named centos_37_1 from a clone of the zone named centostemplate, the zone centos_37_1 first needs to be configured in the zone configuration shell:

$ zonecfg -z centos_37_1

centos_37_1: No such zone configured
Use 'create' to begin configuring a new zone.
zonecfg:centos_37_1> create
zonecfg:centos_37_1> set zonepath=/app/zones/centos_37_1
zonecfg:centos_37_1> commit
zonecfg:centos_37_1> exit

Once the new zone is configured the way you want it, the zoneadm utility can be run with the “-z” option, the name of the new zone, the “clone” command, and the name of the zone you would like to use as the source of the clone operation (in the following example, the centostemplate zone is the source for the clone operation):

$ timex zoneadm -z centos_37_1 clone centostemplate

Cloning snapshot app/zones/centostemplate@SUNWzone1
Instead of copying, a ZFS clone has been created for this zone.

real          0.63
user          0.07
sys           0.10

As you can see from the output, it took just over half a second to create a brand spanking new zone! Nice! If I run the zoneadm and zfs utilities again, you can see that a writable clone of a snapshot was created to act as the backing store for the zone named centos_37_1:

$ /usr/sbin/zoneadm list -vc

  ID NAME             STATUS         PATH                           BRAND     
   0 global           running        /                              native    
   - centostemplate   installed      /app/zones/centostemplate      lx        
   - centos_37_1      installed      /app/zones/centos_37_1         lx        

$ zfs list

NAME                                 USED  AVAIL  REFER  MOUNTPOINT
app                                 3.05G  8.67G  27.5K  /app
app/zones                           1.05G  8.67G  27.5K  /app/zones
app/zones/centos_37_1                   0  8.67G  1.05G  /app/zones/centos_37_1
app/zones/centostemplate            1.05G  8.67G  1.05G  /app/zones/centostemplate
app/zones/centostemplate@SUNWzone1  87.5K      -  1.05G  -
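
The whole clone workflow (configure, clone, boot) is easy to wrap in a small helper. This is just a sketch under the assumptions above (an installed template zone whose zonepath lives on ZFS); the function name and arguments are my own, not part of Solaris:

```shell
#!/bin/sh
# clone_zone: configure a new zone, clone it from a template zone,
# and boot it. Sketch only -- assumes the template zone is installed
# and its zonepath lives on a ZFS file system.
clone_zone() {
    newzone=$1 template=$2 zonepath=$3
    zonecfg -z "$newzone" "create; set zonepath=$zonepath; commit" &&
    zoneadm -z "$newzone" clone "$template" &&
    zoneadm -z "$newzone" boot
}

# Example (hypothetical zone name):
# clone_zone centos_37_2 centostemplate /app/zones/centos_37_2
```

zonecfg accepts its subcommands as a quoted, semicolon-separated string, which is what makes the one-line configure step possible.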

This is some amazingly cool stuff, and Sun keeps adding more and more nifty stuff to Solaris!

Renaming a Solaris zone

While reviewing the list of zones on one of my Solaris hosts, I noticed that I accidentally assigned the name “cone3” to a zone:

$ zoneadm list -vc

  ID NAME             STATUS         PATH                           BRAND
   0 global           running        /                              native
  11 zone1            running        /zones/zone1                   native
   - centos           installed      /zones/centos                  lx
   - template         installed      /zones/template                native
   - cone3            configured     /zones/zone3                   native
   - centostest       configured     /zones/centostest              lx

While the name cone3 sounds interesting (for some reason cone3 reminds me of the Coneheads movie), I originally intended for the zone to be called “zone3.” To assign the correct name to the zone, I fired up the zone configuration utility (zonecfg), and set the ‘zonename’ property to the correct value:

$ zonecfg -z cone3
zonecfg:cone3> set zonename=zone3
zonecfg:zone3> commit
zonecfg:zone3> exit

$ zoneadm list -vc

  ID NAME             STATUS         PATH                           BRAND
   0 global           running        /                              native
  11 zone1            running        /zones/zone1                   native
   - centos           installed      /zones/centos                  lx
   - template         installed      /zones/template                native
   - zone3            configured     /zones/zone3                   native
   - centostest       configured     /zones/centostest              lx

Much better!
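
The same rename can also be done non-interactively, since zonecfg accepts its subcommands as a quoted string on the command line. A minimal sketch (the helper name is my own):

```shell
# rename_zone: non-interactive version of the zonecfg session above.
# Sketch only -- the zone should not be running when it is renamed.
rename_zone() {
    zonecfg -z "$1" "set zonename=$2; commit"
}

# Example:
# rename_zone cone3 zone3
```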

Moving Solaris zones

While messing around with zones and ZFS last weekend, I decided to create a new ZFS file system called /zones to store my zones. The /zones file system would be backed by a striped ZFS pool, allowing me to take advantage of ZFS’s data checksumming, compression, snapshot and zone cloning features. Since I already had one zone installed in /zones, I needed to migrate the zone to another location while I set up the new ZFS file system. There are numerous ways to do this, but I decided to use the zoneadm “move” option since it was designed for this purpose.

To begin the migration, I ran the zoneadm utility to halt the zone I wanted to migrate:

$ zoneadm -z centos halt

Once the zone was halted, I executed zoneadm with the “move” option and the temporary location where I wanted the zone moved:

$ zoneadm -z centos move /centos

Moving across file systems; copying zonepath /zones/centos...
Cleaning up zonepath /zones/centos...

Now that the zone was in a safe temporary place, I ran the zpool utility to create a 2 disk striped pool (the underlying storage is RAID protected, so there is no need to protect the pool a second time):

$ zpool create zones c1d0 c1d1

$ zpool status -v

  pool: zones
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        zones       ONLINE       0     0     0
          c1d0      ONLINE       0     0     0
          c1d1      ONLINE       0     0     0

errors: No known data errors

Now that I had a file system to store all of my zones, I once again used the zoneadm “move” option to move the zone back to its rightful home:

$ zoneadm -z centos move /zones/centos

cannot create ZFS dataset zones/centos: 'sharenfs' must be a string
Moving across file systems; copying zonepath /centos...
Cleaning up zonepath /centos...

I am a big fan of zones, and they are an ideal solution for server consolidation projects!
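
The halt-and-move dance above can be captured in a tiny helper as well. A sketch (the function name is mine); zoneadm refuses to move a running zone, hence the halt first:

```shell
# move_zone: halt a zone and move its zonepath to a new location.
# Sketch only -- zoneadm copies the files when the new location is
# on a different file system.
move_zone() {
    zone=$1 newpath=$2
    zoneadm -z "$zone" halt &&
    zoneadm -z "$zone" move "$newpath"
}

# Example:
# move_zone centos /zones/centos
```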

Running Linux applications in Solaris Linux branded zones

While playing around with the latest version of Nevada this week, I decided to see how well Linux branded zones work. In case you’re not following the Sun development efforts, Linux branded zones allow you to run Linux ELF executables unmodified on Solaris hosts. This is pretty interesting, and I definitely wanted to take this technology for a test drive. After reading through the documentation in the brandz community, I BFU’ed my Nevada machine to the latest nightly build, and installed the packages listed on the brandz download page. Since brandz currently only supports CentOS 3.0 – 3.7 and the Linux 2.4 kernel series, I first had to download the three CentOS 3.7 iso images (branded zones currently don’t support CentOS 3.8 without some hacking):

$ cd /home/matty/CentOS

$ wget http://www.gtlib.gatech.edu/pub/centos/3.7/isos/i386/CentOS-3.7-i386-bin1of3.iso

$ wget http://www.gtlib.gatech.edu/pub/centos/3.7/isos/i386/CentOS-3.7-i386-bin2of3.iso

$ wget http://www.gtlib.gatech.edu/pub/centos/3.7/isos/i386/CentOS-3.7-i386-bin3of3.iso

After I retrieved the ISO images, I needed to create a branded zone. Creating Linux branded zones is a piece of cake, and is accomplished by running the zonecfg utility with the “-z” option and a name to assign to your zone, and then specifying one or more parameters inside the zone configuration shell. Here is the configuration I used with my test zone:

$ zonecfg -z centostest

centostest: No such zone configured
Use 'create' to begin configuring a new zone.
zonecfg:centostest> create -t SUNWlx
zonecfg:centostest> add net
zonecfg:centostest:net> set physical=ni0
zonecfg:centostest:net> set address=192.168.1.25
zonecfg:centostest:net> end
zonecfg:centostest> set zonepath=/zones/centostest
zonecfg:centostest> set autoboot=true
zonecfg:centostest> verify
zonecfg:centostest> commit
zonecfg:centostest> exit

This zone configuration is pretty basic. It contains one network interface (when you boot the zone, a virtual interface is configured on that interface with the address passed to the address attribute), a location to store the zone data, and it is configured to automatically boot when the system is bootstrapped. Next I needed to install the CentOS binaries in the zone. To install the CentOS 3.7 binaries in the new zone I created, I ran the zoneadm utility with the ‘install’ option, and passed the directory with the CentOS ISO images as an argument:

$ zoneadm -z centostest install -v -d /home/matty/CentOS

Verbose output mode enabled.
Installing zone "centostest" at root "/zones/centostest"
  Attempting ISO-based install from directory:
    "/home/matty/CentOS"
Checking possible ISO
  "/home/matty/CentOS/CentOS-3.7-i386-bin1of3.iso"...
    added as lofi device "/dev/lofi/1"
Attempting mount of device "/dev/lofi/1"
     on directory "/tmp/lxisos/iso.1"... succeeded.
Checking possible ISO
  "/home/matty/CentOS/CentOS-3.7-i386-bin2of3.iso"...
    added as lofi device "/dev/lofi/2"
Attempting mount of device "/dev/lofi/2"
     on directory "/tmp/lxisos/iso.2"... succeeded.
Checking possible ISO
  "/home/matty/CentOS/CentOS-3.7-i386-bin3of3.iso"...
    added as lofi device "/dev/lofi/3"
Attempting mount of device "/dev/lofi/3"
     on directory "/tmp/lxisos/iso.3"... succeeded.
Checking for distro "/usr/lib/brand/lx/distros/centos35.distro"...
Checking iso file mounted at "/tmp/lxisos/iso.1"...
read discinfo file "/tmp/lxisos/iso.1/.discinfo"
  ISO "/tmp/lxisos/iso.1": Serial "1144177644.47"
    Release "CentOS [Disc Set 1144177644.47]" Disc 1
Checking iso file mounted at "/tmp/lxisos/iso.2"...
read discinfo file "/tmp/lxisos/iso.2/.discinfo"
  ISO "/tmp/lxisos/iso.2": Serial "1144177644.47"
    Release "CentOS [Disc Set 1144177644.47]" Disc 2
Checking iso file mounted at "/tmp/lxisos/iso.3"...
read discinfo file "/tmp/lxisos/iso.3/.discinfo"
  ISO "/tmp/lxisos/iso.3": Serial "1144177644.47"
    Release "CentOS [Disc Set 1144177644.47]" Disc 3
Checking for distro "/usr/lib/brand/lx/distros/centos36.distro"...
Checking iso file mounted at "/tmp/lxisos/iso.1"...
read discinfo file "/tmp/lxisos/iso.1/.discinfo"
  ISO "/tmp/lxisos/iso.1": Serial "1144177644.47"
    Release "CentOS [Disc Set 1144177644.47]" Disc 1
Checking iso file mounted at "/tmp/lxisos/iso.2"...
read discinfo file "/tmp/lxisos/iso.2/.discinfo"
  ISO "/tmp/lxisos/iso.2": Serial "1144177644.47"
    Release "CentOS [Disc Set 1144177644.47]" Disc 2
Checking iso file mounted at "/tmp/lxisos/iso.3"...
read discinfo file "/tmp/lxisos/iso.3/.discinfo"
  ISO "/tmp/lxisos/iso.3": Serial "1144177644.47"
    Release "CentOS [Disc Set 1144177644.47]" Disc 3
Checking for distro "/usr/lib/brand/lx/distros/centos37.distro"...
Checking iso file mounted at "/tmp/lxisos/iso.1"...
read discinfo file "/tmp/lxisos/iso.1/.discinfo"
  ISO "/tmp/lxisos/iso.1": Serial "1144177644.47"
    Release "CentOS [Disc Set 1144177644.47]" Disc 1
Added ISO "/tmp/lxisos/iso.1" as disc 1
Checking iso file mounted at "/tmp/lxisos/iso.2"...
read discinfo file "/tmp/lxisos/iso.2/.discinfo"
  ISO "/tmp/lxisos/iso.2": Serial "1144177644.47"
    Release "CentOS [Disc Set 1144177644.47]" Disc 2
Added ISO "/tmp/lxisos/iso.2" as disc 2
Checking iso file mounted at "/tmp/lxisos/iso.3"...
read discinfo file "/tmp/lxisos/iso.3/.discinfo"
  ISO "/tmp/lxisos/iso.3": Serial "1144177644.47"
    Release "CentOS [Disc Set 1144177644.47]" Disc 3
Added ISO "/tmp/lxisos/iso.3" as disc 3
Installing distribution 'CentOS [Disc Set 1144177644.47]'...
Installing cluster 'desktop'
Installing zone miniroot.
Installing miniroot from ISO image 1 (of 3)
RPM source directory: "/tmp/lxisos/iso.1/RedHat/RPMS"
Attempting to expand 30 RPM names...
Installing RPM "SysVinit-2.85-4.4.i386.rpm" to miniroot at 
    "/zones/centostest"...
Installing RPM "basesystem-8.0-2.centos.0.noarch.rpm" to miniroot at 
    "/zones/centostest"...
Installing RPM "bash-2.05b-41.5.centos.0.i386.rpm" to miniroot at 
    "/zones/centostest"...
Installing RPM "beecrypt-3.0.1-0.20030630.i386.rpm" to miniroot at 
    "/zones/centostest"...
Installing RPM "bzip2-libs-1.0.2-11.EL3.4.i386.rpm" to miniroot at 
    "/zones/centostest"...
Installing RPM "coreutils-4.5.3-28.i386.rpm" to miniroot at 
    "/zones/centostest"...
Installing RPM "elfutils-0.94-1.i386.rpm" to miniroot at 
    "/zones/centostest"...
Installing RPM "elfutils-libelf-0.94-1.i386.rpm" to miniroot at 
    "/zones/centostest"...
Installing RPM "filesystem-2.2.1-3.centos.1.i386.rpm" to miniroot at 
    "/zones/centostest"...
Installing RPM "glibc-2.3.2-95.39.i586.rpm" to miniroot at 
    "/zones/centostest"...
Installing RPM "glibc-common-2.3.2-95.39.i386.rpm" to miniroot at 
    "/zones/centostest"...
Installing RPM "gpm-1.19.3-27.2.i386.rpm" to miniroot at 
    "/zones/centostest"...
Installing RPM "initscripts-7.31.30.EL-1.centos.1.i386.rpm" to miniroot at 
    "/zones/centostest"...
Installing RPM "iptables-1.2.8-12.3.i386.rpm" to miniroot at 
    "/zones/centostest"...
Installing RPM "iptables-ipv6-1.2.8-12.3.i386.rpm" to miniroot at 
    "/zones/centostest"...
Installing RPM "kernel-utils-2.4-8.37.14.i386.rpm" to miniroot at 
    "/zones/centostest"...
Installing RPM "laus-libs-0.1-70RHEL3.i386.rpm" to miniroot at 
    "/zones/centostest"...
Installing RPM "libacl-2.2.3-1.i386.rpm" to miniroot at 
    "/zones/centostest"...
Installing RPM "libattr-2.2.0-1.i386.rpm" to miniroot at 
    "/zones/centostest"...
Installing RPM "libgcc-3.2.3-54.i386.rpm" to miniroot at 
    "/zones/centostest"...
Installing RPM "libtermcap-2.0.8-35.i386.rpm" to miniroot at 
    "/zones/centostest"...
Installing RPM "ncurses-5.3-9.4.i386.rpm" to miniroot at 
    "/zones/centostest"...
Installing RPM "pam-0.75-67.i386.rpm" to miniroot at 
    "/zones/centostest"...
Installing RPM "popt-1.8.2-24_nonptl.i386.rpm" to miniroot at 
    "/zones/centostest"...
Installing RPM "rpm-4.2.3-24_nonptl.i386.rpm" to miniroot at 
    "/zones/centostest"...
Installing RPM "rpm-libs-4.2.3-24_nonptl.i386.rpm" to miniroot at 
    "/zones/centostest"...
Installing RPM "setup-2.5.27-1.noarch.rpm" to miniroot at 
    "/zones/centostest"...
Installing RPM "termcap-11.0.1-17.1.noarch.rpm" to miniroot at 
    "/zones/centostest"...
Installing RPM "zlib-1.1.4-8.1.i386.rpm" to miniroot at 
    "/zones/centostest"...
Installing RPM "centos-release-3-7.1.i386.rpm" to miniroot at 
    "/zones/centostest"...
Setting up the initial lx brand environment.
System configuration modifications complete!
Duplicating miniroot; this may take a few minutes...

Booting zone miniroot...
Miniroot zone setup complete.

Installing zone 'centostest' from ISO image 1.
RPM source directory: "/zones/centostest/root/iso/RedHat/RPMS"
Attempting to expand 667 RPM names...
Installing 433 RPM packages; this may take several minutes...

Preparing...                ##################################################
libgcc                      ##################################################
setup                       ##################################################
filesystem                  ##################################################
hwdata                      ##################################################
redhat-menus                ##################################################
mailcap                     ##################################################
XFree86-libs-data           ##################################################
basesystem                  ##################################################
gnome-mime-data             ##################################################

[.....]

After the brandz installer finished installing the CentOS 3.7 RPMs, I used the zoneadm ‘boot’ option to start the zone:

$ zoneadm -z centostest boot

To view the console output while the zone was booting, I immediately fired up the zlogin utility to console into the new Linux branded zone, and ran a few commands to see what the environment looked like after the zone was booted:

$ zlogin -C centostest

[Connected to zone 'centostest' console]  [  OK  ]
Activating swap partitions:  [  OK  ]
Checking filesystems [  OK  ]
Mounting local filesystems:  [  OK  ]
Enabling swap space:  [  OK  ]
modprobe: Can't open dependencies file /lib/modules/2.4.21/modules.dep (No such file or directory)
INIT: Entering runlevel: 3
Entering non-interactive startup
Starting sysstat:  [  OK  ]
Starting system logger: [  OK  ]
Starting kernel logger: [  OK  ]
Starting automount: No Mountpoints Defined[  OK  ]
Starting cups: [  OK  ]
Starting sshd:[  OK  ]
Starting crond: [  OK  ]
Starting atd: [  OK  ]
Rotating KDC list [  OK  ]


CentOS release 3.7 (Final)
Kernel 2.4.21 on an i686

centostest login: root

$ uname -a

Linux centos 2.4.21 BrandZ fake linux i686 i686 i386 GNU/Linux

$ cat /proc/cpuinfo

processor       : 0
vendor_id       : GenuineIntel
cpu family      : 6
model           : 6
model name      : Intel Celeron(r)
stepping        : 5
cpu MHz         : 1662.136
cache size      : 2048 KB
fdiv_bug        : no
hlt_bug         : no
coma_bug        : no
fpu             : yes
fpu_exception   : yes
cpuid level     : 10
flags           : fpu pse tsc msr mce cx8 sep mtrr pge cmov mmx fxsr sse sse2 ss

Yum works swell in a branded zone, and most of the tools you typically use work out of the box. Linux branded zones are wicked cool, and I can see tons of uses for them. Some folks are dead set on running Linux instead of Solaris, which means they can’t take advantage of things like ZFS, FMA and DTrace. If you need to better understand your application and the way it interacts with the system, or if you want to take advantage of the stability the Solaris kernel brings to production systems, you can fire up a branded zone and run your application transparently on a Solaris system. You can also easily transport your applications between a CentOS server and a branded zone, use DTrace to profile the application, and then take any performance wins back to your Linux server. Who can argue with that? :)

Exporting Solaris zone configurations

I have been using Solaris 10 zone technology for the past 4 – 5 months, and just recently came across the zonecfg(1M) “export” option. This option allows you to export the configuration from a specific zone, which can be used to recreate zones, or as a template when adding additional zones (with some adjustments of course). The following example prints the zone configuration for a zone called “irc”:

$ zonecfg -z irc export

create -b
set zonepath=/export/home/zone_irc
set autoboot=false
add net
set address=192.168.1.4
set physical=hme0
end

This is super useful, and can make creating thousands of zones a snap!
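
Here is one way to put the export output to work: dump an existing zone’s configuration to a file and feed it to zonecfg’s “-f” option when creating a new zone. A sketch (the helper name is mine); in practice you would edit the dump first to change the zonepath and IP address:

```shell
# copy_zone_config: use one zone's exported configuration as the
# starting point for another zone. Sketch only -- edit the dumped
# file (zonepath, address, etc.) before importing it for real.
copy_zone_config() {
    src=$1 dst=$2 cfg=/tmp/$1.cfg
    zonecfg -z "$src" export > "$cfg" &&
    zonecfg -z "$dst" -f "$cfg"
}

# Example:
# copy_zone_config irc irc2
```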

Running commands across zones

While reading through Brendan Gregg’s Solaris zones tutorial, I ran across his zonerun script. This nifty little script allows you to run a command across several Solaris zones (the script uses zlogin to accomplish the task):

$ zonerun "uname -a; echo"

SunOS irc 5.10 Generic_118822-11 sun4u sparc SUNW,Ultra-5_10

SunOS sunonews 5.10 Generic_118822-11 sun4u sparc SUNW,Ultra-5_10

$ zonerun "uptime ; echo"

12:08pm up 1 day(s), 10:55, 1 user, load average: 0.59, 0.41, 0.18

12:08pm up 2 min(s), 0 users, load average: 0.59, 0.41, 0.18

This nifty script kinda reminds me of Sun Cluster’s ctelnet utility :)
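
I don’t have Brendan’s script in front of me, but the basic idea can be sketched in a few lines of shell: list the running non-global zones and hand the command to each one with zlogin. The function name and details below are my own guesses, not his implementation:

```shell
# zonerun_sketch: run a command in every running non-global zone.
# A rough approximation of the zonerun idea, not the real script.
zonerun_sketch() {
    for z in $(zoneadm list); do          # "zoneadm list" prints running zones
        [ "$z" = "global" ] && continue   # skip the global zone
        echo "== $z =="
        zlogin "$z" "$@"
    done
}

# Example:
# zonerun_sketch uname -a
```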