Blog O' Matty


The case of the missing SSH keys

This article was posted by Matty on 2009-03-22 15:09:00 -0400

I built a couple of new Solaris 10 hosts today using a stripped down image, and was greeted with the following error when I tried to log in:

$ ssh 192.168.1.20
Unable to negotiate a key exchange method

The server was spitting out “no kex alg” errors, which pointed to a key exchange problem. I poked around my sshd_config file and noticed that the host keys were never generated when the ssh service initialized. To fix this, I ran the sshd method script with the -c option, which generated the RSA and DSA host keys:

$ /lib/svc/method/sshd -c

added the host keys to my sshd configuration file:

# Paths to host keys
HostKey /etc/ssh/ssh_host_rsa_key
HostKey /etc/ssh/ssh_host_dsa_key

And then ran ‘svcadm refresh ssh’ so the service would pick up the new configuration. Once that completed, I was able to login to the host. Nice!
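If you ever need to regenerate host keys by hand, the method script is essentially driving ssh-keygen. Here is a rough sketch of the equivalent (writing to a temp directory so it is safe to run anywhere; on a real host the keys live under /etc/ssh):

```shell
# Sketch of what "-c" does under the hood, assuming a stock ssh-keygen.
# Keys are written to a temp dir here; Solaris puts them in /etc/ssh.
KEYDIR=$(mktemp -d)
ssh-keygen -q -t rsa -N "" -f "$KEYDIR/ssh_host_rsa_key"
ls -l "$KEYDIR"/ssh_host_rsa_key*
```

Solaris 10 also generated a DSA key (ssh-keygen -t dsa); I left that out above since newer OpenSSH releases have dropped DSA support.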

Getting e-mail updates when new CentOS packages are available

This article was posted by Matty on 2009-03-22 13:54:00 -0400

A while back I wrote about yumnotifier, and how I was using it to get notified when new packages were available for my CentOS installations. Dan posted a comment asking why I wasn’t using the yum-updatesd e-mail notification support. I recently (as of CentOS 5.2) started using the yum updates daemon for notifications, but never updated the previous post with this information. Configuring the daemon to send e-mail when new updates are available is as easy as adding the email_to and email_from settings to the /etc/yum/yum-updatesd.conf configuration file:

email_to = matty@prefetch.net

email_from = root@server1.prefetch.net

Since I use kickstart to build my hosts, I have a postinstall action that adds these items to the configuration file and adjusts the email_from with the name of the server. Thanks Dan for the comment, and thanks to the CentOS community for adding this super awesome daemon!
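The postinstall step boils down to appending those two lines and substituting in the host’s name. Here is a hypothetical sketch of what that %post fragment might look like (a temp file stands in for /etc/yum/yum-updatesd.conf so the example is safe to run):

```shell
# Hypothetical kickstart %post sketch: append the notification settings
# and fill in email_from from the machine's hostname. A temp file
# stands in for /etc/yum/yum-updatesd.conf here.
CONF=$(mktemp)
cat >> "$CONF" <<EOF
email_to = matty@prefetch.net
email_from = root@$(hostname)
EOF
grep '^email_' "$CONF"
```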

Speeding up md device rebuilds

This article was posted by Matty on 2009-03-22 13:17:00 -0400

I manage a large RAID5 array at home, and had one of my disks crap out over the weekend. Once I physically replaced the drive and told mdadm to reconstruct the array, I noticed that the rebuild was going to take days to complete. After a bit of digging, it appears that the md recovery thread throttles itself to keep the rebuild from consuming all available I/O bandwidth. I was most concerned about getting the RAID array back into a consistent state, so I decided to play around with the speed_limit_max setting to speed up the recovery. The speed_limit_max setting controls the maximum rebuild rate for each device in the RAID array, in kibibytes per second, and bumping it up to a large value definitely lowered the reconstruction time:

$ echo 400000 > /proc/sys/dev/raid/speed_limit_max

The rebuild went from days down to hours:

$ watch cat /proc/mdstat

Every 2.0s: cat /proc/mdstat Fri Mar 20 21:14:45 2009

Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sdf1[5] sde1[3] sdd1[2] sdc1[1] sdb1[0]
976751616 blocks level 5, 64k chunk, algorithm 2 [5/4] [UUUU_]
[=========>...........] recovery = 49.6% (121176712/244187904) finish=68.6min speed=29873K/sec

unused devices:

I really dig MD, and the fact that you can now expand RAID devices on the fly (previously you had to layer LVM on top of MD to do this) is extremely cool! To ensure that my host recovers automatically in the future, I am in the process of adding a hot spare. That, combined with a cron job to increase speed_limit_max during off hours, seems like a great fit!
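That cron job could be as simple as two root crontab entries, something along these lines (the 400000 figure matches the value I used above; 200000 KB/s is the kernel’s stock default):

```
# Hypothetical root crontab sketch: open up the rebuild ceiling at
# night, and drop back to the 200000 KB/s default during the day.
0 22 * * * echo 400000 > /proc/sys/dev/raid/speed_limit_max
0 6 * * * echo 200000 > /proc/sys/dev/raid/speed_limit_max
```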

Finding hardware details on Solaris 10 hosts with SMBIOS

This article was posted by Matty on 2009-03-22 13:05:00 -0400

I have previously written about the Solaris smbios utility, and how it can be used to discover various items about the hardware platform you are running on. While reviewing one of my mailing lists over the weekend, I came across a post that describes the SMB_TYPE_BASEBOARD and SMB_TYPE_SYSTEM record types. In most cases these two records will tell you which hardware platform you are running on, and provide details about the motherboard in use:

$ /usr/sbin/smbios -t SMB_TYPE_BASEBOARD

ID SIZE TYPE
2 75 SMB_TYPE_BASEBOARD (base board)

Manufacturer: Intel Corporation
Product: 440BX Desktop Reference Platform
Version: None
Serial Number: None

Chassis: 0
Flags: 0x0
Board Type: 0x1 (unknown)

$ /usr/sbin/smbios -t SMB_TYPE_SYSTEM

ID SIZE TYPE
1 123 SMB_TYPE_SYSTEM (system information)

Manufacturer: VMware, Inc.
Product: VMware Virtual Platform
Version: None
Serial Number: VMware-56 4d 5f 40 6e ce 46 77-3d 47 9c 0f 50 c6 27 b0

UUID: 564d5f40-6ece-4677-3d47-9c0f50c627b0
Wake-Up Event: 0x6 (power switch)
SKU Number:
Family:

If you are building an inventory system, or just want to see what type of system is in use, this information will be of great value!
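For an inventory system you would want the individual fields, not the whole report. A small awk filter does the trick; the sample text below mirrors the output above, and on a real Solaris host you would pipe /usr/sbin/smbios -t SMB_TYPE_SYSTEM into the same awk:

```shell
# Hypothetical inventory sketch: pull a single field out of smbios
# output. The sample mimics the report above; on a real host, replace
# printf with: /usr/sbin/smbios -t SMB_TYPE_SYSTEM
sample='  Manufacturer: VMware, Inc.
  Product: VMware Virtual Platform'
printf '%s\n' "$sample" | awk -F': ' '/Manufacturer:/ { print $2 }'
```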

Viewing and changing network device properties on Solaris hosts

This article was posted by Matty on 2009-03-15 12:11:00 -0400

The brussels project from OpenSolaris revamped how link properties are managed, and the push to get rid of ndd and device-specific properties is now well underway! Link properties are actually pretty cool, and they can be displayed with the dladm utility’s “show-linkprop” subcommand:

$ dladm show-linkprop e1000g0

LINK PROPERTY PERM VALUE DEFAULT POSSIBLE
e1000g0 speed r- 0 0 --
e1000g0 autopush -- -- -- --
e1000g0 zone rw -- -- --
e1000g0 duplex r- half half half,full
e1000g0 state r- down up up,down
e1000g0 adv_autoneg_cap rw 1 1 1,0
e1000g0 mtu rw 1500 1500 --
e1000g0 flowctrl rw bi bi no,tx,rx,bi
e1000g0 adv_1000fdx_cap r- 1 1 1,0
e1000g0 en_1000fdx_cap rw 1 1 1,0
e1000g0 adv_1000hdx_cap r- 0 1 1,0
e1000g0 en_1000hdx_cap r- 0 1 1,0
e1000g0 adv_100fdx_cap r- 1 1 1,0
e1000g0 en_100fdx_cap rw 1 1 1,0
e1000g0 adv_100hdx_cap r- 1 1 1,0
e1000g0 en_100hdx_cap rw 1 1 1,0
e1000g0 adv_10fdx_cap r- 1 1 1,0
e1000g0 en_10fdx_cap rw 1 1 1,0
e1000g0 adv_10hdx_cap r- 1 1 1,0
e1000g0 en_10hdx_cap rw 1 1 1,0
e1000g0 maxbw rw -- -- --
e1000g0 cpus rw -- -- --
e1000g0 priority rw high high low,medium,high

As you can see in the above output, the typical speed, duplex, mtu and flowctrl properties are listed. In addition, the “maxbw” and “cpus” properties introduced with the recent crossbow putback are visible. The “maxbw” property is especially useful, since it allows you to limit how much bandwidth an interface can consume. Here is an example that caps an interface at 2Mbit/s:

$ dladm set-linkprop -p maxbw=2m e1000g0

To see how this operates, you can use your favorite data transfer client:

$ scp techtalk1 192.168.1.10:
Password: techtalk1.mp3 5% 2128KB 147.0KB/s 04:08 ETA
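A quick sanity check on that number: dladm’s “2m” is 2 Mbit/s, which works out to 250 KB/s of raw link bandwidth. The ~147 KB/s scp reports is payload throughput after SSH and TCP overhead, so it is in the right ballpark:

```shell
# Back-of-the-envelope check: 2 Mbit/s expressed in KB/s.
# scp reports payload throughput, so it will land somewhat below this.
echo $(( 2 * 1000 * 1000 / 8 / 1000 ))   # KB/s
```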

The read/write link properties can be changed on the fly with dladm, so increasing the “maxbw” property will allow the interface to consume additional bandwidth:

$ dladm set-linkprop -p maxbw=10m e1000g0

Once the bandwidth is increased, you can immediately see this reflected in the data transfer progress:

techtalk1.mp3 45% 17MB 555.3KB/s 00:38 ETA

Brussels rocks, and it’s awesome to see that link properties will be managed in a standard, uniform way going forward! Nice! (Update: I originally credited the Clearview project with this work, when network interface property unification is in fact part of the brussels project. The post has been updated to reflect this.)