When I was playing around with the LogAnalyzer Statistics page, I received the following error in each of the display boxes:
Warning: strftime(): It is not safe to rely on the system's timezone
settings. You are *required* to use the date.timezone setting or the
date_default_timezone_set() function. In case you used any of those
methods and you are still getting this warning, you most likely
misspelled the timezone identifier. We selected 'America/New_York' for
'EST/-5.0/no DST' instead in
/var/www/html/log/classes/jpgraph/jpgraph.php on line 390
Warning: strtotime(): It is not safe to rely on the system's timezone settings.
You are *required* to use the date.timezone setting or the
date_default_timezone_set() function. In case you used any of those
methods and you are still getting this warning, you most likely
misspelled the timezone identifier. We selected 'America/New_York' for
'EST/-5.0/no DST' instead in
/var/www/html/log/classes/jpgraph/jpgraph.php on line 391
The error message suggests two ways to address the issue:
1. Use the date.timezone php.ini entry.
2. Call date_default_timezone_set() to set the timezone.
I went with #1 and set date.timezone to the following:
[Date]
; Defines the default timezone used by the date functions
; http://www.php.net/manual/en/datetime.configuration.php#ini.date.timezone
date.timezone = 'America/New_York'
I restarted Apache and everything is now working. I like easy fixes. :)
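If you want to verify that PHP picked up the new setting, a quick check from the shell does the trick (keep in mind the php CLI can read a different php.ini than Apache, so treat this as a sanity check only):
$ php -r 'echo date_default_timezone_get(), "\n";'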
I recently installed the LogAnalyzer graphical syslog analysis tool. After the install completed I went to the “Show Events” page and noticed that no data was being displayed. I wanted to see which queries were being sent by LogAnalyzer to my MySQL database instance, so I enabled query logging by adding the following two statements to the [mysqld] block in the /etc/my.cnf configuration file:
general_log=1
general_log_file=/var/log/query.log
The first line enables logging, and the second line tells MySQL where to write the logs. Once enabled, you can see the queries executed against your server by paging out the contents of /var/log/query.log. The file will contain one or more entries similar to the following:
233 Query Select FOUND_ROWS()
120212 13:15:07 233 Quit
120212 13:15:12 234 Connect rsyslog@localhost on
234 Init DB syslog
234 Query SHOW TABLES LIKE '%SystemEvents%'
234 Query SELECT SQL_CALC_FOUND_ROWS id, devicereportedtime, facility, priority, fromhost, syslogtag, processid, infounitid, message FROM SystemEvents ORDER BY id DESC LIMIT 100
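As an aside, if you would rather not restart MySQL to toggle query logging, general_log and general_log_file are dynamic server variables and can be changed on the fly (this sketch assumes MySQL 5.1 or later and an account with the SUPER privilege):
$ mysql -u root -p -e "SET GLOBAL general_log_file = '/var/log/query.log'; SET GLOBAL general_log = 'ON';"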
Pretty cool, and definitely super useful for debugging problems and figuring out how restrictive you can be with your GRANT statements. Viva la MySQL!
As most of my long-term readers know, I am a huge Solaris fan. How can you not love an operating system that comes with ZFS, DTrace, Zones, FMA and network virtualization, amongst other things? I use Linux during my day job, and I've been hoping for quite some time that Oracle would port one or more of these technologies to Linux. Well, the first salvo has been fired, though it wasn't from Oracle. It comes by way of the ZFS on Linux project, an in-kernel implementation of ZFS (distinct from the FUSE-based ZFS port).
I had some free time this weekend to play around with ZFS on Linux, and my initial impressions are quite positive. The Linux port is based on the latest version of ZFS that is part of OpenSolaris (pool version 28), so things like snapshots, deduplication, improved performance and ZFS send and recv are available out of the box. There are a few missing items, but from what I can tell from the documentation there is plenty more coming.
The ZFS file system for Linux comes as source code, which you build into loadable kernel modules (this is how they get around the license incompatibilities). The implementation also contains the userland utilities (zfs, zpool, etc.) most Solaris admins are used to, and they act just like their Solaris counterparts! Nice!
My testing occurred on a CentOS 6 machine, specifically 6.2:
$ cat /etc/redhat-release
CentOS release 6.2 (Final)
The build process is quite easy. Prior to compiling source code you will need to install a few dependencies:
$ yum install kernel-devel zlib-devel libuuid-devel libblkid-devel libselinux-devel parted lsscsi
Once these are installed you can retrieve and build spl and zfs packages:
$ wget http://github.com/downloads/zfsonlinux/spl/spl-0.6.0-rc6.tar.gz
$ tar xfvz spl-0.6.0-rc6.tar.gz && cd spl-0.6.0-rc6
$ ./configure && make rpm
$ rpm -Uvh *.x86_64.rpm
Preparing... ########################################### [100%]
1:spl-modules-devel ########################################### [ 33%]
2:spl-modules ########################################### [ 67%]
3:spl ########################################### [100%]
$ wget http://github.com/downloads/zfsonlinux/zfs/zfs-0.6.0-rc6.tar.gz
$ tar xfvz zfs-0.6.0-rc6.tar.gz && cd zfs-0.6.0-rc6
$ ./configure && make rpm
$ rpm -Uvh *.x86_64.rpm
Preparing... ########################################### [100%]
1:zfs-test ########################################### [ 17%]
2:zfs-modules-devel ########################################### [ 33%]
3:zfs-modules ########################################### [ 50%]
4:zfs-dracut ########################################### [ 67%]
5:zfs-devel ########################################### [ 83%]
6:zfs ########################################### [100%]
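To confirm what actually got installed, you can ask rpm for the package list (just a quick sanity check before loading anything):
$ rpm -qa | grep -E '^(spl|zfs)'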
If everything went as planned you now have the ZFS kernel modules and userland utilities installed! To begin using ZFS you will first need to load the kernel modules with modprobe:
$ modprobe zfs
To verify the module loaded you can tail /var/log/messages:
Feb 12 17:54:27 centos6 kernel: SPL: Loaded module v0.6.0, using hostid 0x00000000
Feb 12 17:54:27 centos6 kernel: zunicode: module license 'CDDL' taints kernel.
Feb 12 17:54:27 centos6 kernel: Disabling lock debugging due to kernel taint
Feb 12 17:54:27 centos6 kernel: ZFS: Loaded module v0.6.0, ZFS pool version 28, ZFS filesystem version 5
And run lsmod to verify they are there:
$ lsmod | grep -i zfs
zfs 1038053 0
zcommon 42478 1 zfs
znvpair 47487 2 zfs,zcommon
zavl 6925 1 zfs
zunicode 323120 1 zfs
spl 210887 5 zfs,zcommon,znvpair,zavl,zunicode
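If you want the ZFS modules to load automatically at boot, one option on CentOS 6 is to drop a small script into /etc/sysconfig/modules/ (this sketch assumes the stock rc.sysinit behavior of running executable *.modules scripts at boot):
$ cat > /etc/sysconfig/modules/zfs.modules <<'EOF'
#!/bin/sh
# load the ZFS module stack at boot
/sbin/modprobe zfs >/dev/null 2>&1
EOF
$ chmod +x /etc/sysconfig/modules/zfs.modules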
To create our first pool we can use the zpool utility's create option:
$ zpool create mysqlpool mirror sdb sdc
The example above created a mirrored pool out of the sdb and sdc block devices. We can see this layout in the output of zpool status:
$ zpool status -v
pool: mysqlpool
state: ONLINE
scan: none requested
config:
NAME STATE READ WRITE CKSUM
mysqlpool ONLINE 0 0 0
mirror-0 ONLINE 0 0 0
sdb ONLINE 0 0 0
sdc ONLINE 0 0 0
errors: No known data errors
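One side note: device names like sdb and sdc can change between reboots, so it may be safer to build the pool from the persistent names under /dev/disk/by-id instead (DISK1_ID and DISK2_ID below are placeholders for whatever ids show up on your system):
$ ls -l /dev/disk/by-id/
$ zpool create mysqlpool mirror /dev/disk/by-id/DISK1_ID /dev/disk/by-id/DISK2_ID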
Awesome! Since we are at pool version 28, let's disable atime updates and enable compression and deduplication:
$ zfs set compression=on mysqlpool
$ zfs set dedup=on mysqlpool
$ zfs set atime=off mysqlpool
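To confirm the properties took effect, you can read them back with zfs get; compression and dedup should show a VALUE of on, atime a VALUE of off, and a SOURCE of local:
$ zfs get compression,dedup,atime mysqlpool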
For a somewhat real world test, I stopped one of my MySQL slaves, mounted the pool on /var/lib/mysql, synchronized the previous data over to the ZFS file system and then started MySQL. No errors to report, and MySQL is working just fine. Next up, I trashed one side of the mirror and verified that resilvering works:
$ dd if=/dev/zero of=/dev/sdb
$ zpool scrub mysqlpool
I let this run for a few minutes and then ran zpool status to verify that the scrub fixed everything:
$ zpool status -v
pool: mysqlpool
state: ONLINE
status: One or more devices has experienced an unrecoverable error. An
attempt was made to correct the error. Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
using 'zpool clear' or replace the device with 'zpool replace'.
see: http://www.sun.com/msg/ZFS-8000-9P
scan: scrub repaired 966K in 0h0m with 0 errors on Sun Feb 12 18:54:51 2012
config:
NAME STATE READ WRITE CKSUM
mysqlpool ONLINE 0 0 0
mirror-0 ONLINE 0 0 0
sdb ONLINE 0 0 175
sdc ONLINE 0 0 0
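The status output suggests clearing the error counters once you are satisfied the device is healthy, which is a one-liner:
$ zpool clear mysqlpool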
I beat on the pool pretty hard and didn't encounter any hangs or kernel oopses. The file system port is still in its infancy, so I won't be trusting it with production data quite yet. Hopefully it will mature in the coming months, and if we're lucky maybe one of the major distributions will begin including it! That would be killer!!
I have been experimenting with ways to better manage the logs my servers generate. Depending on who you ask, some folks will recommend sending your logs to a remote syslog server that writes them to disk, some may recommend sending them to a log analysis tool like Splunk, and others would recommend feeding them into a SQL database. I've talked before about setting up syslog-ng for remote logging, and in this case I wanted to experiment with something new. I also didn't have the money to buy a tool like Splunk, so I decided to start experimenting with funneling syslog data into a MySQL database.
Setting up syslog to write messages to a MySQL database is crazy easy to do on CentOS 6. The built-in syslog daemon (rsyslog) has database plug-ins for several open source databases, which can be installed with the yum package manager:
$ yum install rsyslog-mysql
Once the plug-in is installed you can run the provided createDB.sql script (part of the rsyslog-mysql package) to create a database, as well as the tables the log entries will be stored in. The default database is named Syslog, though you can edit createDB.sql if you want to call it something else:
$ rpm -q -l rsyslog-mysql-4.6.2-12.el6.x86_64 | grep createDB.sql
/usr/share/doc/rsyslog-mysql-4.6.2/createDB.sql
$ mysql -u root -h localhost --password
mysql> source /usr/share/doc/rsyslog-mysql-4.6.2/createDB.sql
If this completes successfully you should have two brand spanking new
tables:
mysql> use Syslog;
mysql> show tables;
+------------------------+
| Tables_in_Syslog |
+------------------------+
| SystemEvents |
| SystemEventsProperties |
+------------------------+
The SystemEvents table is where log data is stored and has the following
structure:
mysql> desc SystemEvents;
+--------------------+------------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+--------------------+------------------+------+-----+---------+----------------+
| ID | int(10) unsigned | NO | PRI | NULL | auto_increment |
| CustomerID | bigint(20) | YES | | NULL | |
| ReceivedAt | datetime | YES | | NULL | |
| DeviceReportedTime | datetime | YES | | NULL | |
| Facility | smallint(6) | YES | | NULL | |
| Priority | smallint(6) | YES | | NULL | |
| FromHost | varchar(60) | YES | | NULL | |
| Message | text | YES | | NULL | |
| NTSeverity | int(11) | YES | | NULL | |
| Importance | int(11) | YES | | NULL | |
| EventSource | varchar(60) | YES | | NULL | |
| EventUser | varchar(60) | YES | | NULL | |
| EventCategory | int(11) | YES | | NULL | |
| EventID | int(11) | YES | | NULL | |
| EventBinaryData | text | YES | | NULL | |
| MaxAvailable | int(11) | YES | | NULL | |
| CurrUsage | int(11) | YES | | NULL | |
| MinUsage | int(11) | YES | | NULL | |
| MaxUsage | int(11) | YES | | NULL | |
| InfoUnitID | int(11) | YES | | NULL | |
| SysLogTag | varchar(60) | YES | | NULL | |
| EventLogType | varchar(60) | YES | | NULL | |
| GenericFileName | varchar(60) | YES | | NULL | |
| SystemID | int(11) | YES | | NULL | |
+--------------------+------------------+------+-----+---------+----------------+
The columns are pretty much self-explanatory, and you can reference the rsyslog documentation for more specifics on the purpose of each one. In order for rsyslog to be able to write to the database, you will need to create a user (using root is not recommended) and grant it the privileges needed to INSERT new data. For my purposes I created an rsyslog user and restricted it to inserting data into just the Syslog database:
mysql> grant INSERT on Syslog.* to rsyslog identified by "PASSWORD_HERE";
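Before pointing rsyslog at the database it doesn't hurt to verify that the new account can actually log in and that the grant looks sane (this assumes the mysql client is installed on the host you are testing from):
$ mysql -u rsyslog -p Syslog -e 'SHOW GRANTS FOR CURRENT_USER();'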
This completes the MySQL configuration. To tell rsyslog to start sending messages to the Syslog database, you will need to add directives similar to the following to /etc/rsyslog.conf. You will also need to restart the rsyslog service:
$ModLoad ommysql
*.* :ommysql:rsyslogdb.prefetch.net,Syslog,rsyslog,PASSWORD_HERE
The first line loads the MySQL plug-in, and the second line tells rsyslog to send all of the log entries (you can pare this down to specific facilities and priorities; see the example at the end of this post) to the Syslog database on rsyslogdb.prefetch.net. It will use the provided user (rsyslog in this example) and password (PASSWORD_HERE in this example) to log in to the database. If everything worked as expected you should be able to view the log entries in the SystemEvents table:
$ mysql -u root -h localhost --password
mysql> use Syslog;
mysql> select * from SystemEvents limit 5;
+----+------------+---------------------+---------------------+----------+----------+-----------+--------------------------------------------------------------------------------------------------------+------------+------------+-------------+-----------+---------------+---------+-----------------+--------------+-----------+----------+----------+------------+-----------+--------------+-----------------+----------+
| ID | CustomerID | ReceivedAt | DeviceReportedTime | Facility | Priority | FromHost | Message | NTSeverity | Importance | EventSource | EventUser | EventCategory | EventID | EventBinaryData | MaxAvailable | CurrUsage | MinUsage | MaxUsage | InfoUnitID | SysLogTag | EventLogType | GenericFileName | SystemID |
+----+------------+---------------------+---------------------+----------+----------+-----------+--------------------------------------------------------------------------------------------------------+------------+------------+-------------+-----------+---------------+---------+-----------------+--------------+-----------+----------+----------+------------+-----------+--------------+-----------------+----------+
| 1 | NULL | 2012-02-11 15:42:01 | 2012-02-11 15:42:01 | 0 | 6 | centos6-1 | imklog 4.6.2, log source = /proc/kmsg started. | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | 1 | kernel: | NULL | NULL | NULL |
| 2 | NULL | 2012-02-11 15:42:01 | 2012-02-11 15:42:01 | 5 | 6 | centos6-1 | [origin software="rsyslogd" swVersion="4.6.2" x-pid="3891" x-info="http://www.rsyslog.com"] (re)start | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | 1 | rsyslogd: | NULL | NULL | NULL |
| 3 | NULL | 2012-02-11 15:42:25 | 2012-02-11 15:42:25 | 0 | 6 | centos6-2 | imklog 4.6.2, log source = /proc/kmsg started. | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | 1 | kernel: | NULL | NULL | NULL |
| 4 | NULL | 2012-02-11 15:42:25 | 2012-02-11 15:42:25 | 5 | 6 | centos6-2 | [origin software="rsyslogd" swVersion="4.6.2" x-pid="5932" x-info="http://www.rsyslog.com"] (re)start | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | 1 | rsyslogd: | NULL | NULL | NULL |
| 5 | NULL | 2012-02-11 15:42:22 | 2012-02-11 15:42:22 | 1 | 5 | centos6-1 | test | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | 1 | matty: | NULL | NULL | NULL |
+----+------------+---------------------+---------------------+----------+----------+-----------+--------------------------------------------------------------------------------------------------------+------------+------------+-------------+-----------+---------------+---------+-----------------+--------------+-----------+----------+----------+------------+-----------+--------------+-----------------+----------+
5 rows in set (0.00 sec)
Awesome! Once you have all of your hosts pointing to your database you can use the power of SQL to sift through the data and correlate events (I wonder how useful this would be when applied to digital forensics). You can also use tools like LogAnalyzer to visualize your log data. I've thought of hundreds of things I can do with my log data; I just need to spend some time coding them up! :)
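As mentioned above, you can also pare down what gets shipped to the database by adjusting the selector in front of the ommysql action. A quick sketch (the facilities below are just examples; the host, database, user and password are the same values used earlier):
mail.*;authpriv.* :ommysql:rsyslogdb.prefetch.net,Syslog,rsyslog,PASSWORD_HERE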
MySQL is configured through the my.cnf configuration file, which typically resides in /etc. There are dozens of configuration settings that can be added to this file, and you can view the full list by running mysqld with the "--help" and "--verbose" options:
$ /usr/libexec/mysqld --help --verbose | grep -i ^relay
relay-log slave-relay-bin.index
relay-log-index slave-relay-bin
relay-log-info-file relay-log.info
relay_log_purge TRUE
relay_log_space_limit 0
The configuration directive is printed on the left, and its current value is displayed on the right. When I get a "how do I do X" thought, I will typically cross-reference the directives with the official documentation to see how to configure the server to do what I need it to do. It's also just plain useful to know what you can do with the server.
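One way to cross-check those defaults against what a running server is actually using is SHOW VARIABLES, which you can run straight from the shell:
$ mysql -u root -p -e "SHOW VARIABLES LIKE 'relay_log%';"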