One of my friends reached out to me earlier this week for help with an Elasticsearch issue. He was trying to bring up a new cluster to see how Elasticsearch compares to Splunk and was getting a “bootstrap checks failed” error at startup. This was causing his Elasticsearch Java processes to bind to localhost instead of the hostname he assigned to the network.host setting. Here is a snippet of what I saw when I reviewed the logs:
[2017-08-19T11:31:25,457][ERROR][o.e.b.Bootstrap ] [elastic01] node validation exception
bootstrap checks failed
[1]: max file descriptors for elasticsearch process is too low, increase to at least
[2]: max virtual memory areas vm.max_map_count is too low, increase to at least
Elasticsearch has two modes of operation: development and production. In development mode Elasticsearch binds to localhost, allowing you to tinker with settings, test features, and break things without impacting other nodes on your network. In production mode Elasticsearch binds to an external interface, allowing it to communicate with other nodes and form clusters. Elasticsearch runs a number of bootstrap checks to help it figure out which mode to operate in. These checks are put in place to protect your server from data corruption and network partitions, problems the developers have seen more than once:
“Collectively, we have a lot of experience with users suffering unexpected issues because they have not configured important settings. In previous versions of Elasticsearch, misconfiguration of some of these settings were logged as warnings. Understandably, users sometimes miss these log messages. To ensure that these settings receive the attention that they deserve, Elasticsearch has bootstrap checks upon startup.”
The settings the documentation is referring to are described in the important settings and system settings documentation. In my friend’s case, he hadn’t increased vm.max_map_count or the number of file descriptors available to the Elasticsearch Java process. Once he fixed those, his test cluster fired right up.
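If you run into the same two checks, both fixes are quick. The kernel setting can be raised on the fly with sysctl (and persisted in a sysctl.d drop-in), and the file descriptor limit can be bumped in limits.conf. The 262144 and 65536 values come from the Elasticsearch documentation; the drop-in file name is my own convention, and you should adjust the user name to match your installation:
$ sysctl -w vm.max_map_count=262144
$ echo 'vm.max_map_count = 262144' > /etc/sysctl.d/99-elasticsearch.conf
$ echo 'elasticsearch - nofile 65536' >> /etc/security/limits.conf
One caveat: if Elasticsearch is started by systemd, the file descriptor limit comes from the unit file’s LimitNOFILE setting rather than limits.conf, so check there if the bootstrap check still fires.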
I am a big fan of the ELK stack and use it daily for various parts of my job. All of the logs from my physical systems, VMs and containers get funneled into Elasticsearch, indexed, and made available for me to slice and dice with Kibana. In addition to syslog data, I also like to funnel the systemd journal into Elasticsearch. This is easily accomplished by changing the ForwardToSyslog directive in journald.conf to yes:
$ sed -i.bak 's/^#ForwardToSyslog=.*/ForwardToSyslog=yes/' /etc/systemd/journald.conf
This small change will cause all journal entries to get routed to the local syslog daemon. Once they are there, you can set up your favorite log shipping solution to get them into your Elasticsearch cluster.
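One gotcha: journald needs to be restarted to pick up the change. Here’s a quick way to apply it and verify the forwarding works; I’m assuming your syslog daemon writes to /var/log/messages, so adjust the path for your distro:
$ systemctl restart systemd-journald
$ logger "journal forwarding test"
$ grep "journal forwarding test" /var/log/messages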
I’ve been around Linux and UNIX for quite some time, and one thing that has always piqued my interest is debugging broken software. Bryan Cantrill made some excellent points on why postmortem debugging is needed at DockerCon, and the following video is a must watch:
His point about restarting a broken container without root causing the source of the failure is SPOT ON! I also love his mad cow analogy. I’ve had the same mindset since I started managing infrastructure, and I find the whole root cause process exciting and fun. Who doesn’t love looking at backtraces, registers and memory on the stack?! Most admins I’ve met like debugging, but they dread seeing a process die with a fatal signal.
I on the other hand start to drool when a piece of software I manage (but didn’t write) encounters a fatal condition that leads to its demise. If I can’t locate a bug report with a fix, I’ll grab a cup of coffee, ensure debugging symbols are present, and fire up gdb to root cause the failure. My first experience root causing a segmentation violation was with snort. This was an extremely valuable learning experience, and at the time the Internet had limited resources explaining stack layouts, memory organization, and how gdb can be used to locate problems. Now that conferences and individuals are posting high quality material to YouTube, two clicks will get you access to amazing gdb resources like this (all three videos are definitely worth watching):
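If you want to poke at a core file while the videos download, the basic recipe is short. Here’s a minimal sketch; the binary and core file names are hypothetical, and core file naming varies by distro (check /proc/sys/kernel/core_pattern):
$ ulimit -c unlimited
$ gdb /usr/sbin/mydaemon core.1234
(gdb) bt full
(gdb) info registers
(gdb) x/16xw $sp
The bt full command prints the backtrace along with local variables in each frame, info registers shows the register state at the time of death, and the x command dumps the memory the stack pointer references.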
We also have access to step-by-step software debugging guides like the one Brendan Gregg posted to his blog last year. This coming weekend I will be immersing myself in another epic debugging session, and I can’t wait to see what I find (and learn). We all need to learn to embrace the unhappy signals that take down our applications. You learn a TON by doing so, and you make the open source world better at the same time.
Over the past month I have been rewriting some cron scripts to enhance monitoring and observability. I’ve also been refactoring my Ansible playbooks to handle deploying these scripts in a consistent fashion. Ansible ships with the cron module, which makes this process a breeze.
The cron module has all of the familiar crontab attributes (minute, hour, day, weekday, the job to run, etc.) and takes the following form:
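Here’s a minimal sketch of a task to manage a job with it; the task name, schedule, and script path are all hypothetical:
- name: Schedule curator log cleanup
  cron:
    name: "curator log cleanup"
    minute: "0"
    hour: "2"
    job: "/usr/local/bin/clean_curator_logs.sh"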
When I first played around with this module, I noticed that each playbook run would result in a new cron entry being added. So instead of getting one curator log cleanup job when the play executed, I would get one entry per run. This is obviously very bad. When I read back through the cron module documentation, I came across this little nugget for the “name” parameter:
“Note that if name is not set and state=present, then a new crontab entry will always be created, regardless of existing ones.”
Ansible uses the name to tag the entry, and if the tag already exists a new cron job won’t be added to the system (in case you’re interested, this is implemented by the find_job() method in cron.py; the crontab listing below shows what the tag looks like). Small subtleties like this really bring to light the importance of a robust test environment. I am currently using Vagrant to solve this problem, but there are also a number of solutions documented in the Ansible testing strategies guide.
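Here’s what the tag looks like on a managed host; the entry matches the hypothetical task above:
$ crontab -l
#Ansible: curator log cleanup
0 2 * * * /usr/local/bin/clean_curator_logs.sh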
One of my friends reached out to me earlier this week to ask if there was an easy way to run multiple Linux processes in parallel. There are several ways to approach this problem, but most of them don’t take into account hardware cores and threads. My preferred solution for CPU intensive operations is to use the xargs parallel option (“-P”) along with the CPU core count reported by lscpu. This allows me to run one process per core, which is ideal for CPU intensive applications. But enough talk, let’s see an example.
Let’s say you need to compress a directory full of log files and want to run one compression job per CPU core. To count the number of physical cores, you can parse the topology output from lscpu:
$ CPU_CORES=$(lscpu -p=CORE,ONLINE | grep ',Y' | sort -u | wc -l)
Each online logical CPU shows up as a core,online pair in the output, so de-duplicating on the core column keeps hyperthreaded siblings from being counted twice.
To generate a list of files to compress, we can run find and feed its output to xargs.
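Assuming the logs live in a hypothetical /var/log/app directory, something like this does the trick:
$ find /var/log/app -type f -name '*.log' -print0 | xargs -0 -n1 -P"${CPU_CORES}" gzip
The -print0 and -0 options keep file names with whitespace from breaking the pipeline, and xargs will keep one gzip process per core running until the file list is exhausted.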
During the development of the dns-domain-expiration-checker script I needed a way to test SMTP mail delivery without relying on an actual mail exchanger. While reading through the smtplib and smtpd documentation I came across the SMTP debugging server. This nifty module allows you to run a local mail relay which will print the messages it receives to standard out. To enable it, you can load the smtpd module and instruct it to run the DebuggingServer class on the IP and port passed as arguments. The example below listens on localhost port 1025 (any unprivileged port will do):
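$ python -m smtpd -n -c DebuggingServer localhost:1025
When a message is relayed to the server, it dumps the contents to standard out: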
---------- MESSAGE FOLLOWS ----------
Content-Type: multipart/mixed; boundary="===============3200155514135298957=="
Subject: The DNS Domain prefetch.net is set to expire in 1041 days
Content-Type: text/plain; charset="us-ascii"
Time to renew prefetch.net
------------ END MESSAGE ------------
Super useful module for troubleshooting SMTP and e-mail communications!