Over the past month I’ve been evaluating Metricbeat. Metricbeat, along with the ELK stack, is an incredibly powerful tool for deriving meaning from metrics and unstructured log data. Metricbeat lets you funnel system and application metrics (e.g., CPU utilization, number of HTTP GET requests, number of SQL queries, HTTP endpoint response times) into Elasticsearch, where the powerful Kibana visualization tool can then be used to make sense of them.
To get up and running with Metricbeat you will first need to configure your Logstash infrastructure to accept incoming beats. Once Logstash is accepting beats, install the Metricbeat daemon on each system you want to collect metrics from. Metricbeat is configured through the YAML file /etc/metricbeat/metricbeat.yml.
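As an example, a minimal Logstash pipeline input for accepting beats might look like the following sketch. Port 5044 is the conventional beats port, and the certificate paths are placeholders for your own files:

```
input {
  beats {
    # Listen for incoming beats connections
    port => 5044
    # Encrypt traffic between the beats and Logstash
    ssl => true
    ssl_certificate => "/elk/certs/cert.pem"
    ssl_key => "/elk/certs/cert.key"
  }
}
```

If you don’t need encryption while testing, the ssl options can be dropped and the input reduces to just the port setting.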
The first section in metricbeat.yml tells Metricbeat which metricsets to collect. Currently there are metricsets for load average, CPU, disk, memory, network, and process utilization. To enable a metricset, make sure it isn’t commented out. The following snippet tells Metricbeat to collect CPU, memory, and network metrics every 10 seconds:
- module: system
  metricsets:
    # CPU stats
    - cpu
    # Memory stats
    - memory
    # Network stats
    - network
  enabled: true
  period: 10s
  processes: ['.*']
Getting the collection period right is definitely an art. Collecting metrics too frequently increases system load and can skew the meaning of the metrics you are collecting, while sampling too infrequently can hide short-lived problems. You will need to experiment to find the collection interval that is optimal for your environment.
The next section in the file contains one or more outputs, which control where metrics are sent. Metrics can be shipped directly to Elasticsearch if you don’t need to do any processing, or routed through Logstash so that one or more filters can be applied before the metrics are placed in an Elasticsearch index. The following snippet shows how to ship metrics to Elasticsearch over SSL with authentication:
output.elasticsearch:
  hosts: ["https://elastic.my.domain:9200"]
  username: "metricdata"
  password: "WOULDNTYOULIKETOKNOW"
  index: "metricbeat"
  ssl.certificate_authorities: ["/elk/certs/ca.pem"]
  ssl.certificate: "/elk/certs/cert.pem"
  ssl.key: "/elk/certs/cert.key"
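If you would rather route metrics through Logstash for filtering first, the elasticsearch output can be swapped for a logstash output. A minimal sketch, assuming a Logstash host name of logstash.my.domain listening on the conventional beats port:

```
output.logstash:
  # Logstash host(s) configured with a beats input
  hosts: ["logstash.my.domain:5044"]
  # CA used to verify the Logstash server certificate
  ssl.certificate_authorities: ["/elk/certs/ca.pem"]
```

Note that only one output should be enabled at a time; comment out output.elasticsearch if you enable output.logstash.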
Once the configuration is in place you can enable and start metricbeat with systemctl:
$ systemctl enable metricbeat && systemctl start metricbeat
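To confirm the daemon came up and is actually shipping data, you can check the service status and count the documents landing in the index. The host, credentials, and index name below are the placeholders from the output section and will depend on your configuration:

```
$ systemctl status metricbeat
$ curl --cacert /elk/certs/ca.pem -u metricdata \
    "https://elastic.my.domain:9200/metricbeat/_count?pretty"
```

A growing count from the _count API is a quick sanity check that beats are flowing before you start building visualizations.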
If the daemon starts up cleanly, you will see metric data in the metricbeat index (assuming that is the index you are using for beat data) in Kibana. In the next couple of posts I’ll show some of the visualizations I’ve used to track down some really weird problems. It’s AMAZING how easy it is to find issues once all of your metric data is in a single location.