Learning how Ansible modules work with execsnoop


I’ve been digging into Kubernetes persistent volumes and storage classes and wanted to create a new cluster to store my volume claims. Ansible version 1.9 ships with the gluster_volume module, which makes creating a new cluster crazy easy. The module is written in Python and you can find it in /usr/lib/python2.7/site-packages/ansible/modules/system/gluster_volume.py on Fedora-derived distributions. To create a new cluster you need to create a file system for the bricks and then define a gluster_volume task to actually create the cluster:

- name: create gluster volume 
  gluster_volume:
    state: present
    name: "{{ gluster_volume_name }}"
    bricks: "{{ gluster_filesystem }}/{{ brick_name }}"
    rebalance: no
    start_on_create: yes
    replicas: "{{ gluster_replica_count }}"
    transport: tcp
    options:
      auth.allow: "{{ gluster_acl }}"
    cluster: "{{ groups.gluster | join(',') }}"
    host: "{{ inventory_hostname }}"
  run_once: true
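
The gluster_volume task assumes the brick file system already exists. Here is a minimal sketch of what that setup might look like with the filesystem and mount modules; the gluster_brick_device variable and the choice of XFS are assumptions for illustration, not values from my playbook (gluster_filesystem is the mount point the bricks setting above builds on):

# Hypothetical brick file system setup -- not from the original playbook.
- name: create brick file system
  filesystem:
    fstype: xfs
    dev: "{{ gluster_brick_device }}"   # assumed variable holding the brick block device

- name: mount brick file system
  mount:
    path: "{{ gluster_filesystem }}"
    src: "{{ gluster_brick_device }}"
    fstype: xfs
    state: mounted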

Jeff Geerling describes how to build a cluster in gory detail and I would definitely recommend checking that out if you want to learn more about the module options listed above (his Ansible for DevOps book is also a must-have). Gluster is a complex piece of software and I wanted to learn more about what was happening under the covers. While I could have spent a couple of hours reading the source code to see how the module worked, I decided to fast-track my learning with execsnoop. Prior to running my playbook, I ran the bcc execsnoop utility and grep’ed for the string gluster:

$ execsnoop | grep gluster

This produced 25 lines of output:

chmod            8851   8844     0 /usr/bin/chmod u+x /root/.ansible/tmp/ansible-tmp-1519562230.06-193658562566672//root/.ansible/tmp/ansible-tmp-1519562230.06-193658562566672/gluster_volume.py
bash             8852   6539     0 /bin/bash -c /bin/sh -c '/usr/bin/python /root/.ansible/tmp/ansible-tmp-1519562230.06-193658562566672/gluster_volume.py; rm -rf "/root/.ansib
sh               8852   6539     0 /bin/sh -c /usr/bin/python /root/.ansible/tmp/ansible-tmp-1519562230.06-193658562566672/gluster_volume.py; rm -rf "/root/.ansible/tmp/ansib
python           8865   8852     0 /usr/bin/python /root/.ansible/tmp/ansible-tmp-1519562230.06-193658562566672/gluster_volume.py
python           8866   8865     0 /usr/bin/python /tmp/ansible_2rTonm/ansible_module_gluster_volume.py
gluster          8867   8866     0 /usr/sbin/gluster --mode=script peer status
gluster          8873   8866     0 /usr/sbin/gluster --mode=script volume info
gluster          8879   8866     0 /usr/sbin/gluster --mode=script peer probe gluster01.prefetch.net
gluster          8885   8866     0 /usr/sbin/gluster --mode=script volume create kubernetes-volume1 replica 3 transport tcp gluster01.prefetch.net:/bits/kub1 gluster02.prefetch.net:/bits/kub1 gluster03.prefetch.net:/bits/kub1
S10selinux-labe  8891   6387     0 /var/lib/glusterd/hooks/1/create/post/S10selinux-label-brick.sh --volname=kubernetes-volume1
gluster          8892   8866     0 /usr/sbin/gluster --mode=script volume info
gluster          8898   8866     0 /usr/sbin/gluster --mode=script volume start kubernetes-volume1
glusterfsd       8904   6387     0 /usr/sbin/glusterfsd -s gluster01.prefetch.net --volfile-id kubernetes-volume1.gluster01.prefetch.net.bits-kub1 -p /var/run/gluster/vols/kubernetes-volume1/gluster01.prefetch.net-bits-kub1.pid -S /var/run/gluster/e2eaf5fee7c4e7d6647dc6ef94f8a342.socket --brick-name /bits/kub1 -l /var/log/glusterfs/bricks/bits-kub1.log --xlator-option *-posix.glusterd-uuid=6aa04dc6-b578-46a5-9b9e-4394c431ce42 --brick-port 49152 --xlator-option kubernetes-volume1-server.listen-port=49152
glusterfs        8925   8924     0 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/6eb5edea939911f91690c1181fb07f5d.socket --xlator-option *replicate*.node-uuid=6aa04dc6-b578-46a5-9b9e-4394c431ce42
S29CTDBsetup.sh  8935   6387     0 /var/lib/glusterd/hooks/1/start/post/S29CTDBsetup.sh --volname=kubernetes-volume1 --first=yes --version=1 --volume-op=start --gd-workdir=/var/lib/glusterd
getopt           8937   8935     0 /usr/bin/getopt -l volname: -name ctdb --volname=kubernetes-volume1 --first=yes --version=1 --volume-op=start --gd-workdir=/var/lib/glusterd
S30samba-start.  8938   6387     0 /var/lib/glusterd/hooks/1/start/post/S30samba-start.sh --volname=kubernetes-volume1 --first=yes --version=1 --volume-op=start --gd-workdir=/var/lib/glusterd
getopt           8939   8938     0 /usr/bin/getopt -l volname:,gd-workdir: -name Ssamba-start --volname=kubernetes-volume1 --first=yes --version=1 --volume-op=start --gd-workdir=/var/lib/glusterd
grep             8945   8944     0 /usr/bin/grep user.smb /var/lib/glusterd/vols/kubernetes-volume1/info
gluster          8947   8866     0 /usr/sbin/gluster --mode=script volume set kubernetes-volume1 auth.allow 192.168.2.*
S30samba-set.sh  8956   6387     0 /var/lib/glusterd/hooks/1/set/post/S30samba-set.sh --volname=kubernetes-volume1 -o auth.allow=192.168.2.* --gd-workdir=/var/lib/glusterd
getopt           8959   8956     0 /usr/bin/getopt -l volname:,gd-workdir: --name Ssamba-set -o o: -- --volname=kubernetes-volume1 -o auth.allow=192.168.2.* --gd-workdir=/var/lib/glusterd
grep             8965   8964     0 /usr/bin/grep status /var/lib/glusterd/vols/kubernetes-volume1/info
S32gluster_enab  8967   6387     0 /var/lib/glusterd/hooks/1/set/post/S32gluster_enable_shared_storage.sh --volname=kubernetes-volume1 -o auth.allow=192.168.2.* --gd-workdir=/var/lib/glusterd
gluster          8974   8866     0 /usr/sbin/gluster --mode=script volume info

The first five lines are used to set up the AnsiballZ module and the remaining lines contain the commands Ansible runs to create the cluster on one of my gluster nodes. While I love reading code, sometimes you just don’t have time, and this was one of those times.
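
If you do eventually peek at the source, the pattern behind those gluster invocations is straightforward: the module locates the gluster binary and shells out to it, and each of those calls shows up as a separate exec in the execsnoop output. The snippet below is not the actual gluster_volume source, just a minimal sketch of that pattern using the AnsibleModule helpers:

#!/usr/bin/python
# Minimal sketch of how an Ansible module ends up exec()ing an external CLI.
# This is NOT the real gluster_volume module, just the general pattern: each
# module.run_command() call forks and execs a new process, which is exactly
# what shows up as a line of execsnoop output.
from ansible.module_utils.basic import AnsibleModule

def main():
    module = AnsibleModule(
        argument_spec=dict(
            name=dict(type='str', required=True),
        )
    )
    # Locate the gluster CLI on the managed host.
    gluster_bin = module.get_bin_path('gluster', required=True)
    # One external command -> one exec -> one line of execsnoop output.
    rc, out, err = module.run_command([gluster_bin, '--mode=script', 'volume', 'info'])
    if rc != 0:
        module.fail_json(msg='gluster volume info failed', rc=rc, stderr=err)
    module.exit_json(changed=False, stdout=out)

if __name__ == '__main__':
    main()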

This article was posted by Matty on 2018-02-25 09:02:18 -0500