Quote:
Originally Posted by mag_black
Linux System Engineer; Fvck Windows! I kid, I kid. I actually have to work in Windows from time to time. Our entire infrastructure is Red Hat and most of my work involves Postgres, Tomcat, Java (eek!), cfengine, AWS.
I used to be a Linux sysadmin. Used Git, Subversion, Apache, cfengine and all that other good stuff.
edit:
@houkouonchi, what are you using for automation?
For package/configuration stuff on the machines I am using Chef. It sets up everything all the servers have in common (like Nagios and the custom Perl plugins I wrote for RAID/SMART status and error checking) and makes sure they all have the same packages, etc.
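To give a taste of what those checks do, here is a rough sketch of a Nagios-style SMART health check in Python (the real plugins mentioned above are Perl; the device list is just an example and smartctl usually needs root):

```python
#!/usr/bin/env python3
# Rough sketch of a Nagios-style SMART health check. Exit codes follow the
# standard Nagios plugin convention. Device list is a made-up example.
import subprocess
import sys

OK, WARNING, CRITICAL, UNKNOWN = 0, 1, 2, 3   # standard Nagios exit codes
DEVICES = ["/dev/sda", "/dev/sdb"]            # assumption: adjust per host

def smart_healthy(dev):
    """Return True if smartctl reports the drive's overall health as PASSED."""
    out = subprocess.run(["smartctl", "-H", dev],
                         capture_output=True, text=True)
    return "PASSED" in out.stdout

failed = [d for d in DEVICES if not smart_healthy(d)]
if failed:
    print("CRITICAL: SMART failure on %s" % ", ".join(failed))
    sys.exit(CRITICAL)
print("OK: all drives report SMART health PASSED")
sys.exit(OK)
```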
I also did a lot of automation for virtual machines with libvirt. The utility was initially written by a previous employee, but I have added a lot to it. It originally only supported Ubuntu 12.04, had a lot of hard-coded stuff in it, and depended on Ubuntu's cloud-init. I basically had to make cloud-init images for Debian, Fedora, RHEL, CentOS and SUSE. SLES was at least decent in that it had a utility for creating images, which was quite helpful, but at the time a cloud-init package didn't exist for a lot of the distros. Things are a lot better these days; I still have to make my own images, but at least it's easier now.
It's basically designed so you can spin up a VM and have it running/booted/SSH-able in ~1 min via libvirt/qemu/kvm. It provisions them, sets their hostname and RAM/disk, and installs SSH keys for the user who runs the utility. I also recently modified it to be able to create LXC containers as well. Since all the guests get their addresses via DHCP, I also wrote a client/server program in Python that uses an API over HTTP to create DNS records for newly created VMs in a PowerDNS-backed MySQL database.
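Very roughly, the libvirt side looks something like this minimal Python sketch (the real utility also handles the cloud-init seed, SSH keys and DNS registration; the domain XML, name and disk path here are just placeholders):

```python
#!/usr/bin/env python3
# Minimal sketch of booting a guest through the libvirt Python bindings.
# Name, memory, disk path and network are placeholder values.
import libvirt

DOMAIN_XML = """
<domain type='kvm'>
  <name>test-vm</name>
  <memory unit='MiB'>2048</memory>
  <vcpu>2</vcpu>
  <os><type arch='x86_64'>hvm</type></os>
  <devices>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/libvirt/images/test-vm.qcow2'/>
      <target dev='vda' bus='virtio'/>
    </disk>
    <interface type='network'>
      <source network='default'/>
    </interface>
  </devices>
</domain>
"""

conn = libvirt.open("qemu:///system")   # connect to the local qemu/kvm hypervisor
dom = conn.defineXML(DOMAIN_XML)        # make the domain persistent
dom.create()                            # boot it
print("started", dom.name())
conn.close()
```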
I also work on the test suite (Python) used for the nightly runs that exercise new builds of the file system. I also set up a bunch of gitbuilders, which are basically machines that build the code every time a new commit is added to a repo, so builds can be tracked when they fail (which commit caused the failure, etc.) and packages (rpm/deb) can be built from those builds for the nightly tests, so the newest code also gets tested via package-based installs like a real user would do.
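The gitbuilder idea boils down to something like this toy Python loop (the repo path, branch and build command are made-up examples; the real setup obviously does packaging and result publishing on top):

```python
#!/usr/bin/env python3
# Toy sketch of a "gitbuilder": build every new commit in order and record
# which one breaks, so a failing build points straight at the bad commit.
import subprocess
import time

REPO = "/srv/builds/myproject"      # assumption: local clone to watch
BUILD_CMD = ["make", "-j4"]         # assumption: whatever builds the tree

def git(*args):
    return subprocess.run(["git", *args], cwd=REPO,
                          capture_output=True, text=True)

last = git("rev-parse", "HEAD").stdout.strip()
while True:
    git("fetch", "origin")
    # new commits since the last one we built, oldest first
    shas = git("rev-list", "--reverse", f"{last}..origin/master").stdout.split()
    for sha in shas:
        git("checkout", sha)
        ok = subprocess.run(BUILD_CMD, cwd=REPO).returncode == 0
        print(sha[:8], "OK" if ok else "FAILED")
        last = sha
    time.sleep(300)                 # poll every five minutes
```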
I would say one of the most annoying things is simply all the distros I have to deal with. At the previous web-hosting company I worked at it was 3000+ servers (all Debian), but here I have to automate VMs and bare metal for pretty much any distro under the sun that we want to support and test on (which is quite a few). I couldn't really find good utilities that supported *all* the distros we needed, so I ended up writing my own PXE-based imaging system as well for quickly re-imaging bare-metal machines.
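The per-host part of a PXE setup like that is pretty simple: pxelinux looks for a config file named after the client's MAC address, so pointing a box at an installer is mostly a matter of dropping the right file into the tftp root. A small Python sketch (tftp path, kernel locations and kickstart URL are just examples, not my actual layout):

```python
#!/usr/bin/env python3
# Sketch: queue a machine for re-imaging by writing a per-MAC pxelinux config.
# pxelinux config files are named "01-" + MAC with colons replaced by dashes.
import os

TFTP_CFG_DIR = "/srv/tftpboot/pxelinux.cfg"   # assumption: tftp root layout

TEMPLATE = """DEFAULT reimage
LABEL reimage
  KERNEL images/{distro}/vmlinuz
  APPEND initrd=images/{distro}/initrd.img ks=http://deploy/ks/{distro}.cfg
"""

def queue_reimage(mac, distro):
    """Write a pxelinux config so this MAC network-boots the given installer."""
    name = "01-" + mac.lower().replace(":", "-")
    with open(os.path.join(TFTP_CFG_DIR, name), "w") as f:
        f.write(TEMPLATE.format(distro=distro))

queue_reimage("52:54:00:12:34:56", "centos7")
```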
Probably a lot of what I said doesn't mean anything to most on this forum =)
I also have to go to the data center every so often when I have to replace a lot of disks or something. I deployed all the new servers we have, but the original deployment (ugly) was done by someone else. I also manage all the switches/networking/routing stuff, which I back up daily. I also set up monitoring for the switches via MRTG, along with automating the switchport descriptions so bandwidth stats can easily be searched, etc...
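Pushing the port descriptions can be as simple as setting IF-MIB::ifAlias over SNMP, something along these lines (switch name, community and port map are made-up examples, and not every switch lets you write ifAlias):

```python
#!/usr/bin/env python3
# Sketch: set switchport descriptions over SNMP so the MRTG graphs carry
# meaningful names. Uses net-snmp's snmpset CLI; values below are examples.
import subprocess

SWITCH = "sw1.example.net"     # assumption: switch hostname
COMMUNITY = "private"          # assumption: read-write community
PORTS = {                      # ifIndex -> description
    1: "server01 eth0",
    2: "server02 eth0",
}

for ifindex, desc in PORTS.items():
    # IF-MIB::ifAlias is the writable "description" column on most switches
    subprocess.run(["snmpset", "-v2c", "-c", COMMUNITY, SWITCH,
                    f"IF-MIB::ifAlias.{ifindex}", "s", desc], check=True)
```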
Basically the entire second deployment of hardware we have we got for free, so all the hardware (even down to the network cables) came from a parent company that was going to e-waste it. The only problem is the crappy desktop Seagate drives (each server has 8x 1 TB of them) have been dying like crazy.
Here is what the deployment of servers I did looks like (half of the machines I manage):
I deployed all that in a week almost entirely by myself (from nothing in the racks, to racked, cabled, imaged, booted servers doing stress testing). I was pretty physically dead after that week =)
And if you want to see the horrible cabling job of the previous one:
OK... I have gone too far now. All you asked was what I do for automation, although technically most of what else I listed is automation as well =)
This forum needs more *nix geeks for me to talk to.