
published by noreply@blogger.com (ian) on 2017-10-04 00:11:00 in the "DevOps" category
Hurricane Irma came a-knocking but didn't blow Miami away.

Hurricanes are never fun, and neither is knowing that your business servers may be in the direct path of a major outage due to a natural disaster.

Recently we helped a client prepare a disaster recovery environment, four days out, as Hurricane Irma approached Miami, where, as it happens, their entire server ecosystem sat. In recent months we had been discussing a long-term disaster recovery plan for this client's infrastructure, but the final details hadn't been worked out yet, and no work had begun on it, when news of the impending storm started to arrive.

Although the Miami datacenter they are using is highly regarded and well-rated, the client had the foresight to think about an emergency project to replicate their entire server ecosystem out of Miami to somewhere a little safer.

Fire suits on, battle helmets ready, GO! Two team members jumped in and did an initial review of the ecosystem: six Linux servers, one Microsoft SQL Server database with about 50 GB of data, and several little minions, all hosted on a KVM virtualization platform. OK, easy, right? Export the VM disks and stand them up in a new datacenter on a similar KVM-based host.

But no downtime could be scheduled on such short notice, and cloning the live database to make a replica would take too long. If we couldn't clone the VM disks without downtime, we needed another plan. Enter our save-the-day idea: use the database clone from the development machine.

So, first things first: build the servers in the new environment from the cloned copies and rsync each entire disk across, hoping that a restart would bring each machine back into a working state. Then take the development database and use the Microsoft tools at our disposal to restore one current snapshot, followed by row-level replication, to reduce the amount of data lost should the Miami datacenter drop off the network.
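
The bulk disk copies were plain rsync over SSH. As a rough sketch only (the hostname, paths, and exact flags here are illustrative, not the precise commands we ran):

# Pull the source VM's filesystem onto the freshly built replica, preserving
# ownership, ACLs, extended attributes, hard links, and sparse files, and
# skipping pseudo-filesystems that should not be copied.
# (The --exclude brace expansion relies on bash.)
rsync -aAXH --sparse --numeric-ids \
  --exclude={"/dev/*","/proc/*","/sys/*","/run/*","/tmp/*","/mnt/*","/media/*","/lost+found"} \
  root@old-vm.example.com:/ /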

Over the course of the next 16 hours, with 2 DevOps engineers, we replicated 6 servers, migrated 100+ GB of data twice and had a complete environment ready for a manual cutover.

Luckily the hurricane didn't cause any damage to the datacenter, which never lost power or connectivity, so the sites never went down.

Even though it wasn't used in this case, this emergency work did have a silver lining: until this client's longer-term disaster recovery environment is built, these system replicas can serve as a cold standby, which is much better than having to rebuild from backups if their primary infrastructure goes out.

That basic phase is out of the way, so the client can breathe easier. Along the way, we produced some updated documentation, and gained deeper knowledge of the client?s software ecosystem. That?s a win!

The key takeaways here: don't panic, and do a little planning ahead of time. Leaving things to the last minute rarely works well.
Better still: plan and build your long-term disaster recovery environment now, well before any possible disaster is on the horizon!

To all the people affected by Hurricane Irma we wish a speedy rebuild.

published by noreply@blogger.com (Richard Templet) on 2015-02-07 01:30:00 in the "DevOps" category
It is becoming more common for developers not to use the operating system packages for programming languages. Perl, Python, Ruby, and PHP are all releasing new versions faster than the operating systems can keep up (at least without causing compatibility problems). There are now plenty of tools to help with this problem. For Perl we have Perlbrew and plenv. For Ruby there are rbenv and RVM. For Python there is Virtualenv. For PHP there is php-version. These tools are all great for many different reasons, but they all have issues when used with cron jobs.

The cron environment is very minimal on purpose. It has a very restrictive PATH, very few environment variables, and other limitations. As far as I know, all of these tools prefer using the env command to find the right version of the language you are using. This works great while you are logged in but tends to fail badly in a cron job. The cron wrapper script is a super simple script that you put before whatever you want to run in your crontab to ensure you have the right environment variables set:
#!/bin/bash -l

exec "$@"
The crontab entry would look something like this:
34 12 * * * bin/cron-wrapper bin/blog-update.pl
The -l flag on the bash invocation makes it act like a login shell, so it picks up anything in ~/.bash_profile and has that available to the env command. This means the cron job runs in the same environment that is set up when you run the command yourself, helping to avoid those annoying cases where something works fine from the command line but breaks in cron. Jon Jensen went into much greater detail on the benefits of using -l here.
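
If you want to see the difference for yourself, one quick check (purely illustrative; the temporary file paths are placeholders) is to have cron dump its environment with and without the wrapper and diff the two files:

* * * * * env > /tmp/cron-env-bare.txt
* * * * * bin/cron-wrapper env > /tmp/cron-env-wrapped.txt

The wrapped version should show the fuller PATH and any variables your ~/.bash_profile exports. Hope this helps!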

published by noreply@blogger.com (Mike Farmer) on 2014-03-12 13:00:00 in the "DevOps" category

I recently needed to reconstruct an old development environment for a project I worked on over a year ago. The code base had aged a little and I needed old versions of just about everything from the OS and database to Ruby and Rails. My preferred method for creating a development environment is to setup a small Virtual Machine (VM) that mimics the production environment as closely as possible.

Introducing Packer

I have been hearing a lot of buzz lately about Packer and wanted to give it a shot for setting up my environment. Packer is a small command line tool written in the increasingly popular Go programming language. It serves three primary purposes:

  1. Building a machine based on a set of configuration parameters
  2. Running a provisioner to setup the machine with a desired set of software and settings
  3. Performing any post processing instructions on that machine

Packer is really simple to install and I would refer you to their great documentation to get it set up. Once installed, you will have the packer command at your disposal. To build a new machine, all you need to do is call

packer build my_machine.json

The file my_machine.json can be any JSON file and contains all the information packer needs to set up your machine. The configuration JSON has three major sections: variables, builders, and provisioners. Variables are simply key-value pairs that you can reference later in the builders and provisioners sections.
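
Two packer subcommands are worth knowing while you iterate on a template (assuming it is saved as my_machine.json, as above): packer validate checks the template for syntax and configuration errors before you spend time on a full build, and user variables can be overridden with -var at build time.

packer validate my_machine.json
packer build -var 'ssh_name=mikefarmer' -var 'hostname=packer-test' my_machine.json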

The Builder Configuration

The builders section takes an array of JSON objects that specify different ways to build your machines. You can think of them as instructions on how to get your machine set up and running. For example, to get a machine up and running you need to create a machine, install an operating system (OS), and create a user so that you can log in to the machine. There are many different types of builders, but for the example here, I'll just use the vmware-iso builder type. Here's a working JSON configuration file:

{
  "variables": {
    "ssh_name": "mikefarmer",
    "ssh_pass": "mikefarmer",
    "hostname": "packer-test"
  },

  "builders": [
    {
      "type": "vmware-iso",
      "iso_url": "os/ubuntu-12.04.4-server-amd64.iso",
      "iso_checksum": "e83adb9af4ec0a039e6a5c6e145a34de",
      "iso_checksum_type": "md5",
      "ssh_username": "{{user `ssh_name`}}",
      "ssh_password": "{{user `ssh_pass`}}",
      "ssh_wait_timeout": "20m",
      "http_directory" : "preseeds",
      "http_port_min" : 9001,
      "http_port_max" : 9001,
      "shutdown_command": "echo {{user `ssh_pass`}} | sudo -S shutdown -P now",
      "boot_command": [
        "<esc><esc><enter><wait>",
        "/install/vmlinuz noapic ",
        "preseed/url=http://{{ .HTTPIP }}:{{ .HTTPPort }}/precise_preseed.cfg ",
        "debian-installer=en_US auto locale=en_US kbd-chooser/method=us ",
        "hostname={{user `hostname`}} ",
        "fb=false debconf/frontend=noninteractive ",
        "keyboard-configuration/modelcode=SKIP keyboard-configuration/layout=USA ",
        "keyboard-configuration/variant=USA console-setup/ask_detect=false ",
        "initrd=/install/initrd.gz -- <enter>"
      ]
    }
  ]
}

The documentation for these settings is really good, but I want to point out a few things that weren't immediately clear. Some of these pertain mostly to the vmware-iso builder type, but they are worth covering because some of them apply to other builder types as well.

First, the iso_url setting can be an absolute path, a relative path, or a fully qualified URL. A relative path is relative to the directory where you run the packer command. So here, when I run packer, I need to make sure that I do so from a directory that has an os subdirectory with the Ubuntu ISO located therein.
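
A quick sanity check before kicking off a build (assuming you are on Linux and the ISO is already sitting in that os subdirectory) is to confirm its checksum matches the iso_checksum value from the template:

md5sum os/ubuntu-12.04.4-server-amd64.iso

A mismatch usually means a corrupted or different ISO, which is worth catching before the build starts.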

Next, once the ISO is downloaded, packer will automatically start up your VMware client and boot the virtual machine. Immediately after that, packer connects to the machine over VNC to type the boot command, and starts a mini web server to serve files to your machine. The http_port_min and http_port_max settings specify the range of ports the mini web server may use; setting them to the same value allocates just that one port. The http_directory setting provides the name of a local directory for the mini web server to use as its document root. This is important for providing your VM with a preseed file; more about the preseed file below.

Since the machine we are building runs Ubuntu, we need to use sudo to send the shutdown command. The shutdown_command setting is used to gracefully shut down the machine once packer has finished building and provisioning it.

Installing your OS

The boot_command is a series of keystrokes that packer sends to the machine via VNC. If you have set up a Linux machine from scratch, you know that you have to enter a bunch of information about how to configure it for the first time: time zone, keyboard layout, how to partition the hard drive, host name, and so on. All the keystrokes needed to set up your machine can be used here. But if you think about it, that's a ton of keystrokes and this command could get quite long. A better approach is to use a preseed file. A preseed.cfg file contains the same information you would enter when you set up a machine for the first time. This isn't something provided by packer; it is provided by the operating system to automatically provision machines. For Ubuntu, a preseed file is used like so:

  • When you boot from the startup media (in this case an iso), you can choose the location of the preseed file via a url
  • The preseed file is uploaded into memory and the configuration is read
  • The installation process begins using information from the preseed file to enter the values where the user would normally enter them.

So how do we get the preseed file up to the machine? Remember that little web server that packer sets up? Well, its IP and port are made available to the virtual machine when it boots from the ISO. The following line tells the OS where to find the web server and the configuration file:

 "preseed/url=http://{{ .HTTPIP }}:{{ .HTTPPort }}/precise_preseed.cfg"

Strings in packer can be interpolated using a simple template format similar to Mustache. The double curly braces tell packer to insert a variable there instead of literal text. The HTTPIP and HTTPPort variables are made available to the template by packer.

One more important note about the preseed file: you need to make sure that its settings for the username and password are the same as those listed in your variables section, so that you can log in to the machine once it is built. Where do you get a preseed file? I found one on a blog post titled Packer in 10 Minutes by @kappataumu. I only had to modify a few settings that were specific to my setup.
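
One quick, hypothetical way to double-check is to grep the preseed for its account settings and compare them with the ssh_name and ssh_pass variables; the debconf keys below are the usual Ubuntu ones, so adjust if your preseed names them differently:

grep -E 'passwd/(username|user-password)' preseeds/precise_preseed.cfg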

Remember that http_directory mentioned above? Well, that directory needs to include your preseed file. I've named mine precise_preseed.cfg for Ubuntu 12.04 Precise Pangolin.

Next up is provisioning, but that is such a big topic by itself that I'll move it into a separate blog post. The config file above will work as-is, and once run it should set up a basic Ubuntu server for you. Go ahead and give it a try and let me know in the comments how it worked out for you.

Super Powers

I said earlier that packer has 3 primary purposes. Well, I lied. Packer's super power is that it can perform those 3 purposes across any number of machines, whether virtual, hosted, or otherwise, in parallel. Supported platforms currently include:

  • Amazon EC2 (AMI)
  • DigitalOcean
  • Docker
  • Google Compute Engine
  • OpenStack
  • QEMU
  • VirtualBox
  • VMware

Consider for a moment that you can now automatically set up and provision multiple machines with the same environment using a single command. Now you are seeing the power of Packer.



published by noreply@blogger.com (Jon Jensen) on 2013-06-04 08:56:00 in the "DevOps" category

End Point continues to grow! We are looking for a full-time, salaried DevOps engineer to work on projects with our internal server hosting team and our external clients. If you like to figure out and solve problems, if you take responsibility for getting a job done well without intensive oversight, please read on.

What is in it for you?

  • Work from your home office
  • Flexible full-time work hours
  • Health insurance benefit
  • 401(k) retirement savings plan
  • Annual bonus opportunity
  • Ability to move without being tied to your job location
  • Collaborate with a team that knows their stuff

What you will be doing:

  • Remotely set up and maintain Linux servers (mostly RHEL/CentOS, Debian, and Ubuntu), with custom software written mostly in Ruby, Python, Perl, and PHP
  • Audit and improve security, backups, reliability, monitoring (with Nagios etc.)
  • Support developer use of major language ecosystems: Perl's CPAN, Python PyPI (pip/easy_install), Ruby gems, PHP PEAR/PECL, etc.
  • Automate provisioning with Ansible, Chef, Puppet, etc.
  • Use open source tools and contribute back as opportunity arises
  • Use your desktop platform of choice: Linux, Mac OS X, Windows

What you will need:

  • Professional experience with Linux system administration, networking, iptables, Apache or nginx web servers, SSL, DNS
  • A customer-centered focus
  • Strong verbal and written communication skills
  • Experience directing your own work, and working from home
  • Ability to learn new technologies
  • Willingness to shift work time to evening and weekend hours on occasion

Bonus points for experience:

  • Packaging software for RPM, Yum, and apt/dpkg
  • Managing Amazon Web Services, Rackspace Cloud, Heroku, or other cloud hosting services
  • Working with PostgreSQL, MySQL, Cassandra, CouchDB, MongoDB, or other databases
  • Complying with or auditing for PCI and other security standards
  • Using load balancers, virtualization (kvm, Xen, VirtualBox, VMware), FC or iSCSI SAN storage
  • With JavaScript, HTML/CSS, Java/JVM, Node.js, etc.
  • Contributing to open source projects

About us

End Point is a 17-year-old Internet consulting company based in New York City, with 35 full-time employees working mostly remotely from home offices. We serve over 200 clients ranging from small family businesses to large corporations, using a variety of open source technologies. Our team is made up of strong ecommerce, database, and system administration talent, working together using ssh, Screen and tmux, IRC, Google+ Hangouts, Skype, and good old phones.

How to apply

Please email us an introduction to jobs@endpoint.com to apply. Include a resume and your GitHub or other URLs that would help us get to know you. We look forward to hearing from you!

