Over the last few years I have seen the emergence of several practices that are typically labelled 'best practices for cloud hosting'. I have been involved in quite a few transitions from dedicated data centers to cloud-based hosting providers, and even just to different data centers, so I will be posting about these practices as a series.
As the first part of the series, I am documenting the common problems most sysadmins face when they have to transition an application from a legacy data center to a different environment. After years of code and hacks, the code base usually depends on 'assumed' paths and environment variables. The other problem people face is when the application must be hosted on multiple servers: to install the latest version of your website, you need to be able to roll out the new code simultaneously on all the nodes in the cluster, and to roll back the same way. A simple svn/git command is not going to be enough for that; it requires quite a bit more.
In addition to the above problems, other things go hand in hand with code deployment, like your web server configuration (if your application code depends on things like Apache modules or rewrite rules) or other configs such as cron jobs. You might also want these to be updated, and the corresponding services restarted, when the new version of the code is deployed.
Packaging up the website code as an OS package solves most of the above problems. You can make an RPM or a Debian package out of your website and push it to your local repository, which can then be used to install, or roll back to, a particular version of your website code.
Since Ubuntu is my OS of choice (at least for now), this post will talk about how to build a Debian package. I might write a follow-up post on building RPM packages, which turns out to be quite similar.
Install the required packages
$ apt-get install dh-make devscripts
Now configure the dh_make command by adding some variables to your .bashrc file.
$ cat >> ~/.bashrc <<EOF
DEBEMAIL=firstname.lastname@example.org
DEBFULLNAME="Firstname Lastname"
export DEBEMAIL DEBFULLNAME
EOF
$ . ~/.bashrc
Check out your code into a working directory with the structure shown below. This is where you will be working and creating the additional files that the Debian packaging tools use to build your package. Let's say our project is called top-secret.
$ mkdir -p top-secret/top-secret-1.0
$ cd top-secret/top-secret-1.0
$ svn co <path to repo>/trunk .
$ cd ..
$ tar -cvzf top-secret-1.0.tar.gz top-secret-1.0
The 1.0 is just an indicative version; once you have understood the process, you can automate creating the versioned directories with tools like phing or ant.
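The checkout-and-tar steps above can be wrapped in a small shell script so the versioned directory and tarball are created in one go. A sketch; the project name, default version, and the commented-out svn export line are assumptions to adapt:

```shell
#!/bin/sh
# build-source.sh -- hypothetical helper to prepare the versioned source tree.
set -e

PROJECT=top-secret          # assumed project name
VERSION=${1:-1.0}           # pass the release version as the first argument

# Create the versioned working directory expected by dh_make.
mkdir -p "$PROJECT/$PROJECT-$VERSION"
cd "$PROJECT/$PROJECT-$VERSION"

# Export (rather than check out) so no .svn metadata lands in the package:
# svn export <path to repo>/trunk .

# Build the upstream tarball one level up, next to the source directory.
cd ..
tar -czf "$PROJECT-$VERSION.tar.gz" "$PROJECT-$VERSION"
```

Hooking this into phing/ant is then just a matter of calling the script with the version number your build produces.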
Now create the initial template files:
rp@chlorine:~/packaging/top-secret/top-secret-1.0$ dh_make -f ../top-secret-1.0.tar.gz
Type of package: single binary, indep binary, multiple binary, library, kernel module, kernel patch or cdbs? [s/i/m/l/k/n/b] s

Maintainer name  : Rajat Pandit
Email-Address    : email@example.com
Date             : Tue, 24 May 2011 09:08:35 +0100
Package Name     : hello-sh
Version          : 1.0
License          : blank
Using dpatch     : no
Type of Package  : Single
Hit <enter> to confirm:
Currently there is no top level Makefile. This may require additional tuning.
Done. Please edit the files in the debian/ subdirectory now. You should also
check that the hello-sh Makefiles install into $DESTDIR and not in / .
This process also builds a new .orig tarball:
rp@chlorine:~/packaging/top-secret$ ls -l
total 12
drwxr-xr-x 3 rp rp 4096 2011-05-24 09:08 top-secret-1.0
-rw-r--r-- 1 rp rp  167 2011-05-24 09:00 top-secret_1.0.orig.tar.gz
-rw-r--r-- 1 rp rp  167 2011-05-24 09:00 top-secret-1.0.tar.gz
It will also create a bunch of files under a new folder:
debian/
├── top-secret.cron.d
├── changelog
├── compat
├── control
├── copyright
├── docs
├── files
├── install
├── postinst
├── postrm
├── README
├── rules
└── source
    └── format
These are the files the packaging tools use to build a Debian package.
You can learn what each file does in the Debian maintainers' guide at http://www.debian.org/doc/manuals/maint-guide/dother.en.html, so I won't repeat that here.
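For orientation, a minimal debian/control for a site package could look like the following. The field values here, and the apache2 dependency in particular, are placeholders to adapt to your own project:

```
Source: top-secret
Section: web
Priority: optional
Maintainer: Firstname Lastname <firstname.lastname@example.org>
Build-Depends: debhelper (>= 7)
Standards-Version: 3.9.1

Package: top-secret
Architecture: all
Depends: ${misc:Depends}, apache2
Description: top-secret website code
 Deployable snapshot of the top-secret website.
```

Declaring your web server as a dependency means a bare node pulls it in automatically on first install.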
The only important thing you need to remember is that the directory layout outside the debian folder should mirror the install path. This means that if your code has to go in /opt/vs/websites/top-secret, the code should live in top-secret-1.0/opt/vs/websites/top-secret. Also, the install file should state which folder needs to be copied where, one line per folder. For example, if my code contains
├── b
│   ├── d
│   │   └── d
│   └── foo.php
└── bla.php
and I want 'b' to be installed in /var/www/b and bla.php in /var/www/bin/bla.php, then my install file would look like this:
b /var/www/b
bla.php /var/www/bin/bla.php
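As mentioned earlier, you may also want services restarted when new code lands, and the maintainer scripts are the place for that. Here is a sketch that drops a hypothetical postinst into the debian folder; the apache2 reload is an assumption, so wire it to whichever service your shipped configs belong to:

```shell
# Create a hypothetical debian/postinst; dpkg runs it after unpacking the files.
mkdir -p debian
cat > debian/postinst <<'EOF'
#!/bin/sh
set -e

case "$1" in
    configure)
        # Reload apache so shipped vhost/rewrite configs take effect.
        if [ -x /etc/init.d/apache2 ]; then
            /etc/init.d/apache2 reload || true
        fi
        ;;
esac

exit 0
EOF
chmod +x debian/postinst
```

The matching prerm/postrm hooks can undo the same changes when a version is removed during rollback.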
Building the package is also very simple. If you don't want to worry about signing your packages then you can just use the following command:
$ debuild -us -uc
and you will get a .deb file at the end. You can check the contents of the Debian package using
$ dpkg -c top-secret-1.0.deb
and that will list the contents of the Debian package along with each file's install location.
Now to make this process a little more streamlined you can do a few other things:
- Set up a local Debian repository, accessible by all web servers, ideally served over HTTP
- Point the apt sources on each web server at this repository so they all know about it
- Automate running the commands, modifying the Debian configuration files, etc. with ant or phing
- Take the automation to the next level and call the scripts from Hudson
This will give you a nice workflow, with Hudson's wonderful UI for managing the build of the web package and its deployment to the web servers.
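For the repository part, a flat apt repository is usually enough for internal use. A minimal sketch, where the repo path and the hostname in the sources line are assumptions (dpkg-scanpackages ships in the dpkg-dev package):

```shell
# Build a flat package index in the directory the web server exports.
REPO_DIR=${REPO_DIR:-./repo}    # e.g. /var/www/repo in production
mkdir -p "$REPO_DIR"
cp ./*.deb "$REPO_DIR"/ 2>/dev/null || true   # copy any freshly built packages
cd "$REPO_DIR"

# Generate Packages.gz so apt can discover the .deb files over HTTP.
dpkg-scanpackages . /dev/null | gzip -9c > Packages.gz
```

Each web server then gets a matching sources.list entry such as "deb http://repo.example.com/repo ./" (hostname is a placeholder), after which an apt-get update makes the top-secret package installable everywhere.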
If you have further ideas on where this could be used, please share your thoughts in the comments. I would like to credit the initial idea of packaging websites as RPMs to the lovely sysadmins at IPCMedia; I adapted their approach to work on the Ubuntu platform.