I have and have had many sites that I run. They're all some form of side-project.
What almost all of them have in common is two things:
- They have very little traffic (thus not particularly mission critical)
- I run everything on one server (no need for "spinning up" new VMs here and there)
Many, many years ago, when the interns I now work with were mere babies, I started a very simple "procedure":
- On the server, in the user directory where the site is deployed, I write an executable script called something like upgrade_myproject.sh, which does exactly what its name says: it upgrades the site.
- In the server's root home directory I write a script called restart_myproject.sh, which also does exactly what its name says: it restarts the service.
- On my laptop, in my ~/bin directory, I create a script called UpgradeMyproject.sh (*) which runs upgrade_myproject.sh on the server and then runs restart_myproject.sh, also on the server.
And here is, if I may say so, the cleverness of it: I use ssh to execute these scripts remotely, by simply piping the commands to ssh. For example:
#!/bin/bash
echo "./upgrade_generousfriends.sh" | ssh -A django@ec2-54-235-210-62.compute-1.amazonaws.com
echo "./restart_generousfriends.sh" | ssh root@ec2-54-235-210-62.compute-1.amazonaws.com
That's an example I use for Wish List Granted.
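The reason this works is that ssh, when given no remote command and a non-tty stdin, starts a shell on the remote machine that reads commands from standard input — the same thing a local bash does. You can see the mechanism without any server at all by piping into a plain bash, which stands in for ssh here purely for illustration:

```shell
# Piping a command into a shell, exactly as `echo "./upgrade.sh" | ssh host`
# does remotely -- a local bash stands in for ssh so this runs anywhere.
out=$(echo 'echo "hello from a piped command"' | bash)
echo "$out"   # prints: hello from a piped command
```

Anything you could type at the remote prompt can be piped in the same way, which is all these little deploy scripts really do.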
This works so darn well, and has done for years, that this is why I've never really learned to use more advanced tools like Fabric, Salt, Puppet, Chef or <insert latest deployment tool name>.
This means that all I need to do to run a deployment is type UpgradeMyproject.sh[ENTER] and the simple little bash scripts take care of everything else.
The reason I keep these scripts on the server, and not on my laptop, is simply that that's where they naturally belong; and if I'm ssh'ed in and messing around, I don't have to exit out to re-run them.
Here's an example of the upgrade_generousfriends.sh
I use for Wish List Granted:
#!/bin/bash
cd generousfriends
source venv/bin/activate
git pull origin master
find . | grep '\.pyc$' | xargs rm -f
pip install -r requirements/prod.txt
./manage.py syncdb --noinput
./manage.py migrate webapp.main
./manage.py collectstatic --noinput
./manage.py compress --force
echo "Restart must be done by root"
I hope that, by blogging about this, someone else sees that it doesn't really have to be that complicated. It's not rocket science, and the more complex tools are only really needed at a significantly bigger scale in terms of people- and skill-complexity.
In conclusion
Keep it simple.
(*) The capitalization of my own scripts is also an old habit; I use it to differentiate my scripts from stuff I install from third parties.
Comments
Awesome tip. Very simple indeed.
In the spirit of KISSing, you can change
"find . | grep '\.pyc$' | xargs rm -f"
to
"find . -type f -name '*.pyc' -delete"
unless you are running a really old distribution. (BSD find on OS X and GNU find on Linux have supported this for years.)
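For the cautious, it's easy to convince yourself the `-delete` form does the same job; a throwaway check in a scratch directory (the file names here are made up):

```shell
# Create a scratch tree with two .pyc files and one .py file,
# then delete the .pyc files with the simpler -delete form.
tmp=$(mktemp -d)
mkdir -p "$tmp/sub"
touch "$tmp/a.pyc" "$tmp/b.py" "$tmp/sub/c.pyc"
find "$tmp" -type f -name '*.pyc' -delete
# b.py and the sub/ directory remain; both .pyc files are gone
```

Besides being shorter, the `-delete` form avoids the classic xargs pitfall of file names containing spaces or newlines.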
Thanks!
Hi Peter,
You could also write a slightly longer but easier-to-maintain version, in case it ever grows or config details such as the server name change (I also like to think it's more "literate" when you read just the bit that does the work):
#!/bin/bash
SERVER="ec2-54-235-210-62.compute-1.amazonaws.com"
SSH_DJANGO="ssh -A django@$SERVER"
SSH_ROOT="ssh root@$SERVER"
$SSH_DJANGO ./upgrade_generousfriends.sh
$SSH_ROOT ./restart_generousfriends.sh
If the deploy scripts for your sites really are all the same, "generousfriends" could be fetched from the top-level directory name, too... and then this could become just one generic shell function deploy() in your laptop's .bashrc instead of a separate file copy in each project folder. I bet the SERVER variable would fit nicely in .bashrc, too, as it could be reused in other little helper scripts and you wouldn't need to copy it around by value. But my approach adds a tiny little bit of "magic" (where did that "deploy" I just ran on the shell come from?), so if you want things to be completely straightforward, I see why you do it your way.
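A sketch of that generic function, assuming every project follows the upgrade_&lt;name&gt;.sh / restart_&lt;name&gt;.sh convention and that &lt;name&gt; matches the directory you run it from — both assumptions, so adjust to taste:

```shell
# Hypothetical deploy() for ~/.bashrc: derives the project name from the
# current directory and runs the matching upgrade/restart scripts remotely.
SERVER="ec2-54-235-210-62.compute-1.amazonaws.com"

deploy() {
    local project
    project=$(basename "$PWD")
    # Upgrade as the app user first; only restart (as root) if that succeeds.
    echo "./upgrade_${project}.sh" | ssh -A "django@$SERVER" &&
    echo "./restart_${project}.sh" | ssh "root@$SERVER"
}
```

You would then just type deploy from inside, say, a checkout directory named generousfriends.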
I write tons of these shell automation scripts, and I agree that in many places where configuration management tools are used they're complete overkill and a solution in search of a problem (i.e. I would not choose to use them for fewer than a thousand VMs or hosts under management, yet I see them used for as few as a couple dozen VMs, where IMHO they add no real value). But maybe the appeal is that they let current DevOps (web developers without a sysadmin available and with no sysadmin experience) write Ruby or Python instead of any bash at all: sysadmin work becomes just another problem that can be solved with a web-dev toolchain, using a cool library that abstracts over all the boring and tricky shell commands. When all you have is a hammer...
I think you "get it". Keeping it simple is the way to go. And "literate" is an appropriate description.
The point is that it's a tiny executable README file basically. It's easy to understand and to maintain.