Short answer: about 5%
I had a few minutes and wanted to see whether switching from Apache + mod_wsgi to Nginx + gunicorn would make an otherwise slow site any faster. It's not this site but another Django site for work (which, by the way, doesn't have to be fast). It's slow because it doesn't cache any of its SQL queries.
```
# with Apache + mod_wsgi
$ ab -n 1000 -c 10 http://thelocaldomain/
...
Requests per second:    39 [#/sec] (mean)
...
# uses about 110 MB
```
That's after running it multiple times and roughly averaging the requests per second.
```
# with Nginx + gunicorn --workers=4
$ ab -n 1000 -c 10 http://thelocaldomain/
...
Requests per second:    41 [#/sec] (mean)
...
# uses about 70 MB
```
So, if you want to make a site fast, forget about how the code is being served until all the slow DB I/O is taken care of properly.
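Since the bottleneck here is the uncached SQL queries, the kind of fix that actually moves the needle looks roughly like this. A minimal sketch using Django's low-level cache API; the app, model, and cache key are made up for illustration:

```python
from django.core.cache import cache
from myapp.models import Entry  # hypothetical app and model


def recent_entries():
    """Return the ten latest entries, hitting the database at most once a minute."""
    entries = cache.get("recent-entries")
    if entries is None:
        # list() forces the queryset so the rows themselves get cached,
        # not just the lazy queryset object.
        entries = list(Entry.objects.order_by("-published")[:10])
        cache.set("recent-entries", entries, 60)  # cache for 60 seconds
    return entries
```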
Comments
My deploy is nginx + uwsgi
So is mine, actually, on this site. I measured it before on fast sites and uwsgi came out on top.
However, I do have to agree with your last statement: first take care of the rest. No assumptions until you have a solid mechanism to test the difference.
I'm so sorry. I accidentally deleted your previous comment when I meant to click the Approve button (admin feature). However, I do agree that the test was quite inconclusive about Apache vs. Nginx, but it did prove something: getting the kit right before the app even hits the database only buys you about 5%.
To put such stuff in context, it's perhaps worthwhile watching my PyCon talk on web server bottlenecks. Differences at the web server level are very small in the greater scheme of things, and you can waste a lot of time at that level when there are easier things you can do to improve user satisfaction. Video and slides linked from http://lanyrd.com/2012/pycon/spcdg/
Great! I watched it all just now.
What's funny is how you despise hello-world apps for testing (just like me), but you say "That got me pretty excited" when asked about Apache 2.4's event-something and how you had tested a hello-world app on your laptop. So even the likes of you sometimes enjoy those surreal benchmarks. :)
Another thing is that I suspect a LOT of people judge their tools' performance based on the default configurations. And maybe rightly so. Messing around with the defaults requires quite an intimate knowledge of the tool; you sort of rely on the authors knowing what they're doing. For example, Nginx is faster than Varnish out of the box at serving static files such as .css files. Fully tuned and polished, I suspect Varnish can overtake it. Not having to fully understand Varnish configs gives me a boost for free.
I tend to disagree a bit with the last sentence: if you use a SQL driver like PyMySQL and run gunicorn with the gevent worker, you will notice a dramatic boost under high loads.
You can probably do the same stuff through mod_wsgi, but gunicorn configuration is pretty straightforward.
IOW, the stack counts for a lot when you're running I/O-bound Python apps.
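For anyone curious what that looks like in practice: a gunicorn config file is just Python, so a minimal sketch of the gevent setup described above could be the following (values are illustrative, not the commenter's actual settings):

```python
# gunicorn_conf.py -- illustrative values only
bind = "127.0.0.1:8000"
workers = 4                 # same worker count as the benchmark above
worker_class = "gevent"     # cooperative workers instead of the default sync workers
worker_connections = 1000   # how many simultaneous greenlets each worker may handle
```

Started with something like `gunicorn -c gunicorn_conf.py myproject.wsgi:application`, where `myproject.wsgi` stands in for the real WSGI module.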
I agree that DB I/O is going to play a much larger part in the overall performance, but once you've addressed that there's still the potential to gain substantially from switching from Apache + mod_wsgi to something like gunicorn + gevent.
On my site (which uses memcached to address the SQL bottleneck), I found a 13.5x performance increase after making this change. Setup and results here:
http://cmyk.sevennineteen.com/blogs/code/running-django-gunicorn-webfaction/
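For context, pointing Django's cache framework at memcached is a one-setting change. A rough sketch; the backend, host, and port are assumptions, not taken from the linked write-up:

```python
# settings.py sketch: Django caching backed by a local memcached instance.
# The backend name assumes a Django version that ships the pymemcache backend;
# older versions used MemcachedCache instead.
CACHES = {
    "default": {
        "BACKEND": "django.core.cache.backends.memcached.PyMemcacheCache",
        "LOCATION": "127.0.0.1:11211",
    }
}
```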
Do you mind sharing your Apache + mod_wsgi configuration?