Whenever you want to quickly bombard a URL with some concurrent traffic, you can use this:


import random
import time
import requests
import concurrent.futures


def _get_size(url):
    # Sleep a random amount (0-100ms) so the requests don't all
    # fire at exactly the same instant.
    sleep = random.random() / 10
    # print("sleep", sleep)
    time.sleep(sleep)
    r = requests.get(url)
    # print(r.status_code)
    assert len(r.text)
    return len(r.text)


def run(url, times=10):
    sizes = []
    futures = []
    with concurrent.futures.ThreadPoolExecutor() as executor:
        # Submit all the requests up front, then collect each
        # response size as soon as it finishes.
        for _ in range(times):
            futures.append(executor.submit(_get_size, url))
        for future in concurrent.futures.as_completed(futures):
            sizes.append(future.result())
    return sizes


if __name__ == "__main__":
    import sys

    print(run(sys.argv[1]))
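
Assuming you save it as something like stress.py (the file name is my own here), you run it with the URL as the only argument:

python stress.py http://localhost:8000/

It prints the list of response sizes once all 10 requests have completed.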

It's really basic but it works wonderfully. It submits 10 tasks to a thread pool, so they all hit the same URL at almost the same time.
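
As a side note, if you don't care about collecting results in completion order, the run function can be written even shorter with executor.map, which yields the results in submission order. This variant is just a sketch of the same idea:

def run(url, times=10):
    # executor.map fans the same URL out to the pool and
    # returns the sizes in the order they were submitted.
    with concurrent.futures.ThreadPoolExecutor() as executor:
        return list(executor.map(_get_size, [url] * times))
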
I've been using this to stress test a local Django server, to check that some writes to the file system are atomic.
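
The Django code under test isn't in this post, but for context, the kind of atomic file-system write such a stress test exercises typically looks something like this. This is only a sketch of the write-to-a-temp-file-then-rename pattern, not the actual code being tested:

import os
import tempfile


def atomic_write(path, data):
    # Write to a temporary file in the same directory, then
    # rename it into place. os.replace() is atomic on POSIX,
    # so concurrent readers see either the old content or the
    # new content, never a half-written file.
    fd, tmp_path = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
    try:
        with os.fdopen(fd, "w") as f:
            f.write(data)
        os.replace(tmp_path, path)
    except BaseException:
        os.unlink(tmp_path)
        raise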
