
I have a Django app hosted using nginx-uwsgi. Here is my uwsgi configuration:

[uwsgi]
 master          = true 
 socket          = /var/uwsgi/uwsgi.sock
 chmod-socket    = 666
 chdir           = /home/ubuntu/test
 wsgi-file       = /home/ubuntu/test/test/wsgi.py
 virtualenv      = /home/ubuntu/virtual
 vacuum          = true
 enable-threads  = true
 daemonize       = /home/ubuntu/uwsgi.log

I'm getting this error in the nginx log:

2017/06/16 04:25:42 [error] 26129#0: *1141328 connect() to unix:///var/uwsgi/uwsgi.sock failed (11: Resource temporarily unavailable) while connecting to upstream, client: xxx.xxx.xx, server:

and the site shows 502 Bad Gateway. I have to restart uWSGI to fix it, but the frequency of the error is increasing. Is there any way to fix this?

2 Answers


This error shows up when there is heavy load on the server. First I tried increasing the value of worker_connections in Nginx, but it didn't work. The listen queue size for uWSGI is 100 by default, so when more than 100 requests from Nginx to uWSGI are waiting, the queue fills up and Nginx returns 502 to the client. To solve this, increase uWSGI's listen queue size by adding "listen = {required queue size}" to the uwsgi.ini file. In my case, I wrote listen = 200.
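
For reference, applied to the configuration from the question it would look roughly like this (a sketch; 200 is just the value I used, so size the backlog for your own traffic):

 [uwsgi]
 # ... existing settings stay as they are ...
 master          = true
 socket          = /var/uwsgi/uwsgi.sock
 # raise the listen queue (backlog) from the default of 100
 listen          = 200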

But before doing that, check the kernel's socket backlog limit with $ cat /proc/sys/net/core/somaxconn. By default its value is 128, so raise it as well, either with $ echo 200 > /proc/sys/net/core/somaxconn (as root) or with $ sysctl -w net.core.somaxconn=200.
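
Both of those commands take effect immediately but do not survive a reboot. On most distributions you can persist the setting roughly like this (a sketch; the exact file location, e.g. /etc/sysctl.d/, can vary by system):

 $ echo 'net.core.somaxconn = 200' | sudo tee -a /etc/sysctl.conf
 $ sudo sysctl -p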


I agree with @Vicky Gupta's answer, but I want to show how you can see what is happening. In a shell on the host where uwsgi/nginx are running, run:

watch -n0.3 ss -xl

The ss command shows live socket usage. -x means Unix sockets, and -l means sockets in the LISTEN state. For this discussion we don't care about all the random, unnamed sockets that uwsgi opens (those are likely in state ESTABLISHED), only the one named, in your case, uwsgi.sock. You should see results like the following.

[Screenshot: output of ss for the uwsgi socket in the LISTEN state]

See, e.g., https://www.linux.com/topic/networking/introduction-ss-command/ (or any decent resource) for more on the output of the ss command. The key metric here is the value under Recv-Q. Typically this value sits at 0, unless you watch the results refresh fast enough (e.g. every 0.3 seconds) or your application is under a lot of load.

(For some reason, in my uwsgi / nginx setup, uwsgi seems to always leave the Send-Q at its maximum. I'm not sure whether I'm doing something wrong or it's something else, but it doesn't matter here.)

Run your load test while watching the value of Recv-Q. If it climbs to 100 and then stops, even as you continue to increase workers / load, and at the same time "resource temporarily unavailable" starts appearing in your nginx logs, that is very strong evidence you are hitting the listen queue limit.
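
If the full ss listing is noisy, you can narrow it down to the header row plus the one socket you care about (assuming it is named uwsgi.sock, as in the question):

watch -n0.3 "ss -xl | grep -E 'Recv-Q|uwsgi.sock'"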

In general, the number of concurrent requests seems to match the number of items sitting in the Recv-Q. In my case, I had 2 API instances handling requests, and I could reproduce "resource temporarily unavailable" almost exactly when I scaled up to around 200+ requests/second while my listen queue was 100.


By the way, here is a Python script I threw together for load-testing my application, in case you need one.

python load-test.py 150

# load-test.py

import sys
import os
import time
import requests
import functools
import concurrent.futures
import multiprocessing


def make_request(id_):
    # Each worker thread hammers the API in a tight loop until the process is killed.
    while True:
        try:
            print(f"Thread {id_}: requesting")
            resp = requests.get(
                f"{os.environ['API_URL']}/my/test/api/",
                headers={"Authorization": os.environ["AUTH_HEADER"]},
            )
        except Exception as e:
            print(f"Thread {id_}: error: {e}")
        else:
            print(f"Thread {id_}: resp: {resp}")


def manager(num_workers=None):
    # Spin up a thread pool and give every worker thread its own request loop.
    print(f"Started new main process: {os.getpid()}")
    kwargs = dict(max_workers=num_workers) if num_workers else dict()
    with concurrent.futures.ThreadPoolExecutor(**kwargs) as executor:
        executor.map(make_request, range(executor._max_workers))


if __name__ == "__main__":

    # Worker count comes from the first CLI argument, defaulting to the CPU count.
    if len(sys.argv) > 1:
        num_workers = int(sys.argv[1])
    else:
        num_workers = os.cpu_count() or 1
        print(f"Choosing {num_workers} workers, the same as detected CPU count")

    # Run the thread pool in a child process so Ctrl-C tears everything down cleanly.
    _manager = multiprocessing.Process(
        target=functools.partial(manager, num_workers=num_workers)
    )
    try:
        _manager.start()
        while True:
            time.sleep(1)
    except KeyboardInterrupt:
        _manager.terminate()
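
The script assumes two environment variables, API_URL and AUTH_HEADER, and /my/test/api/ is just my own test endpoint, so substitute yours. A full invocation looks something like this (the URL and token are placeholders):

API_URL=https://example.com AUTH_HEADER='Bearer my-token' python load-test.py 150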
