How to solve Django memory leak

Jo-Yu Liao
Published in Level Up Coding · 3 min read · Jun 7, 2020
Django 2 + uWSGI + Nginx + AWS (ECR + ECS + EC2 + ALB + VPC + CloudFront)


Recently, we ran into a memory leak after deploying a small Django application on AWS. Fortunately, we eventually solved the problem by modifying uwsgi.ini.

Before running into this problem, we had dockerized the application with uWSGI and Nginx, built an image, and pushed it to AWS ECR via CI/CD. After that, we deployed it with VPC, ALB, ECS, EC2, and CloudFront.

Problem

At first, it worked as expected. However, after running for several hours, we started getting 500 errors. After doing some research, we found that the application's memory usage was almost at 100%. Therefore, we increased the memory and set up a soft limit. Nevertheless, we found that as the request count increased, the memory usage increased and never decreased. In other words, we had a memory leak.


We could have used some tools to find the cause, such as tracemalloc or objgraph, or we could have set DEBUG to False. However, we didn't have enough time to track down the bug with those tools, and even after we set DEBUG to False, the problem persisted.
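If you do have the time, tracemalloc from the standard library can rank allocation sites by size. Here is a minimal sketch — the list comprehension is just a stand-in for whatever your view actually allocates:

```python
import tracemalloc

tracemalloc.start()

# Stand-in for the suspect code path (e.g. handling one request)
data = [str(i) * 100 for i in range(10_000)]

snapshot = tracemalloc.take_snapshot()
# Print the top 5 allocation sites, largest first
for stat in snapshot.statistics("lineno")[:5]:
    print(stat)
```

Taking a snapshot before and after a request and comparing them (snapshot.compare_to) is the usual way to spot which lines keep growing.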

Explanation

Fortunately, we found an explanation from this article — django memory leaks, part I.

Does Django leak memory?
In actual fact, No. It doesn’t. The title is therefore misleading. I know. However, if you’re not careful, your memory usage or configuration can easily lead to exhausting all memory and crashing django. So whilst django itself doesn’t leak memory, the end result is very similar.

What does this mean? In short, the article says that a Django process starts with a small memory footprint and loads objects into memory as requests come in. Once the process finishes handling a request, it clears those objects from memory and goes back to being 'empty'. However, if a request is big, the process's footprint grows and never shrinks back. Consequently, if several BIG requests arrive at roughly the same time, not just one but several Django processes inflate, and those processes then compete for memory on the server.


Solution

OK, to solve this problem we need to restart the workers under certain conditions. Therefore, we followed Configuring uWSGI for Production Deployment and modified uwsgi.ini by adding the following settings:

# Worker Management
max-requests = 1000 ; Restart workers after this many requests
max-worker-lifetime = 3600 ; Restart workers after this many seconds
reload-on-rss = 512 ; Restart workers after this much resident memory (in MB)
worker-reload-mercy = 60 ; How long to wait before forcefully killing workers

Result

As you can see, before deploying the new Docker image, the memory usage (blue line) grew as the request count (red line) increased and never decreased. After deploying, the memory usage goes back down once the requests have been served.

The memory usage and request count

Hope this article helps you solve your problem :).
