Django is fast. Out of the box, it handles thousands of requests per second on modest hardware. But production Django can be dramatically faster with a handful of targeted changes. These are the optimisations we actually apply — not academic exercises, but things that make measurable differences on real apps.
1. Eliminate N+1 queries first
If you do nothing else from this list, do this. N+1 queries are the single most common performance problem in Django applications and they're completely avoidable.
Use select_related() for ForeignKey and OneToOne relationships, and prefetch_related() for ManyToMany and reverse ForeignKey lookups:
# Bad: one query for the list, then one query per article
articles = ArticlePage.objects.all()
for article in articles:
    print(article.author.name)  # N extra queries

# Good: a single query with a JOIN
articles = ArticlePage.objects.select_related('author').all()
Install django-debug-toolbar in development and set a query count threshold. Any view executing more than 10 database queries should be investigated.
2. Add the right database indexes
Django's ORM won't automatically index fields you filter or order on. Every filter(), order_by(), or get() on an unindexed column becomes a sequential table scan as your data grows.
class ArticlePage(Page):
    publication_date = models.DateTimeField(
        db_index=True  # ← add this
    )
    topic = models.ForeignKey(
        'Topic',
        on_delete=models.PROTECT,
        # ForeignKey columns get an index automatically — no db_index needed
    )
For composite queries, use Meta.indexes:
class Meta:
    indexes = [
        models.Index(fields=['-publication_date', 'topic']),
    ]
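Which column comes first depends on the query you're serving: for `filter(topic=...).order_by('-publication_date')`, leading with `topic` lets one index satisfy both the equality filter and the sort. A stand-in sketch with stdlib `sqlite3` — Postgres plans similarly, and the table and index names here are invented for illustration:

```python
# A composite index leading on the filtered column serves the filter
# and the sort together; EXPLAIN confirms the planner uses it.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE article (id INTEGER PRIMARY KEY, topic_id INTEGER, publication_date TEXT)"
)
conn.execute(
    "CREATE INDEX article_topic_date ON article (topic_id, publication_date DESC)"
)

plan = conn.execute(
    "EXPLAIN QUERY PLAN "
    "SELECT id FROM article WHERE topic_id = ? ORDER BY publication_date DESC",
    (1,),
).fetchall()
print(plan)  # the plan should name article_topic_date rather than a full scan
```

In production, run `EXPLAIN ANALYZE` on the real PostgreSQL query before and after adding the index to confirm the scan type actually changed.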
3. Cache at the right level
There are three caching levels in Django. Use all three:
- Per-view caching — for views that serve the same response to all users. @cache_page(60 * 15) is all you need.
- Template fragment caching — for expensive template sections. Use the {% cache %} template tag, with a vary-on argument (such as the user's pk) when the fragment differs per user.
- Queryset caching — for expensive aggregations or rarely-changing data. Cache the result in Redis rather than the database cache backend.
Configure Redis as your cache backend:
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.redis.RedisCache',  # Django 4+
        'LOCATION': os.environ['REDIS_URL'],  # e.g. redis://localhost:6379/1 — the db number lives in the URL
    }
}
4. Move slow work to Celery
Any operation that takes more than 100ms and doesn't need to happen synchronously should be a Celery task. Email sending, PDF generation, webhook delivery, third-party API calls, image processing — all of these should be offloaded.
@app.task(bind=True, max_retries=3, default_retry_delay=60)
def send_welcome_email(self, user_id):
    try:
        user = User.objects.get(pk=user_id)
        send_mail(
            subject='Welcome to the platform',
            message=render_to_string('emails/welcome.txt', {'user': user}),
            from_email=settings.DEFAULT_FROM_EMAIL,
            recipient_list=[user.email],
        )
    except Exception as exc:
        raise self.retry(exc=exc)

# In the view: send_welcome_email.delay(user.id) returns immediately
5. Use only() and defer() deliberately
By default, Django fetches every column in a table. On models with large text fields or BLOBs, this wastes significant memory. Use only() to fetch just the columns a list view needs; defer() is the inverse, naming the heavy columns to skip. Either way, accessing a deferred field later triggers an extra query per instance, so only defer fields the view genuinely never touches:
# Listing view — don't load the body field
articles = ArticlePage.objects.only(
    'title', 'slug', 'publication_date', 'introduction'
).select_related('author')
6. Configure database connection pooling
Django's default behaviour opens a new database connection per request. On high-traffic sites, this creates connection overhead and can exhaust PostgreSQL's connection limit. Use PgBouncer or Django's built-in persistent connections:
DATABASES = {
'default': {
...
'CONN_MAX_AGE': 60, # keep connections open
'CONN_HEALTH_CHECKS': True, # Django 4.1+
}
}
7. Serve static files correctly
Never serve static files through Django in production — it's slow and wastes Python worker capacity. Use WhiteNoise for efficient static file serving directly from your application server, or push to a CDN.
MIDDLEWARE = [
'django.middleware.security.SecurityMiddleware',
'whitenoise.middleware.WhiteNoiseMiddleware', # right after security
...
]
STORAGES = {  # Django 4.2+; older versions use STATICFILES_STORAGE
'staticfiles': {
'BACKEND': 'whitenoise.storage.CompressedManifestStaticFilesStorage',
}
}
8. Profile queries in production
Debug toolbar only runs in development. For production query profiling, turn on PostgreSQL's slow query log (log_min_duration_statement = 100 logs anything over 100ms), or inspect django.db.connection.queries in staging — note it is only populated when DEBUG=True. We also flag slow views from middleware and send them to Sentry:
import logging
import time

logger = logging.getLogger(__name__)

class SlowViewMiddleware:
    def __init__(self, get_response):
        self.get_response = get_response

    def __call__(self, request):
        t_start = time.monotonic()
        response = self.get_response(request)
        duration = time.monotonic() - t_start
        if duration > 0.5:
            logger.warning("Slow view: %s took %.2fs", request.path, duration)
        return response

# Register it in settings.MIDDLEWARE to enable
9. Use annotate() instead of Python aggregation
Aggregations done in Python after fetching rows are slow and memory-hungry. Push the work to PostgreSQL with annotate():
# Bad: fetches all users, then runs one COUNT query per user
users = User.objects.all()
for user in users:
    count = user.orders.count()  # N+1

# Good: single query with COUNT in SQL
from django.db.models import Count

users = User.objects.annotate(
    order_count=Count('orders')
).order_by('-order_count')
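The annotate() version compiles to a single LEFT JOIN with GROUP BY. As a rough stand-in for what PostgreSQL executes, here is the equivalent SQL run through stdlib `sqlite3` — the schema and data are invented for the demo:

```python
# Rough SQL equivalent of annotate(order_count=Count('orders')):
# one LEFT JOIN + GROUP BY instead of one COUNT query per user.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, user_id INTEGER REFERENCES users(id));
    INSERT INTO users VALUES (1, 'alice'), (2, 'bob');
    INSERT INTO orders (user_id) VALUES (1), (1), (1), (2);
""")

rows = conn.execute("""
    SELECT u.name, COUNT(o.id) AS order_count
    FROM users u
    LEFT JOIN orders o ON o.user_id = u.id
    GROUP BY u.id
    ORDER BY order_count DESC
""").fetchall()
print(rows)  # [('alice', 3), ('bob', 1)]
```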
10. Add application-level monitoring
You can't optimise what you can't measure. Before shipping any Django app to production, we add:
- Sentry — error tracking with full stack traces and performance monitoring
- Django-prometheus or Datadog APM — request duration, query counts, cache hit rates as metrics
- Health check endpoint — /healthz/ that validates database connectivity, cache availability, and any critical external services
These ten optimisations aren't exhaustive, but they're the ones that consistently make the most difference on the Django applications we build and inherit. Apply them in roughly this order of priority, measure the impact, and you'll find most performance problems disappear before you need to reach for anything more exotic.