Speed up Rails with Nginx’s Reverse Proxy Cache

Nginx is a great web server for Rails apps, but did you know it also has powerful caching abilities? Here’s how to turn on reverse proxy caching in Nginx for an impressive performance boost.

What is a reverse proxy cache?

A reverse proxy cache is a web server that sits between the public internet and your Rails app. In the basic scenario (i.e. without the “cache” part), this extra web server is completely transparent: user requests pass through the web server, to Rails, and then responses go back out to the user.

But when Rails marks a response as “public and cacheable”, the proxy can automatically store it for future requests. If a subsequent request matches a previously cached response, the web server answers immediately with the cached copy; the Rails app isn’t even touched!

The result is much faster response times, but only if pages are cacheable.
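
The flow above can be sketched as a toy in-memory cache. This is purely illustrative (the `TinyProxyCache` class is hypothetical; Nginx’s real cache lives on disk and honors expiry rules), but it shows the MISS-then-HIT behavior we’ll observe later:

```ruby
# Hypothetical toy version of a reverse proxy cache, for illustration only.
# The "backend" lambda stands in for the Rails app.
class TinyProxyCache
  def initialize(backend)
    @backend = backend
    @store = {}
  end

  def get(path)
    if (cached = @store[path])
      # Cached copy found: answer without touching the backend
      { body: cached, cache_status: "HIT" }
    else
      # Not cached yet: ask the backend, then remember the response
      body = @backend.call(path)
      @store[path] = body
      { body: body, cache_status: "MISS" }
    end
  end
end

backend = ->(path) { "rendered #{path}" }
cache = TinyProxyCache.new(backend)
first  = cache.get("/posts")  # not in the cache yet: MISS
second = cache.get("/posts")  # served from the cache: HIT
```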

For all this to work, you’ll need to do three things:

  1. Allow Rails responses to be cached
  2. Enable the reverse proxy cache in Nginx
  3. Test the results

Let’s get started!

1 Allow Rails responses to be cached

If you want to use a reverse proxy cache, the first step is to make sure Rails is emitting the right HTTP headers. Caching only works if you explicitly opt in.

Send appropriate cache headers. By default, Rails sends this header in every response, which disallows caching:

Cache-Control: max-age=0, private, must-revalidate

To allow page caching, use the Rails expires_in method with the public: true option (without it, Rails marks the response private, which shared caches will refuse to store). For example, in a controller:

class MyController < ApplicationController
  before_action :allow_page_caching

  # controller actions...

  private

  def allow_page_caching
    # public: true permits shared caches (like Nginx) to store the response
    expires_in(5.minutes, public: true) unless Rails.env.development?
  end
end
Now Rails will emit the following header instead, which allows the entire page to be cached for up to 5 minutes:

Cache-Control: max-age=300, public

You may also want to use the Rails fresh_when method to cache for even longer periods, based on an ETag or last modification time. Be sure to pass the public: true option to allow proxy caching.
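
The mechanism underneath fresh_when is the HTTP conditional GET. Here’s a minimal plain-Ruby sketch of the idea (the conditional_get helper is hypothetical, not the Rails implementation): the server derives an ETag from the response body, and if the client echoes it back via If-None-Match, the server can answer 304 Not Modified without re-sending the body.

```ruby
require "digest/md5"

# Hypothetical sketch of conditional GET, the mechanism behind fresh_when.
# Derives an ETag from the body; a matching If-None-Match yields a 304.
def conditional_get(body, if_none_match = nil)
  etag = %("#{Digest::MD5.hexdigest(body)}")
  headers = { "ETag" => etag, "Cache-Control" => "public" }
  if if_none_match == etag
    { status: 304, headers: headers }           # body unchanged: no payload sent
  else
    { status: 200, headers: headers, body: body }
  end
end

first = conditional_get("<html>hello</html>")
# A revalidation request echoes the ETag back via If-None-Match:
second = conditional_get("<html>hello</html>", first[:headers]["ETag"])
```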

Remove CSRF. Another Rails default is that it places a CSRF token on every page to prevent request forgery attacks. This built-in security is nice, but it doesn’t work with caching, because:

  1. If we cache a page with a CSRF token in it, every user receiving the cached copy will have an identical token; this prevents forgery protection from working.
  2. The CSRF token is stored in the Rails session, which in turn relies on cookies. Pages that rely on cookies can’t be cached.

Therefore, you’ll need to remove the csrf_meta_tags helper from the layout of any pages that you want to be cacheable.

<%# Remove this to allow caching %>
<%= csrf_meta_tags %>

Do not set session or cookie data. Reverse proxy caches (Nginx included) will typically refuse to cache any response that has a Set-Cookie header. This happens when you create a cookie; it also happens when you create a session in Rails, since by default Rails uses cookies to persist sessions.

Any controller action that you want to be cacheable cannot make use of session or cookie data.
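
Taken together, the opt-in rules so far can be summarized in a small predicate. This is a simplified sketch (the cacheable? helper is hypothetical; real caches, Nginx included, apply the more nuanced rules of RFC 7234), but it captures the two checks that matter most here:

```ruby
# Simplified sketch: may a shared proxy cache this response?
# Real caches apply more nuanced rules (see RFC 7234).
def cacheable?(headers)
  # Any Set-Cookie header (sessions, CSRF tokens) disqualifies the response
  return false if headers.key?("Set-Cookie")
  cc = headers.fetch("Cache-Control", "")
  cc.include?("public") && !cc.include?("private") && !cc.include?("no-store")
end

cacheable?("Cache-Control" => "max-age=300, public")                 # => true
cacheable?("Cache-Control" => "max-age=0, private, must-revalidate") # => false
cacheable?("Cache-Control" => "max-age=300, public",
           "Set-Cookie"    => "_session=abc")                        # => false
```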

2 Enable the reverse proxy cache in Nginx

If you are already using Nginx with Rails, chances are you are making use of Nginx’s HTTP proxy module. This same module also provides caching: we just need to turn it on!

Configuring Nginx involves two files: nginx.conf for declaring the cache itself, and sites-enabled/myapp where we reference the cache in the reverse proxy settings for the Rails app.

Declare a cache zone. Nginx allows for many “zones”, each with its own size and expiry settings. For this tutorial I’ll keep it simple with a single zone called default:

# In nginx.conf
http {
  proxy_cache_path  /var/cache/nginx levels=1:2 keys_zone=default:8m max_size=1000m inactive=30d;
  proxy_temp_path   /var/cache/nginx/tmp;

  # ... the rest of your existing http configuration ...
}

Configure the proxy. This proxy configuration is an abbreviated example based on the official Unicorn+Nginx recommendations (I’m skipping HTTPS, asset pipeline, and other settings that are outside the scope of this tutorial).

# In sites-enabled/myapp
upstream rails {
  server unix:/path/to/.unicorn.sock fail_timeout=0;
}

server {
  listen 80 default deferred; # for Linux
  root /path/to/app/current/public;
  try_files $uri/index.html $uri.html $uri @app;

  location @app {
    # Standard reverse proxy stuff
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header Host $http_host;
    proxy_redirect off;
    proxy_pass http://rails;

    # Reverse proxy cache
    proxy_cache default;
    proxy_cache_lock on;
    proxy_cache_use_stale updating;
    add_header X-Cache-Status $upstream_cache_status;
  }

  error_page 500 502 503 504 /500.html;
  location = /500.html {
    root /path/to/app/current/public;
  }
}
To clarify, here are the important bits related to caching:

# Use the cache zone we configured and named "default"
proxy_cache default;

# Prevent a "cache stampede" from happening when cache entries expire
proxy_cache_lock on;
proxy_cache_use_stale updating;

# Add a HIT or MISS header to the response so we can observe the cache behavior
add_header X-Cache-Status $upstream_cache_status;

That’s it! Make sure to restart Nginx for these changes to take effect.

3 Test the results

Now try accessing the app through a web browser and take a look at the response headers. In Safari, press ⌥⌘A to bring up the Resources inspector, where you can see the full request and response headers in the right-hand panel.

For a page that is not yet cached (or if it is not cacheable), you’ll see this response header:

x-cache-status: MISS

If you access the same cacheable page a second time, it should now be in the cache. You’ll probably notice a much faster response time. The header will be:

x-cache-status: HIT
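
You can also check the header from a script instead of a browser. Here’s a Ruby sketch using Net::HTTP; to keep it self-contained, a stub TCP server stands in for Nginx and always reports a HIT — in practice you would point the URI at your real app and drop the stub:

```ruby
require "net/http"
require "socket"

# Stub server standing in for Nginx, so this sketch runs on its own.
# It answers any request with an X-Cache-Status: HIT header.
stub = TCPServer.new("127.0.0.1", 0)
port = stub.addr[1]

Thread.new do
  client = stub.accept
  while (line = client.gets)
    break if line.strip.empty? # blank line ends the request head
  end
  client.write "HTTP/1.1 200 OK\r\n" \
               "X-Cache-Status: HIT\r\n" \
               "Content-Length: 11\r\n\r\n" \
               "cached page"
  client.close
end

# Against a real deployment, replace this URI with your app's URL
response = Net::HTTP.get_response(URI("http://127.0.0.1:#{port}/"))
status = response["X-Cache-Status"] # => "HIT" (from the stub)
stub.close
```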

Why go through the trouble?

Caching is tricky. Even a simple Nginx reverse proxy cache as described in this article can add a lot of complexity to an app. Users could end up with stale pages, or worse: if you mistakenly cache an authenticated page, users could end up seeing cached data belonging to someone else!

Then why go through the trouble? Speed and scale. A cached response from Nginx is always going to be faster than hitting your Rails app: we’re talking a few milliseconds for Nginx versus hundreds for Rails. And Nginx is going to be able to handle many more simultaneous requests without breaking a sweat. Taking load off of your Rails app will make your app snappier and save you money on server costs.

Not all apps can benefit from reverse proxy caching: if every page of your app exists behind a login page and is heavily customized per user, you’ll need a more fine-grained solution. But when it does work, the benefits are great.


Not using Nginx? You can get some of the benefits using Rack::Cache, which is a reverse proxy cache implemented as a Rack middleware that can be plugged into Rails. However, since it is part of your Rails app, it will not be nearly as fast as an optimized web server like Nginx.

Another option is to use a CDN. Many CDNs offer what is essentially a “reverse proxy cache as a service” (sometimes called a “custom origin”). These work on the same principles as an Nginx reverse proxy cache, but you also get the benefits of the CDN’s much beefier network capacity and geographic distribution. If you prefer not to manage your own infrastructure, this is a good choice.

Have you tried a reverse proxy cache to boost the performance of your Rails app? How did it go? (By the way, mattbrictson.com is powered by Rails with an Nginx reverse proxy cache. For me, the answer is: so far, so good!)