
Vanilla and Varnish Cache - some lessons learned

LeeH
edited August 2012 in Vanilla 2.0 - 2.8

Per @Todd's suggestion, I wanted to post some info on what I've learned so far with configuring Varnish Cache with Vanilla. I've spent many an evening over the past couple of weeks frantically googling to try to run down issues and I'm pretty happy with the functionality, so the config is probably in a state to share. I'm working on a big blog post about the experience, and I'll add a link to this thread when I've got it done, but I can trim out just the Vanilla bits for you guys and put them here.

Some caveats:
1) I use Nginx. If you're using Apache, then there might be additional stuff you have to do. I don't know, because my life has been better since I got Apache out of it and I don't intend to ever go back.
2) The Nginx config I'm using with Vanilla can be found here.
3) I'm also using Vanilla 2.1, so this might or might not work with 2.0.

What gets cached?
Rather than dive into exactly how to cache full pages, I decided to sort of cheat and cache everything but full pages. The config below lets Varnish cache CSS files, javascript, @font-face files, and any images (including userpics and sprite pngs).

Here are the relevant bits out of my Varnish default.vcl file. Yes, I know I should keep stuff like this in a separate vcl file, but I am lazy. I'm leaving out the backend and other fancy stuff and just posting the stuff from the vcl_ subs. I'll go through the entire big-ass config on the blog post, when I finish that.

    sub vcl_recv {
        # We don't care about POSTs, so pass them
        if (req.request == "POST") {
            return (pass);
        }

        # Ignore WhosOnline plugin 
        if (req.url ~ "/plugin/imonline") {
            return (pass);
        }
        # Ignore Vanilla analytics  
        if (req.url ~ "/settings/analyticstick.json") {
            return (pass);
        }

        # Ignore Vanilla notifications
        if (req.url ~ "/dashboard/notifications/inform") {
            return (pass);
        }

        # Remove cookies from things that should be static, if any are set
        if (req.url ~ "(?i)\.(png|gif|jpeg|jpg|ico|swf|css|js|html|htm|woff|ttf|eot|svg)(\?[a-zA-Z0-9\=\.\-]+)?$") {
            remove req.http.Cookie;
        }

        # Strip & re-set X-Forwarder-For header, so that Nginx sees
        # actual client IP addresses and not localhost for everything
        remove req.http.X-Forwarded-For;
        set req.http.X-Forwarded-For = req.http.rlnclientipaddr;

        # Remove Google Analytics cookies so caching functions
        if (req.http.Cookie) {
            set req.http.Cookie = regsuball(req.http.Cookie, "(^|;\s*)(__[a-z]+|has_js)=[^;]*", "");
        }
        if (req.http.Cookie == "") {
            remove req.http.Cookie;
        }
    }

    sub vcl_pass {
        # Ensure connections are closed and not reused
        set bereq.http.connection = "close";
        # Ensure even passed/piped requests still carry a good X-Forwarded-For header
        if (req.http.X-Forwarded-For) {
            set bereq.http.X-Forwarded-For = req.http.X-Forwarded-For;
        } else {
            set bereq.http.X-Forwarded-For = regsub(client.ip, ":.*", "");
        }
    }

    sub vcl_pipe {
        # Ensure connections are closed and not reused
        set bereq.http.connection = "close";
        # Ensure even passed/piped requests still carry a good X-Forwarded-For header
        if (req.http.X-Forwarded-For) {
            set bereq.http.X-Forwarded-For = req.http.X-Forwarded-For;
        } else {
            set bereq.http.X-Forwarded-For = regsub(client.ip, ":.*", "");
        }
    }

    sub vcl_fetch {
        # Strip cookies before static items are inserted into cache.
        if (req.url ~ "\.(png|gif|jpg|swf|css|js|ico|html|htm|woff|eot|ttf|svg)$") {
            remove beresp.http.set-cookie;
        }

        # Adjusting Varnish's caching - hold on to all cacheable objects for 24 hours.
        # Objects declared explicitly as uncacheable are held for 60 seconds, which
        # helps in the event of a sudden ridiculous rush of traffic.
        if (beresp.ttl < 24h) {
            if (beresp.http.Cache-Control ~ "(private|no-cache|no-store)") {
                set beresp.ttl = 60s;
            }
            else {
                set beresp.ttl = 24h;
            }
        }
    }

    sub vcl_deliver {
        # Display hit/miss info
        if (obj.hits > 0) {
            set resp.http.X-Cache = "HIT";
        }
        else {
            set resp.http.X-Cache = "MISS";
        }
    }

The comments should explain most of this. Starting at the top, we automatically pass on all HTTP POST requests, ignore Vanilla's chattering, and most importantly strip cookies off of everything that might even be remotely static. The regex on the end of that line ensures that we're caching all versions of the JS & CSS that Vanilla produces (and you can always manually curl -X PURGE things that are known-stale). Then there's a great deal of fiddling with the X-Forwarded-For header, which is obviously extremely important so that your logs aren't filled with 127.0.0.1 from end to end, and a short regex to remove Google Analytics cookies. Then more X-Forwarded-For massaging, to ensure PASS and PIPE exceptions have correct IP addresses, and then some more cookie fiddling for when objects are fetched. The penultimate bit ensures that even objects with "NO DON'T CACHE ME PLEASE!" settings are held for at least a little bit, since you never know when that Slashdotting might come crashing down on your head, and finally we close with a quick extra header which tells you at a glance whether each object is being served from cache or from the backend.
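One note on the manual purging: Varnish only honours PURGE requests if the VCL accepts them, and that part isn't in the config above. As a minimal sketch of the standard Varnish 3 purge pattern (not part of my running config; the purgers ACL of localhost is just an assumption, so point it at wherever your purge requests will actually come from):

    acl purgers {
        # Assumption: purges only ever come from the box itself
        "127.0.0.1";
    }

    sub vcl_recv {
        if (req.request == "PURGE") {
            if (!client.ip ~ purgers) {
                error 405 "Not allowed.";
            }
            return (lookup);
        }
    }

    sub vcl_hit {
        if (req.request == "PURGE") {
            purge;
            error 200 "Purged.";
        }
    }

    sub vcl_miss {
        if (req.request == "PURGE") {
            purge;
            error 200 "Purged.";
        }
    }

With something like that in place, curl -X PURGE http://yourforum.example.com/path/to/stale.css (hypothetical URL, obviously) evicts a single stale object.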

As I said, I'll have more of an explanation when I get around to actually writing the blog post about this, but this should be enough to nudge any brave souls who want to try it out in the right direction :)

Comments

  • Todd Chief Product Officer Vanilla Staff

    This is awesome stuff. I'm wondering if you can explain the following a bit further:

    Objects declared explicitly as uncacheable are held for 60 seconds, which helps in the event of a sudden ridiculous rush of traffic.

    Does varnish deliver the uncacheable stuff if nginx times out or something?

    The other thing that I was thinking would be a huge performance gain would be to cache pages for a short time for guests. Is there a way to test for the existence of a cookie and then set the ttl for that session?

  • Haven't had a chance to digest all of this yet, but this looks like some awesome work, and is going to be extremely useful for my project. Thank you for sharing @LeeH!

  • LeeH
    edited August 2012

    @Todd -

    The logic from that section is lifted in part from here, which explains some of the difficulty with an RFC-compliant cache. The crux of their solution is this:

    Varnish acts like a RFC2616 client side cache by default, with the footnote that if no cacheability information is available, we use a default Time To Live (TTL) from the parameter "default_ttl".

    ...
    Varnish leaves Expires: and Cache-Control: headers intact, and sets the Age: header with the number of seconds the object has been cached, and therefore any RFC2616 client will do the right thing by default.

    A web server can obviously also declare objects uncacheable, but what shouldn't be cacheable by a client and what shouldn't be cacheable by an actual web server object cache might be two different things; consequently, I wanted to make sure to ferret out things which might have a no-cache or private attribute and which would otherwise be ignored by Varnish and make sure we hang onto them for 60 seconds. In regular day-to-day serving this probably isn't a big deal, but a badly-coded web app or a web designer not paying attention might declare a big fat background image or something as no-cache, which might screw us over the day that Reddit comes calling.

    There is also a separate Varnish function called "Grace Mode" which does indeed serve stale cached content past its ttl if the backend is unreachable or busy; that's something you'd definitely want on, but it only applies to stuff in the cache. Overriding no-cache ensures that stuff is in the cache in the first place.
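    To give a rough idea of what turning Grace Mode on looks like, here's a minimal sketch in the same Varnish 3 syntax as the config above (the 6-hour window is an arbitrary number for illustration, not a recommendation):

        sub vcl_recv {
            # How stale an object this client side is willing to accept
            set req.grace = 6h;
        }

        sub vcl_fetch {
            # How long expired objects are kept around so there's something stale to serve
            set beresp.grace = 6h;
        }

    A fancier version checks req.backend.healthy (which needs a .probe defined on the backend) and only allows the long grace window when the backend is actually down.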

    You can absolutely test for the presence of cookies and do some per-session caching—in fact, there's a somewhat outdated example of this on the Varnish site, demonstrating how content can be cached for logged-in users. The example sort of demonstrates how to incorporate a user's session ID into the hash that Varnish uses to identify content, thus giving each logged in user a unique content store. The immediate problem is that this could generate a tremendously large cache, since every session would get its own set of cached objects.

    It should also be possible to set conditions based on a cookie's name or contents, using a regex in sub vcl_recv, but I'm not sure exactly how to do it—I'd need to read the docs a bit harder and puzzle it out. I tend to figure this stuff out by screwing around with it and breaking it :)
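    To sketch both ideas at once (untested, and treating "Vanilla" as the session cookie name is an assumption, so check what your install actually sets):

        sub vcl_recv {
            # Guest detection: no Vanilla session cookie, so drop cookies entirely
            # and let everyone share one cached copy of the page.
            if (req.http.Cookie && req.http.Cookie !~ "Vanilla") {
                remove req.http.Cookie;
                return (lookup);
            }
        }

        sub vcl_hash {
            hash_data(req.url);
            if (req.http.host) {
                hash_data(req.http.host);
            } else {
                hash_data(server.ip);
            }
            # Per-session variant: fold the session cookie into the hash so each
            # logged-in user gets a private copy (and the cache grows accordingly).
            if (req.http.Cookie ~ "Vanilla") {
                hash_data(req.http.Cookie);
            }
            return (hash);
        }

    You'd still need vcl_fetch to give guest pages a short TTL and to cope with Set-Cookie on the responses, so treat this as a starting point rather than a drop-in.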

  • @LeeH

    Thank you! Great blog post; I read it this morning. Also, thank you for including the VCL file. Awesome job, and thanks for sharing your experience with us.

  • Tim Operations Vanilla Staff

    @LeeH Awesome work. This was on my list of things to do, and you've saved me a lot of time with a great head start :)

  • 422 Developer MVP
    edited August 2012

    There was an error rendering this rich post.

  • LeeH
    edited August 2012

    Tim
    @LeeH Awesome work. This was on my list of things to do, and you've saved me a lot of time with a great head start :)

    Glad to do it. It was a fun way to spend my vacation, which says a lot about my mental health :):)

    I'm sure there's room for improvement, since I was reading the docs & making stuff up as I went along, so when you do get around to implementing a "recommended" config or at least a starting template, I'd love to see it!

    You could easily make full pages cacheable, too, with some thought about cookies, I believe. Alternatively, it would take a lot of rework, but you could use Varnish's edge-side includes (ESIs) to chop up pages into static and dynamic portions, in order to make the entire forum more cache-friendly.
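    The Varnish side of the ESI switch is tiny; the real work is in the templates. Just as a sketch (the URL pattern here is an assumption for illustration):

        sub vcl_fetch {
            # Ask Varnish to process <esi:include src="..."/> tags in forum pages
            if (req.url ~ "^/(discussions|discussion|categories)") {
                set beresp.do_esi = true;
            }
        }

    Vanilla's templates would then have to emit <esi:include> tags for the per-user pieces (notifications and so on), which is where the "lot of rework" comes in.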

    You could also benefit from having the web server send a PURGE for the current view's page whenever a discussion is updated—see here for how to do this with PHP. That way, every time a post is made, the discussion's page (and perhaps also the parent category's page) is purged and recreated.

    I dunno—there's definitely lots of room for improvement, since caching images & css & JS gets rid of the bulk of the bits that have to be moved by the web server, but won't help you hit MySQL any less.

    @422 - thanks :)

  • Tim Operations Vanilla Staff

    We're currently offloading all of our static content: CSS, JS, Images, Avatars, etc. onto a cluster of nginx servers that only serve static content. This dropped the load on our app servers by a ton. I think we could easily add Varnish in front of those servers without much configuration and see some benefit.

    The harder part is the cookie inspection and PURGE stuff, so I'll see about that. It could probably be done with plugin hooks, so there could be a Varnish plugin.

  • LeeH
    edited August 2012

    @Tim - That's definitely nginx's strong suit, though personally I think it's a good fit everywhere, including serving at the application layer (unless you go specialized with node.js or some other application server/framework Frankenstein thing). And Nginx does obviously cache content using the host file system, but tossing one or more Varnish boxes in the very front as a load balancer might actually let you retask or even eliminate some content servers, since you're at minimum eliminating a lot of file system IO with Varnish serving objects out of its store. Your bandwidth is still obviously a limitation, but as long as your cache hit rate is relatively high, you can service more IOs with less hardware.

    With your hosted solution being your obvious money-maker, I can see how you've got some challenges to overcome: how to leverage Varnish as more than just a CSS/JS/image cache in a way that benefits your service offerings without complicating or breaking them. That's totally over my head and it's why they pay you guys the big bucks, so good luck with that ;-)

  • Tim Operations Vanilla Staff

    We're now using Varnish on our hosted clusters to great effect! Thanks for the initial research you did @LeeH, it was very helpful.

  • That's awesome :) And I'm sure that using Varnish with Vanilla for the hosted clusters will have trickle-down benefits for us self-hosters, since Vanilla's code will no doubt evolve now with Varnish compatibility in mind.

  • hbf wiki guy? MVP

    I'm going to be doing a bit of server re-architecture in the next couple of weeks. I think Varnish will find its way into the wiring diagram. I'll post my results.

  • Tried this but in headers it says:
    X-Varnish: 850271844
    Age: 0
    Via: 1.1 varnish
    Connection: keep-alive
    X-Cache: MISS

    So nothing is caching. I'm also on Nginx.

  • hydn New
    edited October 2012

    Hits are always 0. I'm sure it's something simple in the above config that I need to change. So I have rewritten it to:

    backend nginx {
        .host = "127.0.0.1";
        .port = "8080";
    }

    sub vcl_recv {
        if (req.http.Accept-Encoding) {
            if (req.url ~ "\.(jpg|png|gif|gz|tgz|bz2|tbz|mp3|ogg)$") {
                # No point in compressing these
                remove req.http.Accept-Encoding;
            } elsif (req.http.Accept-Encoding ~ "gzip") {
                set req.http.Accept-Encoding = "gzip";
            } elsif (req.http.Accept-Encoding ~ "deflate") {
                set req.http.Accept-Encoding = "deflate";
            } else {
                # unknown algorithm
                remove req.http.Accept-Encoding;
            }
        }
        if (req.url ~ "\.(ico|png|gif|jpg|swf|css|js)$") {
            return (lookup);
        }
    }

    sub vcl_fetch {
        set beresp.ttl = 31556926s;
        if (req.url ~ "\.(ico|png|gif|jpg|swf|css|js)$") {
            unset beresp.http.set-cookie;
        }
        remove req.http.X-Forwarded-For;
        set req.http.X-Forwarded-For = req.http.rlnclientipaddr;
        return (deliver);
    }

    sub vcl_deliver {
        remove resp.http.X-Varnish;
        remove resp.http.Via;
        remove resp.http.Age;
        remove resp.http.X-Powered-By;
        if (obj.hits > 0) {
            set resp.http.X-Cache = "HIT";
        } else {
            set resp.http.X-Cache = "MISS";
        }
    }
    

    I will fix any issues one by one and add ignore stuff one by one.

    TTFB down big time!!!

    EDIT: not sure how to make my code user-friendly sorry.

  • peregrine MVP
    edited November 2012

    It seems, from the Varnish config above, that the following rules are meant to bypass the cache for requests to these two areas (assuming I'm reading that correctly).

    What would be the equivalent .htaccess rule for this?

    # Ignore WhosOnline plugin
    if (req.url ~ "/plugin/imonline") {
        return (pass);
    }

    # Ignore Vanilla notifications
    if (req.url ~ "/dashboard/notifications/inform") {
        return (pass);
    }

    I ask because I am trying to resolve an issue and provide some insight for this discussion:

    http://vanillaforums.org/discussion/21977/anyone-using-wordpress-plugin-w3-total-cache-with-whos-online-plugin#latest

  • Hello! Can you help me with something: is there any way to count the number of hits and misses for each backend separately? varnishstat and varnishlog don't give this info...

  • Sorry for resurrecting this post, but I was struggling to get the cache working with LeeH's setup. I was getting X-Cache: MISS on every response.

    Just added

    return (deliver);

    as the last line of vcl_fetch and it started to work.

    May be useful for somebody.
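    For anyone else hitting this, the tail end of vcl_fetch with that change would look roughly like this (only the final return is new; the rest is LeeH's logic from above):

        sub vcl_fetch {
            # ... cookie stripping and TTL overrides from the config above ...

            # Hand the object back explicitly instead of falling through
            # to the built-in default vcl_fetch.
            return (deliver);
        }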

  • @Damir said:
    Hello! Can you help me with something: is there any way to count the number of hits and misses for each backend separately? varnishstat and varnishlog don't give this info...

    Good question, Damir. I've had to solve this recently, and one way to do it is to log the HIT/MISS status in the varnishncsa log. This lets you work on the log files and group them as needed.
    Otherwise you can just deploy a Varnish solution that gives you metrics and logs out of the box - www.section.io
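    For example, something along these lines might do it (a rough sketch in the same Varnish 3 style as the configs above; the header names are made up):

        sub vcl_fetch {
            # Record which backend definition produced this object
            set beresp.http.X-Backend = beresp.backend.name;
        }

        sub vcl_deliver {
            if (obj.hits > 0) {
                set resp.http.X-Cache = "HIT";
            } else {
                set resp.http.X-Cache = "MISS";
            }
        }

    Then, if your varnishncsa build supports custom formats, something like varnishncsa -F '%h %r %s %{X-Backend}o %{X-Cache}o' gives you a log you can group per backend. One caveat: on a cache hit, X-Backend shows the backend that originally fetched the object, not the one that would serve it now.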

    Let me know if any additional help needed,
    Matt

  • Even though this topic goes pretty deep into Varnish, I seem to run into some fairly low-level issues with it.

    My provider offers a Varnish cloud and the only control you have is: ON or OFF, and the choice between static, dynamic and cache_all. So I can't customize the handling for various filetypes, for example.

    My first tests with Dynamic are terrible: users can't log in anymore ("Please try again.") and registering is impossible too. I'm about to find out if it will work again with the Static setting, but each change to the Varnish settings takes approx. 15 minutes to reload ... :(

    In the meantime: any other advice or best practices regarding Vanilla + Varnish when changing the Varnish config yourself isn't an option? Are there settings that can be placed in .htaccess or Vanilla's config.php?

    Thanks!
