
TechGirlKB

Performance | Scalability | WordPress | Linux | Insights


Scalability

Adding version control to an existing application

Most of us begin working on projects, websites, or applications that are already version controlled in one way or another. If you encounter one that’s not, it’s fairly easy to start from exactly where you are by initializing a git repository at that point. Recently, however, I ran into an application which was only halfway version controlled. By that I mean the actual application code was version controlled, but it was deployed from ansible code hosted on a server that was NOT version controlled. This made the deploy process frustrating for a number of reasons.

  • If your deploy fails, is it the application code or the ansible code? If the latter, is it because something changed? If so, what? It’s nearly impossible to tell without version control.
  • Not only did this application use ansible to deploy, it also used capistrano within the ansible roles.
  • While the application itself had its own AMI that could be replicated across blue-green deployments in AWS, the source server performing the deploy did not — meaning a server outage could mean a devastating loss.
  • Much of the ansible (and capistrano) code had not been touched or updated in roughly 4 years.
  • To top it off, this app is a Ruby on Rails application, and Ruby was installed with rbenv instead of rvm, allowing multiple versions of ruby to be installed.
  • It’s on a separate AWS account from everything else, adding the fun mystery of figuring out which services it’s actually using, and which are just there because someone tried something and gave up.

As you might imagine, after two separate incidents of late nights trying to follow the demented rabbit trail of deployment issues in this app, I had enough. I was literally Lucille Bluth yelling at this disaster of an app.

It was a hot mess.

Do you ever just get this uncontrollable urge to take vengeance for the time you’ve lost just sorting through an unrelenting swamp of misery caused by NO ONE VERSION-CONTROLLING THIS THING FROM THE BEGINNING? Well, I did. So, below, read how I sorted this thing out.

Start with the basics

First of all, we created a repository for the ansible/deployment code and committed the code that was already sitting on the server. Well, kind of. It turns out there were some keys and other secure things that shouldn’t be checked into a git repo willy-nilly, so we had to do some strategic editing first.

Then I did some mental white-boarding, planning out how to go about this metamorphosis. I knew the new version of this app’s deployment code would need a few things:

  • Version control (obviously)
  • Filter out which secure items were actually needed (there were definitely some superfluous ones), and encrypt them using ansible-vault (see the sketch just after this list).
  • Eliminate the need for a bastion/deployment server altogether — AWS CodeDeploy, Bitbucket Pipelines, or other deployment tools can accomplish blue-green deployments without needing an entirely separate server for it.
  • Upgrade the CentOS version in use (up to 7 from 6.5)
  • Filter out unnecessary work-arounds hacked into ansible over the years (ANSIBLE WHAT DID THEY DO TO YOU!? :sob:)
  • Fix the janky way Passenger was installed and switch it from httpd/apache as its base over to Nginx
  • A vagrant/local version of this app — I honestly don’t know how they developed this app without this the whole time, but here we are.
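
The vault piece is worth spelling out. A minimal sketch of the commands involved (the file paths here are illustrative):

# Encrypt an existing vars file in place (you'll be prompted for a vault password)
ansible-vault encrypt vaulted_vars/production

# Edit the encrypted file later; ansible-vault re-encrypts it when you save
ansible-vault edit vaulted_vars/production

# Run a playbook that needs the vaulted vars
ansible-playbook web_playbook.yml --ask-vault-pass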

So clearly I had my work cut out for me. But if you know me, you also know I will stop at nothing to fix a thing that has done me wrong enough times. I dove in.

Creating a vagrant

Since I knew what operating system and version I was going to build, I started with my basic ansible + vagrant template. I had it pull the regular “centos/7” box as our starting point. To start I was given a layout like this to work with:

+ app_dev
  - deploy_script.sh
  - deploy_script_old.sh
  - bak_deploy_script_old_KEEP.sh
  - playbook.yml
  - playbook2.yml
  - playbook3.yml
  - adhoc_deploy_script.sh
  + group_vars
    - localhost
    - localhost_bak
    - localhost_old
    - localhost_template
  + roles
    + role1
      + tasks
        - main.yml
      + templates
        - application.yml
        - database.yml
    + role2
      + tasks
        - main.yml
      + templates
        - application.yml
        - database.yml
    + role3
      + tasks
        - main.yml
      + templates
        - application.yml
        - database.yml

There were several versions of old vars files and scripts left over from the years of non-version-control, and inside the group_vars folder there were sensitive keys that should not be checked into the git repo in plain text. Additionally, the “templates” seemed to exist in a different form in every role, even though only one role actually used them.

I re-arranged the structure and filtered out some old versions of things to start:

+ app_dev
  - README.md
  - Vagrantfile
  + provisioning
    - web_playbook.yml
    - database_playbook.yml
    - host.vagrant
    + group_vars
      + local
        - local
      + develop
        - local
      + staging
        - staging
      + production
        - production
    + vaulted_vars
      - local
      - develop
      - staging
      - production
    + roles
      + role1
        + tasks
          - main.yml
        + templates
          - application.yml
          - database.yml
      + role2
        + tasks
          - main.yml
      + role3
        + tasks
          - main.yml
  + scripts
    - deploy_script.sh
    - vagrant_deploy.sh

Inside the playbooks I listed the roles in the order they seemed to be run from deploy_script.sh, so ansible could use them in the vagrant build process. From there, it was a lot of vagrant up, finding out where it failed this time, and finding a better way to run the tasks (if they were even needed, as oftentimes they were not).
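
For reference, a minimal Vagrantfile along these lines might look like the following. The centos/7 box is the one mentioned above, while the IP, memory, and playbook path are assumptions:

Vagrant.configure("2") do |config|
  # The same base box mentioned above
  config.vm.box = "centos/7"

  # Private IP for the web VM (an assumed value; it just has to match what capistrano targets below)
  config.vm.network "private_network", ip: "192.168.67.4"

  config.vm.provider "virtualbox" do |vb|
    vb.memory = 2048
  end

  # Provision the box with the web playbook, placing it in the "local" group_vars group
  config.vm.provision "ansible" do |ansible|
    ansible.playbook = "provisioning/web_playbook.yml"
    ansible.groups = { "local" => ["default"] }
  end
end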

Perhaps the hardest part was figuring out the capistrano portion of the deploy process. If you’re not familiar, capistrano is a deployment tool for Ruby which lets you deploy to servers remotely. It also handles things like keeping old versions of releases, syncing assets, and migrating the database. For a command as simple as bundle exec cap production deploy (yes, every environment was production to this app, sigh), there were a lot of moving parts to figure out. In the end I got it working by setting a separate “production.rb” file for the cap deploy to use, specifically for vagrant, which allows the box to deploy to itself.

# 192.168.67.4 is the vagrant webserver IP I setup in Vagrant
role :app, %w{192.168.67.4}
role :primary, %w{192.168.67.4}
set :branch, 'develop'
set :rails_env, 'production'
server '192.168.67.4', user: 'vagrant', roles: %w{app primary}
set :ssh_options, {:forward_agent => true, keys: ['/path/to/vagrant/ssh/key']}

The trick here is allowing the capistrano deploy to ssh to itself — so make sure your vagrant private key is specified to allow this.

Deploying on AWS

To deploy on AWS, I needed to create an AMI, or image from which new servers could be duplicated in the future. I started with a fairly clean CentOS 7 AMI I created a week or so earlier, and went from there. I used ansible-pull to checkout the correct git repository and branch for the newly-created ansible app code, then used ansible-playbook to work through the app deployment sequence on an actual AWS server. In the original app deploy code I brought down, there were some playbooks that could only be run on AWS (requiring data from the ansible ec2_metadata_facts module to run), so this step also involved troubleshooting issues with these pieces that did not run on local.
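
The ansible-pull step looks something like this; the repository URL, branch, and paths are placeholders:

# Check out the deployment repo on the new instance and run the web playbook locally
ansible-pull \
  --url git@bitbucket.org:example/app_deploy.git \
  --checkout develop \
  --directory /opt/app_deploy \
  provisioning/web_playbook.yml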

After several prototype servers, I determined that the AMI should contain the base packages needed to install Ruby and Passenger (with Nginx), as well as rbenv and Ruby itself installed into the correct paths. The deploy itself then installs any additional packages added to the Gemfile, runs bundle exec cap production deploy, and swaps new servers into the ELB (elastic load balancer) on AWS once they’re deemed “healthy.”

This troubleshooting process also required me to copy over the database(s) in use by the old account (it turns out this is possible with the “Share” option for RDS snapshots on AWS, so that was blissfully easy), create a new Redis instance, copy all the S3 assets to a bucket in the new account, and create a Cloudfront distribution to serve those assets, with the appropriate security groups to lock all these services down. Last, I updated the vaulted variables in ansible to point at the new AMIs, RDS instances, Redis instances, and Cloudfront/S3 resources. After verifying things still worked as they should, I saved the AMI for easily-replicable future use.
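
The S3 copy, at least, can be a one-liner; the bucket names are placeholders, and the destination account needs read access to the source bucket (via a bucket policy, or by running the copy with credentials both accounts trust):

# Copy every object from the old account's bucket into the new account's bucket
aws s3 sync s3://old-account-assets s3://new-account-assets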

Still to come

A lot of progress has been made on this app, but there’s more still to come. After thorough testing, we’ll need to switch over the DNS to the new ELB CNAME and run entirely from the new account. And there is pipeline work in the future too — whereas before this app was serving as its own “blue-green” deployment using a “bastion” server of sorts, we’ll now be deploying with AWS CodeDeploy to accomplish the same thing. I’ll be keeping the blog updated as we go. Until then, I can rest easy knowing this app isn’t quite the hot mess I started with.

5 Winning WordPress Search Solutions

The Problem

If you’ve designed many WordPress sites, you may have noticed something: The default search function in WordPress… well… it sucks. It seriously does. If you’re unaware, allow me to enlighten you.

Firstly, the search by default only searches the title, content, and excerpt of default pages and posts on your site. Why does this suck? Because your users probably want to find things that are referenced in Custom Post Types. This includes WooCommerce orders, forums, and anything else you’ve separated to its own specific type of “post.”

The default WordPress search function also doesn’t intuitively understand searches in quotations (“phrase search”), or sort the results by how relevant they are to the term searched.

And, the default WordPress search uses a super ugly query. I saw it firsthand when I searched for the word “tech” on my own site.
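
A stock WordPress search generates SQL of roughly this shape (the table prefix and list of post types vary by site, so treat this as a representative sketch rather than the exact query from my site):

SELECT SQL_CALC_FOUND_ROWS wp_posts.ID
FROM wp_posts
WHERE 1=1
  AND (((wp_posts.post_title LIKE '%tech%')
     OR (wp_posts.post_excerpt LIKE '%tech%')
     OR (wp_posts.post_content LIKE '%tech%')))
  AND wp_posts.post_type IN ('post', 'page', 'attachment')
  AND wp_posts.post_status = 'publish'
ORDER BY wp_posts.post_title LIKE '%tech%' DESC, wp_posts.post_date DESC
LIMIT 0, 10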

As a performance expert, this query makes me cringe. Queries like this are very unoptimized, and they don’t scale well for highly-trafficked sites. Multiple people running searches at once, especially on sites with high post counts, will slow your site to a crawl.

The Solution

So if WordPress search sucks, what is the best option for your site? I’m glad to explain. Firstly, if there’s any way for you to offload the searches to an external service, this will make your site much more “lightweight” on the server. This way, your queries can run on an external service specifically designed for sorting and searching! In this section I’ll explain some of the best options I’ve seen.

Algolia Search

Algolia is a third party integration you can use with WordPress. With this system, your searches happen “offsite,” on Algolia’s servers, and it returns your results lightning fast. I compared the default WordPress search with Algolia’s external query system on a site with thousands of events.


Algolia clearly takes the cake here, returning results in 0.5 seconds compared to nearly 8 seconds. Not only is it fast, but offloading searches to external servers optimized for query performance also reduces the amount of work your own server has to do to serve your pages. This means your site will support more concurrent traffic and more concurrent searches!

Lift: Search for WordPress

The Lift plugin offers similar benefits to Algolia in that it offers an offsite option for searching. This plugin specifically uses Amazon CloudSearch to support your offsite searches. The major downside to this plugin is that it hasn’t been actively maintained: it hasn’t been updated in over two years.


While this plugin hasn’t been updated in quite a while, it works seamlessly with most plugins and themes, offers its own search widget, and can even search media uploads. WP Beginner has a great setup guide for help getting started.

ElasticPress

ElasticPress is a WordPress plugin which drastically improves searches by building massive indexes of your content in Elasticsearch. Not only does it integrate well with other post types, it allows for faster and more efficient searches and can display related content. This plugin requires you to have Elasticsearch installed somewhere: the server your site resides on (if your host allows), your own computer, a separate set of servers, or Elastic Cloud, Elasticsearch’s own hosted service on AWS. To manage your indexes, you’ll want to use WP-CLI.
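
Building or rebuilding the index is a one-liner from WP-CLI; the exact subcommand depends on the plugin version:

# Older ElasticPress versions
wp elasticpress index --setup

# Newer versions renamed the command
wp elasticpress sync --setup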

ElasticPress can sometimes be nebulous to set up, depending on your configuration and where ElasticSearch is actually installed. But the performance benefits are well worth the trouble. According to pressjitsu, “An orders list page that took as much as 80 seconds to load loaded in under 4 seconds” – and that’s just one example! This system can take massive, ugly search queries and crunch them in a far more performant environment geared specifically towards searching.

Other options

There are some other free, on-server options for search plugins. These plugins will offer more options for searching intuitively, but will not offer the performance benefits of the ones mentioned above.

Relevanssi

Relevanssi is what some in the business call a “Freemium” plugin. The base plugin is free, but has premium upgrades that can be purchased. Out of the box, the free features include:

  • Searching with quotes for “exact phrases” – this is how many search engines (like Google) search, so this is an intuitive win for your users.
  • Indexes custom post types – a big win for searching your products or other custom content.
  • “Fuzzy search” – this means if users type part of a word, or end up searching with a typo, the search results still bring up relevant items.
  • Highlights the search term(s) in the content returned – this is a win because it shows customers why specific content came up for their search term, and helps them determine if the result is what they need.
  • Shows results based on how relevant or closely matched they are, rather than just how recently they were published.

The premium version of Relevanssi includes:

  • Multisite support
  • Assign “weight” to posts so “heavier” ones show up more or higher in results
  • Ability to import/export settings

Why I don’t recommend Relevanssi at the top of my list: it’s made to be used with 10,000 posts or fewer. The more posts you have, the less performant it is, because it still uses MySQL to search your site’s own database, which can weigh down your site and the server it resides on. Still, it offers more search options than many! It is a viable option if you have low traffic and fewer than 10,000 posts.

SearchWP

SearchWP claims to be the best search plugin out there. It certainly offers a lot of features, either way. Out of the box, it can search: PDFs, products and their description, shortcode data, terms and taxonomy data, and custom field data. That’s a pretty comprehensive list!

Its settings are nicely customizable: you can assign weight, set exclusion options, include custom field data, and easily check or uncheck the items to include.

However, SearchWP comes with a BIG asterisk from me: it will create giant tables in your database. Your database should stay trim to perform well; ideally your data fits within MySQL’s buffer pool so queries can be served from memory. Be absolutely certain you have enough server resources to support the amount of data stored by SearchWP!


These solutions are the only ones I would truly recommend for sites. There certainly are others available, but they either work using AJAX, which can easily overwhelm your server and slow down your site, or they use equally ugly queries to find the search terms.

As a rule of thumb, I absolutely recommend an offsite option specifically optimized for searches. If this simply isn’t an option, be sure to use a plugin solution that offers the range of features you need without weighing down your database too much.

Is there a search solution you like on your own site? Is there an important option I left off? Let me know in the comments, or contact me.

 

WordPress Doesn’t Use PHP Sessions, and Neither Should You

What are PHP Sessions?

PHP Sessions are a way to store or track data about a user on your site, tied to the visitor with a session cookie. For instance, a shopping cart total or recommended articles might rely on this kind of data. If a site is using PHP Sessions, you’ll be able to see them by opening your Chrome Inspector: right-click the page and choose “Inspect Element”, then select “Application” and expand the “Cookies” section. A site using PHP Sessions will show a PHPSESSID cookie there.

What’s wrong with PHP Sessions?

There are a number of reasons sites should not use PHP Sessions. Firstly, let’s discuss the security implications:

  • PHP Sessions can easily be exploited by attackers. All an attacker needs to know is the Session ID Value, and they can effectively “pick up” where another user “left off”. They can obtain personal information about the user or manipulate their session.
  • PHP Sessions store Session data as temporary files on the server itself, under the /tmp directory. This is particularly insecure on shared hosting environments. Since any site would have equal access to store files in /tmp, it would be relatively easy for an attacker to write a script to read and exploit these files.

So we can see PHP Sessions are not exactly the most secure way to protect the identity of the users on the site. Not only this, but PHP Sessions also carry performance implications. By nature, since each session carries a unique identifier, each new user’s requests would effectively “bust cache” in any page caching system. This system simply won’t scale with more concurrent traffic! Page cache is integral to keeping your site up and running no matter the amount of traffic you receive. If your site relies on PHP Sessions, you’re essentially negating any benefits for those users.

So I can’t track user behavior on my site?

False! You absolutely can. There are certainly more secure ways to store session data, and ways that work better with cache. For example, WooCommerce and other eCommerce solutions for WordPress store session data in the database using a transient session value, which avoids the security risk of the temporary files created by $_SESSION. WordPress itself tracks logged-in users and other state with cookies of other names and values. So it is definitely possible to achieve what you want using more secure cookies.
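
As a rough sketch of the transient approach, assuming a cart total you want to keep per visitor (the cookie and key names here are made up):

<?php
// Identify the visitor with a random token in a cookie instead of PHPSESSID
$token = isset( $_COOKIE['cart_token'] )
    ? sanitize_key( $_COOKIE['cart_token'] )
    : wp_generate_password( 32, false );
setcookie( 'cart_token', $token, time() + HOUR_IN_SECONDS, '/' );

// Write: the data lands in the database (or object cache), not a /tmp session file
set_transient( 'cart_total_' . $token, 42.50, HOUR_IN_SECONDS );

// Read it back on a later request
$cart_total = get_transient( 'cart_total_' . $token );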

I’m already using PHP Sessions. What now?

I’d recommend searching your site’s codebase to ensure you don’t have any plugins setting data in “$_SESSION”. If you find one, take a step back and look critically at the plugin. Is this plugin up to date? If not, update it! Is it integral to the way your site functions? If not, delete it! And if the plugin is integral, look for replacement plugins that offer similar functionality for your site.
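
A quick way to run that search from the command line, assuming a standard WordPress layout:

# Find plugins or themes reading or writing PHP session data
grep -rn --include='*.php' -e '\$_SESSION' -e 'session_start' wp-content/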

If the plugin itself is irreplaceable and is up to date, your next step should be asking the plugin developer what their plan is. Why does it use $_SESSION cookies? Are they planning on switching to a more secure method soon? The harsh reality is, due to the insecure nature of PHP Sessions, many WordPress hosts don’t support them at all.

As a last resort, if your host supports it you may want to check out the Native PHP Sessions plugin from Pantheon. Be sure to check with your host if this plugin is allowed and supported in their environment!

Continuous Integration vs Continuous Delivery

Introduction to Automation

A common principle in modern development is that you should use a version control system to manage code. This principle is especially important when working with a team of developers: version control allows your team to label their changes, merge code with others, and manage multiple codebases intuitively. Continuous Integration and Continuous Delivery are systems which build automation on top of that version control. In these systems, each developer’s code is merged daily or at frequent intervals and tested against builds. In this way, code is checked frequently to catch conflicts and errors at an early stage.

Continuous Integration

The ideology of Continuous Integration (CI) is simple: commit early, commit often. In its early stages, CI relied on unit tests run on each developer’s local machine. In more modern implementations, build servers are used instead.

Continuous Integration can mean either integrating changes to the main codebase several times daily, or “frequently” depending on the size of the project. For smaller subtasks the code would be integrated several times daily, whereas with larger projects, a more appropriate term might be “frequent integration.” Ideally in Continuous Integration, projects are broken up into tasks that would take no more than a day’s time to complete. This way, code can be integrated at least once per day.
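
As a concrete example of what “tested against builds” can look like, here is a minimal Bitbucket Pipelines config that runs a test suite on every push; the image and commands are placeholders for whatever your project uses:

# bitbucket-pipelines.yml
image: ruby:2.6

pipelines:
  default:
    - step:
        name: Run tests
        script:
          - bundle install
          - bundle exec rspec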

Continuous Delivery

The concept behind Continuous Delivery (CD) is simply the process of continually releasing code into production from your main codebase. Continuous Integration allows different developers on different projects to make changes, test them automatically against builds, and integrate them into the codebase. Continuous Delivery is then the process of deploying groups of those codebase changes into production together. This is the basis of Agile Development:

Our highest priority is to satisfy the customer through early and continuous delivery of valuable software.

Continuous integration is what allows developers across the company to strategically group code releases together and continually release features. Before the Agile Development strategy, development teams were faced with massive combinations of code producing unexpected conflicts, causing what some called “integration hell” as developers spent hours or days at a time integrating code for larger, more spread-out releases.

Why Use Agile Methods?

Most companies who choose to use Agile methods to deliver code have some major pain points:

  • Updating code was nebulous and difficult to manage.
  • Developer code was often blocked by other releases, which were themselves waiting on others in a vicious cycle.
  • Integrating code for a large release produced blockers and errors.
  • Automated testing wasn’t happening until hours or days of work were already invested in bad code.
  • End users were experiencing a slow turnaround for issue resolution or new features.

With Agile methodology, releases are constant. Developers “check out” a piece of code from the “codebase” library. They make the needed changes, and “check in” the code to the library again. Before the code is accepted to the library, it’s checked against automated build tests. In this way, developers are receiving a continuous feedback loop. And code is checked right away for errors! Less time is wasted, and more work is done.

Help! There are so many options!

Yes, there are a lot of tools out there to help automate your workflow. It can be difficult to choose which is right for your team! One of my favorite resources to use when choosing a new company or tool is G2 Crowd. G2 ranks companies as: Niche, Contenders, High Performers, and Leaders. Check out their Continuous Integration findings.

Before choosing the tool you wish to use, be sure to look at how G2 defines these quadrants:

  • “Niche” tools are not as widely adopted, or may be very new. They have good reviews so far, but not enough volume of usability ratings to know if they are a valid option for everyone.
  • “Contenders” are widely-used, but don’t have great usability ratings.
  • “High Performers” don’t have a huge base of users, but of those users, they received a high satisfaction rating.
  • Last, “Leaders” both have the largest market share of users, and have highest marks for user satisfaction.

Which tool you ultimately choose will depend heavily on your business needs, budget, and team size. Be sure to thoroughly research the available options! G2 also allows you to compare software side-by-side if needed.


How did you choose the software your team uses? What are the benefits and disadvantages? Let me know in the comments, or contact me.

How to Choose the Best CDN for Your Site

What is a CDN?

Before we broach the topic of choosing the best CDN for your site, first we should consider: what is a CDN? The term “CDN” stands for “Content Delivery Network.” It’s a network of servers located all over the world. When used on your site, a CDN helps distribute your content across these servers, which means users around the world will receive your content faster!

Full Site vs. Static Assets

There are two main ways a CDN is used on websites: full-site CDN, or serving only static assets. The static assets are your site’s images, CSS files, fonts, and JavaScript. Since most CDN providers charge based on bandwidth, full-site CDN will generally be more expensive. If you choose to use a full-site CDN, the layers of CDN cache serve as a web server in front of your origin server, meaning your origin handles only uncached requests. Not only does this offload a large amount of traffic from your origin server, it also means your site is served from many servers worldwide, so if one CDN node goes down, the rest of the world can still see your site.

If full-site CDN isn’t an option or isn’t ideal for your site, you can still use CDN to host the images and other static content of your site. This option still helps your site load faster for users worldwide, and will likely be less expensive than using full-site CDN. With CDN serving static assets, you should still use a proxy cache like Varnish on your origin server. This will help your site’s performance and lighten the load on your server.


CDN Comparison

There are a number of studies comparing different CDNs, but in my mind there are a few key metrics to focus on: pricing, cacheability, number of web nodes, SSL support, and whether they offer full-site CDN or only static resources. Both MaxCDN and Stratusly published similar reports on these factors in 2016; the comparisons below draw on Stratusly’s report, which added more factors to consider.

Who supports full-site CDN?

Akamai, Fastly, Amazon CloudFront, and Verizon (formerly EdgeCast) offer static and dynamic caching options. These setups typically require pointing the nameservers from your DNS provider to them; this is known as a reverse proxy. Fastly in particular supports “event-driven content,” which they consider to be neither dynamic nor static. This allows them to cache more effectively.

Cloudflare and MaxCDN (now StackPath) do not offer full-site CDN.

Who has the most web nodes worldwide?

Akamai is the clear leader here by far, offering 2700 web nodes worldwide, while the closest competitor (CloudFlare) has 76. Akamai also isn’t able to provide a specific range of IPs if requested, because the IPs in use are constantly changing. This makes it incredibly difficult for any one source to target them with a DDoS.

Who is the least expensive?

This is a more difficult question to answer directly. Some pricing models vary based on the amount of traffic, and some are flat-rate. MaxCDN published a visual comparison of its pricing against Amazon’s, and the Stratusly team charted the three providers whose pricing varies with bandwidth. The pricing tends to look pretty similar at lower volumes of traffic; at enterprise volumes MaxCDN looks great, but also consider that they do not offer full-site CDN.

Who has the best support?

Of the six competitors we’re comparing, only two don’t include free 24/7 support. CloudFlare offers phone, chat, and email support 24/7 for Enterprise-level accounts, but only email support for the rest of their plans. Amazon offers a wide library of online support guides and forums, but live support is not available unless you choose one of their Business or Premium Support plans.

Who supports SSL?

All the competitors offer “Shared SSL,” which is a way of saying they will secure the CDN URL they provide you. For example, MaxCDN uses subdomains of the same root: somezone.company.netdna-cdn.com. You’re able to enable the “Shared SSL” to secure this URL. However, many companies will want a Dedicated SSL (SNI) certificate on a custom CDN URL of their choosing; Amazon, CloudFlare, and MaxCDN offer these options. None of these, however, support HTTP/2 (a faster version of HTTP), though MaxCDN comes closest since they offer SPDY support.

Who is the fastest?

All the competitors offer differing metrics here, claiming to be the fastest. These metrics may mean nothing to you though, depending on your options, your geographic location, and the location of your users. I’d recommend using CDNPerf to compare usability and speed worldwide. This comparison spans 24 CDN providers and compares worldwide, specific regions, or specific countries to help you make the right decision. You can also use their “compare CDN” tool to view performance graphs.

Conclusion

So which is the best CDN for you? It highly depends on your company’s needs for CDN. Be sure to consider all the factors before making your decision. Compare performance, pricing, features, and support before deciding.

Are there other features worth considering when picking a CDN provider? Do you have other thoughts about CDN? Feel free to let me know in the comments or by contacting me.

 

How To: Configure Varnish for Responsive Sites

Scalability vs Responsive Sites

A major part of performance on any platform is ensuring that your website is highly cacheable. Caching pages and serving them as static files to repeat visitors offloads a lot of processing from your server. Without cache, your web server has to re-process the code to generate your page every time. Not only does this make more work for your server, it can slow down the response time for your site.

A challenge developers face is how to cache their sites heavily while still adapting to mobile devices. With Google PageSpeed’s score hinging on mobile compatibility, many WordPress themes now offer responsive options. Chances are you want a different, mobile-friendly version of your site to show to your smartphone users. But with page cache, your user sees whichever version was requested the last time the page was stored in cache, which makes for a poor experience.

X-Device Cache Groups

So how do you handle this scenario? Different devices require two different versions of the site. You can do this by adding an “X-Device” header based on the HTTP User-Agent the request came from, which allows you to detect devices in Varnish. This way, any device you designate as “mobile” gets cached in a mobile group, while desktop users are cached in a separate cache group. So this solves our problem: a mobile user receives the mobile site, and a desktop user receives the desktop site.

Example Config

So how do you detect devices in Varnish? Here’s an example setup:

 

# Cache different devices in different cache groups so they're served the right version
if ( req.http.User-Agent ~ "(?i)(tablet|ipad|playbook|silk)|(android(?!.*mobile))" ) {
    # Cache tablet devices in the same group - it's possible this is different than smartphones for some sites
    set req.http.X-Device = "tablet";
}
elsif ( req.http.User-Agent ~ "(?i)Mobile|iP(hone|od)|Android|BlackBerry|IEMobile|Kindle|NetFront|Silk-Accelerated|(hpw|web)OS|Fennec|Minimo|Opera M(obi|ini)|Blazer|Dolfin|Dolphin|Skyfire|Zune" ) {
    # Cache smartphones together. The mobile user-agents receive a mobile version with a responsive theme
    set req.http.X-Device = "smartphone";
}
else {
    # Cache everything else together. We assume anything that doesn't fit into the above regex is a regular desktop user.
    set req.http.X-Device = "regular";
}

 

This snippet adds an X-Device header for each of these device groups, which allows Varnish to cache the devices separately. You can set the header to whatever you prefer; some prefer something more like X-Cache-Bucket, or X-Cache-Group (which WP Engine uses). If you’re curious how I came to this setup and these particular user-agents, I grabbed them from a pretty thoughtful list, and Varnish published a very helpful guide on device detection that I snagged to get started.
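
One caveat: the header only splits the cache if Varnish actually keys on it. A common way to do that, sketched below, is to feed it into vcl_hash (the built-in vcl_hash still runs afterward and adds the URL and Host):

sub vcl_hash {
    # Key cached objects on the device group too, so mobile and desktop
    # users never receive each other's cached pages
    if (req.http.X-Device) {
        hash_data(req.http.X-Device);
    }
}

Some setups accomplish the same split by having the backend send a Vary: X-Device response header instead.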

When you detect devices in Varnish caching groups like this, it allows you to take full advantage of page caching, while still maintaining responsive behavior. That’s a win you can take to the bank! Using more cached pages equates to faster website load times.

