Why performance matters

Performance is important in many ways, some of which matter particularly for the Wikimedia Foundation.

By Ian Marlier, Engineering Manager, Wikimedia Performance Team

There are practical reasons that web performance matters. From a user perspective, a site that’s slow results in frustration, annoyance, and ultimately a preference for alternatives. From the perspective of a site operator, frustrated users are users who aren’t going to return, and that makes it more difficult to accomplish your mission (be it commercial or public service). Optimizations keep people happy, keep them coming back, and keep them engaged. [1]

But, there’s a far more important reason to care about performance, especially for an organization like Wikimedia: improving performance is an essential step toward equity of access.

There are a multitude of factors that influence how quickly a web site loads. Many of these are universal to every user: the software itself, the operational environment in which that software runs, the network that carries the bits from the server. Improvement in any of these areas benefits every consumer of the site.

These universal factors don’t account for the many that are user-specific. Among the factors that can significantly influence how quickly a web page loads for a given user are geography (a user who lives farther from the servers that host a website will typically have slower access than one who is closer); the network between the server and the user (a less developed network may be slower, or more susceptible to congestion); the user’s connection (mobile data is slower than wired broadband in most cases); and the user’s actual device (an old computer will load pages more slowly than a new one).

The common thread between these factors is that they correlate with socioeconomic and social conditions, rather than technical ones. Wealthier people, in more developed countries, have a significantly easier time accessing the vast resources of the Internet than others. If an increasingly networked world is going to result in a more equal human society, we need to make thoughtful interventions, including interventions focused on performance.


Geography

The correspondence of geography to socioeconomic factors manifests primarily in where servers are located. Datacenters, by and large, are located in wealthier parts of wealthier countries — places where physical and network security guarantees are high, infrastructure is reliable, and trained staff are easy to hire. This is a sensible decision by those who build and operate these facilities, but it has the unintended consequence of slowing web performance for anyone who isn’t located in a wealthier part of a wealthy country.

Backbone networks

Backbone networks are the networks that carry traffic from servers to end users — the highways that collectively make up the “information superhighway”. And like highways, not all are equal. Massive cables connect cities like San Francisco, Seattle, and New York; many other cities, even quite large ones, are served by second- or third-order spurs off these primary lines. Dozens of cables traverse the North Atlantic and North Pacific; only a small handful cross any ocean south of the equator. Interior network maps are hard to come by, but we know that in most of the world, smaller towns, and sometimes even smaller cities, are simply not connected to the Internet at all.

Last-mile connectivity

Last-mile connectivity is how engineers describe the way your computer or smartphone connects to the network. Cable internet is one form of last-mile connectivity; so are 4G cellular and DSL. In most of the world, the last mile is the biggest bottleneck: regardless of where you are, it is more likely than not the slowest part of the entire journey from the server to your computer.

However, depending on where in the world you are, “slowest” can have very different meanings. In many countries, only a tiny fraction of the population has any access to high-speed internet, whether wired or wireless. Less than 1% in Ethiopia; about 2.5% of the population in Nicaragua; 15% in Libya. Even in India, considered by many to be a key cog in the modern Internet economy, less than 25% of the population has high-speed data access. Meanwhile, in Japan, the average individual has two broadband subscriptions. In much of Western Europe, too, the rate of broadband penetration approaches or exceeds 100%.

Device quality

The final factor that corresponds with development and socioeconomic status is device quality. Stated simply, computers are expensive, whether those computers are placed on a desk or carried in a pocket. Recent trends in software development have pushed more computation down the wire to the client. This, in turn, means that the performance difference for a site when run on a high-end versus a low-end device can be quite significant, and in some cases it’s not even possible to access sites on devices that are underpowered. [2]

Though there is no single change that we can make that will address all of these factors, addressing each of them is core to serving the mission of the Wikimedia Foundation, and of the Wikimedia movement as a whole.

One ongoing element of this work is research to understand the actual factors that influence user perception of performance, and the way that user satisfaction is impacted when a page loads slowly. This allows us to make data-driven decisions about where to spend our time and our energy.

We’ve shown that expanding our cache footprint can help to minimize the effects of geography. This gives us a way to address the imbalances that result from immutable physics.
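The “immutable physics” here is easy to sketch: light in optical fiber travels at roughly two-thirds of its speed in a vacuum, so distance alone sets a floor on round-trip latency no matter how good the software is. A back-of-the-envelope illustration (the 200,000 km/s figure and the distances are illustrative assumptions, not Wikimedia’s measured numbers):

```python
# Rough speed of light in optical fiber (~2/3 of c), in km/s.
SPEED_IN_FIBER_KM_S = 200_000

def min_rtt_ms(distance_km: float) -> float:
    """Theoretical minimum round-trip time in milliseconds,
    ignoring routing detours, queuing, and server time."""
    return 2 * distance_km / SPEED_IN_FIBER_KM_S * 1000

# A reader 12,000 km from the nearest datacenter pays at least ~120 ms
# per round trip; a cache server 1,000 km away cuts that floor to ~10 ms.
print(min_rtt_ms(12_000))  # 120.0
print(min_rtt_ms(1_000))   # 10.0
```

Since a page load typically involves several round trips, moving content physically closer to readers multiplies this saving.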

We’re not in a position to address inequality of backbone or last-mile network infrastructure — that’s something best left to telecom companies, governments, or non-profit organizations that have chosen that as their work. What we can do is to minimize the effects of these disparities by reducing the number of bytes that need to go down the wire in order to display a page, by exploring technologies like peer-to-peer distribution to eliminate them altogether, or by increasing usage of offline content that can be downloaded in bulk using public high-speed connections.
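To make the byte-reduction idea concrete, here is a minimal sketch using Python’s standard-library gzip module. Real web servers negotiate compression (gzip, Brotli) with the browser via the Accept-Encoding header; this snippet, with its made-up repetitive HTML, only illustrates why fewer bytes cross the slow last mile:

```python
import gzip

# A toy, highly repetitive HTML payload (illustrative only).
html = ("<p>" + "Free knowledge for every human being. " * 200 + "</p>").encode("utf-8")

compressed = gzip.compress(html)

# Repetitive markup compresses very well, so far fewer bytes
# travel over the slow last-mile link.
print(len(html), len(compressed))
assert len(compressed) < len(html) // 10
```

On a congested or metered connection, that difference translates directly into faster page loads and lower data costs for the reader.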

Finally, we can aggressively work to lower the compute cost of each page that we serve, so that the cost or the age of a user’s device doesn’t impact their ability to read, learn, and contribute to the world of free knowledge.

Performance engineering matters, in other words, because it gives us a way to eliminate technological divides that are otherwise difficult, expensive, or even impossible to address at a systemic level.


  1. A faster FT.com is a great breakdown of the implications of performance on content consumption, based on the experience of the Financial Times as they were developing a new website. Impact of slow page load time aggregates a number of different studies that illustrate the financial implications of slow page-load performance for commercial websites. ↩︎
  2. A number of years ago, Chris Zacharias, formerly an engineer at YouTube, published an anecdote about the creation of a very lightweight video display page. When they launched it to a subset of traffic, the result was that measured page performance got worse, a surprising result when the page was significantly smaller. In the end it turned out that this happened because it was suddenly possible to load the player on low-powered devices and in less-connected geographies — previously those data hadn’t been included at all because YouTube was entirely inaccessible at any speed. ↩︎

About this post

Featured image credit: NT-Design, CC BY 3.0, via Wikimedia Commons.