Should we be skeptical of performance guidelines which state that 100 milliseconds feels instantaneous to everyone?
Performance is important in many ways, some of which matter particularly for the Wikimedia Foundation.
Machine learning is a powerful tool, but it’s easy to use it incorrectly and draw biased conclusions, as we’ll show in this real-world example.
We use both synthetic and RUM testing for Wikipedia. These two ways of testing performance are best friends and help us verify regressions. Today, we will look at two regressions where having metrics from both helped us.
One of the Performance Team’s responsibilities at Wikimedia is to keep track of Wikipedia’s performance. Why is performance important for us? In our case it is easy: we have so many users that a performance regression really affects people’s lives.
Let’s explore our web performance data from an angle we haven’t explored before: mobile device type.
Yesterday we deployed Thumbor support for Wikimedia-hosted private wikis. While 99.9% of our traffic is for public-facing wikis, the Wikimedia Foundation hosts a number of private MediaWiki instances on the same infrastructure.
Here is how we measure and interpret load times on Wikipedia. Let’s also look at what real-user metrics are, and how percentiles work.
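As a rough illustration of how percentiles summarize real-user load times, here is a minimal sketch using the nearest-rank method. The sample data and function name are hypothetical, not taken from Wikipedia’s actual metrics pipeline:

```python
import math

# Hypothetical sample of page load times in milliseconds,
# as might be collected from real users.
load_times_ms = [120, 250, 180, 900, 310, 145, 2050, 400, 275, 160]

def percentile(values, p):
    """Nearest-rank percentile: the smallest value such that at least
    p% of the samples are less than or equal to it."""
    ordered = sorted(values)
    rank = max(1, math.ceil(p / 100 * len(ordered)))  # 1-indexed rank
    return ordered[rank - 1]

print(percentile(load_times_ms, 50))  # median: half of users load faster
print(percentile(load_times_ms, 95))  # p95: the slowest 5% of experiences
```

Note how a single slow outlier (2050 ms) barely moves the median but dominates the 95th percentile, which is why high percentiles are favored for spotting regressions that hit a minority of users.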
Introducing Thumbor replaces an existing service, and as such it’s important that it doesn’t perform worse than its predecessor. We came up with a strategy to reach feature parity and ensure a launch that would be invisible to end users.
To understand why Thumbor is a good fit, it’s important to understand where it fits in our overall thumbnailing architecture. A number of historical constraints come into play, and Thumbor could be adapted to meet those needs.