
On-demand performance testing
Open, Needs Triage · Public · Feature

Description

When we move the mobile performance device lab to our new provider, we should make sure developers can easily run their own tests on demand.

We should make it easy to run performance tests against different URLs/user journeys and compare the results. Today you need to commit the test to Git and then let our server pick up the test and run it over and over. If we make it possible to run tests on demand, it will be faster for us to know whether a change is a potential performance regression or not.

We could potentially set it up so that devs can trigger a test (https://github.com/wikimedia/performance-synthetic-monitoring-tests) from Slack/IRC/the web and then get a link to the result when the runs are finished.
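As a rough illustration of that flow, here is a minimal sketch of a trigger endpoint in TypeScript: a bot or web form POSTs the URL to test, the service queues a run, and the caller immediately gets back the link where the result will appear. The endpoint paths, the in-memory queue, and the result URL format are all made up for the sketch, not part of the existing repo.

```typescript
// Hypothetical trigger service for on-demand test runs: a chat bot or web
// form POSTs a URL to test; the service queues the run and replies with a
// link where the result will appear once the agent has finished.
import express from 'express';
import { randomUUID } from 'node:crypto';

const app = express();
app.use(express.json());

// Placeholder for the real work queue (e.g. Redis) that the device agents
// would read from; kept in memory here so the sketch is self-contained.
const runs = new Map<string, { url: string; status: string }>();

app.post('/api/test', (req, res) => {
  const { url } = req.body as { url?: string };
  if (!url) {
    res.status(400).json({ error: 'missing "url" to test' });
    return;
  }
  const id = randomUUID();
  runs.set(id, { url, status: 'queued' });
  // The bot posts this link back to Slack/IRC; the page fills in once the
  // device lab agent reports the metrics.
  res.status(202).json({ id, resultUrl: `https://example.org/result/${id}` });
});

app.get('/api/test/:id', (req, res) => {
  const run = runs.get(req.params.id);
  if (run) {
    res.json(run);
  } else {
    res.status(404).end();
  }
});

app.listen(3000, () => console.log('on-demand trigger listening on :3000'));
```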

I started to implement this some time ago (modernising the work from the scholarship I got from the Swedish Internet Foundation many years ago) and I have a working version. What I want to do before we can use it is make it possible to change the backend: today the agent and server talk to each other using Redis, and there should be an interface so that we can more easily swap in a different backend.
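To make the backend idea concrete, here is one possible shape such an interface could take, with a Redis implementation on top of ioredis. The interface name and method set are assumptions for the sketch, not the project's actual API.

```typescript
// Sketch of a backend interface for agent/server communication, so the
// Redis transport can later be swapped for something else (assumed shape;
// the actual project may define this differently).
import Redis from 'ioredis';

interface QueueBackend {
  publish(queue: string, message: string): Promise<void>;
  // Blocks until a message is available, then resolves with it.
  take(queue: string): Promise<string>;
}

// Redis implementation using a list as a work queue: LPUSH to enqueue on
// the server side, blocking BRPOP to dequeue on the agent side.
class RedisBackend implements QueueBackend {
  constructor(private readonly client = new Redis()) {}

  async publish(queue: string, message: string): Promise<void> {
    await this.client.lpush(queue, message);
  }

  async take(queue: string): Promise<string> {
    // brpop resolves with [key, value]; timeout 0 means block until
    // something arrives, so the result is never null here.
    const result = await this.client.brpop(queue, 0);
    return result![1];
  }
}
```

Any alternative backend (an SQL table, an HTTP long-poll, and so on) would then only need to implement the same two methods.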

Event Timeline

I've started to prepare for this, and a realistic schedule is that I can demo on a device setup at my home, with the tests run from a GUI deployed somewhere else. I did most of the work for this around Christmas, but there is some tuning that needs to be done; realistically, I can demo it at the end of March.

Peter removed a subscriber: Aklapper.
Peter renamed this task from "Run mobile performance tests on demand" to "Run performance tests on demand". Aug 31 2023, 6:12 PM
Peter updated the task description.

I've been tuning this for a while and I think this plan will work:

17th October: deploy the setup on Wikicloud
24th October: let one of our synthetic test suites go through the new setup. That means, for example, that all emulated mobile tests use the new setup, so that the results of those tests are stored in a database (see the sketch after this list). The rest will work exactly as before. By collecting data early from one type of test, we will have data to verify that everything works OK.
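For illustration only, this is the kind of per-run record that could be stored for that verification step; the table, the columns, and the choice of SQLite via better-sqlite3 are all assumptions, not the actual setup.

```typescript
// Hypothetical shape of a stored test result, used to verify the new
// setup against the old one (table and column names are illustrative).
import Database from 'better-sqlite3';

interface TestResult {
  suite: string;   // e.g. 'emulated-mobile'
  url: string;     // page under test
  metric: string;  // e.g. 'firstContentfulPaint'
  value: number;   // milliseconds
  runAt: string;   // ISO timestamp
}

const db = new Database('results.db');
db.exec(`CREATE TABLE IF NOT EXISTS results (
  suite TEXT, url TEXT, metric TEXT, value REAL, run_at TEXT
)`);

function storeResult(r: TestResult): void {
  db.prepare(
    'INSERT INTO results (suite, url, metric, value, run_at) VALUES (?, ?, ?, ?, ?)'
  ).run(r.suite, r.url, r.metric, r.value, r.runAt);
}

storeResult({
  suite: 'emulated-mobile',
  url: 'https://en.m.wikipedia.org/wiki/Barack_Obama',
  metric: 'firstContentfulPaint',
  value: 1234,
  runAt: new Date().toISOString(),
});
```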

The next step is the work of comparing metrics. There's a part in Fresnel, and there's some logic that the Mozilla team has created for their setup, that we can probably reuse or use as a starting point. The idea is that with the compare functionality we can first compare two tests (run two tests, compare them) and know if the metrics have changed. After that, the next step is to turn that functionality on for our synthetic testing, so it becomes an alternative to the alerts we have in Grafana today.
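As a minimal sketch of that compare step (Fresnel and Mozilla's tooling use more careful statistics, and the names here are made up), one could flag metrics whose medians differ between two runs by more than a relative threshold:

```typescript
// Sketch of the compare step: given the metric samples from two runs,
// flag metrics whose medians differ by more than a relative threshold.
type Samples = Record<string, number[]>; // metric name -> value per iteration

function median(values: number[]): number {
  const sorted = [...values].sort((a, b) => a - b);
  const mid = Math.floor(sorted.length / 2);
  return sorted.length % 2 ? sorted[mid] : (sorted[mid - 1] + sorted[mid]) / 2;
}

function compareRuns(base: Samples, change: Samples, threshold = 0.05): string[] {
  const regressions: string[] = [];
  for (const metric of Object.keys(base)) {
    if (!(metric in change)) continue;
    const before = median(base[metric]);
    const after = median(change[metric]);
    // Relative change above the threshold counts as a potential regression.
    if (before > 0 && (after - before) / before > threshold) {
      regressions.push(`${metric}: ${before} -> ${after}`);
    }
  }
  return regressions;
}

// Example: firstContentfulPaint got roughly 10% slower between the runs.
console.log(compareRuns(
  { firstContentfulPaint: [900, 950, 920] },
  { firstContentfulPaint: [1010, 1040, 1020] },
));
```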

Let's aim to have that last functionality deployed by the 27th of November. Then we have December, where everything can just run while I verify that it works and tune what's needed.

Peter renamed this task from "Run performance tests on demand" to "On-demand performance testing". Mar 8 2024, 7:35 AM

Today I configured our direct tests to use the new on-demand test service. This gives me time to verify that it works, fix bugs, and watch how it behaves. The plan is to keep that running during June and July, and then have it prepared for on-demand mobile phone testing.