We've noticed that geckoview_example is ~300ms faster than fenix in cold page
load tests on arewefastyet for the Pixel 2. We suspect the main difference is
that geckoview_example runs with conditioned profiles and fenix does not.
This PR is foremost an experiment to see whether that's true because, after bug
1587542, we cannot get results for fenix perftest PRs (i.e. the change needs to
be merged into main first). If we find that the results are not noisy, however,
we could end up leaving this in the tree. We've previously seen excessive
noise in fenix startup tests with conditioned profiles, which is why
conditioned profiles are not currently enabled.
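
For reference, a minimal sketch of how a taskgraph transform could opt tasks
into conditioned profiles, assuming raptor's conditioned-profile support is
driven by a command-line flag. The flag and scenario name
(`--conditioned-profile=settled`), the selecting attribute, and the shape of
`run.command` are all assumptions, not necessarily what this PR does:

```python
# Illustrative transform in the style fenix's taskcluster code uses.
# Both the selecting attribute and the raptor flag are assumptions.
from taskgraph.transforms.base import TransformSequence

transforms = TransformSequence()

@transforms.add
def use_conditioned_profiles(config, tasks):
    for task in tasks:
        # Hypothetical attribute marking cold page-load raptor tasks;
        # assumes the run command is expressed as an argument list.
        if task.get("attributes", {}).get("cold-pageload"):
            task["run"]["command"].append("--conditioned-profile=settled")
        yield task
```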
* Enable webrender for all tests and run a subset without webrender.
* Run tests on PR.
* Change task labels for webrender-enabled tests.
* Move transform logic to after the main transform.
* Reformat group symbol.
* Check if extra field is empty.
* Try a different method for treeherder info.
* Fix up assignment issue.
* Reformat symbol field instead of groupSymbol (see the symbol sketch after this list).
* Add new task group to config.
* Change the platform name for webrender tasks.
* Undo testing changes.
* Undo platform naming changes.
* Add new per-commit android-test build.
* Rename to nightly-simulation.
* Add treeherder group to the config file.
* Remove taskcluster index path and browsertime test.
* Add nightly-simulation to taskcluster indexes.
* Use nightly Fenix variant.
* Run the tests in PR.
* Update visual-metrics scripts to include the similarity metrics.
* Use python3.5 in visual-metrics docker.
* Install wget in the docker.
* Use python3.6 hashes instead of python3.5.
* Undo run-visual-metrics.py python changes.
* Upgrade python setuptools version to 46.1.3.
* Add setuptools to transitive dependency list.
* Undo PR test changes.
* Remove setuptools install line and use requirements.txt instead.
* Undo PR test changes.
* Fix geckodriver artifact suffix.
* Test a browsertime task.
* Revert browsertime test.
* Add google-search-restaurants to pageload tests in browsertime.
* Temporarily change the activity to pass tests.
* Change Raptor Fenix activity name.
* Remove test trigger for browsertime test.
* Add visual-metrics docker type.
* Add required browsertime toolchain fetches.
* Add browsertime tests for technical and visual metrics.
* Run browsertime tests in a cron task.
* Run visual metrics on all browsertime tests.
* Use spaces instead of tabs, and resolve visual-metric nits.
* Enable browsertime on pull request for testing.
* Restrict PR tests to amazon on browsertime.
* First attempt using multi_dep (see the fan-out sketch after this list).
* Add a primary dependency to browsertime.
* Try without popping.
* Add debug prints.
* Make one grouping per browsertime task.
* Try without the multi_dep transform.
* Delete dependent-tasks in visual-metrics transformer.
* Update installed setuptools and copy run-on-tasks-for.
* Use .get() when reading run-on-tasks-for.
* Add new pinned requirements.
* Try it.
* Set run-on-tasks-for properly.
* Remove print statement.
* Remove single_dep loader, and print statements.
* Remove run-on-tasks-for testing setting.
* Restart testing, and set user to root in visual-metrics Docker.
* Remove testing settings.
* Remove fetch-content from Docker.
* Change attributes grouping method.
* Run all tests as a check.
* Undo testing changes, and fix a bad test name.
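
As referenced above, a rough sketch of the treeherder symbol reformatting:
append a webrender marker to the symbol field rather than groupSymbol, while
guarding against a missing or empty extra field. The `webrender` attribute and
`-wr` suffix are assumptions; the change that landed may use different names:

```python
from taskgraph.transforms.base import TransformSequence

transforms = TransformSequence()

@transforms.add
def reformat_webrender_symbol(config, tasks):
    for task in tasks:
        extra = task["task"].get("extra")
        # Skip tasks without treeherder metadata ("check if extra
        # field is empty" above).
        if not extra or "treeherder" not in extra:
            yield task
            continue
        if task.get("attributes", {}).get("webrender"):
            treeherder = extra["treeherder"]
            # Mark the symbol itself, not the groupSymbol.
            treeherder["symbol"] = "{}-wr".format(treeherder["symbol"])
        yield task
```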
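
And a sketch of the multi_dep fan-out the later commits converge on: one
visual-metrics task per browsertime task, with run-on-tasks-for copied via
.get() since a dependency may not set it. The `primary-dependency` and
`dependent-tasks` keys follow taskgraph's multi_dep conventions; the rest is
illustrative, not the exact transform that landed:

```python
from taskgraph.transforms.base import TransformSequence

transforms = TransformSequence()

@transforms.add
def fan_out_visual_metrics(config, tasks):
    for task in tasks:
        dep = task.pop("primary-dependency")
        # Drop the grouping metadata multi_dep leaves behind.
        task.pop("dependent-tasks", None)
        task["label"] = "visual-metrics-{}".format(dep.label)
        task["dependencies"] = {"browsertime": dep.label}
        attributes = dict(dep.attributes)
        # Copy run-on-tasks-for with .get(): not every dependency sets it.
        attributes["run_on_tasks_for"] = dep.attributes.get("run_on_tasks_for", [])
        task["attributes"] = attributes
        yield task
```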