python src\tools\perf\run_benchmark --browser-executable=".\src\out\Release\chrome.exe" --browser=exact list
The available benchmarks returned:
Available benchmarks for release are:
blink_perf
blink_perf.accessibility
blink_perf.bindings
blink_perf.canvas
blink_perf.css
blink_perf.display_locking
blink_perf.dom
blink_perf.events
blink_perf.image_decoder
blink_perf.layout
blink_perf.layout_ng
blink_perf.owp_storage
blink_perf.paint
blink_perf.paint_layout_ng
blink_perf.parser
blink_perf.parser_layout_ng
blink_perf.shadow_dom
blink_perf.svg
blink_perf_xml_http_request.BlinkPerfXMLHttpRequest
dromaeo
dummy_benchmark.noisy_benchmark_1 A noisy benchmark with mean=50 & std=20.
dummy_benchmark.stable_benchmark_1 A low noise benchmark with mean=100 & std=1.
generic_trace.top25
generic_trace_ct
heap_profiling.desktop.disabled
heap_profiling.desktop.native
heap_profiling.desktop.pseudo
jetstream
jetstream2 JetStream2, a combination of JavaScript and Web Assembly benchmarks.
kraken Mozilla's Kraken JavaScript benchmark.
leak_detection.cluster_telemetry
loading.cluster_telemetry
loading.desktop A benchmark measuring loading performance of desktop sites.
loading.desktop_layout_ng A benchmark that runs loading.desktop with the layoutng flag.
loading.mobile_layout_ng A benchmark that runs loading.mobile with the layoutng flag.
media.desktop Obtains media performance for key user scenarios on desktop.
media_router.cpu_memory Obtains media performance for key user scenarios on desktop.
media_router.cpu_memory.no_media_router Benchmark for CPU and memory usage without Media Router.
memory.cluster_telemetry
memory.desktop Measure memory usage on synthetic sites.
memory.leak_detection
memory.long_running_desktop_sites Measure memory usage on popular sites.
multipage_skpicture_printer
multipage_skpicture_printer_ct Captures mSKPs for Cluster Telemetry.
octane Google's Octane JavaScript benchmark.
power.desktop
rasterize_and_record_micro.partial_invalidation Measures rasterize and record performance for partial inval. on big pages.
rasterize_and_record_micro.top_25 Measures rasterize and record performance on the top 25 web pages.
rasterize_and_record_micro_ct Measures rasterize and record performance for Cluster Telemetry.
rendering.cluster_telemetry Measures rendering performance for Cluster Telemetry.
rendering.desktop
repaint_ct Measures repaint performance for Cluster Telemetry.
screenshot_ct Captures PNG screenshots of web pages for Cluster Telemetry. Screenshots
skpicture_printer
skpicture_printer_ct Captures SKPs for Cluster Telemetry.
speedometer
speedometer-future Speedometer benchmark with the V8 flag --future.
speedometer2 Speedometer2 Benchmark.
speedometer2-future Speedometer2 benchmark with the V8 flag --future.
system_health.common_desktop Desktop Chrome Energy System Health Benchmark.
system_health.memory_desktop Desktop Chrome Memory System Health Benchmark.
tab_switching.typical_25 This test records the MPArch.RWH_TabSwitchPaintDuration histogram.
tracing.tracing_with_background_memory_infra Measures the overhead of background memory-infra dumps
tracing.tracing_with_debug_overhead
v8.browsing_desktop
v8.browsing_desktop-future
v8.loading.cluster_telemetry
v8.loading_runtime_stats.cluster_telemetry
v8.runtime_stats.top_25 Runtime Stats benchmark for a 25 top V8 web pages.
webrtc Base class for WebRTC metrics for real-time communications tests.
xr.webvr.live.static Measures WebVR performance with live websites.
xr.webvr.static Measures WebVR performance with synthetic sample pages.
xr.webvr.wpr.static Measures WebVR performance with WPR copies of live websites.
xr.webxr.static Measures WebXR performance with synthetic sample pages.
Not supported benchmarks for release are (force run with -d):
cros_tab_switching.typical_24 Measures tab switching performance with 24 tabs.
cros_ui_smoothness Measures ChromeOS UI smoothness.
heap_profiling.mobile.disabled
heap_profiling.mobile.native
heap_profiling.mobile.pseudo
loading.mobile A benchmark measuring loading performance of mobile sites.
media.mobile Obtains media performance for key user scenarios on mobile devices.
orderfile.memory_mobile Benchmark for native code memory footprint evaluation.
orderfile_generation.debugging A very short benchmark for debugging metrics collection.
orderfile_generation.testing
orderfile_generation.training
orderfile_generation.variation.testing0
orderfile_generation.variation.testing1
orderfile_generation.variation.testing2
orderfile_generation.variation.training
rendering.mobile
startup.mobile Startup benchmark for Chrome on Android.
system_health.common_mobile Mobile Chrome Energy System Health Benchmark.
system_health.memory_mobile Mobile Chrome Memory System Health Benchmark.
system_health.webview_startup Webview startup time benchmark
v8.browsing_mobile
v8.browsing_mobile-future
xr.browsing.static Benchmark for testing the VR Browsing Mode performance on sample pages.
xr.browsing.wpr.smoothness Benchmark for testing VR browser scrolling smoothness and throughput.
xr.browsing.wpr.static Benchmark for testing the VR Browsing Mode performance on WPR pages.
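To actually run one, pass a benchmark name from the list to run_benchmark. A minimal sketch reusing the invocation from the top of this page (dummy_benchmark.stable_benchmark_1 is an arbitrary pick; names from the "Not supported" list can be force-run with -d, as the output itself notes):
python src\tools\perf\run_benchmark --browser-executable=".\src\out\Release\chrome.exe" --browser=exact dummy_benchmark.stable_benchmark_1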
xxx\src>out\Release\browser_tests.exe -h
This program contains tests written using Google Test. You can use the
following command line flags to control its behavior:
Test Selection:
--gtest_list_tests
List the names of all tests instead of running them. The name of
TEST(Foo, Bar) is "Foo.Bar".
--gtest_filter=POSITIVE_PATTERNS[-NEGATIVE_PATTERNS]
Run only the tests whose name matches one of the positive patterns but
none of the negative patterns. '?' matches any single character; '*'
matches any substring; ':' separates two patterns.
--gtest_also_run_disabled_tests
Run all disabled tests too.
Test Execution:
--gtest_repeat=[COUNT]
Run the tests repeatedly; use a negative count to repeat forever.
--gtest_shuffle
Randomize tests' orders on every iteration.
--gtest_random_seed=[NUMBER]
Random number seed to use for shuffling test orders (between 1 and
99999, or 0 to use a seed based on the current time).
Test Output:
--gtest_color=(yes|no|auto)
Enable/disable colored output. The default is auto.
--gtest_print_time=0
Don't print the elapsed time of each test.
--gtest_output=(json|xml)[:DIRECTORY_PATH\|:FILE_PATH]
Generate a JSON or XML report in the given directory or with the given
file name. FILE_PATH defaults to test_detail.xml.
Assertion Behavior:
--gtest_break_on_failure
Turn assertion failures into debugger break-points.
--gtest_throw_on_failure
Turn assertion failures into C++ exceptions for use by an external
test framework.
--gtest_catch_exceptions=0
Do not report exceptions as test failures. Instead, allow them
to crash the program or throw a pop-up (on Windows).
Except for --gtest_list_tests, you can alternatively set the corresponding
environment variable of a flag (all letters in upper-case). For example, to
disable colored text output, you can either specify --gtest_color=no or set
the GTEST_COLOR environment variable to no.
For more information, please read the Google Test documentation at
https://github.com/google/googletest/. If you find a bug in Google Test
(not one in your own code or tests), please report it to
<[email protected]>.
IMPORTANT DEBUGGING NOTE: each test is run inside its own process.
For debugging a test inside a debugger, use the
--gtest_filter=<your_test_name> flag along with either
--single-process-tests (to run the test in one launcher/browser process) or
--single-process (to do the above, and also run Chrome in single-process mode)
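Before running anything, the test names can be enumerated with the flag documented above; a small sketch (findstr is only used here to narrow the output to the fixture exercised next):
out\Release\browser_tests.exe --gtest_list_tests
out\Release\browser_tests.exe --gtest_list_tests | findstr PlatformAppBrowserTest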
PS xxx\src> .\out\Release\browser_tests.exe PlatformAppBrowserTest.RunningAppsAreRecorded
IMPORTANT DEBUGGING NOTE: each test is run inside its own process.
For debugging a test inside a debugger, use the
--gtest_filter=<your_test_name> flag along with either
--single-process-tests (to run the test in one launcher/browser process) or
--single-process (to do the above, and also run Chrome in single-process mode).
Using sharding settings from environment. This is shard 0/1
Using 3 parallel jobs.
Still waiting for the following processes to finish:
"\src\out\Release\browser_tests.exe"--disable-gpu-process-for-dx12-vulkan-info-collection --gtest_also_run_disabled_tests --gtest_filter=PlatformAppBrowserTest.RunningAppsAreRecorded --single-process-tests --test-launcher-output="C:\Users\Liz\AppData\Local\Temp\16840_881301891\results16840_1744069879\test_results.xml"--user-data-dir="C:\Users\Liz\AppData\Local\Temp\16840_881301891\user_data""xxx\src\out\Release\browser_tests.exe"--disable-gpu-process-for-dx12-vulkan-info-collection --gtest_also_run_disabled_tests --gtest_filter=PlatformAppBrowserTest.ActiveAppsAreRecorded --single-process-tests --test-launcher-output="C:\Users\Liz\AppData\Local\Temp\16840_1934187520\results16840_1153539424\test_results.xml"--user-data-dir="C:\Users\Liz\AppData\Local\Temp\16840_1934187520\user_data""xxx\src\out\Release\browser_tests.exe"--disable-gpu-process-for-dx12-vulkan-info-collection --gtest_also_run_disabled_tests --gtest_filter=PlatformAppBrowserTest.FileAccessIsSavedToPrefs --single-process-tests --test-launcher-output="C:\Users\Liz\AppData\Local\Temp\16840_642618779\results16840_340282504\test_results.xml"--user-data-dir="C:\Users\Liz\AppData\Local\Temp\16840_642618779\user_data"
...
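Following the debugging note the launcher prints above, a single test can be rerun in one process so a debugger can be attached; a sketch using the same test name:
.\out\Release\browser_tests.exe --gtest_filter=PlatformAppBrowserTest.RunningAppsAreRecorded --single-process-tests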
web tests (blink tests)
Build:
# build the blink_tests target (the web tests)
ninja -C out\Release blink_tests
View the help:
xxx\src> python xxx\src\third_party\blink\tools\run_web_tests.py -h
Usage: run_web_tests.py [options] [tests]
Runs Blink web tests as described in docs/testing/web_tests.md
Options:
-h, --help show this help message and exit
Platform options:
--android Alias for --platform=android
--platform=PLATFORM
Platform to use (e.g., "mac-lion")
Configuration options:
--debug Set the configuration to Debug
-t TARGET, --target=TARGET
Specify the target build subdirectory under src/out/
--release Set the configuration to Release
--no-xvfb Do not run tests with Xvfb
Printing Options:
--debug-rwt-logging
print timestamps and debug information for
run_web_tests.py itself
--details print detailed results for every test
-q, --quiet run quietly (errors, warnings, and progress only)
--timing display test times (summary plus per-test w/
--verbose)
-v, --verbose print a summarized result for every test (one line per
test)
web-platform-tests (WPT) Options:
--no-manifest-update
Do not update the web-platform-tests MANIFEST.json
unless it does not exist.
...
Try running it:
PS xxx\src> python xxx\src\third_party\blink\tools\run_web_tests.py -t Release
Using port 'win-win10'
Test configuration: <win10, x86, release>
View the test results at file://xxx\src\out\Release\layout-test-results/results.html
Using random order with seed: 1654672627
Baseline search path: win -> generic
Using Release build
Regular timeout: 6000, slow test timeout: 30000
Command line: xxx\src\out\Release\content_shell.exe --run-web-tests --ignore-certificate-errors-spki-list=Nxvaj3+bY3oVrTc+Jp7m3E3sB1n3lXtnMDCyBsqEXiY=,55qC1nKu2A88ESbFmk5sTPQS/ScG+8DD7P+2bgFA9iM=,0Rt4mT6SJXojEMHTnKnlJ/hBKMBcI4kteBlhR1eTTdk= --user-data-dir --enable-direct-write --enable-crash-reporter --crash-dumps-dir=xxx\src\out\Release\crash-dumps\reports -
Collecting tests ...
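The usage line above shows that run_web_tests.py also accepts [tests] as positional arguments, which is handy for iterating on a subset instead of the whole suite; a sketch (fast\dom is just one example directory under web_tests):
python xxx\src\third_party\blink\tools\run_web_tests.py -t Release fast\dom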
A question: when using run_benchmark to test the Android X5 webview, I hit "BrowserConnectionGoneException: Timed out while waiting 60s for _GetDevToolsClient". Have you run into this? What causes it, and how did you solve it?
Chromium testing
It feels like a long time since I last wrote a technical article. Mostly that is because backend development kept me far too busy, and I was never that interested in it anyway; I started many posts only to abandon them halfway. I recently moved to browser engine development, and the first task my lead assigned was to get the chromium test suites running. It took a bit over a week to get roughly familiar with the testing side of the project. What follows may not be entirely accurate, since I am only getting started, but there is plenty worth writing down, and I will keep updating this as I dig deeper :)
After working through chromium's testing material, I would split the tests into two broad categories:
Unit tests
Include the browser tests (browser_tests), fuzz tests (fuzzer), and so on.
Performance tests
Measure various aspects of browser performance. They are handled by catapult, a test framework developed by Google that bundles a great deal of tooling.
Environment setup
Base environment
chromium environment, as sketched below:
install depot_tools
set the environment variables
fetch the source
switch to the branch/version you need
build
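A condensed sketch of those steps using the standard depot_tools workflow (the clone path and the tag name are placeholders to adapt):
git clone https://chromium.googlesource.com/chromium/tools/depot_tools.git
set PATH=C:\path\to\depot_tools;%PATH%
fetch chromium
cd src
git checkout -b my_release refs/tags/<version>
gclient sync --with_branch_heads --with_tags
# the build itself goes through gn + ninja, described below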
python environment: pyenv, as sketched below:
install the dependencies
install pyenv
add the environment variables to ~/.bashrc
install python
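A sketch of that pyenv flow on Linux (the installer script and the 2.7.18 version are assumptions; pick whatever python the tooling in your checkout expects):
# install pyenv via its installer script
curl https://pyenv.run | bash
# append to ~/.bashrc:
export PYENV_ROOT="$HOME/.pyenv"
export PATH="$PYENV_ROOT/bin:$PATH"
eval "$(pyenv init -)"
# then install and select python
pyenv install 2.7.18
pyenv global 2.7.18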
About the gn/ninja build system
From the build steps above, notice that we are not using the familiar make/cmake approach. Instead, gn first generates the ninja build files (roughly the counterpart of Makefiles), and ninja then compiles the executables from them. gn is a meta-build system open-sourced by Google. "Meta-build" means that gn does not build your project directly; it produces the ninja files that describe the build, and you then run ninja to perform it. Put another way, gn generates the Makefile for you, and ninja plays the role of make. The reason for the split: ninja is very fast, but it is designed primarily for machines to parse; a human can read a ninja file, yet hand-writing one for a real project is tedious. gn combined with ninja keeps projects easy to create and maintain while still enjoying ninja's build speed. The reference articles give a short introduction to this build system; we will run into it again under browser tests.
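The two-step flow in miniature, as a sketch (the output directory, args, and target are arbitrary examples):
gn gen out\Release --args="is_debug=false"   # gn reads the BUILD.gn files and writes out\Release\build.ninja
ninja -C out\Release chrome                  # ninja consumes build.ninja and produces the chrome executable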
Performance tests
catapult
Currently catapult provides these main pieces of functionality.
telemetry (run_benchmark):
List the benchmarks that currently exist (the run_benchmark ... list invocation at the top of this page).
The available benchmarks it returns (the long listing above).
Pick one of them to test (see the sketch after the listing).
If it succeeds, browser windows will keep popping up as the test runs.
Unit tests
browser tests
Build (see the sketch below).
View the help; this command also goes straight into running the tests.
List the available tests.
Try a single test.
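A sketch of the build step for this suite (browser_tests is the target name):
ninja -C out\Release browser_tests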
web tests (blink tests)
Build.
View the help.
Try a run.
(The corresponding commands and logs are shown earlier on this page.)
On certain versions, running it directly may fail because a field in the test-configuration json file is wrong. Fix: modify src/third_party/blink/web_tests/VirtualTestSuites:1089 accordingly, then re-run the command above.
Dependency issues
First install pywin32 and VCForPython27, so that psutil can be compiled.
Resolving the psutil dependency error: install it directly from the prebuilt installer package.
Resolving the fcntl dependency error: there is currently no workaround; the module is not available on windows.
Resolving the numpy dependency error (see the sketch below).
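Where pip and network access are available, the same dependencies can usually be installed in one shot instead (a sketch; the post above resolved psutil via a prebuilt installer):
python -m pip install pywin32 psutil numpy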
Problems encountered
ERROR at //device/vr/buildflags/buildflags.gni:xx:xx: Undefined identifier
Files from a third-party library are missing. Fix: run gclient sync.
References
The catapult framework
gn/ninja: an introduction to Google's new-generation build system
Checking out and building Chromium on Linux
pyenv
How the Chromium source tree is tested
ERROR at //device/vr/buildflags/buildflags.gni:xx:xx: Undefined identifier
browser tests
web tests
chromium checkout build