Benchmarking Bridge Aggregators

Benchmark

yarn run benchmark:auto

This is the script we expect users to run the most. It runs auto_run.sh, which batch-logs quotes from API requests to the aggregators. By default it starts a new batch every 5 minutes and runs 3 times in total. It is equivalent to:

./src/benchmark/runners/token-aggregators/auto_run.sh --time-interval 5 --time-scale m --run-count 3
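
If you want a different cadence, you can call the same runner directly with other values for the flags shown above (the flag names come from the command above; the specific numbers here are only an illustration), for example a new batch every 10 minutes, 6 times:

./src/benchmark/runners/token-aggregators/auto_run.sh --time-interval 10 --time-scale m --run-count 6
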
yarn run benchmark:token-aggregator

Benchmarks all the token aggregators.

yarn run benchmark:all

Benchmarks all the aggregators under . As of now, no message aggregators are benchmarked with scripts; they are benchmarked manually, using the reports generated in .
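
Since no message aggregators are scripted yet, running this is currently equivalent in practice to running the token-aggregator suite on its own (an inference from the note above, not a statement about how the script is wired internally):

yarn run benchmark:token-aggregator
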

yarn benchmark:<aggregator>

Allows you to benchmark an individual aggregator; simply replace <aggregator> with the name of the aggregator you want to benchmark.

Ex: yarn run benchmark:cowswap
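
If you want the repeated-batch behaviour of the auto runner but for a single aggregator, a plain shell loop over the script works (a usage sketch, not a script shipped with the repository; the 300-second sleep mirrors the default 5-minute interval):

for i in 1 2 3; do yarn run benchmark:cowswap; sleep 300; done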
