
llm-d-benchmark

This repository provides an automated workflow for benchmarking LLM inference using the llm-d stack. It includes tools for deployment, experiment execution, data collection, and teardown across multiple environments and deployment styles.

tip

We acknowledge that many users still rely on our previous (now deprecated) library; to make the transition easier, it remains available at the v0.5.2 version tag.

Main Goal

Provide a single source of automation for repeatable and reproducible experiments and performance evaluation on llm-d:

  • Declarative lifecycle: All infrastructure, workloads, and experiments render into reviewable YAML before provisioning.
  • End-to-end automation: A single llmdbenchmark CLI covers standup, benchmarking, result collection, and teardown.
  • Reproducibility: A deterministic config merge chain (defaults.yaml → scenario → CLI overrides) captures the exact configuration in each workspace. Any result traces back to its inputs.
  • Structured experiments: Built-in Design of Experiments (DoE) support automates parameter sweeps across both infrastructure and workload configurations.
  • Multiple harnesses: Swap between inference-perf, guidellm, vllm-benchmark, and others with a CLI flag (-l).
  • Post-deployment validation: Per-scenario smoketests verify that deployed pod configurations match what the scenario defines -- resources, parallelism, env vars, probes, routing, and vLLM flags.
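The merge chain mentioned above can be pictured as a recursive deep merge applied in precedence order: defaults first, then scenario, then CLI overrides. The helper below is an illustrative sketch (the function name and sample keys are invented, not the package's actual API); only the precedence order comes from the documentation.

```python
# Illustrative sketch of the defaults -> scenario -> CLI merge order.
# Names here are hypothetical; the real logic lives inside llmdbenchmark.
from copy import deepcopy

def deep_merge(base: dict, override: dict) -> dict:
    """Return base with override applied; nested dicts merge, scalars replace."""
    merged = deepcopy(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = deep_merge(merged[key], value)
        else:
            merged[key] = value
    return merged

defaults = {"model": {"name": "facebook/opt-125m", "replicas": 1}, "harness": "inference-perf"}
scenario = {"model": {"replicas": 4}}       # scenario overrides defaults
cli = {"harness": "guidellm"}               # CLI overrides everything

config = deep_merge(deep_merge(defaults, scenario), cli)
print(config)  # -> {'model': {'name': 'facebook/opt-125m', 'replicas': 4}, 'harness': 'guidellm'}
```

Because the merge is deterministic, the rendered result in the workspace is enough to reproduce the run.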

Prerequisites

Please refer to the official llm-d prerequisites for the most up-to-date requirements. For the client setup, the provided install.sh will install the necessary tools.

Administrative Requirements

Deploying the llm-d stack requires cluster-level admin privileges, as you will be configuring cluster-level resources. However, the scripts can be executed by namespace-level admin users, as long as the Kubernetes infrastructure components are configured and the target namespace already exists.

Getting Started

Install

Quick install (one-liner):

curl -sSL https://raw.githubusercontent.com/llm-d/llm-d-benchmark/main/install.sh | bash
cd llm-d-benchmark
source .venv/bin/activate
llmdbenchmark --version

Or clone manually:

git clone https://github.com/llm-d/llm-d-benchmark.git
cd llm-d-benchmark
./install.sh
source .venv/bin/activate
llmdbenchmark --version

Install a specific branch:

LLMDBENCH_BRANCH=main \
curl -sSL https://raw.githubusercontent.com/llm-d/llm-d-benchmark/main/install.sh | bash

The install script auto-detects if the repo is present -- if not, it clones it first. It creates a virtualenv, validates system tools (kubectl, helm, Python 3.11+), and installs the llmdbenchmark package. See Installation for manual install and flags.
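The tool validation amounts to checking that each required binary is on PATH. A minimal Python sketch of that idea (the actual install script is shell, and deps.py plays a similar role inside the package; the function name here is made up):

```python
# Hypothetical sketch of a system-tool check like the one install.sh performs.
import shutil

def missing_tools(required: list[str]) -> list[str]:
    """Return the subset of required tools that are not found on PATH."""
    return [tool for tool in required if shutil.which(tool) is None]

# A tool name that certainly does not exist is reported as missing.
print(missing_tools(["definitely-not-installed-xyz"]))
```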

tip

The last line of output from llmdbenchmark standup shows the workspace path where all rendered configs, manifests, and results are stored.

Choose a specification

Every command takes a --spec that selects the configuration for your cluster and GPU type. Specs are Jinja2 templates under config/specification/:

--spec gpu                            # NVIDIA GPU setup (config/specification/examples/gpu.yaml.j2)
--spec inference-scheduling           # inference scheduling guide
--spec pd-disaggregation              # prefill-decode disaggregation guide
--spec /full/path/to/my-spec.yaml.j2  # custom spec

If the name is ambiguous or not found, the CLI lists all available specs and exits.
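A hypothetical sketch of how such name resolution might work -- matching a bare name or category/name suffix against the known spec paths, and exiting with the full list when the match is ambiguous or empty. The actual CLI's matching rules may differ:

```python
# Illustrative spec-name resolution; not the CLI's actual implementation.
def resolve_spec(name: str, available: list[str]) -> str:
    if name in available:                      # caller passed a full path
        return name
    # "gpu" and "examples/gpu" both match ".../examples/gpu.yaml.j2"
    matches = [p for p in available if p.endswith(f"/{name}.yaml.j2")]
    if len(matches) != 1:
        raise SystemExit("Spec '%s' is ambiguous or not found. Available specs:\n  %s"
                         % (name, "\n  ".join(sorted(available))))
    return matches[0]

specs = ["config/specification/examples/gpu.yaml.j2",
         "config/specification/guides/inference-scheduling.yaml.j2"]
print(resolve_spec("gpu", specs))
```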

Deploy and benchmark (full pipeline)

Stand up the llm-d stack, run a quick sanity benchmark, and tear down:

# Preview what would be deployed (no cluster changes)
llmdbenchmark --spec gpu --dry-run standup

# Deploy for real
llmdbenchmark --spec gpu standup

# Run a sanity benchmark against the deployed endpoint
llmdbenchmark --spec gpu run -l inference-perf -w sanity_random.yaml

# Tear down when done
llmdbenchmark --spec gpu teardown

note

--dry-run renders all manifests and logs every command that would execute, without touching the cluster. Use it to review before deploying.

Each command renders Kubernetes manifests from your spec's templates and defaults, then applies them. The workspace directory captures rendered configs, manifests, and results for later inspection.

Benchmark an existing endpoint (run-only mode)

Already have a model-serving endpoint running? Skip deployment entirely:

llmdbenchmark --spec gpu run \
  --endpoint-url http://10.131.0.42:80 \
  --model meta-llama/Llama-3.1-8B \
  --namespace my-namespace \
  --harness inference-perf \
  --workload sanity_random.yaml

This uses the same harness, profile rendering, and result collection pipeline -- just without the standup and teardown phases.

tip

run can also be used in debug mode (-d / --debug) which starts the harness pod with sleep infinity so you can exec into it and run commands interactively. See this example.

Run a parameter sweep

Experiment files in workload/experiments/ define structured parameter sweeps. Each file lists treatments (combinations of factor levels) that the benchmark iterates over:

# Sweep workload parameters against an existing stack
llmdbenchmark --spec inference-scheduling run \
  --experiments workload/experiments/inference-scheduling.yaml

# Full DoE: auto standup/run/teardown per infrastructure configuration
llmdbenchmark --spec tiered-prefix-cache experiment \
  --experiments workload/experiments/tiered-prefix-cache.yaml

The run --experiments form varies workload parameters (prompt length, concurrency) against a single endpoint.

The experiment command goes further: it provides an interface to vary infrastructure parameters (replica counts, cache sizes, routing plugins) and stands up a fresh stack for each configuration. This supports advanced performance benchmarking beyond simple setups -- everything becomes tunable, from infrastructure to inference time.
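The expansion from factor levels into treatments is a Cartesian product. The factor names below are invented purely for illustration (the real experiment YAML schema is documented in workload/README.md):

```python
# Sketch of how factor levels expand into a full setup x run treatment matrix.
from itertools import product

setup_factors = {"replicas": [1, 2], "kv_cache_gb": [16, 32]}   # hypothetical infra factors
run_factors = {"concurrency": [8, 64]}                          # hypothetical workload factor

setup_treatments = [dict(zip(setup_factors, levels))
                    for levels in product(*setup_factors.values())]
run_treatments = [dict(zip(run_factors, levels))
                  for levels in product(*run_factors.values())]

# experiment stands up one stack per setup treatment, then runs every
# run treatment against it before tearing down.
matrix = [(s, r) for s in setup_treatments for r in run_treatments]
print(len(setup_treatments), len(run_treatments), len(matrix))  # -> 4 2 8
```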

See workload/README.md for the full experiment file format and all pre-built experiments, as well as advanced functionality.

Next Steps

| Topic | Where to look |
| --- | --- |
| Configuration system, defaults, scenarios, overrides | config/README.md |
| Workloads, harnesses, profiles, experiments | workload/README.md |
| Standup phase, deployment methods, step details | llmdbenchmark/standup/README.md |
| Smoketests, per-scenario validation, adding validators | llmdbenchmark/smoketests/README.md |
| Run phase, benchmark execution, result collection | llmdbenchmark/run/README.md |
| Teardown phase and deep clean | llmdbenchmark/teardown/README.md |
| Design of Experiments (DoE) orchestration | llmdbenchmark/experiment/README.md |
| Plan-phase rendering pipeline | llmdbenchmark/parser/README.md |
| Execution framework and step contribution guide | llmdbenchmark/executor/README.md |
| CLI reference (all flags, env vars) | CLI Reference below |

Prerequisites

Please refer to the official llm-d prerequisites for the most up-to-date requirements.

System Requirements

  • Python 3.11+
  • kubectl -- Kubernetes CLI
  • helm -- Helm package manager
  • curl, git -- Standard system tools
  • helmfile (optional) -- Required for modelservice deployments
  • oc (optional) -- Required for OpenShift clusters

Administrative Requirements

info

Deploying the llm-d stack requires cluster-level admin privileges for configuring cluster-level resources. Namespace-level admin users can run the tool if Kubernetes infrastructure components are configured and the target namespace already exists. Use --non-admin to skip admin-only steps.

Installation

# One-liner -- auto-clones if needed
curl -sSL https://raw.githubusercontent.com/llm-d/llm-d-benchmark/main/install.sh | bash
cd llm-d-benchmark
source .venv/bin/activate

Or manually:

git clone https://github.com/llm-d/llm-d-benchmark.git
cd llm-d-benchmark
./install.sh
source .venv/bin/activate

The install script:

  1. Creates a Python virtual environment at .venv/
  2. Validates Python 3.11+ and pip
  3. Checks for required system tools (curl, git, kubectl, helm)
  4. Checks for optional tools (oc, helmfile, kustomize, jq, yq, skopeo)
  5. Installs llmdbenchmark and config_explorer in editable mode
  6. Verifies all Python packages are importable

Manual Install Without the Install Script

git clone https://github.com/llm-d/llm-d-benchmark.git
cd llm-d-benchmark
python3 -m venv .venv && source .venv/bin/activate
pip install -e .
pip install -e config_explorer/

Verify Installation

llmdbenchmark --version

CLI Reference

Global Options

| Flag | Env Var | Description |
| --- | --- | --- |
| --spec SPEC | LLMDBENCH_SPEC | Specification name or path (bare name, category/name, or full path) |
| --workspace DIR / --ws | LLMDBENCH_WORKSPACE | Workspace directory for outputs (default: temp dir) |
| --base-dir DIR / --bd | LLMDBENCH_BASE_DIR | Base directory for templates/scenarios (default: .) |
| --non-admin / -i | LLMDBENCH_NON_ADMIN | Skip admin-only steps |
| --dry-run / -n | LLMDBENCH_DRY_RUN | Generate YAML without applying to cluster |
| --verbose / -v | LLMDBENCH_VERBOSE | Enable debug logging |
| --version | | Show version |

Plan Options

| Flag | Env Var | Description |
| --- | --- | --- |
| -p NS | LLMDBENCH_NAMESPACE | Namespace(s) to render into the plan |
| -m MODELS | LLMDBENCH_MODELS | Model to render the plan for |
| -t METHODS | LLMDBENCH_METHODS | Deployment method (standalone, modelservice) |
| -f / --monitoring | | Enable monitoring in rendered templates (PodMonitor, EPP verbosity) |
| -k FILE | LLMDBENCH_KUBECONFIG / KUBECONFIG | Kubeconfig path (used for cluster resource auto-detection) |

Standup Options

| Flag | Env Var | Description |
| --- | --- | --- |
| -s STEPS | | Step filter (e.g., 0,1,5 or 1-7) |
| -c FILE | LLMDBENCH_SCENARIO | Scenario file |
| -m MODELS | LLMDBENCH_MODELS | Models to deploy |
| -p NS | LLMDBENCH_NAMESPACE | Namespace(s) |
| -t METHODS | LLMDBENCH_METHODS | Deployment methods (standalone, modelservice) |
| -r NAME | LLMDBENCH_RELEASE | Helm release name |
| -k FILE | LLMDBENCH_KUBECONFIG / KUBECONFIG | Kubeconfig path |
| --parallel N | LLMDBENCH_PARALLEL | Max parallel stacks (default: 4) |
| -f / --monitoring | LLMDBENCH_MONITORING | Enable PodMonitor creation and EPP verbosity during standup |
| --skip-smoketest | | Skip automatic smoketest after standup completes |
| --affinity | LLMDBENCH_AFFINITY | Node affinity / tolerations label |
| --annotations | LLMDBENCH_ANNOTATIONS | Extra annotations for deployed resources |
| --wva | LLMDBENCH_WVA | Workload Variant Autoscaler config |

Teardown Options

| Flag | Env Var | Description |
| --- | --- | --- |
| -s STEPS | | Step filter |
| -m MODELS | LLMDBENCH_MODELS | Model that was deployed (for resource name resolution) |
| -t METHODS | LLMDBENCH_METHODS | Methods to tear down (standalone, modelservice) |
| -r NAME | LLMDBENCH_RELEASE | Helm release name (default: llmdbench) |
| -d / --deep | LLMDBENCH_DEEP_CLEAN | Deep clean: delete ALL resources in both namespaces |
| -p NS | LLMDBENCH_NAMESPACE | Comma-separated namespaces (model,harness) |
| -k FILE | LLMDBENCH_KUBECONFIG / KUBECONFIG | Kubeconfig path |

Experiment Options

| Flag | Env Var | Description |
| --- | --- | --- |
| -e FILE | LLMDBENCH_EXPERIMENTS | Experiment YAML with setup and run treatments (required) |
| -p NS | LLMDBENCH_NAMESPACE | Namespace(s) |
| -t METHODS | LLMDBENCH_METHODS | Deploy method |
| -m MODELS | LLMDBENCH_MODELS | Models to deploy |
| -k FILE | LLMDBENCH_KUBECONFIG / KUBECONFIG | Kubeconfig path |
| --parallel N | LLMDBENCH_PARALLEL | Max parallel stacks (default: 4) |
| -f / --monitoring | | Enable monitoring during standup and run phases |
| -l HARNESS | LLMDBENCH_HARNESS | Harness name |
| -w PROFILE | LLMDBENCH_WORKLOAD | Workload profile |
| -o OVERRIDES | LLMDBENCH_OVERRIDES | Workload parameter overrides |
| -r DEST | LLMDBENCH_OUTPUT | Results destination (local, gs://, s3://) |
| -j N | LLMDBENCH_PARALLELISM | Parallel harness pods |
| --wait-timeout N | LLMDBENCH_WAIT_TIMEOUT | Seconds to wait for harness completion |
| -x DATASET | LLMDBENCH_DATASET | Dataset URL for harness replay |
| -d / --debug | LLMDBENCH_DEBUG | Debug mode: start harness pods with sleep infinity |
| --stop-on-error | | Abort on first setup treatment failure |
| --skip-teardown | | Leave stacks running for debugging |

Run Options

| Flag | Env Var | Description |
| --- | --- | --- |
| -s STEPS | | Step filter (e.g., 0,1,5 or 2-6) |
| -m MODEL | LLMDBENCH_MODEL | Model name override (e.g. facebook/opt-125m) |
| -p NS | LLMDBENCH_NAMESPACE | Namespaces (deploy,benchmark) |
| -t METHODS | LLMDBENCH_METHODS | Deploy method used during standup |
| -k FILE | LLMDBENCH_KUBECONFIG / KUBECONFIG | Kubeconfig path |
| -l HARNESS | LLMDBENCH_HARNESS | Harness name (inference-perf, guidellm, vllm-benchmark) |
| -w PROFILE | LLMDBENCH_WORKLOAD | Workload profile YAML |
| -e FILE | LLMDBENCH_EXPERIMENTS | Experiment treatments YAML for parameter sweeping |
| -o OVERRIDES | LLMDBENCH_OVERRIDES | Workload parameter overrides (param=value,...) |
| -r DEST | LLMDBENCH_OUTPUT | Results destination (local, gs://, s3://) |
| -j N | LLMDBENCH_PARALLELISM | Parallel harness pods |
| -U URL | LLMDBENCH_ENDPOINT_URL | Explicit endpoint URL (run-only mode) |
| -c FILE | | Run config YAML (run-only mode) |
| --generate-config | | Generate config and exit |
| -x DATASET | LLMDBENCH_DATASET | Dataset URL for harness replay |
| --wait-timeout N | LLMDBENCH_WAIT_TIMEOUT | Seconds to wait for harness completion |
| -f / --monitoring | | Enable metrics scraping and EPP log capture during benchmark |
| -q / --serviceaccount | LLMDBENCH_SERVICE_ACCOUNT | Service account name for harness pods |
| -g / --envvarspod | LLMDBENCH_HARNESS_ENVVARS_TO_YAML | Comma-separated env var names to propagate into harness pod |
| --analyze | | Run local analysis on results after collection |
| -z / --skip | LLMDBENCH_SKIP | Skip execution, only collect existing results |
| -d / --debug | LLMDBENCH_DEBUG | Debug mode: start harness pods with sleep infinity |

Smoketest Options

Run post-deployment validation independently against an already-deployed stack.

llmdbenchmark --spec gpu smoketest -p my-namespace
llmdbenchmark --spec gpu smoketest -p my-namespace -s 2   # config validation only

| Flag | Env Var | Description |
| --- | --- | --- |
| -s STEPS | | Step filter (e.g., 0,1,2 or 0-2) |
| -p NS | LLMDBENCH_NAMESPACE | Namespace(s) |
| -t METHODS | LLMDBENCH_METHODS | Deployment methods (standalone, modelservice) |
| -k FILE | LLMDBENCH_KUBECONFIG / KUBECONFIG | Kubeconfig path |
| --parallel N | LLMDBENCH_PARALLEL | Max parallel stacks (default: 4) |

Smoketests also run automatically after standup unless --skip-smoketest is passed. See llmdbenchmark/smoketests/README.md for details on what each step validates.

Environment Variables

Every CLI flag can be set via an LLMDBENCH_* environment variable (see tables above). The priority chain is:

  1. CLI flag (highest) -- explicitly passed on the command line
  2. Environment variable -- exported in the user's shell
  3. Rendered config (lowest) -- defaults.yaml + scenario YAML

This is useful for CI/CD pipelines, .bashrc configuration, or migrating from the original bash-based workflow.

# Example: set common defaults via env vars, override per-run via CLI
export LLMDBENCH_SPEC=inference-scheduling
export LLMDBENCH_NAMESPACE=my-team-ns
export LLMDBENCH_KUBECONFIG=~/.kube/my-cluster

# These use the env vars above; --dry-run overrides nothing, just adds a flag
llmdbenchmark standup --dry-run
llmdbenchmark standup                 # live deploy to my-team-ns
llmdbenchmark standup -p override-ns  # CLI wins over env var

Boolean env vars accept 1, true, or yes (case-insensitive). Active LLMDBENCH_* overrides are logged at startup for debugging.
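A minimal sketch of that boolean parsing rule (the helper name is illustrative, not the package's API):

```python
# Parse a boolean LLMDBENCH_* flag: 1, true, or yes (case-insensitive) is truthy.
import os

def env_flag(name: str, default: bool = False) -> bool:
    raw = os.environ.get(name)
    if raw is None:
        return default
    return raw.strip().lower() in {"1", "true", "yes"}

os.environ["LLMDBENCH_DRY_RUN"] = "YES"
print(env_flag("LLMDBENCH_DRY_RUN"))  # -> True, despite the uppercase
```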

Architecture

The tool operates in three phases, each composed of numbered steps executed by a shared StepExecutor framework.

Config Override Chain

Values flow through a merge pipeline during the plan phase:

defaults.yaml → scenario YAML → LLMDBENCH_* environment variables → CLI flags → rendered config.yaml

Steps read from the rendered config.yaml and never define their own fallback defaults. If a required key is missing from the rendered config, the step raises a clear error. This ensures defaults.yaml is the single source of truth for all default values. Environment variables (LLMDBENCH_*) sit between scenario overrides and CLI flags in the priority chain.
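The "no step-level defaults" rule amounts to a strict lookup that fails loudly on missing keys. An illustrative sketch, not the package's actual helper:

```python
# Strict config lookup: missing keys raise instead of silently defaulting.
def require(config: dict, dotted_key: str):
    node = config
    for part in dotted_key.split("."):
        if not isinstance(node, dict) or part not in node:
            raise KeyError(
                f"'{dotted_key}' missing from rendered config.yaml; "
                "add it to defaults.yaml or a scenario override")
        node = node[part]
    return node

rendered = {"model": {"name": "facebook/opt-125m"}}
print(require(rendered, "model.name"))  # -> facebook/opt-125m
```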

See config/README.md for the full configuration reference, including how to override values.

Deployment Methods

The standup phase supports two deployment paths:

  • standalone -- Direct Kubernetes Deployments and Services for each model (step 06)
  • modelservice -- Helm-based deployment with gateway infrastructure, GAIE, and LWS support (steps 07-09)

Both paths share steps 00-05 (infrastructure, namespaces, secrets) and step 10 (smoketest).

Standup Steps

| Step | Name | Scope | Description |
| --- | --- | --- | --- |
| 00 | ensure_infra | Global | Validate dependencies, cluster connectivity, kubeconfig |
| 02 | admin_prerequisites | Global | Admin prerequisites (CRDs, gateway, LWS, namespaces) |
| 03 | workload_monitoring | Global | Workload monitoring, node resource discovery |
| 04 | model_namespace | Per-stack | Model namespace (PVCs, secrets, download job) |
| 05 | harness_namespace | Per-stack | Harness namespace (PVC, data access pod, preprocess) |
| 06 | standalone_deploy | Per-stack | Standalone vLLM deployment (Deployment + Service) |
| 07 | deploy_setup | Per-stack | Helm repos and gateway infrastructure (helmfile) |
| 08 | deploy_gaie | Per-stack | GAIE inference extension deployment |
| 09 | deploy_modelservice | Per-stack | Modelservice deployment (helmfile + LWS) |
| 10 | smoketest | Per-stack | Health check, inference test, per-scenario config validation |
| 11 | inference_test | Per-stack | Sample inference request with demo curl command |

Run Steps

| Step | Name | Scope | Description |
| --- | --- | --- | --- |
| 00 | preflight | Global | Validate cluster connectivity and run-phase prerequisites |
| 01 | cleanup_previous | Global | Remove leftover harness pods from previous runs |
| 02 | detect_endpoint | Per-stack | Discover or accept the model-serving endpoint |
| 03 | verify_model | Per-stack | Verify the expected model is served at the endpoint |
| 04 | render_profiles | Per-stack | Render workload profile templates with runtime values |
| 05 | create_profile_configmap | Per-stack | Create profile and harness-scripts ConfigMaps |
| 06 | deploy_harness | Per-stack | Deploy harness pod(s) and execute the full treatment cycle |
| 07 | wait_completion | Per-stack | Wait for harness pod(s) to complete |
| 08 | collect_results | Per-stack | Collect results from PVC to local workspace |
| 09 | upload_results | Global | Upload results to cloud storage (safety-net bulk upload) |
| 10 | cleanup_post | Global | Clean up harness pods and ConfigMaps |
| 11 | analyze_results | Global | Run local analysis on collected results |

Teardown Steps

| Step | Name | Description | Condition |
| --- | --- | --- | --- |
| 00 | preflight | Validate cluster connectivity, load config | Always |
| 01 | uninstall_helm | Uninstall Helm releases, delete routes and jobs | Modelservice only |
| 02 | clean_harness | Clean harness ConfigMaps, pods, secrets | Always |
| 03 | delete_resources | Delete namespaced resources (normal or deep) | Always |
| 04 | clean_cluster_roles | Clean cluster-scoped ClusterRoles/Bindings | Admin + modelservice only |

Project Structure

config/                          Declarative configuration (all plan-phase inputs)
  templates/
    jinja/                       Jinja2 templates for Kubernetes manifests
  values/defaults.yaml           Base configuration with all anchored defaults
  scenarios/                     Deployment overrides (guides/, examples/, cicd/)
  specification/                 Specification templates (guides/, examples/, cicd/)

llmdbenchmark/                   Python package
  cli.py                         Entry point, workspace setup, command dispatch
  config.py                      Plan-phase workspace configuration singleton

  interface/                     CLI subcommand definitions (argparse)
    commands.py                  Command enum (plan, standup, teardown, run, experiment)
    env.py                       Environment variable helpers for CLI defaults
    plan.py                      Plan subcommand
    standup.py                   Standup subcommand
    teardown.py                  Teardown subcommand
    run.py                       Run subcommand
    experiment.py                Experiment subcommand (DoE orchestration)

  parser/                        Plan-phase template rendering (see parser/README.md)
    render_specification.py      Specification file parsing and validation
    render_plans.py              Jinja2 template rendering engine
    render_result.py             Structured error tracking for renders
    config_schema.py             Pydantic config validation (typo/type detection)
    version_resolver.py          Auto-resolve image tags and chart versions
    cluster_resource_resolver.py Auto-detect accelerator/network values

  experiment/                    DoE experiment orchestration (see experiment/README.md)
    parser.py                    Parse experiment YAML (setup + run treatments)
    summary.py                   Per-treatment result tracking and summary output

  executor/                      Execution framework (see executor/README.md)
    step.py                      Step ABC, Phase enum, result dataclasses
    step_executor.py             Step orchestrator (sequential + parallel)
    command.py                   kubectl/helm/helmfile subprocess wrapper
    context.py                   Shared state (ExecutionContext dataclass)
    protocols.py                 Structural typing (LoggerProtocol)
    deps.py                      System dependency checker

  smoketests/                    Post-deployment validation (see smoketests/README.md)
    base.py                      Health checks, inference tests, pod inspection helpers
    report.py                    CheckResult / SmoketestReport tracking
    steps/                       Smoketest step implementations (00-02)
    validators/                  Per-scenario config validators

  standup/                       Standup phase (see standup/README.md)
    preprocess/                  Scripts mounted as ConfigMaps in vLLM pods
    steps/                       Step implementations (00-11)

  teardown/                      Teardown phase (see teardown/README.md)
    steps/                       Step implementations (00-05)

  run/                           Run phase (see run/README.md)
    steps/                       Step implementations (00-11)

  logging/                       Structured logger with emoji support (see logging/README.md)
  exceptions/                    Error hierarchy (Template, Configuration, Execution)
  utilities/                     Shared helpers (see utilities/README.md)
    cluster.py                   Kubernetes connection, platform detection
    capacity_validator.py        GPU capacity validation
    huggingface.py               HuggingFace model access checks
    endpoint.py                  Endpoint discovery and model verification
    profile_renderer.py          Workload profile template rendering
    kube_helpers.py              Shared kubectl patterns (wait, collect, cleanup)
    cloud_upload.py              Unified cloud storage upload (GCS, S3)
    os/
      filesystem.py              Workspace and directory management
      platform.py                Host OS detection

See the module-level READMEs listed above for detailed documentation.

Well-Lit Path Guides

llm-d-benchmark supports all available Well-Lit Path Guides. Each guide has a corresponding specification:

llmdbenchmark --spec inference-scheduling standup         # Inference scheduling
llmdbenchmark --spec pd-disaggregation standup            # Prefill-decode disaggregation
llmdbenchmark --spec tiered-prefix-cache standup          # Tiered prefix cache
llmdbenchmark --spec precise-prefix-cache-aware standup   # Precise prefix cache-aware routing
llmdbenchmark --spec wide-ep-lws standup                  # Wide expert-parallel with LWS

warning

wide-ep-lws requires RDMA/RoCE networking and LeaderWorkerSet (LWS) controller. Verify your cluster has working RDMA HCAs before deploying.

Main Concepts

Model ID Label

Kubernetes resource names derived from model IDs use a hashed model_id_label format: {first8}-{sha256_8}-{last8}. This keeps resource names within DNS length limits while remaining identifiable. The label is computed automatically during the plan phase and used in template rendering for deployment names, service names, and route names. See config/README.md for details.
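A sketch of that label scheme, assuming lowercase sanitization of non-alphanumeric characters (the exact sanitization rules in llm-d-benchmark may differ; this only illustrates the {first8}-{sha256_8}-{last8} shape and why it stays DNS-safe):

```python
# Illustrative model_id_label: prefix + short content hash + suffix.
import hashlib
import re

def model_id_label(model_id: str) -> str:
    # Assumed sanitization: lowercase, non-alphanumerics become hyphens.
    safe = re.sub(r"[^a-z0-9]", "-", model_id.lower())
    digest = hashlib.sha256(model_id.encode()).hexdigest()[:8]
    return f"{safe[:8]}-{digest}-{safe[-8:]}"

label = model_id_label("meta-llama/Llama-3.1-8B")
print(label, len(label))  # 26 chars here, well under the 63-char DNS label limit
```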

Scenarios

Cluster-specific configuration: GPU model, LLM, and llm-d parameters. Scenarios are YAML files under config/scenarios/ that override defaults.yaml for a particular deployment context.

Harnesses

Load generators that drive benchmark traffic. Supported: inference-perf, guidellm, vllm benchmarks, inferencemax, and nop (for model load time benchmarking).

(Workload) Profiles

Benchmark load specifications including LLM use case, traffic pattern, input/output distribution, and dataset. Found under workload/profiles.

info

The triplet <scenario>, <harness>, <(workload) profile>, combined with the standup/teardown capabilities, provides enough information to reproduce any single experiment.

Experiments

Design of Experiments (DoE) files describing parameter sweeps across standup and run configurations. The experiment command automates the full setup x run treatment matrix -- standing up a different infrastructure configuration for each setup treatment, running all workload variations, tearing down, and producing a summary. See llmdbenchmark/experiment/README.md for the full experiment lifecycle documentation.

Configuration Explorer

The configuration explorer is a library that helps find the most cost-effective, optimal configuration for serving models on llm-d based on hardware specification, workload characteristics, and SLO requirements. A "Capacity Planner" is provided as an initial component to help determine if a vLLM configuration is feasible for deployment.

Benchmark Report

Results are saved in the native format of each harness, as well as a universal Benchmark Report format (v0.1 and v0.2). The benchmark report is a standard data format describing the cluster configuration, workload, and results of a benchmark run. It acts as a common API for comparing results across different harnesses and configurations. See llmdbenchmark/analysis/benchmark_report/README.md for the full schema documentation and Python API.

Analysis

The analysis pipeline generates per-request distribution plots, cross-treatment comparison tables and charts, and Prometheus metric visualizations. Analysis runs both inside the harness container (automatically) and locally via --analyze. For interactive exploration, a Jupyter notebook is also available at docs/analysis/README.md.

Testing

Unit tests live under tests/ and run with pytest:

pytest tests/ -v

For integration testing against a live cluster, util/test-scenarios.sh runs standup/teardown cycles across scenarios:

util/test-scenarios.sh --stable     # Run known-stable scenarios
util/test-scenarios.sh --trouble    # Run scenarios that have had issues
util/test-scenarios.sh --all        # Run all scenarios
util/test-scenarios.sh --ms-only    # Modelservice scenarios only
util/test-scenarios.sh --sa-only    # Standalone scenarios only

See tests/README.md for unit test details.

Developing

  • Developer Guide -- How to add new steps, analysis modules, harnesses, scenarios, and experiments
  • Package Architecture -- Overview of the llmdbenchmark package structure and submodules

Contribute

License

Licensed under Apache License 2.0. See LICENSE for details.

Content Source

This content is automatically synced from README.md on the main branch of the llm-d/llm-d-benchmark repository.

📝 To suggest changes, please edit the source file or create an issue.