## Benchmark

- Benchmark the latency of running a single static batch without a server. The arguments are the same as for `launch_server.py`. Note that this is a simplified test script without a dynamic batching server, so it may run out of memory for a batch size that a real server can handle. A real server truncates the prefill into several batches, while this simplified script does not.
  - Without a server (no need to launch a server)
  - With a server (please use `sglang.launch_server` to launch a server first and run the following command)
- Benchmark offline processing. This script will start an offline engine and run the benchmark.
- Benchmark online serving. Please use `sglang.launch_server` to launch a server first, then run the following command.
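As a sketch, the three benchmark modes above are typically invoked like this (the model path is illustrative, and exact flag names may vary slightly across SGLang versions):

```shell
# Single static batch, no server needed
python3 -m sglang.bench_one_batch --model-path meta-llama/Meta-Llama-3.1-8B-Instruct \
  --batch 32 --input-len 256 --output-len 32

# Offline processing: starts an offline engine and runs the benchmark
python3 -m sglang.bench_offline_throughput --model-path meta-llama/Meta-Llama-3.1-8B-Instruct \
  --num-prompts 10

# Online serving: launch a server with sglang.launch_server first, then run
python3 -m sglang.bench_serving --backend sglang --num-prompts 10
```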
## Profile with PyTorch Profiler
PyTorch Profiler is a convenient basic tool for inspecting kernel execution time, call stacks, and kernel overlap and occupancy.

### Profile a server with `sglang.bench_serving`
`SGLANG_TORCH_PROFILER_DIR` should be set on both the server and client side; otherwise the trace file cannot be generated correctly. A reliable way is to set `SGLANG_TORCH_PROFILER_DIR` in the shell's rc file (e.g., `~/.bashrc` for bash shells).
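A typical setup might look like the following sketch (the directory and model path are illustrative; this assumes the `--profile` flag of `bench_serving` triggers server-side profiling, as in recent SGLang versions):

```shell
# Server side
export SGLANG_TORCH_PROFILER_DIR=/tmp/sglang_profile
python3 -m sglang.launch_server --model-path meta-llama/Meta-Llama-3.1-8B-Instruct

# Client side (same environment variable must be set)
export SGLANG_TORCH_PROFILER_DIR=/tmp/sglang_profile
python3 -m sglang.bench_serving --backend sglang --num-prompts 10 --profile
```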
For more details, please refer to Bench Serving Guide.
### Profile In PD Disaggregation Mode
When profiling in PD disaggregation mode, prefill and decode workers must be profiled separately due to torch profiler limitations. The `bench_serving` command provides dedicated options for this:
#### Profile Prefill Workers
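A sketch of such an invocation, assuming a prefill worker listening on port 30000 (URL and port are illustrative; the exact combination of flags may differ by version):

```shell
python3 -m sglang.bench_serving --backend sglang --num-prompts 10 \
  --profile --profile-prefill-url http://127.0.0.1:30000
```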
#### Profile Decode Workers
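The decode-side counterpart might look like this (URL and port illustrative):

```shell
python3 -m sglang.bench_serving --backend sglang --num-prompts 10 \
  --profile --profile-decode-url http://127.0.0.1:30100
```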
#### Important Notes

- `--profile-prefill-url` and `--profile-decode-url` are mutually exclusive: you cannot profile both at the same time.
- Both options support multiple worker URLs for multi-instance setups.
- Make sure `SGLANG_TORCH_PROFILER_DIR` is set on all worker nodes before starting the servers.
- For more details on setting up PD disaggregation, see the PD Disaggregation Guide.
### Profile a server with `sglang.bench_offline_throughput`
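A sketch (model path illustrative; this assumes `bench_offline_throughput` accepts a `--profile` flag like `bench_serving` does):

```shell
export SGLANG_TORCH_PROFILER_DIR=/tmp/sglang_profile
python3 -m sglang.bench_offline_throughput --model-path meta-llama/Meta-Llama-3.1-8B-Instruct \
  --num-prompts 10 --profile
```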
### Profile a server with `sglang.profiler`
When the server is running (e.g., processing a decoding request), you can start live profiling immediately by sending a profile request to the server.
You can do this by running `python3 -m sglang.profiler`. For example:
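For a server on the default port, the minimal form is simply the module itself (additional options, such as a target URL, may be available; check `--help` for your version):

```shell
python3 -m sglang.profiler
```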
### Profile a server with HTTP API endpoints
SGLang provides HTTP API endpoints to control profiling on a running server. This allows you to start and stop profiling programmatically, which is useful for capturing specific workload patterns.

#### Using `/start_profile` endpoint
The /start_profile endpoint starts profiling on the server. You can control when profiling begins and how long it runs using the following parameters:
Basic usage:
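For instance, against a server on the default port (URL illustrative):

```shell
curl -X POST http://localhost:30000/start_profile \
  -H "Content-Type: application/json" \
  -d '{"num_steps": 10}'
```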
- `output_dir` (optional): Directory where profile traces will be saved. If not specified, uses the `SGLANG_TORCH_PROFILER_DIR` environment variable, or `/tmp` as the default.
- `num_steps` (optional): Number of steps to profile. If not specified, profiling continues until manually stopped with `/end_profile`.
- `start_step` (optional): Step number at which to start profiling (inclusive). Useful for skipping warmup iterations.
- `activities` (optional): List of activities to profile, e.g., `["CPU", "GPU"]`. Default is `["CPU", "GPU"]`.
- `merge_profiles` (optional): Whether to merge distributed traces. Default is `false`.
Profiling starts at `start_step` (inclusive) and continues for `num_steps` iterations. For example, with `start_step=3` and `num_steps=10`, profiling captures steps 3, 4, 5, 6, 7, 8, 9, 10, 11, and 12 (10 steps total, starting from step 3).
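As a quick sanity check of that arithmetic, the captured step numbers are simply `start_step` through `start_step + num_steps - 1`:

```shell
start_step=3
num_steps=10
# Prints the step numbers the profiler would capture: 3 through 12
seq "$start_step" "$((start_step + num_steps - 1))" | tr '\n' ' '; echo
```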
Advanced usage with `start_step`:
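For example, skipping three warmup steps and then capturing ten (URL illustrative):

```shell
curl -X POST http://localhost:30000/start_profile \
  -H "Content-Type: application/json" \
  -d '{"start_step": 3, "num_steps": 10, "activities": ["CPU", "GPU"]}'
```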
#### Using `/end_profile` endpoint
The /end_profile endpoint stops an ongoing profiling session and saves the trace file.
Calling it manually is only necessary if profiling was started without `num_steps`. If `num_steps` is specified, profiling will automatically stop after that many steps.
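For example (URL illustrative):

```shell
curl -X POST http://localhost:30000/end_profile
```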
#### Example workflow
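A complete session might look like this sketch (URL, directory, and workload are illustrative):

```shell
# 1. Start profiling; without num_steps it runs until /end_profile is called
curl -X POST http://localhost:30000/start_profile \
  -H "Content-Type: application/json" \
  -d '{"output_dir": "/tmp/sglang_profile"}'

# 2. Send the workload you want to capture
python3 -m sglang.bench_serving --backend sglang --num-prompts 10

# 3. Stop profiling and save the trace
curl -X POST http://localhost:30000/end_profile
```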
### Profiler Trace Merger for Distributed Traces
SGLang now supports automatic merging of profiling traces from distributed setups with multiple parallelism types (TP, DP, PP, EP). This feature is particularly useful for analyzing performance across distributed runs.

#### Multi-Node Profiling and Shared Storage Considerations
Merging profiler output from a single node is fully supported. When profiling distributed environments that span multiple nodes, the output directory should be on shared storage (e.g., NFS, Lustre) accessible by all nodes so that the trace files can be merged. Without shared storage accessible across nodes, automatic merging of trace files during profiling is currently not supported.

#### HTTP API Usage
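To request merging from the HTTP API, pass the `merge_profiles` parameter described above (URL illustrative):

```shell
curl -X POST http://localhost:30000/start_profile \
  -H "Content-Type: application/json" \
  -d '{"num_steps": 10, "merge_profiles": true}'
```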
#### Command Line Usage
#### Output Files

The profile merger generates:

- Individual rank trace files: `{profile_id}-TP-{tp}-DP-{dp}-PP-{pp}-EP-{ep}.trace.json.gz`
- Merged trace file: `merged-{profile_id}.trace.json.gz`
### Possible PyTorch bugs
In some cases (for example, when using Qwen 2.5 VL) the profiler may fail due to a known PyTorch bug in stack collection; you can work around it by disabling the profiler's `with_stack` option with an environment variable.
### View traces

Trace files can be loaded and visualized with:

- https://ui.perfetto.dev/ (any browser)
- chrome://tracing (Chrome browser only)
To generate a small trace file that the browser can open smoothly, reduce the number of prompts with the `--num-prompts` argument and limit the length of output sequences to 100 with the `--sharegpt-output-len` argument.
Additionally, if you want to map a CUDA kernel in the trace back to the SGLang Python source code that launched it, you need to disable CUDA Graph when starting the server. This can be done by adding the `--disable-cuda-graph` flag to the launch command.
## Profile with Nsight
Nsight Systems is an advanced tool that exposes more profiling details, such as register and shared-memory usage, annotated code regions, and low-level CUDA APIs and events.

- Prerequisite: install using apt, or run inside an NVIDIA Docker container or SGLang Docker container.
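  For example, on Ubuntu with NVIDIA's apt repository configured (the package name may vary by distribution):

  ```shell
  apt update && apt install -y nsight-systems-cli
  ```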
- To profile a single batch, use:
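  A sketch, with flags that keep forked processes and CUDA-graph kernels visible in the trace (model path illustrative):

  ```shell
  nsys profile --trace-fork-before-exec=true --cuda-graph-trace=node \
    python3 -m sglang.bench_one_batch --model-path meta-llama/Meta-Llama-3.1-8B-Instruct \
    --batch 32 --input-len 256 --output-len 32
  ```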
- To profile a server, e.g.:
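  For instance (model path and flags illustrative):

  ```shell
  # Launch the server under Nsight Systems
  nsys profile --trace-fork-before-exec=true --cuda-graph-trace=node \
    python3 -m sglang.launch_server --model-path meta-llama/Meta-Llama-3.1-8B-Instruct

  # In another terminal, generate load
  python3 -m sglang.bench_serving --backend sglang --num-prompts 10
  ```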
  In practice, we recommend setting the `--duration` argument to a large value. Whenever you want the server to stop profiling, first run `nsys sessions list` to get the session id (in the form `profile-XXXXX`), then run `nsys stop --session=profile-XXXXX` to stop the profiler and generate `.nsys-rep` files immediately.
- Use NVTX to annotate code regions, e.g., to see their execution time.
### Layer-wise NVTX Profiling with Nsight Systems
SGLang provides built-in layerwise NVTX annotations that can be combined with the CUDA Profiler for detailed per-layer profiling in Nsight Systems. This is particularly useful for identifying performance bottlenecks at the layer level.

#### Using `--enable-layerwise-nvtx-marker` with Nsight Systems and `/start_profile`
The `--enable-layerwise-nvtx-marker` flag automatically adds NVTX markers to every layer in your model. This is particularly powerful when combined with Nsight Systems profiling to see detailed per-layer performance.
**Method 1: Using `/start_profile` with `CUDA_PROFILER` (for programmatic control)**
This method allows you to control exactly when profiling starts/stops via HTTP API while Nsight Systems is running.
- Launch the server with layerwise NVTX enabled under Nsight Systems:
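  A sketch of such a launch (model path illustrative; `--capture-range=cudaProfilerApi` is explained later in this section):

  ```shell
  nsys profile --trace-fork-before-exec=true --cuda-graph-trace=node \
    --capture-range=cudaProfilerApi \
    python3 -m sglang.launch_server --model-path meta-llama/Meta-Llama-3.1-8B-Instruct \
    --enable-layerwise-nvtx-marker --disable-cuda-graph
  ```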
  Note: NVTX markers are not emitted for kernel launches captured by CUDA graphs. Use `--disable-cuda-graph` to ensure all layerwise NVTX markers are emitted in the trace.
- In another terminal, control profiling via `/start_profile` with the `CUDA_PROFILER` activity:
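  For example (URL illustrative; the `start_step`/`num_steps` values match the notes below):

  ```shell
  curl -X POST http://localhost:30000/start_profile \
    -H "Content-Type: application/json" \
    -d '{"activities": ["CUDA_PROFILER"], "start_step": 3, "num_steps": 10}'
  ```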
- Send requests to generate load:
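  For example:

  ```shell
  python3 -m sglang.bench_serving --backend sglang --num-prompts 10
  ```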
- Profiling will automatically stop after 10 steps (due to `num_steps: 10`). If you hadn't specified `num_steps`, you would need to manually stop it:
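  A manual stop looks like this (URL illustrative):

  ```shell
  curl -X POST http://localhost:30000/end_profile
  ```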
The `--capture-range=cudaProfilerApi` option tells Nsight Systems to capture data only between `cudaProfilerStart()` and `cudaProfilerStop()` calls (triggered by `/start_profile` and `/end_profile`), reducing overhead and file size. The `start_step` parameter skips the first 3 steps to avoid capturing warmup overhead.
**Method 2: Simpler approach without the `/start_profile` API**
For simpler use cases where you don’t need fine-grained control over profiling start/stop, you can profile with Nsight Systems capturing the entire workload:
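A sketch (output name and model path illustrative):

```shell
nsys profile -o layerwise_profile --trace-fork-before-exec=true --cuda-graph-trace=node \
  python3 -m sglang.launch_server --model-path meta-llama/Meta-Llama-3.1-8B-Instruct \
  --enable-layerwise-nvtx-marker --disable-cuda-graph
```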
After the run completes, open the generated `.qdrep` file with Nsight Systems. In the timeline you will see:
- NVTX ranges: Each layer appears as a labeled range in the timeline with detailed information in the marker metadata
- CUDA kernels: All GPU kernels are shown alongside the layer annotations
- Layer hierarchy: The full module path (e.g., `meta-llama/Meta-Llama-3.1-8B-Instruct.model.layers.0.self_attn.qkv_proj`) helps identify specific layers. The prefix uses the full model path from `--model-path`.
- Tensor shapes: Input/output dimensions and parameter shapes are included in the NVTX marker data
- Granular visibility: See exactly which layers are taking the most time
- Memory tracking: Identify layers with large memory allocations
- Bottleneck identification: Quickly locate inefficient operations
- Communication overhead: In multi-GPU setups, see per-layer communication costs
- Development debugging: Validate that model architecture changes have the expected performance impact
## Other tips
- You can benchmark a model using dummy weights by only providing the `config.json` file. This allows for quick testing of model variants without training. To do so, add `--load-format dummy` to the above commands; then you only need a correct `config.json` under the checkpoint folder.
- You can benchmark a model with modified configs (e.g., fewer layers) by using `--json-model-override-args`. For example, you can benchmark a model with only 2 layers and 2 KV heads using:
- You can use `--python-backtrace=cuda` to see the Python call stack for all CUDA kernels, as in PyTorch Profiler. (Caveat: this can cause inaccurately long kernel runtimes for CUDA-event-based timing.)
- For more arguments, see the Nsight Systems User Guide.
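The `--json-model-override-args` example above might look like the following sketch (the bench command and model path are illustrative; the JSON keys are the standard Hugging Face config names for a Llama-style model):

```shell
python3 -m sglang.bench_one_batch --model-path meta-llama/Meta-Llama-3.1-8B-Instruct \
  --batch 32 --input-len 256 --output-len 32 --load-format dummy \
  --json-model-override-args '{"num_hidden_layers": 2, "num_key_value_heads": 2}'
```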
