--quantization to enable online quantization at the same time.
For pre-quantized models, please visit the Unsloth, NVIDIA ModelOpt, or NeuralMagic collections on Hugging Face, which host popular quality-validated quantized models. Quantized models should be validated with benchmarks after quantization to guard against abnormal accuracy regressions caused by quantization loss.
Offline Quantization
To load already quantized models, simply load the model weights and config. Again, if the model has been quantized offline, there is no need to add the --quantization argument when starting the engine. The quantization method will be parsed from the
downloaded Hugging Face config. For example, DeepSeek V3/R1 models are already in FP8, so do not add redundant quantization flags when serving them.
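For example, a pre-quantized FP8 checkpoint can be served without any quantization flag; a minimal sketch (port and host values are arbitrary):

```bash
# The quantization method is read from the checkpoint's Hugging Face config,
# so no --quantization flag is needed.
python3 -m sglang.launch_server \
  --model-path neuralmagic/Meta-Llama-3.1-8B-Instruct-FP8-dynamic \
  --port 30000 --host 0.0.0.0
```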
Alternatively, you can pass --quantization w8a8_int8 or --quantization w8a8_fp8 to invoke the corresponding CUTLASS int8_kernel or fp8_kernel in sgl-kernel. This overrides the Hugging Face config's quantization settings. For instance, with neuralmagic/Meta-Llama-3.1-8B-Instruct-FP8-dynamic, if you execute with --quantization w8a8_fp8, the system will use the W8A8Fp8Config from SGLang to invoke the sgl-kernel, rather than the CompressedTensorsConfig for vLLM kernels.
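A sketch of this override (port and host values are arbitrary):

```bash
# Force SGLang's W8A8Fp8Config and the CUTLASS fp8 kernel from sgl-kernel,
# ignoring the quantization config shipped with the checkpoint.
python3 -m sglang.launch_server \
  --model-path neuralmagic/Meta-Llama-3.1-8B-Instruct-FP8-dynamic \
  --quantization w8a8_fp8 \
  --port 30000 --host 0.0.0.0
```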
Examples of Offline Model Quantization
Using Unsloth
We strongly recommend using Unsloth to quantize and load the model. Please refer to the SGLang Deployment & Inference Guide with Unsloth.
Using auto-round
- LLM quantization
- VLM quantization
- Command Line Usage (Gaudi/CPU/Intel GPU/CUDA)
- Known issues
- Mixed-bit Quantization Limitations: Mixed-bit quantization is not fully supported. Due to vLLM's layer fusion (e.g., QKV fusion), applying different bit-widths to components within the same fused layer can lead to compatibility issues.
- Limited Support for Quantized MoE Models: Quantized MoE models may encounter inference issues due to kernel limitations (e.g., lack of support for quantizing the mlp.gate layer). Please skip quantizing these layers to avoid such errors.
- Limited Support for Quantized VLMs: For Qwen2.5-VL-7B, the auto_round:auto_gptq format yields accuracy close to zero and the GPTQ format fails to load with an error, while the auto_round:auto_awq and AWQ formats work as expected.
Using GPTQModel
Using LLM Compressor
Here we take quantizing meta-llama/Meta-Llama-3-8B-Instruct to FP8 as an example of how to do offline quantization with LLM Compressor. Once the quantized checkpoint has been saved, you can load it directly in SGLang by using the following command:
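A minimal launch sketch; the local directory name below assumes the checkpoint was saved as Meta-Llama-3-8B-Instruct-FP8-Dynamic and may differ in your setup:

```bash
# Serve the locally saved FP8 checkpoint produced by LLM Compressor
python3 -m sglang.launch_server \
  --model-path ./Meta-Llama-3-8B-Instruct-FP8-Dynamic \
  --port 30000 --host 0.0.0.0
```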
Using NVIDIA ModelOpt
NVIDIA Model Optimizer (ModelOpt) provides advanced quantization techniques optimized for NVIDIA hardware. SGLang includes a streamlined workflow for quantizing models with ModelOpt and automatically exporting them for deployment.
Installation
First, install ModelOpt. You can either install it directly or as an optional SGLang dependency.
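A minimal install sketch; nvidia-modelopt is the PyPI package name of ModelOpt, while the SGLang extra name below is an assumption and may differ:

```bash
# Install ModelOpt directly from PyPI
pip install nvidia-modelopt

# Or install it as an optional SGLang dependency
# (extra name assumed; check SGLang's packaging metadata for the exact extra)
pip install "sglang[modelopt]"
```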
Quantization and Export Workflow
SGLang provides an example script that demonstrates the complete ModelOpt quantization and export workflow.
Available Quantization Methods
- modelopt_fp8: FP8 quantization with optimal performance on NVIDIA Hopper and Blackwell GPUs
- modelopt_fp4: FP4 quantization with optimal performance on NVIDIA Blackwell GPUs
Python API Usage
You can also use ModelOpt quantization programmatically from Python.
Deploying Quantized Models
After quantization and export, you can deploy the model with SGLang.
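A launch sketch, assuming the export step produced a local checkpoint directory (the path below is a placeholder):

```bash
# Serve the checkpoint exported by the ModelOpt workflow;
# replace the path with your exported directory.
python3 -m sglang.launch_server \
  --model-path /path/to/modelopt-exported-checkpoint \
  --port 30000 --host 0.0.0.0
```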
Advanced Features
Checkpoint Management: Save and restore fake-quantized checkpoints for reuse.
Benefits of ModelOpt
- Hardware Optimization: Specifically optimized for NVIDIA GPU architectures
- Advanced Quantization: Supports cutting-edge FP8 and FP4 quantization techniques
- Seamless Integration: Automatic export to HuggingFace format for easy deployment
- Calibration-based: Uses calibration datasets for optimal quantization quality
- Production Ready: Enterprise-grade quantization with NVIDIA support
Online Quantization
To enable online quantization, you can simply specify --quantization in the command line. For example, you can launch the server with the following command to enable FP8 quantization for the model meta-llama/Meta-Llama-3.1-8B-Instruct:
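A sketch of such a launch (port and host values are arbitrary):

```bash
# Dynamically quantize the high-precision weights to FP8 at load time
python3 -m sglang.launch_server \
  --model-path meta-llama/Meta-Llama-3.1-8B-Instruct \
  --quantization fp8 \
  --port 30000 --host 0.0.0.0
```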
["awq", "gptq", "marlin", "gptq_marlin", "awq_marlin", "bitsandbytes", "gguf"].
torchao online quantization method
SGLang also supports quantization methods based on torchao. You can simply specify --torchao-config in the command line to enable this feature. For example, if you want to enable int4wo-128 for the model meta-llama/Meta-Llama-3.1-8B-Instruct, you can launch the server with the following command:
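A sketch of such a launch (port and host values are arbitrary):

```bash
# Apply torchao int4 weight-only quantization with group size 128
python3 -m sglang.launch_server \
  --model-path meta-llama/Meta-Llama-3.1-8B-Instruct \
  --torchao-config int4wo-128 \
  --port 30000 --host 0.0.0.0
```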
["int8dq", "int8wo", "fp8wo", "fp8dq-per_tensor", "fp8dq-per_row", "int4wo-32", "int4wo-64", "int4wo-128", "int4wo-256"].
Note: According to this issue, the "int8dq" method currently has bugs when used together with CUDA graph capture, so we suggest disabling CUDA graph capture when using "int8dq". Namely, please use the following command:
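For example (the model path is reused from above for illustration):

```bash
# "int8dq" currently conflicts with CUDA graph capture, so disable it explicitly
python3 -m sglang.launch_server \
  --model-path meta-llama/Meta-Llama-3.1-8B-Instruct \
  --torchao-config int8dq \
  --disable-cuda-graph \
  --port 30000 --host 0.0.0.0
```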
quark_int4fp8_moe online quantization method
SGLang running on AMD GPUs (CDNA3 or CDNA4 architecture) supports the quantization method --quantization quark_int4fp8_moe. It replaces MoE layers originally stored in high precision (bfloat16, float16, or float32) with weights dynamically quantized to int4; during inference these weights are upcast to float8 and the activations are dynamically quantized to float8 on the fly, so the MoE compute runs in float8 precision.
Other layers (e.g., the projections in the attention layers) have their weights quantized online directly to float8.
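A launch sketch; the model path below is a placeholder for an MoE model of your choice:

```bash
# On AMD CDNA3/CDNA4 GPUs: MoE expert weights are quantized online to int4
# (upcast to float8 at runtime); other weights and activations use float8.
python3 -m sglang.launch_server \
  --model-path <your-moe-model> \
  --quantization quark_int4fp8_moe \
  --port 30000 --host 0.0.0.0
```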
