/generate Endpoint
The /generate endpoint accepts the following parameters in JSON format. For detailed usage, see the native API documentation. The request object is defined in io_struct.py::GenerateReqInput; you can also read the source code to find more arguments and docs.
| Argument | Type/Default | Description |
|---|---|---|
| text | Optional[Union[List[str], str]] = None | The input prompt. Can be a single prompt or a batch of prompts. |
| input_ids | Optional[Union[List[List[int]], List[int]]] = None | The token IDs for text; one can specify either text or input_ids. |
| input_embeds | Optional[Union[List[List[List[float]]], List[List[float]]]] = None | The embeddings for input_ids; one can specify either text, input_ids, or input_embeds. |
| image_data | Optional[Union[List[List[ImageDataItem]], List[ImageDataItem], ImageDataItem]] = None | The image input. Supports three formats: (1) Raw images: PIL Image, file path, URL, or base64 string; (2) Processor output: Dict with format: "processor_output" containing HuggingFace processor outputs; (3) Precomputed embeddings: Dict with format: "precomputed_embedding" and feature containing pre-calculated visual embeddings. Can be a single image, list of images, or list of lists of images. See Multimodal Input Formats for details. |
| audio_data | Optional[Union[List[AudioDataItem], AudioDataItem]] = None | The audio input. Can be a file name, URL, or base64 encoded string. |
| sampling_params | Optional[Union[List[Dict], Dict]] = None | The sampling parameters as described in the sections below. |
| rid | Optional[Union[List[str], str]] = None | The request ID. |
| return_logprob | Optional[Union[List[bool], bool]] = None | Whether to return log probabilities for tokens. |
| logprob_start_len | Optional[Union[List[int], int]] = None | If return_logprob, the start location in the prompt for returning logprobs. Default is -1, which returns logprobs for output tokens only. |
| top_logprobs_num | Optional[Union[List[int], int]] = None | If return_logprob, the number of top logprobs to return at each position. |
| token_ids_logprob | Optional[Union[List[List[int]], List[int]]] = None | If return_logprob, the token IDs to return logprob for. |
| return_text_in_logprobs | bool = False | Whether to detokenize tokens into text in the returned logprobs. |
| stream | bool = False | Whether to stream output. |
| lora_path | Optional[Union[List[Optional[str]], Optional[str]]] = None | The path to the LoRA. |
| custom_logit_processor | Optional[Union[List[Optional[str]], str]] = None | Custom logit processor for advanced sampling control. Must be a serialized instance of CustomLogitProcessor using its to_str() method. For usage see below. |
| return_hidden_states | Union[List[bool], bool] = False | Whether to return hidden states. |
| return_routed_experts | bool = False | Whether to return routed experts for MoE models. Requires --enable-return-routed-experts server flag. Returns base64-encoded int32 expert IDs as a flattened array with logical shape [num_tokens, num_layers, top_k]. |
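The parameters above map directly onto the request body. Below is a minimal sketch of calling /generate, assuming a server is already running at localhost:30000 (adjust the address for your deployment); the streaming helper assumes SSE-style `data: ...` lines with a `data: [DONE]` terminator.

```python
import json
import urllib.request

BASE_URL = "http://localhost:30000"  # assumed local server address

def build_payload(prompt: str, stream: bool = False) -> dict:
    """Assemble a /generate request body from the arguments in the table above."""
    return {
        "text": prompt,
        "sampling_params": {"max_new_tokens": 64, "temperature": 0.0},
        "stream": stream,
    }

def parse_sse_line(line: bytes):
    """Parse one streamed line of the form b'data: {...}' into a dict.

    Returns None for blank lines and for the b'data: [DONE]' terminator.
    """
    line = line.strip()
    if not line or not line.startswith(b"data:"):
        return None
    body = line[len(b"data:"):].strip()
    if body == b"[DONE]":
        return None
    return json.loads(body)

if __name__ == "__main__":
    req = urllib.request.Request(
        f"{BASE_URL}/generate",
        data=json.dumps(build_payload("The capital of France is")).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        print(json.loads(resp.read())["text"])
```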
Sampling parameters
The object is defined at sampling_params.py::SamplingParams. You can also read the source code to find more arguments and docs.
Note on defaults
By default, SGLang initializes several sampling parameters from the model's generation_config.json (when the server is launched with --sampling-defaults model, which is the default). To use SGLang/OpenAI constant defaults instead, start the server with --sampling-defaults openai. You can always override any parameter per request via sampling_params.
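For example, to opt out of model-derived defaults (a sketch; substitute your own model path and port):

```shell
# Launch with SGLang/OpenAI constant defaults instead of the model's
# generation_config.json values.
python -m sglang.launch_server \
  --model-path meta-llama/Llama-3.1-8B-Instruct \
  --port 30000 \
  --sampling-defaults openai
```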
Core parameters
| Argument | Type/Default | Description |
|---|---|---|
| max_new_tokens | int = 128 | The maximum output length measured in tokens. |
| stop | Optional[Union[str, List[str]]] = None | One or multiple stop words. Generation will stop if one of these words is sampled. |
| stop_token_ids | Optional[List[int]] = None | Provide stop words in the form of token IDs. Generation will stop if one of these token IDs is sampled. |
| stop_regex | Optional[Union[str, List[str]]] = None | Stop when hitting any of the regex patterns in this list. |
| temperature | float (model default; fallback 1.0) | Temperature when sampling the next token. temperature = 0 corresponds to greedy sampling, a higher temperature leads to more diversity. |
| top_p | float (model default; fallback 1.0) | Top-p selects tokens from the smallest sorted set whose cumulative probability exceeds top_p. When top_p = 1, this reduces to unrestricted sampling from all tokens. |
| top_k | int (model default; fallback -1) | Top-k randomly selects from the k highest-probability tokens. |
| min_p | float (model default; fallback 0.0) | Min-p samples from tokens with probability larger than min_p * highest_token_probability. |
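The core parameters can be bundled into a sampling_params dict per request; the sketch below uses illustrative values, not recommendations. Omitted keys fall back to the server's defaults (model-derived or OpenAI constants, per the note above).

```python
def make_sampling_params(temperature=0.7, top_p=0.9, top_k=50,
                         min_p=0.0, max_new_tokens=64, stop=None) -> dict:
    """Bundle the core sampling parameters into a sampling_params dict."""
    params = {
        "temperature": temperature,  # 0 = greedy decoding
        "top_p": top_p,              # nucleus sampling threshold
        "top_k": top_k,              # -1 disables top-k filtering
        "min_p": min_p,
        "max_new_tokens": max_new_tokens,
    }
    if stop is not None:
        params["stop"] = stop  # one or more stop words
    return params

# Greedy decoding: with temperature 0, top_p/top_k have no effect.
greedy = make_sampling_params(temperature=0.0)
```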
Penalizers
| Argument | Type/Default | Description |
|---|---|---|
| frequency_penalty | float = 0.0 | Penalizes tokens based on their frequency in the generation so far. Must be between -2 and 2, where negative values encourage repetition and positive values encourage sampling of new tokens. The penalty grows linearly with each appearance of a token. |
| presence_penalty | float = 0.0 | Penalizes tokens that have already appeared in the generation so far. Must be between -2 and 2, where negative values encourage repetition and positive values encourage sampling of new tokens. The penalty is constant once a token has occurred, regardless of how often. |
| repetition_penalty | float = 1.0 | Scales the logits of previously generated tokens to discourage (values > 1) or encourage (values < 1) repetition. Valid range is [0, 2]; 1.0 leaves probabilities unchanged. |
| min_new_tokens | int = 0 | Forces the model to generate at least min_new_tokens until a stop word or EOS token is sampled. Note that this might lead to unintended behavior, for example, if the distribution is highly skewed towards these tokens. |
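The frequency and presence penalties described above follow the usual additive scheme. The sketch below is an illustrative reimplementation of that scheme, not SGLang's internal code: each token's logit is reduced by frequency_penalty times its count so far, plus a one-time presence_penalty if it has appeared at all.

```python
from collections import Counter

def apply_penalties(logits: dict, generated: list,
                    frequency_penalty: float = 0.0,
                    presence_penalty: float = 0.0) -> dict:
    """Apply frequency/presence penalties to a {token: logit} map.

    frequency_penalty scales linearly with each appearance of a token;
    presence_penalty is a constant offset once a token has occurred.
    """
    counts = Counter(generated)
    out = dict(logits)
    for token, count in counts.items():
        if token in out:
            out[token] -= frequency_penalty * count
            out[token] -= presence_penalty  # constant, since count >= 1
    return out

# A token generated twice loses 2*frequency_penalty + presence_penalty.
```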
Constrained decoding
Please refer to our dedicated guide on constrained decoding for the following parameters.

| Argument | Type/Default | Description |
|---|---|---|
| json_schema | Optional[str] = None | JSON schema for structured outputs. |
| regex | Optional[str] = None | Regex for structured outputs. |
| ebnf | Optional[str] = None | EBNF for structured outputs. |
| structural_tag | Optional[str] = None | The structural tag for structured outputs. |
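A sketch of a constrained request using json_schema; the schema is passed as a JSON string inside sampling_params, and the example schema here is hypothetical. Remember that only one constraint parameter may be set per request.

```python
import json

# Hypothetical schema: force output of the form {"name": ..., "population": ...}.
schema = {
    "type": "object",
    "properties": {
        "name": {"type": "string"},
        "population": {"type": "integer"},
    },
    "required": ["name", "population"],
}

payload = {
    "text": "Give information about the capital of France in JSON.",
    "sampling_params": {
        "max_new_tokens": 128,
        "json_schema": json.dumps(schema),  # only one constraint per request
    },
}
```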
Other options
| Argument | Type/Default | Description |
|---|---|---|
| n | int = 1 | The number of output sequences to generate per request. Generating multiple outputs in one request (n > 1) is discouraged; repeating the same prompt several times offers better control and efficiency. |
| ignore_eos | bool = False | Don’t stop generation when EOS token is sampled. |
| skip_special_tokens | bool = True | Remove special tokens during decoding. |
| spaces_between_special_tokens | bool = True | Whether or not to add spaces between special tokens during detokenization. |
| no_stop_trim | bool = False | Don’t trim stop words or EOS token from the generated text. |
| custom_params | Optional[List[Optional[Dict[str, Any]]]] = None | Used when employing CustomLogitProcessor. For usage, see below. |
Examples
Normal
Launch a server, then send a request.

Streaming

Send a request and stream the output.

Multimodal

Launch a server. image_data can be a file name, a URL, or a base64-encoded string. See also python/sglang/srt/utils.py:load_image.
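A sketch of a multimodal request body, assuming a vision-language model is being served. image_data is a URL here, but a file path or a base64-encoded string (as produced below) also works; depending on the model, the prompt may need to include the model's image placeholder token per its chat template.

```python
import base64

def encode_image(path: str) -> str:
    """Base64-encode a local image file for the image_data field."""
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode()

payload = {
    "text": "Describe this image in one sentence.",
    "image_data": "https://example.com/cat.jpg",  # or a file path / base64 string
    "sampling_params": {"max_new_tokens": 64},
}
```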
Streaming is supported in a similar manner as above.
Detailed example in OpenAI API Vision.
Structured Outputs (JSON, Regex, EBNF)
You can specify a JSON schema, regular expression or EBNF to constrain the model output. The model output will be guaranteed to follow the given constraints. Only one constraint parameter (json_schema, regex, or ebnf) can be specified for a request.
SGLang supports two grammar backends:
- XGrammar (default): Supports JSON schema, regular expression, and EBNF constraints.
- XGrammar currently uses the GGML BNF format.
- Outlines: Supports JSON schema and regular expression constraints.
To use the Outlines backend instead, launch the server with the --grammar-backend outlines flag:
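For example (a sketch; substitute your own model path):

```shell
python -m sglang.launch_server \
  --model-path meta-llama/Llama-3.1-8B-Instruct \
  --grammar-backend outlines
```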
Custom logit processor
Launch a server with the --enable-custom-logit-processor flag.
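In a real deployment you would subclass SGLang's CustomLogitProcessor, serialize it with its to_str() method, and pass the resulting string as custom_logit_processor, with per-request arguments in sampling_params.custom_params. The masking logic is sketched standalone below (no SGLang import) so the __call__(logits, custom_param_list) interface is visible; in SGLang, logits would be a tensor with one row per request in the batch.

```python
NEG_INF = float("-inf")

class ForceTokenProcessor:
    """Sketch of a custom logit processor that forces one token id.

    In SGLang this class would subclass CustomLogitProcessor and be
    serialized with .to_str(); custom_param_list carries each request's
    custom_params dict.
    """

    def __call__(self, logits, custom_param_list):
        for i, params in enumerate(custom_param_list):
            forced = params["token_id"]
            # Mask everything except the forced token.
            logits[i] = [0.0 if j == forced else NEG_INF
                         for j in range(len(logits[i]))]
        return logits

# Request sketch: the serialized processor string goes in
# custom_logit_processor, its per-request arguments in custom_params.
payload_shape = {
    "text": "Say anything:",
    "custom_logit_processor": "<ForceTokenProcessor().to_str() output>",
    "sampling_params": {"custom_params": {"token_id": 5}},
}
```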
