Instructions to use Qwen/Qwen3.5-9B with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use Qwen/Qwen3.5-9B with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("image-text-to-text", model="Qwen/Qwen3.5-9B")
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/p-blog/candy.JPG"},
            {"type": "text", "text": "What animal is on the candy?"},
        ],
    },
]
pipe(text=messages)
```

```python
# Load model directly
from transformers import AutoProcessor, AutoModelForImageTextToText

processor = AutoProcessor.from_pretrained("Qwen/Qwen3.5-9B")
model = AutoModelForImageTextToText.from_pretrained("Qwen/Qwen3.5-9B")

messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/p-blog/candy.JPG"},
            {"type": "text", "text": "What animal is on the candy?"},
        ],
    },
]

inputs = processor.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
print(processor.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```

- Inference
- HuggingChat
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use Qwen/Qwen3.5-9B with vLLM:
Install from pip and serve the model
```bash
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "Qwen/Qwen3.5-9B"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Qwen/Qwen3.5-9B",
    "messages": [
      {
        "role": "user",
        "content": [
          {
            "type": "text",
            "text": "Describe this image in one sentence."
          },
          {
            "type": "image_url",
            "image_url": {
              "url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"
            }
          }
        ]
      }
    ]
  }'
```

Use Docker
```bash
# Serve the model with the official vLLM OpenAI-compatible Docker image:
docker run --runtime nvidia --gpus all \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  -p 8000:8000 \
  --ipc=host \
  vllm/vllm-openai:latest \
  --model "Qwen/Qwen3.5-9B"
```
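The OpenAI-compatible servers on this page can also be called from Python with the official `openai` client. A minimal sketch, assuming the vLLM server above is running on localhost:8000 (for the SGLang server below, point `base_url` at port 30000 instead):

```python
# Sketch: call a local OpenAI-compatible server (vLLM as started above).
# Assumes `pip install openai`; local servers usually ignore the API key.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="Qwen/Qwen3.5-9B",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this image in one sentence."},
                {
                    "type": "image_url",
                    "image_url": {"url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"},
                },
            ],
        }
    ],
)
print(response.choices[0].message.content)
```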
- SGLang
How to use Qwen/Qwen3.5-9B with SGLang:
Install from pip and serve the model
```bash
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "Qwen/Qwen3.5-9B" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Qwen/Qwen3.5-9B",
    "messages": [
      {
        "role": "user",
        "content": [
          {
            "type": "text",
            "text": "Describe this image in one sentence."
          },
          {
            "type": "image_url",
            "image_url": {
              "url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"
            }
          }
        ]
      }
    ]
  }'
```

Use Docker images
```bash
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "Qwen/Qwen3.5-9B" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Qwen/Qwen3.5-9B",
    "messages": [
      {
        "role": "user",
        "content": [
          {
            "type": "text",
            "text": "Describe this image in one sentence."
          },
          {
            "type": "image_url",
            "image_url": {
              "url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"
            }
          }
        ]
      }
    ]
  }'
```

- Docker Model Runner
How to use Qwen/Qwen3.5-9B with Docker Model Runner:
```bash
docker model run hf.co/Qwen/Qwen3.5-9B
```
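Docker Model Runner also exposes an OpenAI-compatible endpoint. A minimal sketch for calling it from the host, with the caveat that the base URL below (port 12434, `/engines/v1` path) is an assumption that depends on your Docker Desktop version and on host-side TCP access being enabled in the Model Runner settings:

```python
# Sketch: call Docker Model Runner's OpenAI-compatible API from the host.
# The base_url is an assumption; check your Docker Model Runner settings
# if the request fails.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:12434/engines/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="hf.co/Qwen/Qwen3.5-9B",
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(response.choices[0].message.content)
```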
How to disable or reduce thinking
Hi everyone,
I'm using AutoProcessor and AutoModelForImageTextToText, and the model often outputs reasoning/thinking text.
I'm trying to make the model respond with only the final answer, or at least reduce the amount of reasoning it outputs.
Current setup:
```python
from transformers import AutoProcessor, AutoModelForImageTextToText

processor = AutoProcessor.from_pretrained("Qwen/Qwen3.5-9B")
model = AutoModelForImageTextToText.from_pretrained(
    "Qwen/Qwen3.5-9B",
    torch_dtype="auto",
    device_map="auto",
    trust_remote_code=True,
)
```
Questions:
- Is there an official way to disable the thinking / reasoning output in Qwen3.5?
- Is `enable_thinking=False` the recommended approach for this model?
- Are there any other generation settings recommended to reduce reasoning and return concise answers?
Thanks in advance for the help!
You don't have to decide whether to disable thinking at deployment time; you can control it per request from the client. For example, when calling the server with curl, add this parameter to the request body:

`"chat_template_kwargs": {"enable_thinking": false}`
Full example:

```bash
curl http://localhost:8001/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: xxx" \
  -d '{
    "model": "Qwen3.5-9B",
    "stream": true,
    "chat_template_kwargs": {"enable_thinking": false},
    "messages": [
      {
        "role": "user",
        "content": "hello"
      }
    ]
  }'
```
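The same per-request switch works from the `openai` Python client via `extra_body`. A minimal sketch, assuming an OpenAI-compatible server (e.g. vLLM) that forwards `chat_template_kwargs` to the chat template:

```python
# Sketch: disable thinking per request through the OpenAI client.
# extra_body fields are passed through to the server as-is.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8001/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="Qwen3.5-9B",
    messages=[{"role": "user", "content": "hello"}],
    extra_body={"chat_template_kwargs": {"enable_thinking": False}},
)
print(response.choices[0].message.content)
```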
This is how I use Qwen3.5 without the thinking process: pass `enable_thinking=False` to `tokenizer.apply_chat_template`.
```python
import re

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen3.5-4B"

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
)

messages = [
    {"role": "user", "content": "Say five countries in Africa."}
]

# Render the chat template with thinking disabled.
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    enable_thinking=False,  # DISABLE THINKING PROCESS
    add_generation_prompt=True,
)

model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512,
    do_sample=False,  # greedy decoding for deterministic, concise answers
)

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
raw_answer = response.split("assistant\n")[-1]
# Strip any residual <think>...</think> block, just in case.
clean_answer = re.sub(r"<think>.*?</think>", "", raw_answer, flags=re.DOTALL).strip()
print(clean_answer)
```
The thinking for Qwen3.5 is egregious. It needlessly burns thousands of tokens reasoning itself in circles with no benefit over Qwen3 non-thinking. It's a waste of time and a waste of tokens. This is especially true of this 9B model, but the same holds for the larger, less quantized versions.