Update README.md
README.md CHANGED

@@ -16,4 +16,10 @@ base_model:

Check the original model card for information about this model.
+# Running the model with vLLM in Docker
+```sh
+sudo docker run --runtime nvidia --gpus all --ipc=host -p 8000:8000 -e VLLM_USE_FLASHINFER_MOE_FP4=1 vllm/vllm-openai:nightly Firworks/Kimi-Linear-48B-A3B-Instruct-nvfp4 --served-model-name kimi-48b-nvfp4 --max-model-len 32768 --tensor-parallel-size 2 --trust-remote-code --gpu-memory-utilization 0.7
+```
+This was tested on a 2 x RTX Pro 6000 Blackwell cloud instance.
+
If there are other models you're interested in seeing quantized to NVFP4 for use on the DGX Spark, or other modern Blackwell (or newer) cards, let me know. I'm trying to make more NVFP4 models available to allow more people to try them out.
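
Once the container is up, vLLM serves an OpenAI-compatible API on the mapped port 8000. A minimal smoke test against that endpoint, assuming vLLM's default routes and the `kimi-48b-nvfp4` name set via `--served-model-name` in the command above:

```sh
# List the models the server is exposing (should include kimi-48b-nvfp4).
curl http://localhost:8000/v1/models

# Send a short chat completion to the served model.
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "kimi-48b-nvfp4",
    "messages": [{"role": "user", "content": "Say hello in one sentence."}],
    "max_tokens": 64
  }'
```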