Qwen3-8B-FP8-block / recipe.yaml
Commit 61a63b1 (krishnateja95): Add FP8 block quantized model weights
default_stage:
  default_modifiers:
    QuantizationModifier:
      targets: [Linear]
      ignore: [lm_head]
      scheme: FP8_BLOCK
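The recipe quantizes every `Linear` layer's weights to FP8 with block-wise scales, leaving `lm_head` in higher precision. The core idea of a block scheme is that each block of a weight matrix gets its own scale factor, so a single outlier only affects its own block. A minimal pure-Python sketch of that idea follows; it is illustrative only, not the llm-compressor implementation (the real `FP8_BLOCK` scheme operates on 2-D weight blocks and casts to FP8 E4M3, whose largest finite value is 448; integer rounding stands in for the FP8 cast here):

```python
# Illustrative sketch of block-wise FP8-style quantization.
# Each block is scaled so its largest |value| maps onto the FP8 E4M3
# dynamic range, rounded (a stand-in for the FP8 cast), then dequantized.

FP8_E4M3_MAX = 448.0  # largest finite FP8 E4M3 value

def quantize_block(block):
    """Return (quantized values, scale) for one block of weights."""
    amax = max(abs(v) for v in block) or 1.0
    scale = amax / FP8_E4M3_MAX
    q = [round(v / scale) for v in block]  # stand-in for the FP8 cast
    return q, scale

def dequantize_block(q, scale):
    return [v * scale for v in q]

# Toy weight row: a small-magnitude block next to a large-magnitude one.
weights = [0.05, -0.12, 0.40, -0.33, 3.2, -2.9, 1.1, 0.7]
block_size = 4  # hypothetical 1-D block; real FP8_BLOCK blocks are 2-D

restored = []
for i in range(0, len(weights), block_size):
    q, s = quantize_block(weights[i:i + block_size])
    restored.extend(dequantize_block(q, s))

# Per-block scales keep the round-trip error small in both blocks.
max_err = max(abs(a - b) for a, b in zip(weights, restored))
print(round(max_err, 6))
```

With a single tensor-wide scale, the 0.05 entry would be quantized relative to the 3.2 outlier and lose most of its precision; per-block scales are what the `FP8_BLOCK` scheme buys over plain per-tensor FP8.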