# mlx-community/granite-4.0-1b-speech-6bit
This model was converted to MLX format from [ibm-granite/granite-4.0-1b-speech](https://huggingface.co/ibm-granite/granite-4.0-1b-speech) using mlx-audio version 0.4.0. Refer to the original model card for more details on the model.
## Use with mlx-audio

```bash
pip install -U mlx-audio
```

**CLI Example:**

```bash
python -m mlx_audio.stt.generate --model mlx-community/granite-4.0-1b-speech-6bit --audio "audio.wav"
```
**Python Example:**

```python
from mlx_audio.stt.utils import load_model
from mlx_audio.stt.generate import generate_transcription

model = load_model("mlx-community/granite-4.0-1b-speech-6bit")

transcription = generate_transcription(
    model=model,
    audio_path="path_to_audio.wav",
    output_path="path_to_output.txt",
    format="txt",
    verbose=True,
)

print(transcription.text)
```
**Model size:** 0.9B params

**Tensor types:** BF16, U32
## Model tree for mlx-community/granite-4.0-1b-speech-6bit

Base model: [ibm-granite/granite-4.0-1b-base](https://huggingface.co/ibm-granite/granite-4.0-1b-base)