Cohere Transcribe – MLX
This repository contains an MLX-native int8 conversion of Cohere Transcribe for local automatic speech recognition on Apple Silicon.
It is intended for local transcription with mlx-speech, without a PyTorch runtime or cloud API dependency at inference time.
Variants
| Path | Precision |
|---|---|
| mlx-int8/ | int8 quantized weights |
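To illustrate what int8 weight quantization means in practice, here is a minimal symmetric per-tensor scheme in NumPy. This is a generic sketch for intuition only; the actual conversion in this repo may use a different scheme (e.g. per-group scales):

```python
import numpy as np

def quantize_int8(w: np.ndarray) -> tuple[np.ndarray, float]:
    """Symmetric per-tensor int8 quantization: w is approximated by scale * q."""
    max_abs = np.abs(w).max()
    scale = max_abs / 127.0 if max_abs > 0 else 1.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize_int8(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover an approximate float32 tensor from int8 values and their scale."""
    return q.astype(np.float32) * scale

w = np.random.randn(4, 4).astype(np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize_int8(q, scale)
# Round-off error per element is bounded by roughly scale / 2
print("max abs error:", np.abs(w - w_hat).max())
```

Storing `q` plus one `scale` per tensor is what shrinks the weights to roughly a quarter of their float32 size.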
Model Details
- Developed by: AppAutomaton
- Shared by: AppAutomaton on Hugging Face
- Upstream model: cohere-transcribe-03-2026
- Task: automatic speech recognition
- Runtime: MLX on Apple Silicon
How to Get Started
Command-line transcription with mlx-speech:
```bash
python scripts/transcribe_cohere_asr.py \
  --audio input.wav \
  --output transcript.txt
```
Minimal Python usage:
```python
import numpy as np
import soundfile as sf
from mlx_speech.generation import CohereAsrModel

audio, sr = sf.read("input.wav", dtype="float32", always_2d=False)
if audio.ndim > 1:
    audio = audio.mean(axis=1)  # downmix multichannel audio to mono
if sr != 16000:
    # crude linear-interpolation resample to the 16 kHz runtime rate
    old_len = len(audio)
    new_len = int(round(old_len * 16000 / sr))
    audio = np.interp(np.linspace(0, old_len - 1, new_len),
                      np.arange(old_len), audio).astype(np.float32)

model = CohereAsrModel.from_path("mlx-int8")
result = model.transcribe(audio, sample_rate=16000, language="en")
print(result.text)
```
Notes
- This repo contains the quantized MLX runtime artifact only.
- The conversion keeps the original encoder-decoder ASR architecture and remaps weights explicitly for MLX inference.
- The example above resamples audio to 16 kHz before calling transcribe(), which matches the runtime requirement.
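The resampling step from the example can be factored into a small reusable helper. This is a sketch using only NumPy's `np.interp`; it is not part of the mlx-speech API:

```python
import numpy as np

def resample_linear(audio: np.ndarray, sr: int, target_sr: int = 16000) -> np.ndarray:
    """Resample 1-D float32 audio to target_sr via linear interpolation.

    Adequate for speech transcription; for higher fidelity, a polyphase
    resampler (e.g. scipy.signal.resample_poly) is the usual choice.
    """
    if sr == target_sr:
        return audio.astype(np.float32)
    old_len = len(audio)
    new_len = int(round(old_len * target_sr / sr))
    new_idx = np.linspace(0, old_len - 1, new_len)
    return np.interp(new_idx, np.arange(old_len), audio).astype(np.float32)
```

Calling `resample_linear(audio, sr)` before `model.transcribe(...)` replaces the inline resampling block in the example above.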
Links
- Source code: mlx-speech
- More examples: AppAutomaton
License
Apache 2.0, following the upstream Cohere Transcribe model license. Check the original Cohere release for current terms.
Base model
- CohereLabs/cohere-transcribe-03-2026