Liquid AI
Try LFM • Docs • LEAP • Discord

LFM2.5-350M-MLX-6bit

MLX export of LFM2.5-350M for Apple Silicon inference.

LFM2.5-350M is a compact multilingual base model built on Liquid AI's hybrid architecture, which combines convolutional and attention layers for efficient long-context processing.

Model Details

| Property | Value |
| --- | --- |
| Parameters | 350M |
| Precision | 6-bit |
| Group Size | 64 |
| Size | 296 MB |
| Context Length | 128K |
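As a rough sanity check, the 296 MB download is consistent with 6-bit weights plus per-group quantization metadata. The sketch below assumes (this is not stated on the card) that each group of 64 weights stores a float16 scale and a float16 bias, as in MLX-style affine quantization; the remaining gap is plausibly metadata and tensors kept at higher precision.

```python
# Back-of-envelope estimate of the 6-bit quantized model size.
# Assumed (not from the card): one float16 scale + one float16 bias
# per group of 64 weights, MLX affine-quantization style.
params = 350e6      # nominal parameter count
bits = 6            # quantization bit width
group_size = 64     # weights per quantization group

weight_bytes = params * bits / 8                      # packed 6-bit weights
overhead_bytes = (params / group_size) * 2 * 2        # scale + bias, 2 bytes each
total_mb = (weight_bytes + overhead_bytes) / 1e6

print(f"~{total_mb:.0f} MB")  # close to the reported 296 MB
```

The estimate lands a little under the reported size, which is expected if embeddings or norms are stored at higher precision.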

Use with mlx

Install `mlx-lm`:

```bash
pip install mlx-lm
```

Then load the model and generate:

```python
from mlx_lm import load, generate
from mlx_lm.sample_utils import make_sampler

model, tokenizer = load("LiquidAI/LFM2.5-350M-MLX-6bit")

response = generate(
    model,
    tokenizer,
    prompt="The capital of France is",
    max_tokens=100,
    sampler=make_sampler(temp=0.7),
    verbose=True,
)
```

Other Precisions

License

This model is released under the LFM 1.0 License.
