MLX export of LFM2.5-350M for Apple Silicon inference.
LFM2.5-350M is a compact multilingual base model built on LiquidAI's hybrid architecture, combining convolutional and attention layers for efficient long-context processing.
| Property | Value |
|---|---|
| Parameters | 350M |
| Precision | 6-bit |
| Group Size | 64 |
| Size | 296 MB |
| Context Length | 128K |
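The quoted 296 MB is roughly consistent with 6-bit group quantization: each weight costs 6 bits plus a share of per-group metadata. A minimal back-of-the-envelope sketch, assuming an fp16 scale and bias stored per group of 64 weights (the helper name and the 32-bit overhead figure are illustrative assumptions, not details from this card):

```python
def quantized_size_mb(params: float, bits: int, group_size: int,
                      overhead_bits: int = 32) -> float:
    """Estimate quantized weight size in MB.

    overhead_bits: assumed fp16 scale + fp16 bias per group (2 * 16 bits).
    """
    bits_per_weight = bits + overhead_bits / group_size
    return params * bits_per_weight / 8 / 1e6

print(quantized_size_mb(350e6, 6, 64))  # 284.375
```

The remaining gap to 296 MB would come from tensors kept at higher precision and other stored state, which this rough estimate ignores.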
Install the `mlx-lm` package:

```shell
pip install mlx-lm
```
Then generate text:

```python
from mlx_lm import load, generate
from mlx_lm.sample_utils import make_sampler

# Download the quantized weights and tokenizer from the Hub.
model, tokenizer = load("LiquidAI/LFM2.5-350M-MLX-6bit")

response = generate(
    model,
    tokenizer,
    prompt="The capital of France is",
    max_tokens=100,
    sampler=make_sampler(temp=0.7),  # temperature sampling
    verbose=True,  # stream tokens and print generation stats
)
```
This model is released under the LFM 1.0 License.