# Nemotron-Cascade-2-30B-A3B-NVFP4-GGUF
This is an NVFP4-quantized GGUF export of NVIDIA Nemotron-Cascade-2-30B-A3B for llama.cpp, produced with my own experimental NVFP4 quantizer.

Please let me know about any issues so I can fix them!
For best results, use a recent llama.cpp build: NVFP4 support was merged as of 1 April 2026, currently with generic GPU kernels only. On an RTX 5090 you can expect roughly the following performance:

```
Device 0: NVIDIA GeForce RTX 5090, compute capability 12.0, VMM: yes, VRAM: 32606 MiB
```
| model | size | params | backend | ngl | test | t/s |
| ------------------------------ | ---------: | ---------: | ---------- | --: | --------------: | -------------------: |
| nemotron_h_moe 31B.A3.5B NVFP4 | 19.28 GiB | 31.58 B | CUDA | 99 | pp512 | 7662.02 ± 29.70 |
| nemotron_h_moe 31B.A3.5B NVFP4 | 19.28 GiB | 31.58 B | CUDA | 99 | tg128 | 221.35 ± 1.31 |
I am still working on a faster Blackwell-specific kernel; hopefully coming soon!
This release is designed to stay close to the base model's behavior while providing a strong quality-per-size trade-off in a compact inference format. The file mixes F32, BF16, Q8, and NVFP4 tensors.
## Model

- Base model: nvidia/Nemotron-Cascade-2-30B-A3B
- Format: GGUF
- Runtime target: llama.cpp
- Primary quantization: NVFP4
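For readers unfamiliar with the format, here is a deliberately simplified sketch of NVFP4-style block quantization. This reflects my understanding of the format (4-bit E2M1 values in small blocks sharing a scale), not the exact logic of the experimental quantizer used for this release, and it uses a single float scale per block where the real format stores narrower scale types:

```python
# Positive values representable by a 4-bit E2M1 float (sign bit handled separately).
E2M1_VALUES = [0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0]

def quantize_block(block):
    """Quantize one block of floats to the E2M1 grid with a shared scale.

    Simplified illustration: real NVFP4 stores the per-block scale in a
    narrow float format and adds a per-tensor scale on top.
    """
    amax = max(abs(x) for x in block)
    scale = amax / 6.0 if amax > 0 else 1.0  # 6.0 is the largest E2M1 magnitude
    out = []
    for x in block:
        mag = min(abs(x) / scale, 6.0)
        q = min(E2M1_VALUES, key=lambda v: abs(v - mag))  # round to nearest grid point
        out.append(q * scale if x >= 0 else -q * scale)
    return out, scale

# Values near the block maximum survive almost exactly; small values get coarse steps.
deq, scale = quantize_block([0.1, -0.9, 3.2, 0.0])
print(deq)
```

The shared per-block scale is why outlier-heavy tensors quantize poorly at 4 bits, and why this export keeps some tensors in F32/BF16/Q8 instead.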
## Perplexity statistics

- Mean PPL(Q): 9.810759 ± 0.073150
- Mean PPL(base): 9.733525 ± 0.072496
- Cor(ln(PPL(Q)), ln(PPL(base))): 99.78%
- Mean ln(PPL(Q)/PPL(base)): 0.007904 ± 0.000489
- Mean PPL(Q)/PPL(base): 1.007935 ± 0.000493
- Mean PPL(Q)-PPL(base): 0.077234 ± 0.004821
## KL divergence statistics

- Mean KLD: 0.011451 ± 0.000066
- Maximum KLD: 1.617739
- 99.9% KLD: 0.305346
- 99.0% KLD: 0.103110
- 95.0% KLD: 0.040079
- 90.0% KLD: 0.025450
- Median KLD: 0.005200
- 10.0% KLD: 0.000183
- 5.0% KLD: 0.000044
- 1.0% KLD: 0.000003
- 0.1% KLD: -0.000004
- Minimum KLD: -0.000169
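The slightly negative values at the bottom percentiles may look odd, since KL divergence is mathematically non-negative; they are an artifact of floating-point round-off when the two distributions are nearly identical. As a hedged illustration (not the exact llama.cpp implementation), the per-token KLD is computed roughly like this:

```python
import math

def softmax(logits):
    """Numerically stable softmax over one token position's logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def token_kld(base_logits, quant_logits):
    """KL(base || quant) over the vocabulary for one token position."""
    p = softmax(base_logits)
    q = softmax(quant_logits)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Nearly identical logits give a KLD very close to zero; in float arithmetic
# the result can land a hair below zero, explaining the negative percentiles.
print(token_kld([2.0, 1.0, 0.1], [2.0, 1.1, 0.1]))
```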
## Token probability statistics

- Mean Δp: -0.204 ± 0.007 %
- Maximum Δp: 71.340%
- 99.9% Δp: 18.886%
- 99.0% Δp: 7.960%
- 95.0% Δp: 3.486%
- 90.0% Δp: 1.939%
- 75.0% Δp: 0.325%
- Median Δp: -0.003%
- 25.0% Δp: -0.617%
- 10.0% Δp: -2.587%
- 5.0% Δp: -4.324%
- 1.0% Δp: -9.661%
- 0.1% Δp: -23.429%
- Minimum Δp: -63.499%
- RMS Δp: 2.926 ± 0.023 %
- Same top p: 95.064 ± 0.056 %
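To make these rows concrete: Δp compares, per position, the probability each model assigns to the reference token, and "same top" checks whether both models pick the same argmax token. A hedged sketch of how such statistics are typically aggregated (the data here is made up for illustration, not taken from this model):

```python
def delta_p_stats(base_probs, quant_probs, base_top, quant_top):
    """Aggregate per-position probability differences and top-token agreement."""
    deltas = [q - b for b, q in zip(base_probs, quant_probs)]
    mean_dp = sum(deltas) / len(deltas)                         # Mean Δp
    rms_dp = (sum(d * d for d in deltas) / len(deltas)) ** 0.5  # RMS Δp
    same_top = sum(bt == qt for bt, qt in zip(base_top, quant_top)) / len(base_top)
    return mean_dp, rms_dp, same_top

mean_dp, rms_dp, same_top = delta_p_stats(
    base_probs=[0.90, 0.40, 0.75],   # P(reference token) under the base model
    quant_probs=[0.88, 0.42, 0.74],  # same, under the quantized model
    base_top=[17, 5, 99],            # hypothetical argmax token ids
    quant_top=[17, 8, 99],
)
print(mean_dp, rms_dp, same_top)
```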
## Interpretation

This quantized model remains very close to the base model:

- Mean PPL increase: +0.077234
- Mean KLD: 0.011451
- Same-top agreement: 95.064%
## Intended use

- Local inference with llama.cpp
- Compact deployment of Nemotron-Cascade-2-30B-A3B in NVFP4, using the newly released CUDA NVFP4 kernel
- Evaluation and experimentation with the pre-PR llama-quantizer
## Example llama.cpp usage

```shell
./llama-cli -m /path/to/Nemotron-Cascade-2-30B-A3B-NVFP4.gguf -ngl 99 -p "Hello"
```
## Support

If you'd like to support my costs doing this: Buy me a coffee - thank you!