| repo | number | title | body | url | state | labels | created_at | updated_at | comments | user |
|---|---|---|---|---|---|---|---|---|---|---|
vllm-project/vllm | 31,787 | [Usage]: How to set different attention backends for the prefill and decode phases? | ### Your current environment
```text
Collecting environment information...
==============================
System Info
==============================
OS : Alibaba Cloud Linux 3 (Soaring Falcon) (x86_64)
GCC version : (GCC) 10.2.1 20200825 (Alibaba 10.2.1-3.8 2.32)
Clan... | https://github.com/vllm-project/vllm/issues/31787 | open | [
"usage"
] | 2026-01-06T07:33:18Z | 2026-01-06T07:33:18Z | 0 | stormchasingg |
pytorch/audio | 4,165 | Does TorchAudio include any RISC-V / RVV specific optimizations? | ### 🚀 The feature
Hi TorchAudio maintainers,
I would like to ask whether TorchAudio currently contains any architecture-specific optimizations for RISC-V, especially for the RISC-V Vector Extension (RVV).
So far, I have checked the TorchAudio (audio-2.8.0) repository and observed that:
- There are no RISC-V or RVV ... | https://github.com/pytorch/audio/issues/4165 | open | [] | 2026-01-06T07:24:55Z | 2026-01-06T07:24:55Z | 0 | zhouying12 |
sgl-project/sglang | 16,546 | [RFC] SGLang-Omni Design | API Design: @shuaills
Proposal Draft: @FrankLeeeee @sleepcoo
## Motivation
Recent models, whether open-source or proprietary, are becoming more multi-modal than ever before. That is, models can process data in more than two modalities. For example, Gemini can have inputs of text, i... | https://github.com/sgl-project/sglang/issues/16546 | open | [] | 2026-01-06T06:23:37Z | 2026-01-06T07:14:36Z | 0 | FrankLeeeee |
vllm-project/vllm | 31,766 | [Docs] Feedback for `/en/latest/contributing/profiling/` | ### 📚 The doc issue
When I follow this doc and run [OpenAI Server](https://docs.vllm.ai/en/latest/contributing/profiling/#openai-server), I found
> usage: vllm [-h] [-v] {chat,complete,serve,bench,collect-env,run-batch} ...
> vllm: error: unrecognized arguments: --profiler-config {"profiler": "torch", "torch_profil... | https://github.com/vllm-project/vllm/issues/31766 | open | [
"documentation"
] | 2026-01-06T03:15:37Z | 2026-01-06T03:15:37Z | 0 | cyk2018 |
huggingface/tokenizers | 1,926 | [bug] Why is development for Apple computers with Intel chips not supported in versions above 0.30.0 | Why is development for Apple computers with Intel chips not supported in versions above 0.30.0? | https://github.com/huggingface/tokenizers/issues/1926 | open | [] | 2026-01-06T03:11:35Z | 2026-01-06T03:18:03Z | 1 | sustly |
sgl-project/sglang | 16,530 | [Bug] DecodingStage VRAM usage surges dramatically | ### Checklist
- [ ] I searched related issues but found no solution.
- [ ] The bug persists in the latest version.
- [ ] Issues without environment info and a minimal reproducible demo are hard to resolve and may receive no feedback.
- [ ] If this is not a bug report but a general question, please start a discussion a... | https://github.com/sgl-project/sglang/issues/16530 | open | [] | 2026-01-06T02:15:16Z | 2026-01-06T02:15:16Z | 0 | carloszhang999 |
huggingface/lerobot | 2,753 | Debugging poor eval with SmolVLA and two cameras. | ### Ticket Type
❓ Technical Question
### Environment & System Info
```Shell
- Lerobot running on a Jetson Orin nano Super
- Model trained on a 4090
- SO-ARM-101 model.
- two cameras setup (wrist and top view)
```
### Description
I just trained a 30K-step SmolVLA model from a 73-episode dataset (which are a 2 merg... | https://github.com/huggingface/lerobot/issues/2753 | open | [
"question",
"policies",
"dataset",
"sensors",
"training",
"evaluation"
] | 2026-01-05T18:25:13Z | 2026-01-05T18:25:27Z | null | vettorazi |
vllm-project/vllm | 31,726 | [Usage]: Why does `vllm serve` keep filling up my system disk when loading a model from a network mount? |
### Your current environment
```
Collecting environment information...
==============================
System Info
==============================
OS : Ubuntu 22.04.5 LTS (x86_64)
GCC version : (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version : Could n... | https://github.com/vllm-project/vllm/issues/31726 | open | [
"usage"
] | 2026-01-05T14:50:19Z | 2026-01-05T15:30:39Z | 5 | tingjun-cs |
huggingface/diffusers | 12,913 | Is Lumina2Pipeline's mu calculation correct? | ### Describe the bug
Description
While reviewing the current main-branch implementation of pipeline_lumina2, I noticed a potential bug in the calculation of mu within the pipeline's __call__.
In the following section of the code:
https://github.com/huggingface/diffusers/blob/5ffb65803d0ddc5e3298c35df638ceed5e580922... | https://github.com/huggingface/diffusers/issues/12913 | open | [
"bug"
] | 2026-01-05T14:30:01Z | 2026-01-05T18:07:36Z | 1 | hwangdonghyun |
pytorch/pytorch | 171,687 | gfx1151 (Strix Halo) — LLM decode is ~90% hipMemcpyWithStream in FP16 & 4-bit; kernels not compute-bound | [benchmark-results_preauth.log](https://github.com/user-attachments/files/24424966/benchmark-results_preauth.log)
### 🐛 Describe the bug
Summary
On gfx1151 (Strix Halo / Ryzen AI MAX 395), autoregressive LLM inference is consistently dominated by hipMemcpyWithStream during decode in both:
FP16 / BF16 (no quantizati... | https://github.com/pytorch/pytorch/issues/171687 | open | [
"module: rocm",
"triaged"
] | 2026-01-04T23:53:11Z | 2026-01-05T12:45:47Z | 0 | BellaDoggie |
vllm-project/vllm | 31,689 | [Feature][Quantization][Help Wanted]: Clean up GPTQ + AWQ Quantization | ### 🚀 The feature, motivation and pitch
We are in the process of cleaning up the quantization integrations in vLLM (see the FusedMoE refactor PRs I am working on).
In general, this means we are trying to separate concerns of the quantization INTEGRATION (on disk format --- responsible for weight loading) from the quantiz... | https://github.com/vllm-project/vllm/issues/31689 | open | [
"help wanted",
"feature request"
] | 2026-01-04T20:56:04Z | 2026-01-06T04:42:19Z | 7 | robertgshaw2-redhat |
vllm-project/vllm | 31,683 | [Feature]: Error Logging Redesign | ### 🚀 The feature, motivation and pitch
vLLM has a multiprocess architecture with:
- API Server --> EngineCore --> [N] Workers
As a result, clean error-message logging is challenging, since the error that occurs in the API server will often not be the root-cause error. An example of this is at startup time:
```
(vl... | https://github.com/vllm-project/vllm/issues/31683 | open | [
"help wanted",
"feature request"
] | 2026-01-04T14:53:38Z | 2026-01-04T14:53:43Z | 0 | robertgshaw2-redhat |
sgl-project/sglang | 16,362 | [Bug] DeepSeek-V3.2 detects eos when reasoning | ### Checklist
- [x] I searched related issues but found no solution.
- [x] The bug persists in the latest version.
- [x] Issues without environment info and a minimal reproducible demo are hard to resolve and may receive no feedback.
- [x] If this is not a bug report but a general question, please start a discussion a... | https://github.com/sgl-project/sglang/issues/16362 | open | [] | 2026-01-04T02:43:14Z | 2026-01-04T02:43:14Z | 0 | duzeyan |
pytorch/pytorch | 171,656 | torch.distributed.pipelining fails on models having DynamicCache (esp. Llama) | ### 🐛 Describe the bug
torch.distributed.pipelining fails on models having DynamicCache.
Should this work? It's pared down from the PiPPy Llama2 example from the documentation (https://docs.pytorch.org/docs/stable/distributed.pipelining.html#hugging-face-examples)
Originally I was trying to use Llama 3.1 but was hav... | https://github.com/pytorch/pytorch/issues/171656 | open | [
"oncall: distributed"
] | 2026-01-03T21:32:58Z | 2026-01-05T12:48:54Z | 2 | hpcpony |
vllm-project/vllm | 31,646 | [Usage]: How can I use GPU12 as standalone KV LMCache? | ### Your current environment
```text
Collecting environment information...
uv is set
==============================
System Info
==============================
OS : Ubuntu 24.04.3 LTS (x86_64)
GCC version : (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0
Clang version ... | https://github.com/vllm-project/vllm/issues/31646 | open | [
"usage"
] | 2026-01-03T13:25:41Z | 2026-01-03T13:25:41Z | 0 | joshuakoh1 |
vllm-project/vllm | 31,624 | [Bug]: ModelOpt Llama-4 Checkpoints Take 5+ minutes to load | ### 🚀 The feature, motivation and pitch
While working on some MoE refactors, I discovered that Llama-4 for ModelOpt takes 5+ minutes to load weights even from the CPU page cache.
- https://huggingface.co/nvidia/Llama-4-Scout-17B-16E-Instruct-FP8
The root cause is basically this hack logic to load the state dict that ModelOpt us... | https://github.com/vllm-project/vllm/issues/31624 | open | [
"bug",
"help wanted",
"good first issue",
"feature request"
] | 2026-01-02T15:18:14Z | 2026-01-06T02:42:32Z | 6 | robertgshaw2-redhat |
huggingface/lerobot | 2,741 | XVLA: Clarification on provided lerobot/xvla-base model checkpoint and documentation | ### Ticket Type
❓ Technical Question
### Environment & System Info
```Shell
```
### Description
Dear lerobot-Team,
I hope you had a good start to 2026, and thanks for the great work on making X-VLA natively available via lerobot.
I have a few questions regarding the _lerobot/xvla-base_ checkpoint and the inform... | https://github.com/huggingface/lerobot/issues/2741 | open | [
"documentation",
"question",
"policies",
"dataset",
"training"
] | 2026-01-02T08:38:03Z | 2026-01-04T15:54:55Z | null | gianlucageraci |
huggingface/datasets | 7,927 | Using Stateful Dataloader with Split Dataset By Node and DCP for DDP | ### Describe the bug
I am trying to determine how to save and load the Stateful Dataloader State with DCP and Split Dataset by Node for DDP.
Currently, I am running into an issue where resume is slow.
```
Neither dataset nor iter(dataset) defines state_dict/load_state_dict so we are naively fast-forwar... | https://github.com/huggingface/datasets/issues/7927 | open | [] | 2026-01-01T22:27:07Z | 2026-01-02T02:48:21Z | 2 | conceptofmind |
vllm-project/vllm | 31,609 | [Bug][ModelOpt]: FlashInfer CUTLASS MoE Accuracy Degraded (Llama4) | ### Your current environment
H100, B200 ---> vllm 0.13.0
### 🐛 Describe the bug
- running the following:
```bash
# modelopt
MODEL_TENSOR := "nvidia/Llama-4-Scout-17B-16E-Instruct-FP8"
GPUS := "2"
PORT := "8001"
# sm90 / sm100
launch_cutlass_tensor:
    VLLM_USE_DEEP_GEMM=0 VLLM_USE_FLASHINFER_MOE_FP8=1 VLLM_FLASH... | https://github.com/vllm-project/vllm/issues/31609 | closed | [
"bug",
"help wanted"
] | 2026-01-01T21:45:48Z | 2026-01-03T20:26:38Z | 2 | robertgshaw2-redhat |
huggingface/trl | 4,766 | Asynchronous generation and training for GRPO? | ### Feature request
GRPOTrainer sends requests for the next batch to the vLLM server while it is computing backpropagation, in order to reduce idle runtime for both the server's GPUs and the trainer's GPUs.
### Motivation
Under the current GRPO trainer, generation and backpropagation are sequential, meaning that lots of runtime a... | https://github.com/huggingface/trl/issues/4766 | open | [] | 2026-01-01T08:42:12Z | 2026-01-01T08:42:12Z | 0 | sxndqc |
pytorch/pytorch | 171,594 | Can you tell me which kernel function is used? | I'm new to the PyTorch source code, but I want to copy some PyTorch CUDA kernels into my project.
For example, for image data in NCHW format I use torch.nn.functional.interpolate(..., antialias=False),
and I found the function torch._C._nn.upsample_bilinear2d(...) in functional.py to use.
I found some kernels in https://github.co... | https://github.com/pytorch/pytorch/issues/171594 | closed | [] | 2026-01-01T07:37:53Z | 2026-01-03T06:58:52Z | 2 | lzcchl |
pytorch/pytorch | 171,592 | When does it make sense to compile DDP vs not? | Hello,
I have been looking online, but have seen conflicting information.
Say I can `fullgraph` compile a model with `max-autotune`:
```python
compiled_model = torch.compile(raw_model, fullgraph=True, mode="max-autotune")
ddp_model = DDP(
    compiled_model,
    device_ids=[local_rank],
    output... | https://github.com/pytorch/pytorch/issues/171592 | closed | [] | 2026-01-01T02:12:06Z | 2026-01-05T14:54:02Z | 1 | conceptofmind |
vllm-project/vllm | 31,574 | [Usage]: Does vLLM support loading a LoRA adapter and DeepSeek-V3.1-Terminus at the same time? | ### Your current environment
==============================
System Info
==============================
OS : Ubuntu 22.04.5 LTS (x86_64)
GCC version : (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version : Could not collect
CMake version : ... | https://github.com/vllm-project/vllm/issues/31574 | open | [
"usage"
] | 2025-12-31T10:33:52Z | 2026-01-01T07:09:51Z | 1 | AIR-hl |
sgl-project/sglang | 16,220 | GLM PD disaggregation with MTP | Does GLM support PD disaggregation and MTP? I tried to test it, but the accept length in the log is always 1 (it fails to predict every time) and performance is bad. I use the start command below; is there something wrong?
args for prefill node:
SGLANG_ENABLE_SPEC_V2=1 SGLANG_DISAGGREGATION_QUEUE_SIZE=1 SGLANG_DISAGGREGATION_THREAD_POO... | https://github.com/sgl-project/sglang/issues/16220 | open | [] | 2025-12-31T10:19:04Z | 2026-01-04T01:52:56Z | 1 | dongliangwu |
pytorch/executorch | 16,422 | Java on Linux cannot work; we need an ExecuTorch Java JAR package, please support it | ### 🐛 Describe the bug
Java on Linux cannot work.
I just can't figure it out. I've been communicating with you for a month now, so why can you still not compile a pure Java JAR that allows Java to use ExecuTorch on Linux, macOS, and Windows? You insist on using JNI to bundle androidx.core in an AAR format, which is com... | https://github.com/pytorch/executorch/issues/16422 | open | [
"module: android"
] | 2025-12-31T10:09:02Z | 2026-01-06T07:52:28Z | 2 | mullerhai |
vllm-project/vllm | 31,567 | [RFC]: Why is custom_mask not exposed on FlashInfer to enable more flexible use cases? | ### Motivation.
Like what tensorrt-llm does https://github.com/NVIDIA/TensorRT-LLM/blob/6c1abf2d45c77d04121ebe10f6b29abf89373c60/tensorrt_llm/_torch/attention_backend/flashinfer.py#L411C17-L411C28
### Proposed Change.
Expose the custom_weight to support use cases like relative attention bias.
### Feedback Period.
_N... | https://github.com/vllm-project/vllm/issues/31567 | open | [
"RFC"
] | 2025-12-31T06:00:07Z | 2025-12-31T06:00:07Z | 0 | npuichigo |
vllm-project/vllm | 31,564 | [Bug]: Qwen3-VL-8B-Instruct has an accuracy issue - multi-modal accuracy issue | ### Your current environment
**Current input format:**
messages = [
    {"role": "system", "content": system_prompt},
    {
        "role": "user",
        "content": [
            {"type": "text", "text": user_prompt},
            {
                "type": "ima... | https://github.com/vllm-project/vllm/issues/31564 | open | [
"bug"
] | 2025-12-31T05:13:32Z | 2026-01-02T04:29:14Z | 3 | Dineshkumar-Anandan-ZS0367 |
huggingface/lerobot | 2,737 | SARM with pi05: Why is the training loss getting noisier? | ### Ticket Type
❓ Technical Question
### Environment & System Info
```Shell
```
### Description
[SARM with pi05 training for folding towel task _ fold_towel_v3_0 – Weights & Biases.pdf](https://github.com/user-attachments/files/24389716/SARM.with.pi05.training.for.folding.towel.task._.fold_towel_v3_0.Weights.Bias... | https://github.com/huggingface/lerobot/issues/2737 | closed | [
"question",
"training"
] | 2025-12-31T03:20:16Z | 2026-01-02T08:01:25Z | null | xianglunkai |
huggingface/lerobot | 2,736 | Questions about VLA multi-task training. | ### Ticket Type
💡 Feature Request / Improvement
### Environment & System Info
```Shell
- LeRobot version: 0.4.2
- Platform: Linux-5.15.0-52-generic-x86_64-with-glibc2.31
- Python version: 3.10.18
- Huggingface Hub version: 0.35.3
- Datasets version: 4.1.1
- Numpy version: 2.2.6
- FFmpeg version: 6.1.1
- PyTorch ver... | https://github.com/huggingface/lerobot/issues/2736 | open | [
"enhancement",
"question",
"examples",
"training"
] | 2025-12-31T03:12:02Z | 2026-01-04T20:02:02Z | null | yquanli |
vllm-project/vllm | 31,555 | [Docs] Feedback for `/en/stable/`MONSTERDOG | ### 📚 The doc issue
[Projets (1).csv](https://github.com/user-attachments/files/24389184/Projets.1.csv)
[Projets.csv](https://github.com/user-attachments/files/24389185/Projets.csv)
[MonsterDog_Pilot_ROI_ISO42001_Report.pdf](https://github.com/user-attachments/files/24389187/MonsterDog_Pilot_ROI_ISO42001_Report.pdf)
... | https://github.com/vllm-project/vllm/issues/31555 | closed | [
"documentation"
] | 2025-12-31T01:20:55Z | 2025-12-31T05:18:48Z | 0 | s33765387-cpu |
huggingface/lerobot | 2,735 | Buy the camera? | Hi! Where do I buy the camera and the whole SO-ARM101 kit?
I found the kit at a Chinese website, WoWRobo Robotics, which only accepts PayPal payment. But is that it? How do I buy the camera otherwise? | https://github.com/huggingface/lerobot/issues/2735 | open | [
"question",
"sensors"
] | 2025-12-30T22:32:42Z | 2025-12-30T22:51:39Z | null | JFI12 |
pytorch/pytorch | 171,537 | `torch.compile(dynamic=True)` + `torch.func` triggers internal assertion error. | ### 🐛 Describe the bug
This is a bug in pytorch 2.8, with `nvcc` version `release 12.9, V12.9.86` on Ubuntu linux. It repros on BOTH my `RTX 5060 TI 16GB` AND on CPU.
The specific error message is `RuntimeError('isIntList() INTERNAL ASSERT FAILED at "/pytorch/aten/src/ATen/core/ivalue_inl.h":1979, please report a bu... | https://github.com/pytorch/pytorch/issues/171537 | open | [
"oncall: pt2"
] | 2025-12-30T20:35:47Z | 2026-01-02T10:19:24Z | 0 | rwkeane |
pytorch/pytorch | 171,516 | How to verify that default_decompositions successfully reduce operators to the Core ATen IR set? | Hi~
Is there a way to test if all ops in `default_decompositions` can be fully decomposed into the Core ATen IR (~180 ops) using `ep.run_decompositions`, as specified in the Export IR documentation (https://docs.pytorch.org/docs/stable/export.html#export-ir-decompositions)?
cc @chauhang @penguinwu @avikchaudhuri @gm... | https://github.com/pytorch/pytorch/issues/171516 | open | [
"oncall: pt2",
"oncall: export"
] | 2025-12-30T09:22:16Z | 2026-01-05T16:23:29Z | null | Tongkaio |
pytorch/pytorch | 171,501 | Several Windows-related GitHub Actions not running — are they intentionally disabled? | Hi PyTorch team,
I noticed that several Windows-related GitHub Actions workflows have not run for quite some time. Could you please help confirm whether each of these workflows is intentionally not running, and if not, whether there are plans or timelines for re‑enabling them?
The workflows in question are:
- https://... | https://github.com/pytorch/pytorch/issues/171501 | open | [
"module: windows",
"module: ci",
"triaged",
"module: arm"
] | 2025-12-30T05:29:20Z | 2026-01-05T14:46:01Z | 2 | vortex-captain |
huggingface/candle | 3,272 | Added support for Vulkan, any interest? | I have an Intel Arc A770 16GB GPU and wanted to use it with candle.
I took niklasha's work on the niklas-vulkan-2 branch and cherry-picked it into the current main branch.
I (when I say I, I mean I was the navigator, Codex 5.2 max did the work) added the following:
Added Vulkan queue-family selection and synchronize() so VulkanD... | https://github.com/huggingface/candle/issues/3272 | open | [] | 2025-12-30T02:58:27Z | 2025-12-30T03:00:12Z | 0 | davidwynter |
pytorch/executorch | 16,413 | Batch Inference On 8255 device | Hi, I want to perform batch inference on the 8255 device now.
I noticed there is a --num_iters parameter in qnn_llama_runner. Is this parameter for batch inference? Additionally, how can I use the KV cache, that is, load the model and system_prompt once and then perform multiple inferences.
Looking forward to your re... | https://github.com/pytorch/executorch/issues/16413 | open | [
"partner: qualcomm",
"module: qnn"
] | 2025-12-30T02:55:46Z | 2026-01-06T07:15:45Z | 6 | imjking |
vllm-project/vllm | 31,515 | [Feature]: need scheduler solution with high priority to process prefill | ### 🚀 The feature, motivation and pitch
I have a model scenario where I only care about throughput, not latency, so I need a scheduling solution that gives prefill high priority: only after all prefills in the batch are finished does it process the decodes. This sol... | https://github.com/vllm-project/vllm/issues/31515 | open | [
"feature request"
] | 2025-12-30T02:09:35Z | 2025-12-30T02:09:35Z | 0 | 184603418 |
pytorch/tutorials | 3,710 | [DCP] Add DefaultStager example to distributed async checkpoint recipe | ### 🚀 Feature Request
**Description**
The current `distributed_async_checkpoint_recipe` covers basic usage of `dcp.async_save` and Pinned Memory optimization. However, it does not cover the **fully asynchronous staging** capabilities introduced in PyTorch 2.9 via `DefaultStager`.
Even with `async_save`, the Device-t... | https://github.com/pytorch/tutorials/issues/3710 | open | [] | 2025-12-29T13:28:55Z | 2025-12-29T13:28:55Z | 0 | niyunsheng |
vllm-project/vllm | 31,486 | [Feature]: GLM 4.7 vocab padding feature | ### 🚀 The feature, motivation and pitch
The number of attention heads in GLM-4.7 is 96, so I’m trying to run the FP8 version with 6× H20 GPUs using tensor parallelism (tp=6).
However, vllm serve fails due to `151552 cannot be divided by 6`.
This seems to be caused by the vocab size 151552 not being divisible by... | https://github.com/vllm-project/vllm/issues/31486 | open | [
"feature request"
] | 2025-12-29T09:30:35Z | 2026-01-06T02:45:22Z | 3 | H100-H200-B200 |
vllm-project/vllm | 31,484 | [Usage]: RuntimeError when running Qwen2.5-VL-7B-Instruct with vllm: Potential version incompatibility | ### Your current environment
```text
Collecting environment information...
==============================
System Info
==============================
OS : Ubuntu 24.04.2 LTS (x86_64)
GCC version : (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0
Clang version : Cou... | https://github.com/vllm-project/vllm/issues/31484 | open | [
"usage"
] | 2025-12-29T08:36:11Z | 2025-12-30T02:40:38Z | 1 | puyuan1996 |
huggingface/diffusers | 12,899 | Training script for z-image ControlNet? | Can diffusers provide a training script for z-image ControlNet? | https://github.com/huggingface/diffusers/issues/12899 | open | [] | 2025-12-29T08:30:09Z | 2025-12-29T08:30:09Z | 0 | universewill |
vllm-project/vllm | 31,480 | [Usage]: run deepseek v3.2 failed | ### Your current environment
Collecting environment information...
==============================
System Info
==============================
OS : Ubuntu 22.04.5 LTS (x86_64)
GCC version : (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version : Could not c... | https://github.com/vllm-project/vllm/issues/31480 | open | [
"usage"
] | 2025-12-29T07:33:04Z | 2025-12-29T07:33:04Z | 0 | ljwps |
vllm-project/vllm | 31,479 | [Feature]: Enable LoRA support for tower and connector in more MM models | ### 🚀 The feature, motivation and pitch
Regarding multi-modal models, we have supported adding LoRA to the tower encoder and connector (see #26674), but have only implemented it for a few models (`Qwen VL series` and `idefics3`). There is no reason not to support other multi-modal models.
### Solution
For the remai... | https://github.com/vllm-project/vllm/issues/31479 | open | [
"help wanted",
"feature request"
] | 2025-12-29T07:28:52Z | 2026-01-06T02:03:29Z | 4 | jeejeelee |
vllm-project/vllm | 31,474 | [Feature]: GLM 4.7 vocab padding feature | ### 🚀 The feature, motivation and pitch
The number of attention heads in GLM-4.7 is 96, so I’m trying to run the FP8 version with 6× H20 GPUs using tensor parallelism (tp=6).
However, vllm serve fails due to `151552 cannot be divided by 6`.
This seems to be caused by the vocab size 151552 not being divisible by... | https://github.com/vllm-project/vllm/issues/31474 | closed | [
"feature request"
] | 2025-12-29T04:55:28Z | 2025-12-29T09:28:17Z | 0 | H100-H200-B200 |
vllm-project/vllm | 31,469 | [Feature]: Optimize the definition of the fake function in the code. | ### 🚀 The feature, motivation and pitch
The current code contains some fake function definitions, which are placed together with the main logic, such as `all_reduce_fake`. In the `parallel_state.py` file, can we define a file called `parallel_state_fake.py` and move all the corresponding fake functions to this file, ... | https://github.com/vllm-project/vllm/issues/31469 | open | [
"feature request"
] | 2025-12-29T03:14:26Z | 2025-12-29T06:16:08Z | 3 | lengrongfu |
vllm-project/vllm | 31,467 | [RFC]: A Triton operator dispatch mechanism through modified `CustomOp` | ### Motivation.
Triton is becoming increasingly important in vLLM, and we've noticed its use in many models, quantization processes, and general workflows. Meanwhile, vLLM supports various backends. Typically, to achieve high performance, **different implementations of the Triton kernels** are used on different hardwa... | https://github.com/vllm-project/vllm/issues/31467 | open | [
"RFC"
] | 2025-12-29T02:44:13Z | 2026-01-06T07:38:29Z | 12 | MengqingCao |
vllm-project/vllm | 31,437 | [Bug]: Streaming tool calls missing id/type/name in finish chunk | ### Your current environment
vLLM 0.14.0rc1.dev3 (but also affects main branch as of today)
### Model
GLM-4.7-AWQ with `--tool-call-parser glm47` (also affects other parsers that emit complete tool calls)
### What is the issue?
When streaming tool calls, the finish chunk code in `serving_chat.py` overwrites the to... | https://github.com/vllm-project/vllm/issues/31437 | closed | [] | 2025-12-27T23:54:20Z | 2025-12-29T13:10:54Z | 0 | amittell |
pytorch/pytorch | 171,392 | [Bug] c10::SmallVector: getNewCapacity has unused TSize parameter — remove or use for overflow-safety? | ### 🚀 The feature, motivation and pitch
In [`c10/util/SmallVector.cpp`](https://github.com/pytorch/pytorch/blob/913ea815a4555747729eb2206266411782f29370/c10/util/SmallVector.cpp#L87C53-L87C58) we have:
`template <class Size_T> static size_t getNewCapacity(size_t MinSize, size_t TSize, size_t OldCapacity)`
Currently... | https://github.com/pytorch/pytorch/issues/171392 | open | [
"module: cpp",
"triaged"
] | 2025-12-27T22:54:34Z | 2026-01-05T17:48:08Z | 4 | yewentao256 |
vllm-project/vllm | 31,414 | [Feature][Cleanup]: Unify `vllm.utils.flashinfer` and `vllm.model_executor.layers.quantization.utils.flashinfer_utils` | ### 🚀 The feature, motivation and pitch
It's confusing to have both.
### Alternatives
_No response_
### Additional context
_No response_
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page... | https://github.com/vllm-project/vllm/issues/31414 | open | [
"help wanted",
"good first issue",
"feature request"
] | 2025-12-27T18:27:00Z | 2025-12-31T22:25:36Z | 4 | robertgshaw2-redhat |
vllm-project/vllm | 31,398 | [Doc]: Eagle3 with tensor parallelism | ### 📚 The doc issue
According to https://docs.vllm.ai/en/latest/features/spec_decode/#speculating-using-eagle-based-draft-models:
> The EAGLE based draft models need to be run without tensor parallelism (i.e. draft_tensor_parallel_size is set to 1 in speculative_config), although it is possible to run the main mode... | https://github.com/vllm-project/vllm/issues/31398 | open | [
"documentation"
] | 2025-12-27T03:10:50Z | 2026-01-04T01:21:07Z | 3 | JSYRD |
huggingface/transformers | 43,048 | Need to understand the difference between TP support via transformers code vs. PyTorch's native parallelize_module API. | Based on the existing transformers code base, the following sequence of operations is performed on the model object to make it TP-compatible.
- TP Plan for Llama: https://github.com/huggingface/transformers/blob/a7f29523361b2cc12e51c1f5133d95f122f6f45c/src/transformers/models/llama/configuration_llama.py#L113
- self._tp_plan ... | https://github.com/huggingface/transformers/issues/43048 | open | [] | 2025-12-26T10:05:38Z | 2026-01-05T15:35:13Z | 1 | quic-meetkuma |
huggingface/lerobot | 2,721 | The virtual machine is unable to recognize the keyboard. | ### Ticket Type
❓ Technical Question
### Environment & System Info
```Shell
(base) tom@tom-VMware-Virtual-Platform:~/lerobot_alohamini$ python check_lerobot.py
Using existing DISPLAY: :0
=== Environment diagnostics ===
Python version: 3.12.12 | packaged by conda-forge | (main, Oct 22 2025, 23:25:55) [GCC 14.3.0]
DISPLAY environment variable: :0
XDG_SESSION_TYPE environment... | https://github.com/huggingface/lerobot/issues/2721 | open | [
"question"
] | 2025-12-26T08:02:27Z | 2025-12-26T08:02:37Z | null | ht202 |
huggingface/transformers | 43,045 | Multimodal chat sample | ### Feature request
Add a sample covering a chat scenario including images, videos, or audio.
### Motivation
`AutoModelForCausalLM`'s `use_cache` is barely documented.
Describe a pattern handling the following cases
1. Tokenizer replaces tokens that are already in kv cache with a different token. For example, the model... | https://github.com/huggingface/transformers/issues/43045 | closed | [
"Feature request"
] | 2025-12-26T06:16:53Z | 2025-12-31T10:36:38Z | 9 | Wovchena |
sgl-project/sglang | 15,860 | [Ask for help] How to deploy GLM-4.7 | Hi, can anyone help me deploy GLM-4.7? I encountered a bug when using `sglang==0.5.6.post2` (which is the latest on `https://github.com/sgl-project/sglang`). What is the correct version for GLM-4.7?
```
launch_server.py: error: argument --tool-call-parser: invalid choice: 'glm47' (choose from 'deepseekv3', 'deepseekv31', ... | https://github.com/sgl-project/sglang/issues/15860 | open | [] | 2025-12-26T02:59:06Z | 2025-12-28T21:21:17Z | 2 | sunjie279 |
huggingface/tokenizers | 1,919 | De/tokenization on CUDA | Could at least de-tokenization be done directly on CUDA? Like in my hack `bpedecode_vec` in https://github.com/pytorch/pytorch/issues/135704#issue-2520180382 which indexes into a detokenization vocab byte table via `repeat_interleave`
Also, maybe for better CUDAGraph-ability / no CPU syncs, there should be some static... | https://github.com/huggingface/tokenizers/issues/1919 | open | [] | 2025-12-26T02:20:49Z | 2026-01-05T10:51:17Z | 1 | vadimkantorov |
vllm-project/vllm | 31,361 | [Usage]: Question about the dummy run. It seems the dummy run uses different precision? | ### Question
I am trying to modify vLLM, especially the **tp** communication; I'm trying to **break all-reduce into reduce-scatter + all-gather**.
However, I encountered a precision problem after I printed the hidden states: it seems each layer has around ±0.01 diff, and when it accumulates over all the layers, the result... | https://github.com/vllm-project/vllm/issues/31361 | closed | [
"usage"
] | 2025-12-25T16:38:03Z | 2025-12-27T03:41:27Z | 0 | Dingjifeng |
vllm-project/vllm | 31,353 | [Bug]: KV Cache grows continuously with just one chat completion request using meta-llama/Llama-3.2-1B on L40 GPU with Flash Attention and finally completed after 10 minutes | ### Your current environment
<details>
<summary>The output of <code>python collect_env.py</code></summary>
```text
Collecting environment information...
==============================
System Info
==============================
OS : Ubuntu 24.04.3 LTS (x86_64)
GCC version ... | https://github.com/vllm-project/vllm/issues/31353 | open | [
"bug",
"help wanted"
] | 2025-12-25T13:56:52Z | 2025-12-27T15:55:34Z | 1 | aravilli |
sgl-project/sglang | 15,825 | Is it normal that Qwen3-30B-A3B runs slower than Qwen3-8B? | I serve two models on the Ascend 910 platform (following sglang's ascend examples) with the same tp2dp8 and benchmarked them.
Before testing, I assumed A3B would be faster than 8B due to its fewer activated tensor blocks.
But the result is different:
### qwen 30B A3B
```
export SGLANG_SET_CPU_AFFINITY=1
export PYTORCH_NPU_AL... | https://github.com/sgl-project/sglang/issues/15825 | open | [] | 2025-12-25T11:26:10Z | 2025-12-25T11:26:10Z | 0 | yucc-leon |
vllm-project/vllm | 31,344 | [Usage]: how to pass param logits_processors in AsyncEngineArgs? | ### Your current environment
import torch
from transformers import LogitsProcessor
from transformers.generation.logits_process import _calc_banned_ngram_tokens
from typing import List, Set
class NoRepeatNGramLogitsProcessor(LogitsProcessor):
    def __init__(self, ngra... | https://github.com/vllm-project/vllm/issues/31344 | open | [
"usage"
] | 2025-12-25T10:12:02Z | 2025-12-25T13:30:54Z | 0 | cqray1990 |
pytorch/ao | 3,543 | [MXLinear] Where is the operator call for implementing MXFP8 in NVD? | In the forward method of the MXLinear class, `mx_mm.apply` is called, although `MXTensor.to_mx` is also invoked. The following code implements the quantization processing of MXFP8:
scale_e8m0_biased, data_lp = to_mx(data_hp, elem_dtype, block_size, scaling_mode, is_swizzled_scales)
When examining the implementation of... | https://github.com/pytorch/ao/issues/3543 | open | [] | 2025-12-25T09:58:57Z | 2025-12-26T07:21:30Z | null | LucaHW |
huggingface/diffusers | 12,889 | Question about qwen-image-edit-2511 loading warning | When loading the model qwen-image-edit-2511 using the diffusers library, I encounter the following warning:
The config attributes {'zero_cond_t': True} were passed to QwenImageTransformer2DModel, but are not expected and will be ignored. Please verify your config.json configuration file.
This suggests that the zero_c... | https://github.com/huggingface/diffusers/issues/12889 | closed | [] | 2025-12-25T07:06:28Z | 2025-12-25T08:56:28Z | 2 | wizardbob |
sgl-project/sglang | 15,810 | [Bug] hicache 3fs backend global metadata much instance deploy bug | ### Checklist
- [x] I searched related issues but found no solution.
- [x] The bug persists in the latest version.
- [ ] Issues without environment info and a minimal reproducible demo are hard to resolve and may receive no feedback.
- [ ] If this is not a bug report but a general question, please start a discussion a... | https://github.com/sgl-project/sglang/issues/15810 | open | [] | 2025-12-25T06:52:45Z | 2025-12-25T09:42:30Z | 4 | weibingo |
vllm-project/vllm | 31,319 | [Bug]: GLM-4.7-FP8 missing beginning <think> tag | ### Your current environment
I am on docker nightly vLLM API server version 0.14.0rc1.dev104+g8ee90c83f
### 🐛 Describe the bug
I hosted the model via vLLM without a reasoning_parser, and I found the model starts its output directly without <think> but emits the closing tag </think> later.
```
root@iv-ydzbs5zs... | https://github.com/vllm-project/vllm/issues/31319 | open | [
"bug"
] | 2025-12-24T18:45:34Z | 2026-01-06T07:59:45Z | 16 | Nemo-G |
pytorch/executorch | 16,392 | Reasoning without using the think function | Hi, I want to use the Qwen3_0.6B model on an 8255 device; I exported the PTE model and ran it on the device successfully. Now I want to disable the "think" function to verify something; how can I achieve it?
I use the following command and get outputs.txt:
./qnn_llama_runner_ndk27 --decoder_model_version qwen3 --tokenizer_path token... | https://github.com/pytorch/executorch/issues/16392 | closed | [
"partner: qualcomm",
"module: qnn"
] | 2025-12-24T12:24:35Z | 2025-12-30T02:32:04Z | 2 | imjking |
vllm-project/vllm | 31,278 | [Usage]: Does Qwen3-VL's local loading mode support loading a LoRA separately? | Does Qwen3-VL's local loading mode support loading a LoRA separately? | https://github.com/vllm-project/vllm/issues/31278 | open | [
"usage"
] | 2025-12-24T11:33:08Z | 2025-12-25T03:52:16Z | 3 | dengdeng-cat |
vllm-project/vllm | 31,272 | [Performance]: b200x8 deepseek-ai/DeepSeek-V3.2-Exp max perf | ### Proposal to improve performance
_No response_
### Report of performance regression
Do you have any ideas on how to increase TPS? I have two servers — one with H200 ×8 and another with B200 ×8. They use the same startup script, but the performance is almost identical. In my opinion, B200 should be faster than H20... | https://github.com/vllm-project/vllm/issues/31272 | open | [
"performance"
] | 2025-12-24T09:48:01Z | 2025-12-24T10:09:29Z | 0 | evgeniiperepelkin |
huggingface/trl | 4,747 | Addition of Supervised Reinforcement Learning | ### Feature request
https://arxiv.org/pdf/2510.25992. Can I work on its implementation?
### Motivation
A better approach than previous RL methods.
### Your contribution
I can work on it following the reference paper | https://github.com/huggingface/trl/issues/4747 | open | [] | 2025-12-24T09:20:32Z | 2025-12-24T09:20:32Z | 0 | kushalgarg101 |
pytorch/executorch | 16,391 | Tokenizer fails on iOS (RE2 lookahead unsupported) – need regex_lookahead static lib or guidance | ### 🐛 Describe the bug
Summary
iOS Flutter app using ExecuTorch LLM (Qwen3 0.6B) cannot load the tokenizer because RE2 does not support lookahead (?!\S).
SPM branch: swiftpm-1.1.0.20251223 (no visible regex_lookahead target/lib).
Logs ask to link regex_lookahead, but SPM did not produce the static lib.
Environment
Pl... | https://github.com/pytorch/executorch/issues/16391 | open | [] | 2025-12-24T09:14:42Z | 2025-12-24T09:43:59Z | 0 | quocanh0712 |
vllm-project/vllm | 31,270 | [Bug]: Can speculative decode run with PP > 2? | ### Your current environment
vllm:0.12.0
### 🐛 Describe the bug
I run vllm:0.12.0 with start args like this:
`python3 -m vllm.entrypoints.openai.api_server \
--host 0.0.0.0 --port 8080 --dtype bfloat16 --model /Qwen3-32B \
--pipeline-parallel-size 2 \
--gpu-memory-utilization 0.9 --max-model-len 32768 --max-num-b... | https://github.com/vllm-project/vllm/issues/31270 | open | [
"bug"
] | 2025-12-24T09:10:05Z | 2025-12-26T07:27:11Z | 1 | frankie-ys |
sgl-project/sglang | 15,739 | [Bug] Failed to deploy DeepSeek-V3.2 with LMCache | ### Checklist
- [x] I searched related issues but found no solution.
- [x] The bug persists in the latest version.
- [x] Issues without environment info and a minimal reproducible demo are hard to resolve and may receive no feedback.
- [x] If this is not a bug report but a general question, please start a discussion a... | https://github.com/sgl-project/sglang/issues/15739 | open | [] | 2025-12-24T08:45:29Z | 2025-12-29T22:55:27Z | 1 | niceallen |
sgl-project/sglang | 15,710 | [Bug] Using TBO, but no overlap in decoding phase? | ### Checklist
- [x] I searched related issues but found no solution.
- [x] The bug persists in the latest version.
- [x] Issues without environment info and a minimal reproducible demo are hard to resolve and may receive no feedback.
- [x] If this is not a bug report but a general question, please start a discussion a... | https://github.com/sgl-project/sglang/issues/15710 | open | [] | 2025-12-24T02:22:19Z | 2025-12-24T02:22:19Z | 0 | ziyuhuang123 |
sgl-project/sglang | 15,707 | [Feature] diffusion: TurboDiffusion achieves a 200x speedup on a single GPU, bringing video generation into the seconds era | ### Checklist
- [ ] If this is not a feature request but a general question, please start a discussion at https://github.com/sgl-project/sglang/discussions. Otherwise, it will be closed.
- [ ] Please use English. Otherwise, it will be closed.
### Motivation
https://github.com/thu-ml/TurboDiffusion
When can it be in... | https://github.com/sgl-project/sglang/issues/15707 | open | [] | 2025-12-24T01:50:02Z | 2025-12-30T08:45:43Z | 1 | xiaolin8 |
pytorch/pytorch | 171,204 | Dynamo can't trace code when we construct nn.Parameter in the forward. | ### 🐛 Describe the bug
```python
import torch
import torch._dynamo
torch._dynamo.config.graph_break_on_nn_param_ctor = False
def fn(x):
    w = torch.nn.Parameter(torch.ones(4, 4))
    if w.grad is None:
        w.grad = torch.zeros_like(w)
    return w.grad + x
x = torch.randn(4, 4)
compiled_fn = torch.compile(f... | https://github.com/pytorch/pytorch/issues/171204 | open | [
"oncall: pt2"
] | 2025-12-23T19:41:48Z | 2026-01-05T14:52:45Z | 1 | tugsbayasgalan |
huggingface/transformers | 43,023 | How to investigate "CAS service error" during model downloading? | ### System Info
(nm) PS C:\Users\myuser\AppData\Local\anaconda3\envs\nm\Lib\site-packages\transformers\commands> python .\transformers_cli.py env
```
Copy-and-paste the text below in your GitHub issue and FILL OUT the two last points.
- `transformers` version: 4.57.3
- Platform: Windows-10-10.0.19045-SP0
- Python v... | https://github.com/huggingface/transformers/issues/43023 | open | [
"bug"
] | 2025-12-23T14:48:51Z | 2025-12-25T14:36:42Z | null | satyrmipt |
pytorch/executorch | 16,374 | `strided_copy` operator in output graph when sample input has been transposed | ### 🐛 Describe the bug
I occasionally read existing model calibration data from Numpy arrays that are in NHWC order when deploying with ExecuTorch. Whenever I do that and transpose the calibration data to NCHW, the output graph contains an `as_strided_copy` operator, even if I have previously called `.contiguous()` o... | https://github.com/pytorch/executorch/issues/16374 | open | [
"module: exir",
"module: arm"
] | 2025-12-23T14:45:30Z | 2025-12-24T15:40:34Z | 1 | etrommer |
vllm-project/vllm | 31,217 | [Usage]: suffix decoding | ### Your current environment
Does suffix decoding necessarily require a repetition penalty of 1?
### How would you like to use vllm
Does suffix decoding necessarily require a repetition penalty of 1?
In suffix decoding, I found that when the repetition penalty is not equal to 1, the acceleration is not significant. ... | https://github.com/vllm-project/vllm/issues/31217 | open | [
"usage"
] | 2025-12-23T10:43:45Z | 2025-12-24T02:56:35Z | 1 | jiangix-paper |
huggingface/lerobot | 2,707 | Transformers dependency | ### Ticket Type
🐛 Bug Report (Something isn't working)
### Environment & System Info
```Shell
- lerobot version: 0.4.3
- Platform: Linux-5.14.0-570.26.1.el9_6.x86_64-x86_64-with-glibc2.34
- Python version: 3.12.12
- Huggingface Hub version: 0.35.3
- Datasets version: 4.1.1
- Numpy version: 2.3.5
- PyTorch version: ... | https://github.com/huggingface/lerobot/issues/2707 | closed | [
"bug",
"question",
"dependencies"
] | 2025-12-23T10:37:53Z | 2025-12-23T23:43:10Z | null | RomDeffayet |
vllm-project/vllm | 31,216 | [RFC]: Sampling Optimization: move gather of logits after argmax. | ### Motivation.
As shown in the left part of the following picture, in the original sampling procedure we perform `llm_head` and `gather` first, then perform `argmax` on the full `logits`. However, we can in fact move `gather` after `argmax` to reduce both the communication volume of `gather` and the computation load of `... | https://github.com/vllm-project/vllm/issues/31216 | open | [
"RFC"
] | 2025-12-23T10:23:34Z | 2025-12-26T03:33:04Z | 2 | whx-sjtu |
huggingface/diffusers | 12,884 | Compatibility issues regarding checkpoint/VAE dependency conflicts when Diffusers loads a Civitai LoRA | Hello everyone, I'm currently learning to use diffusers and would like to ask a question. I saw a good LoRA on Civitai, but this LoRA has requirements for the checkpoint and VAE. So I downloaded both models as the author requested. However, when I ran the following code, an error occurred.
The specific code ... | https://github.com/huggingface/diffusers/issues/12884 | closed | [] | 2025-12-23T10:11:27Z | 2025-12-23T13:41:47Z | 1 | hhhFuture |
vllm-project/vllm | 31,211 | [Doc]: Add missing GPT-OSS tool calling instructions | ### 📚 The doc issue
Currently the `openai` tool calling format is not documented in [the tool calling documentation](https://docs.vllm.ai/en/stable/features/tool_calling/). However it is documented in the [cookbook](https://docs.vllm.ai/projects/recipes/en/latest/OpenAI/GPT-OSS.html#tool-use)
### Suggest a potential... | https://github.com/vllm-project/vllm/issues/31211 | closed | [
"documentation"
] | 2025-12-23T08:35:09Z | 2025-12-25T05:29:11Z | 0 | amithkk |
huggingface/lerobot | 2,704 | Training XVLA: IndexError with auto mode; size mismatch with joint mode on 14D joint-action dataset | ### Ticket Type
🐛 Bug Report (Something isn't working)
### Environment & System Info
```Shell
```
### Description
I am trying to train XVLA with the base and folding checkpoints on a 14D joint-action dataset.
When I set --policy.action_mode=auto
lerobot-train \
--dataset.repo_id= \
--output_dir=./outputs/xvla_bim... | https://github.com/huggingface/lerobot/issues/2704 | closed | [
"bug",
"documentation",
"question",
"policies",
"dataset",
"CI",
"examples",
"training"
] | 2025-12-23T07:20:25Z | 2025-12-23T08:54:21Z | null | DaKhanh |
vllm-project/vllm | 31,205 | ValueError: Qwen3OmniMoeThinkerForConditionalGeneration does not support LoRA yet. |
Hi, I have trained the qwen3-omni thinker via ms-swift. However, when I tried to run inference on qwen3-omni with the LoRA checkpoint, an error occurred:
```
ValueError: Qwen3OmniMoeThinkerForConditionalGeneration does not support LoRA yet.
```
I have tried many versions of vllm including 0.9.2, 0.11.0 and 0.12.0
here is my script:
```
CUD... | https://github.com/vllm-project/vllm/issues/31205 | open | [
"usage"
] | 2025-12-23T06:52:11Z | 2025-12-29T14:50:37Z | 2 | VJJJJJJ1 |
pytorch/pytorch | 171,158 | `torch.func.grad` to allow some inplace ops | ### 🚀 The feature, motivation and pitch
At least for those under `@torch.no_grad` context manager
Currently only `torch.func.grad` allows to fullgraph-compile computing grads wrt inputs because of:
- https://github.com/pytorch/pytorch/issues/170487
so it's an important use case
`torch.autograd.grad` is fine with so... | https://github.com/pytorch/pytorch/issues/171158 | open | [
"module: autograd",
"triaged",
"module: functorch"
] | 2025-12-23T04:16:33Z | 2026-01-05T17:20:24Z | 2 | vadimkantorov |
vllm-project/vllm | 31,204 | [RFC]: Supporting Multi MTP layers in Speculative Decoding (EagleProposer) | ### Motivation.
The EagleProposer for speculative decoding is only able to utilize the first MTP layer.
However, the model [XiaomiMiMo/MiMo-V2-Flash](https://huggingface.co/XiaomiMiMo/MiMo-V2-Flash) has 3 MTP layers.
Is there any plan or ongoing PR to extend support for multi MTP layers in speculative decoding?
btw, [... | https://github.com/vllm-project/vllm/issues/31204 | open | [
"RFC"
] | 2025-12-23T03:34:05Z | 2025-12-23T03:34:05Z | 0 | DingYibin |
huggingface/lerobot | 2,701 | Image keys with underscores not supported when migrating to v0.4.x | ### Ticket Type
🐛 Bug Report (Something isn't working)
### Environment & System Info
```Shell
Python 3.12.3, LeRobot versions 0.3.4 and 0.4.2
From v0.4.2:
lerobot version: 0.4.2
- Platform: Linux-6.14.0-37-generic-x86_64-with-glibc2.39
- Python version: 3.12.3
- Huggingface Hub version: 0.35.3
- Datasets version:... | https://github.com/huggingface/lerobot/issues/2701 | open | [
"bug",
"question",
"policies",
"sensors",
"processor"
] | 2025-12-23T03:27:41Z | 2025-12-23T03:27:50Z | null | dangr |
huggingface/lerobot | 2,700 | Training a SmolVLA model on the lerobot/aloha_sim_insertion_human dataset does not converge | ### Ticket Type
❓ Technical Question
### Environment & System Info
```Shell
Ubuntu 22.04
lerobot 0.4.1
python 3.10
lerobot-train \
--job_name aloha_smolvla \
--output_dir $OUTPUT_DIR \
--env.type=aloha \
--env.task="AlohaInsertion-v0" \
--policy.type=smolvla \
--policy.load_vlm_weights=true \
--steps=... | https://github.com/huggingface/lerobot/issues/2700 | open | [
"question",
"policies",
"dataset",
"simulation",
"robots",
"training"
] | 2025-12-23T03:13:47Z | 2025-12-30T21:05:50Z | null | sslndora0612-max |
vllm-project/vllm | 31,202 | [Bug]: Mixtral Fp8 Accuracy is Degraded | ### Your current environment
H200
### 🐛 Describe the bug
- launch
```bash
vllm serve amd/Mixtral-8x7B-Instruct-v0.1-FP8-KV --enforce-eager -tp 2
```
- eval
```bash
lm_eval \
--model local-completions \
--tasks gsm8k \
--model_args "model=amd/Mixtral-8x7B-Instruct-v0.1-FP8-KV,base_url=http://localhost:8000/v1/co... | https://github.com/vllm-project/vllm/issues/31202 | closed | [
"bug",
"help wanted"
] | 2025-12-23T02:27:28Z | 2025-12-23T02:42:58Z | 1 | robertgshaw2-redhat |
vllm-project/vllm | 31,200 | [Bug]: class Request and block_hasher have a circular reference, which may cause a memory leak. | ### Your current environment
<summary> Running a multimodal network with prefix caching will cause a memory leak. </summary>
<details>
<code>
class Request:
    def __init__(
        ...
        self.block_hashes: list[BlockHash] = []
        self.get_hash_new_full_blocks: Callable[[], list[BlockHash]] | None = None
... | https://github.com/vllm-project/vllm/issues/31200 | open | [
"bug"
] | 2025-12-23T01:55:47Z | 2025-12-23T15:02:37Z | 1 | frelam |
huggingface/diffusers | 12,881 | Is this a bug in the prompt2prompt pipeline with a replace-word prompt? | ### Describe the bug
It performs the same when returning different cross-attention maps; is this an implementation error or just a problem with prompt2prompt?
### Reproduction
Use stable-diffusion-2-1:
`images = pipe(["A turtle playing with a ball", "A monkey playing with a ball"],
generator=torch.Generator("cu... | https://github.com/huggingface/diffusers/issues/12881 | open | [
"bug"
] | 2025-12-23T01:55:06Z | 2025-12-23T01:55:06Z | 0 | lincion |
sgl-project/sglang | 15,641 | [Feature] In the event_loop_overlap function of the scheduler, can the recv operation be processed asynchronously? | ### Checklist
- [x] If this is not a feature request but a general question, please start a discussion at https://github.com/sgl-project/sglang/discussions. Otherwise, it will be closed.
- [x] Please use English. Otherwise, it will be closed.
### Motivation
In the _offline large-scale high-concurrency multimodal det... | https://github.com/sgl-project/sglang/issues/15641 | open | [] | 2025-12-22T14:04:10Z | 2025-12-22T14:04:10Z | 0 | titanium-temu |
sgl-project/sglang | 15,634 | [Bug] sgl-kernel does not support fa3??? | ### Checklist
- [ ] I searched related issues but found no solution.
- [x] The bug persists in the latest version.
- [x] Issues without environment info and a minimal reproducible demo are hard to resolve and may receive no feedback.
- [x] If this is not a bug report but a general question, please start a discussion a... | https://github.com/sgl-project/sglang/issues/15634 | open | [] | 2025-12-22T10:50:36Z | 2025-12-22T10:50:55Z | 0 | ziyuhuang123 |
pytorch/pytorch | 171,080 | NxN BlockMask / Cumulative Sequence Length | Hi,
I tried to implement FlexAttention for large batch training. Each attention layer computes attention within a window. My tensor is a batch-packed tensor to handle variable sequence lengths. _The size of each batch sequence changes with the data (some batch samples are longer than others)_
This means that I not only... | https://github.com/pytorch/pytorch/issues/171080 | closed | [
"triaged",
"oncall: pt2",
"module: flex attention"
] | 2025-12-22T09:58:22Z | 2025-12-22T17:50:15Z | 1 | L-Reichardt |
huggingface/lerobot | 2,697 | Run pi0.5 on Libero, incorrect version of transformers | ### Ticket Type
🐛 Bug Report (Something isn't working)
### Environment & System Info
```Shell
Copy-and-paste the text below in your GitHub issue and FILL OUT the last point.
- lerobot version: 0.4.0
- Platform: Linux-6.8.0-87-generic-x86_64-with-glibc2.35
- Python version: 3.10.19
- Huggingface Hub version: 0.35.3... | https://github.com/huggingface/lerobot/issues/2697 | open | [
"bug",
"question",
"evaluation"
] | 2025-12-22T08:54:56Z | 2025-12-22T16:20:01Z | null | yqi19 |
huggingface/lerobot | 2,696 | RTC does not work. | ### Ticket Type
🐛 Bug Report (Something isn't working)
### Environment & System Info
```Shell
- lerobot version: 0.4.3
- Platform: Linux-5.10.134-17.3.al8.x86_64-x86_64-with-glibc2.35
- Python version: 3.10.19
- Huggingface Hub version: 0.35.3
- Datasets version: 4.1.1
- Numpy version: 2.2.6
- PyTorch version: 2.7.... | https://github.com/huggingface/lerobot/issues/2696 | closed | [
"bug",
"question",
"policies",
"dataset",
"CI",
"python",
"examples",
"training"
] | 2025-12-22T03:22:23Z | 2025-12-22T05:20:39Z | null | xiaozhisky1 |
huggingface/sentence-transformers | 3,601 | How to fine-tune a bi-encoder embedding model with multimodal input | I want to cluster e-commerce products with a bi-encoder. Each product has a name (text) and an image. Can I use sentence-transformers to fine-tune a bi-encoder model? The training dataset contains product clusters, like:
```
product1_name, product1_img, cluster_id1
product2_name, product2_img, cluster_id1
product3_nam... | https://github.com/huggingface/sentence-transformers/issues/3601 | open | [] | 2025-12-22T02:46:43Z | 2025-12-22T09:09:31Z | null | fancyerii |
vllm-project/vllm | 31,096 | [Usage]: Qwen3-Next: Both Instruct and Thinking models don't support function calling |
Does the Qwen3-Next model not support the function calling feature? Test results show some common error scenarios:
1. The tools should be called, but the content returned something like the following:
```
{
  "choices": [
    {
      "message": {
        "content": "</think>\n{\"name\": \"send_email\", \"arguments\": {\"u... | https://github.com/vllm-project/vllm/issues/31096 | open | [
"usage"
] | 2025-12-21T12:02:08Z | 2025-12-23T03:02:02Z | 0 | PHOEBEMOON0802 |
huggingface/lerobot | 2,694 | The GR00T algorithm simply won't run and throws the following error. Could someone please help me fix it? | The GR00T algorithm simply won't run and throws the following error. Could someone please help me fix it?
n_model.post_layernorm.bias', 'backbone.eagle_model.vision_model.vision_model.post_layernorm.weight']
Traceback (most recent call last):
File "/home/ruijia/miniconda3/envs/lerobot/bin/lerobot-train", line 7, i... | https://github.com/huggingface/lerobot/issues/2694 | open | [
"bug",
"question",
"policies",
"CI",
"python",
"processor",
"examples",
"training"
] | 2025-12-21T09:12:14Z | 2025-12-24T00:06:08Z | null | wuxiaolianggit |
huggingface/lerobot | 2,693 | Wrist Roll motor not responding | ### Ticket Type
🐛 Bug Report (Something isn't working)
### Environment & System Info
```Shell
lerobot version 0.4.0
```
### Description
I connected to the lerobot SO101 bot -> set up motors -> calibrated -> tested teleoperation;
everything went fine. But after a few hours, when recalibration is done on some other syste... | https://github.com/huggingface/lerobot/issues/2693 | open | [
"bug",
"question",
"teleoperators"
] | 2025-12-21T09:01:51Z | 2025-12-26T10:19:17Z | null | CHIRANJEET1729DAS |
huggingface/lerobot | 2,692 | [Bug] Too many errors when training RL in simulation | ### Ticket Type
🐛 Bug Report (Something isn't working)
### Environment & System Info
```Shell
`
- LeRobot version: 0.4.3
- Platform: Linux-6.8.0-90-generic-x86_64-with-glibc2.35
- Python version: 3.10.19
- Huggingface Hub version: 0.35.3
- Datasets version: 4.1.1
- Numpy version: 2.2.6
- FFmpeg version: N/A
- PyTor... | https://github.com/huggingface/lerobot/issues/2692 | open | [
"bug",
"documentation",
"question",
"dataset",
"simulation",
"tests",
"examples",
"training"
] | 2025-12-21T08:22:16Z | 2026-01-04T06:19:05Z | null | Hukongtao |
huggingface/accelerate | 3,894 | How to specify a different number of processes per node | I have 2 nodes. The first node has 8 GPUs while the second has 2 GPUs. I want to specify the number of processes to be 8 and 2 respectively on the two nodes. I'm using this config on both nodes, but it always tries to launch an equal number of processes on each node. With the config file below, it starts 5 processes on both nodes:
Node... | https://github.com/huggingface/accelerate/issues/3894 | open | [] | 2025-12-21T07:09:15Z | 2025-12-21T07:09:15Z | null | AIML001 |