Columns: problem_id (string, 1–66 chars) · category (string, 2 classes) · statement (string, 0–20.2k chars) · config (string, 20–380 chars)
gemm_optimization/k_skewed
research
GEMM Optimization Problem ========================= Problem Setting --------------- Design and optimize high-performance Triton kernels for General Matrix-Matrix Multiplication (GEMM) on GPU. This problem focuses on implementing efficient matrix multiplication kernels using Triton's JIT compilation system. The challe...
{ "dependencies": { "uv_project": "resources" }, "datasets": [], "tag": "hpc", "runtime": { "docker": { "image": "andylizf/triton-tlx:tlx-nv-cu122", "gpu": true }, "environment": "Triton 3.2.0 with CUDA 12.2 (triton-tlx image)" } }
gemm_optimization/near_tile
research
GEMM Optimization Problem ========================= Problem Setting --------------- Design and optimize high-performance Triton kernels for General Matrix-Matrix Multiplication (GEMM) on GPU. This problem focuses on implementing efficient matrix multiplication kernels using Triton's JIT compilation system. The challe...
{ "dependencies": { "uv_project": "resources" }, "datasets": [], "tag": "hpc", "runtime": { "docker": { "image": "andylizf/triton-tlx:tlx-nv-cu122", "gpu": true }, "environment": "Triton 3.2.0 with CUDA 12.2 (triton-tlx image)" } }
gemm_optimization/rectangles
research
GEMM Optimization Problem ========================= Problem Setting --------------- Design and optimize high-performance Triton kernels for General Matrix-Matrix Multiplication (GEMM) on GPU. This problem focuses on implementing efficient matrix multiplication kernels using Triton's JIT compilation system. The challe...
{ "dependencies": { "uv_project": "resources" }, "datasets": [], "tag": "hpc", "runtime": { "docker": { "image": "andylizf/triton-tlx:tlx-nv-cu122", "gpu": true }, "environment": "Triton 3.2.0 with CUDA 12.2 (triton-tlx image)" } }
gemm_optimization/squares
research
GEMM Optimization Problem ========================= Problem Setting --------------- Design and optimize high-performance Triton kernels for General Matrix-Matrix Multiplication (GEMM) on GPU. This problem focuses on implementing efficient matrix multiplication kernels using Triton's JIT compilation system. The challe...
{ "dependencies": { "uv_project": "resources" }, "datasets": [], "tag": "hpc", "runtime": { "docker": { "image": "andylizf/triton-tlx:tlx-nv-cu122", "gpu": true }, "environment": "Triton 3.2.0 with CUDA 12.2 (triton-tlx image)" } }
gemm_optimization/transformerish
research
GEMM Optimization Problem ========================= Problem Setting --------------- Design and optimize high-performance Triton kernels for General Matrix-Matrix Multiplication (GEMM) on GPU. This problem focuses on implementing efficient matrix multiplication kernels using Triton's JIT compilation system. The challe...
{ "dependencies": { "uv_project": "resources" }, "datasets": [], "tag": "hpc", "runtime": { "docker": { "image": "andylizf/triton-tlx:tlx-nv-cu122", "gpu": true }, "environment": "Triton 3.2.0 with CUDA 12.2 (triton-tlx image)" } }
group_gemm
research
Group GEMM Optimization Problem ================================ Problem Setting --------------- Design and optimize high-performance Triton kernels for Batched Matrix-Matrix Multiplication (BMM) on GPU. This problem focuses on implementing efficient batched matrix multiplication kernels using Triton's JIT compilation...
{ "dependencies": { "uv_project": "resources" }, "tag": "hpc", "runtime": { "environment": "Triton 3.2.0 with CUDA 12.2 (triton-tlx image)", "docker": { "image": "andylizf/triton-tlx:tlx-nv-cu122", "gpu": true } } }
imagenet_pareto/1m
research
ImageNet Pareto Optimization - 1M Parameter Variant =================================================== Problem Setting --------------- Train a neural network on a synthetic ImageNet-like dataset to maximize accuracy while staying within a parameter budget of 1,000,000 parameters. Objective: Achieve the highest possi...
{ "dependencies": { "uv_project": "resources" }, "datasets": [], "runtime": { "timeout_seconds": 3600 }, "tag": "ai" }
imagenet_pareto/200k
research
ImageNet Pareto Optimization - 200K Parameter Variant ===================================================== Problem Setting --------------- Train a neural network on a synthetic ImageNet-like dataset to maximize accuracy while staying within a parameter budget of 200,000 parameters. Objective: Achieve the highest pos...
{ "dependencies": { "uv_project": "resources" }, "datasets": [], "runtime": { "timeout_seconds": 3600 }, "tag": "ai" }
imagenet_pareto/2_5m
research
ImageNet Pareto Optimization - 2.5M Parameter Variant ===================================================== Problem Setting --------------- Train a neural network on a synthetic ImageNet-like dataset to maximize accuracy while staying within a parameter budget of 2,500,000 parameters. Objective: Achieve the highest p...
{ "dependencies": { "uv_project": "resources" }, "datasets": [], "runtime": { "timeout_seconds": 3600 }, "tag": "ai" }
imagenet_pareto/500k
research
ImageNet Pareto Optimization - 500K Parameter Variant ===================================================== Problem Setting --------------- Train a neural network on a synthetic ImageNet-like dataset to maximize accuracy while staying within a parameter budget of 500,000 parameters. Objective: Achieve the highest pos...
{ "dependencies": { "uv_project": "resources" }, "datasets": [], "runtime": { "timeout_seconds": 3600 }, "tag": "ai" }
imagenet_pareto/5m
research
ImageNet Pareto Optimization - 5M Parameter Variant =================================================== Problem Setting --------------- Train a neural network on a synthetic ImageNet-like dataset to maximize accuracy while staying within a parameter budget of 5,000,000 parameters. Objective: Achieve the highest possi...
{ "dependencies": { "uv_project": "resources" }, "datasets": [], "runtime": { "timeout_seconds": 3600 }, "tag": "ai" }
llm_router
research
LLM Router ================================ Overview -------- This benchmark evaluates a language model's ability to implement an LLM routing policy. Given a user query, the router must choose one model from a small candidate set with different cost–quality tradeoffs. The goal is to maximize accuracy while minimizing ...
{ "dependencies": { "uv_project": "resources" }, "datasets": [], "tag": "ai" }
llm_sql/large
research
Problem Setting --------------- Consider a CSV file with $N$ rows and $M$ columns, where $M \leq 10$. We feed each row to an LLM inference engine (with a prefix KV cache) by concatenating all column values in that row. For the $i$-th row with entries $A[i,1], A[i,2], \ldots, A[i,M]$, we construct the input string: ``...
{ "dependencies": { "uv_project": "resources" }, "datasets": [], "runtime": { "timeout_seconds": 1800 }, "tag": "db" }
llm_sql/small
research
Problem Setting --------------- Consider a CSV file with $N$ rows and $M$ columns, where $M \leq 10$. We feed each row to an LLM inference engine (with a prefix KV cache) by concatenating all column values in that row. For the $i$-th row with entries $A[i,1], A[i,2], \ldots, A[i,M]$, we construct the input string: ``...
{ "dependencies": { "uv_project": "resources" }, "datasets": [], "runtime": { "timeout_seconds": 1800 }, "tag": "db" }
mamba2_scan
research
Mamba2 Scan Optimization Problem ================================== Problem Setting --------------- Design and optimize high-performance Triton kernels for Mamba2 scan computation on GPU. This problem focuses on implementing efficient sequential scan operations using chunked parallelism with Triton's JIT compilation s...
{ "tag": "hpc", "dependencies": { "uv_project": "resources" }, "runtime": { "environment": "Triton 3.2.0 with CUDA 12.2 (triton-tlx image)", "docker": { "image": "andylizf/triton-tlx:tlx-nv-cu122", "gpu": true } } }
mixed_gemm
research
Mixed GEMM Optimization Problem ================================= Problem Setting --------------- Design and optimize high-performance Triton kernels for Mixed GEMM (Linear + Bias + GELU) computation on GPU. This problem focuses on implementing efficient fused kernels that combine matrix multiplication, bias addition,...
{ "dependencies": { "uv_project": "resources" }, "tag": "hpc", "runtime": { "environment": "Triton 3.2.0 with CUDA 12.2 (triton-tlx image)", "docker": { "image": "andylizf/triton-tlx:tlx-nv-cu122", "gpu": true } } }
nbody_simulation/random_100k
research
N-Body Simulation Problem - 100,000 Particles ============================================= Problem Setting --------------- Design and optimize a high-performance parallel N-body simulation. In physics and astronomy, an N-body simulation models the dynamics of particles under gravitational forces. The available hardwa...
{ "tag": "hpc", "runtime": { "language": "cpp", "timeout_seconds": 600, "environment": "C++17 with OpenMP (GCC with libgomp1) on Ubuntu 22.04, 16 vCPUs", "docker": { "image": "gcc:13" }, "resources": { "cloud": "aws", "instance_type": "c7i.4xlarge", "cpus": "16", "memory": "32" } } }
nbody_simulation/random_10k
research
N-Body Simulation Problem - 10,000 Particles ============================================= Problem Setting --------------- Design and optimize a high-performance parallel N-body simulation. In physics and astronomy, an N-body simulation models the dynamics of particles under gravitational forces. The available hardwar...
{ "tag": "hpc", "runtime": { "language": "cpp", "timeout_seconds": 600, "environment": "C++17 with OpenMP (GCC with libgomp1) on Ubuntu 22.04, 16 vCPUs", "docker": { "image": "gcc:13" }, "resources": { "cloud": "aws", "instance_type": "c7i.4xlarge", "cpus": "16", "memory": "32" } } }
poc_generation/heap_buffer_overflow
research
{"tag": "security"}
poc_generation/heap_use_after_free
research
{ "dependencies": { "uv_project": "resources" }, "datasets": [ "arvo:47101" ], "tag": "security" }
poc_generation/stack_buffer_overflow
research
{"tag": "security"}
poc_generation/uninitialized_value
research
{"tag": "security"}
qknorm
research
QKNorm Optimization Problem ============================ Problem Setting --------------- Design and optimize high-performance implementations for Query-Key Normalization (QKNorm) on GPU. This problem focuses on implementing efficient normalization kernels that apply RMSNorm to query and key tensors. This is a **memor...
{ "dependencies": { "uv_project": "resources" }, "datasets": [], "runtime": { "resources": { "accelerators": "L4:1" }, "docker": { "image": "andylizf/triton-tlx:tlx-nv-cu122-nvcc", "gpu": true }, "environment": "CUDA 12.2, Python 3.11, PyTorch 2.0+, flashinfer 0.5.0, Tr...
quant_dot_int4
research
Quantized Dot (Int4 Packed) Optimization Problem ================================================ Problem Setting --------------- Design and optimize high-performance Triton kernels for a **quantized matrix multiplication** where the left-hand matrix is stored as **packed int4 weights** plus per-group scale/offset, an...
{ "dependencies": { "uv_project": "resources" }, "datasets": [], "tag": "hpc", "runtime": { "docker": { "image": "andylizf/triton-tlx:tlx-nv-cu122", "gpu": true } } }
ragged_attention
research
Ragged Attention Optimization Problem ====================================== Problem Setting --------------- Design and optimize high-performance Triton kernels for ragged attention computation on GPU. This problem focuses on implementing efficient kernels that handle variable-length sequences using ragged attention, ...
{ "dependencies": { "uv_project": "resources" }, "tag": "hpc", "runtime": { "environment": "Triton 3.2.0 with CUDA 12.2 (triton-tlx image)", "docker": { "image": "andylizf/triton-tlx:tlx-nv-cu122", "gpu": true }, "resources": { "accelerators": "L4:1" } } }
symbolic_regression/mccormick
research
Symbolic Regression Benchmark - McCormick Dataset ================================================= Problem Setting --------------- Learn a closed-form symbolic expression `f(x1, x2)` that predicts the target `y`. This dataset is derived from the McCormick function, a classic 2D optimization test function featuring a...
{ "dependencies": { "uv_project": "resources" }, "datasets": [], "tag": "pl" }
symbolic_regression/mixed_polyexp_4d
research
Symbolic Regression Benchmark - Mixed PolyExp 4D Dataset ========================================================= Problem Setting --------------- Learn a closed-form symbolic expression `f(x1, x2, x3, x4)` that predicts the target `y`. This is a higher-dimensional dataset (4 input features) combining polynomial inte...
{ "dependencies": { "uv_project": "resources" }, "datasets": [], "tag": "pl" }
symbolic_regression/peaks
research
Symbolic Regression Benchmark - Peaks Dataset ============================================== Problem Setting --------------- Learn a closed-form symbolic expression `f(x1, x2)` that predicts the target `y`. This dataset is based on a peaks-like function, characterized by exponential terms that create localized peaks ...
{ "dependencies": { "uv_project": "resources" }, "datasets": [], "tag": "pl" }
symbolic_regression/ripple
research
Symbolic Regression Benchmark - Ripple Dataset =============================================== Problem Setting --------------- Learn a closed-form symbolic expression `f(x1, x2)` that predicts the target `y`. This dataset is generated from a ripple-like function that combines polynomial amplitude modulation with high...
{ "dependencies": { "uv_project": "resources" }, "datasets": [], "tag": "pl" }
symbolic_regression/sincos
research
Symbolic Regression Benchmark - SinCos Dataset =============================================== Problem Setting --------------- Learn a closed-form symbolic expression `f(x1, x2)` that predicts the target `y`. This dataset features a function built from basic trigonometric operations. The target exhibits periodic beha...
{ "dependencies": { "uv_project": "resources" }, "datasets": [], "tag": "pl" }
vdb_pareto/balanced
research
VDB Design Problem - Balanced Tier =================================== Problem Setting --------------- Design a Vector Database index optimized for **recall** subject to a **latency constraint**. This tier uses latency-gated scoring: solutions exceeding the latency threshold receive zero points, while solutions meetin...
{ "dependencies": { "uv_project": "resources" }, "datasets": [ { "type": "local_tar", "path": "resources/sift.tar.gz", "target": "data/sift1M", "expected_glob": "*.fvecs" } ], "runtime": { "timeout_seconds": 3600 }, "tag": "db" }
vdb_pareto/high_recall
research
VDB Design Problem - High Recall Tier ====================================== Problem Setting --------------- Design a Vector Database index optimized for **recall** subject to a **relaxed latency constraint**. This tier uses latency-gated scoring: solutions exceeding the latency threshold receive zero points, while so...
{ "dependencies": { "uv_project": "resources" }, "datasets": [ { "type": "local_tar", "path": "resources/sift.tar.gz", "target": "data/sift1M", "expected_glob": "*.fvecs" } ], "runtime": { "timeout_seconds": 3600 }, "tag": "db" }
vdb_pareto/low_latency
research
VDB Design Problem - Low Latency Tier ====================================== Problem Setting --------------- Design a Vector Database index optimized for **recall** subject to a **strict latency constraint**. This tier uses latency-gated scoring: solutions exceeding the latency threshold receive zero points, while sol...
{ "dependencies": { "uv_project": "resources" }, "datasets": [ { "type": "local_tar", "path": "resources/sift.tar.gz", "target": "data/sift1M", "expected_glob": "*.fvecs" } ], "runtime": { "timeout_seconds": 3600 }, "tag": "db" }
vdb_pareto/recall80_latency
research
VDB Design Problem - Recall80 Latency Tier =========================================== Problem Setting --------------- Design a Vector Database index optimized for **latency** subject to a **recall constraint**. This tier uses recall-gated scoring: solutions failing to meet the recall threshold receive zero points, wh...
{ "dependencies": { "uv_project": "resources" }, "datasets": [ { "type": "local_tar", "path": "resources/sift.tar.gz", "target": "data/sift1M", "expected_glob": "*.fvecs" } ], "runtime": { "timeout_seconds": 3600 }, "tag": "db" }
vdb_pareto/recall95_latency
research
VDB Design Problem - Recall95 Latency Tier =========================================== Problem Setting --------------- Design a Vector Database index optimized for **latency** subject to a **high recall constraint**. This tier uses recall-gated scoring: solutions failing to meet the recall threshold receive zero point...
{ "dependencies": { "uv_project": "resources" }, "datasets": [ { "type": "local_tar", "path": "resources/sift.tar.gz", "target": "data/sift1M", "expected_glob": "*.fvecs" } ], "runtime": { "timeout_seconds": 3600 }, "tag": "db" }
vector_addition/2_20
research
Vector Addition Problem - Medium Vectors (2^20) ================================================ Problem Setting --------------- Design and optimize high-performance Triton kernels for vector addition on GPU with medium vectors (1,048,576 elements). This problem focuses on implementing efficient element-wise addition ...
{ "dependencies": { "uv_project": "resources" }, "tag": "hpc", "runtime": { "environment": "Triton 3.2.0 with CUDA 12.2 (triton-tlx image)", "docker": { "image": "andylizf/triton-tlx:tlx-nv-cu122", "gpu": true } } }
vector_addition/2_24
research
Vector Addition Problem - Large Vectors (2^24) =============================================== Problem Setting --------------- Design and optimize high-performance Triton kernels for vector addition on GPU with large vectors (16,777,216 elements). This problem focuses on implementing efficient element-wise addition fo...
{ "dependencies": { "uv_project": "resources" }, "datasets": [], "tag": "hpc", "runtime": { "docker": { "image": "andylizf/triton-tlx:tlx-nv-cu122", "gpu": true }, "environment": "Triton 3.2.0 with CUDA 12.2 (triton-tlx image)" } }
vector_addition/2_28
research
Vector Addition Problem - Very Large Vectors (2^28) ============================================== Problem Setting --------------- Design and optimize high-performance Triton kernels for vector addition on GPU with very large vectors (268,435,456 elements). This problem focuses on implementing efficient element-wise a...
{ "dependencies": { "uv_project": "resources" }, "datasets": [], "tag": "hpc", "runtime": { "docker": { "image": "andylizf/triton-tlx:tlx-nv-cu122", "gpu": true }, "environment": "Triton 3.2.0 with CUDA 12.2 (triton-tlx image)" } }
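Each `config` cell above is one serialized object per row. As a minimal sketch of how a harness might read such a cell, the snippet below parses the JSON-style config of the `imagenet_pareto/1m` row (the `raw` string is copied verbatim from that row; nothing beyond its fields is assumed):

```python
import json

# Config cell of the imagenet_pareto/1m row, copied verbatim from the table above.
raw = '{ "dependencies": { "uv_project": "resources" }, "datasets": [], "runtime": { "timeout_seconds": 3600 }, "tag": "ai" }'

cfg = json.loads(raw)

# Pull out the fields a runner would typically need.
print(cfg["tag"])                         # ai
print(cfg["runtime"]["timeout_seconds"])  # 3600
print(cfg["dependencies"]["uv_project"])  # resources
```

The same pattern applies to the other JSON-style rows; only the keys present vary (e.g. `runtime.docker`, `runtime.resources`, `datasets`).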