
Polaris-VGA-0.8B-Post1.0

Polaris-VGA-0.8B-Post1.0 is a post-optimized model built on Qwen/Qwen3.5-0.8B, designed to extend compact language modeling into the domain of VGA (Visual Grounding Anything). It combines visual understanding with instruction following, enabling it to interpret complex scenes, explain visual content in detail, and ground textual queries against diverse inputs. Targeted post-training improves multimodal alignment, letting the model connect instructions with visual elements for detection, explanation, and structured interpretation tasks, all while remaining efficient within a lightweight 0.8B-parameter architecture.

Visual-Grounding-Anything (code) - https://huggingface.co/prithivMLmods/Polaris-VGA-0.8B-Post1.0/tree/main/Visual-Grounding-Anything

Key Highlights

  • VGA (Visual Grounding Anything) Specialization: Designed to associate textual queries with visual elements across a wide range of scenes and contexts.
  • Post-Optimized Training Pipeline: Refined on top of the base model to improve multimodal alignment and response clarity.
  • Strong Visual Understanding: Capable of interpreting complex scenes, objects, relationships, and contextual cues for reasoning tasks.
  • Scene Explanation & Reasoning: Generates detailed descriptions and structured explanations from visual inputs.
  • Object & Point Tracking Optimization: Adapted for video-based tasks including object tracking and point-level tracking across frames.
  • Efficient 0.8B Architecture: Maintains low computational requirements while extending capabilities beyond traditional text-only models.
Get GGUF
| File Name | Quant Type | File Size |
| --- | --- | --- |
| Polaris-VGA-0.8B-Post1.0.BF16.gguf | BF16 | 1.52 GB |
| Polaris-VGA-0.8B-Post1.0.F16.gguf | F16 | 1.52 GB |
| Polaris-VGA-0.8B-Post1.0.F32.gguf | F32 | 3.02 GB |
| Polaris-VGA-0.8B-Post1.0.Q8_0.gguf | Q8_0 | 812 MB |
| Polaris-VGA-0.8B-Post1.0.mmproj-bf16.gguf | mmproj-bf16 | 207 MB |
| Polaris-VGA-0.8B-Post1.0.mmproj-f16.gguf | mmproj-f16 | 207 MB |
| Polaris-VGA-0.8B-Post1.0.mmproj-f32.gguf | mmproj-f32 | 402 MB |
| Polaris-VGA-0.8B-Post1.0.mmproj-q8_0.gguf | mmproj-q8_0 | 116 MB |
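As a rough sanity check, GGUF file sizes can be estimated from the parameter count times the bits per weight of each quantization. The helper below is a simplified sketch: it ignores metadata and the fact that some tensors are stored at higher precision, so the figures in the table above deviate somewhat from these estimates.

```python
def estimate_gguf_size_gb(n_params: float, bits_per_weight: float) -> float:
    """Rough GGUF size estimate: parameters x (bits / 8), ignoring
    metadata and mixed-precision tensors."""
    return n_params * bits_per_weight / 8 / 1e9

# 0.8B parameters at common quantization levels
# (Q8_0 stores ~8.5 bits/weight once per-block scales are counted)
for name, bits in [("F32", 32), ("F16", 16), ("Q8_0", 8.5)]:
    print(f"{name}: ~{estimate_gguf_size_gb(0.8e9, bits):.2f} GB")
```

These estimates (~3.20, ~1.60, and ~0.85 GB) land close to the listed 3.02 GB, 1.52 GB, and 812 MB.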

Recommended (chat_template.jinja) - https://huggingface.co/prithivMLmods/Polaris-VGA-0.8B-Post1.0/blob/main/chat_template.jinja

Standard or Default (chat_template.jinja) - https://huggingface.co/prithivMLmods/Polaris-VGA-0.8B-Post1.0/blob/main/standard-chat_template/chat_template.jinja

Download the model

hf auth login --token <YOUR_HF_TOKEN>

hf download prithivMLmods/Polaris-VGA-0.8B-Post1.0

Quick Start with Transformers

pip install transformers==5.3.0
# or
pip install git+https://github.com/huggingface/transformers.git

from transformers import Qwen3_5ForConditionalGeneration, AutoProcessor
import torch

model = Qwen3_5ForConditionalGeneration.from_pretrained(
    "prithivMLmods/Polaris-VGA-0.8B-Post1.0",
    torch_dtype="auto",
    device_map="auto"
)

processor = AutoProcessor.from_pretrained(
    "prithivMLmods/Polaris-VGA-0.8B-Post1.0"
)

messages = [
    {
        "role": "user",
        "content": [
            # Placeholder path; replace with your own image file or URL
            {"type": "image", "image": "example.jpg"},
            {"type": "text", "text": "Describe this image in extreme detail."}
        ],
    }
]

text = processor.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

# Load the image referenced above and pass it alongside the prompt text
from PIL import Image

image = Image.open("example.jpg")

inputs = processor(
    text=[text],
    images=[image],
    padding=True,
    return_tensors="pt"
).to(model.device)

generated_ids = model.generate(**inputs, max_new_tokens=512)

generated_ids_trimmed = [
    out_ids[len(in_ids):] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]

output_text = processor.batch_decode(
    generated_ids_trimmed,
    skip_special_tokens=True,
    clean_up_tokenization_spaces=False
)

print(output_text)
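Grounding-style responses often embed pixel coordinates in the generated text. The exact output format of this model is not documented here, so the parser below assumes a hypothetical Qwen-VL-style convention in which boxes appear as `(x1,y1),(x2,y2)` pairs; adapt the regular expression to whatever the model actually emits.

```python
import re

# Hypothetical format: "a cat <box>(120,45),(310,260)</box> on a mat"
BOX_RE = re.compile(r"\((\d+),(\d+)\),\((\d+),(\d+)\)")

def extract_boxes(text: str) -> list[tuple[int, int, int, int]]:
    """Pull (x1, y1, x2, y2) pixel boxes out of a generated response."""
    return [tuple(map(int, m)) for m in BOX_RE.findall(text)]

print(extract_boxes("a cat <box>(120,45),(310,260)</box> on a mat"))
# [(120, 45, 310, 260)]
```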

Intended Use

  • Visual Grounding Research: Studying how language models connect text with visual elements across diverse scenarios.
  • Scene Understanding Applications: Explaining and analyzing images or visual data for downstream tasks.
  • Video Analysis Prototyping: Supporting object tracking and point tracking experiments in video pipelines.
  • Lightweight Multimodal Systems: Deploying visual reasoning capabilities on resource-constrained environments.
  • Research & Experimentation: Rapid prototyping with compact multimodal transformer architectures.
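For the video-analysis use case, a common pattern is to subsample frames uniformly before handing them to the model, since a compact multimodal model can only attend to a limited number of images per prompt. A minimal sketch (the frame count and budget below are illustrative, not model limits):

```python
def sample_frame_indices(total_frames: int, max_frames: int) -> list[int]:
    """Pick up to max_frames indices spread evenly across a clip."""
    if total_frames <= max_frames:
        return list(range(total_frames))
    step = total_frames / max_frames
    return [int(i * step) for i in range(max_frames)]

# e.g. a 300-frame clip reduced to an 8-image prompt
print(sample_frame_indices(300, 8))
# [0, 37, 75, 112, 150, 187, 225, 262]
```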

Capabilities

  • Visual Scene Understanding: Interprets any scene for reasoning, detection, and descriptive tasks.
  • Cross-Modal Reasoning: Bridges visual inputs with textual instructions for grounded outputs.
  • Detection-Oriented Tasks: Identifies and contextualizes objects and regions within visual data.
  • Tracking-Oriented Tasks: Supports object continuity and point tracking across sequential frames.
  • General Visual Explanation: Explains “anything” visible in an input with structured and coherent responses.
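When evaluating detection- or grounding-oriented outputs against ground truth, intersection-over-union (IoU) is the standard score, with predictions typically counted correct at IoU ≥ 0.5. A self-contained reference implementation for axis-aligned boxes:

```python
def iou(a: tuple, b: tuple) -> float:
    """IoU of two (x1, y1, x2, y2) boxes; 0.0 when they don't overlap."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    if inter == 0:
        return 0.0
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # 25 / 175 ≈ 0.1429
```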

Limitations

Important Note: This model emphasizes broad visual understanding and grounding over scale.

  • Compact Model Constraints: As a 0.8B parameter model, depth of reasoning and long-chain inference may be limited compared to larger systems.
  • Visual Ambiguity Sensitivity: Performance may vary depending on input clarity and scene complexity.
  • User Responsibility: Outputs should be interpreted responsibly, especially in sensitive or high-stakes applications.
  • Experimental Multimodal Behavior: Some edge cases in grounding and tracking may require further refinement depending on deployment context.
