In the groundbreaking 2017 paper "Attention Is All You Need", Vaswani et al. introduced sinusoidal position embeddings to help Transformers encode positional information without recurrence or ...
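To make the scheme concrete, here is a minimal NumPy sketch of the fixed encodings the paper defines, where PE(pos, 2i) = sin(pos / 10000^(2i/d_model)) and PE(pos, 2i+1) = cos of the same angle; the sequence length and model width below are arbitrary example values:

```python
import numpy as np

def sinusoidal_position_embeddings(seq_len: int, d_model: int) -> np.ndarray:
    """Sinusoidal position embeddings from "Attention Is All You Need".

    PE[pos, 2i]   = sin(pos / 10000^(2i / d_model))
    PE[pos, 2i+1] = cos(pos / 10000^(2i / d_model))
    """
    positions = np.arange(seq_len)[:, None]           # (seq_len, 1)
    dims = np.arange(0, d_model, 2)[None, :]          # (1, d_model/2)
    angles = positions / (10000 ** (dims / d_model))  # (seq_len, d_model/2)

    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angles)  # even indices get sine
    pe[:, 1::2] = np.cos(angles)  # odd indices get cosine
    return pe

# Each row is added to the token embedding at that position.
pe = sinusoidal_position_embeddings(seq_len=128, d_model=64)
print(pe.shape)  # (128, 64)
```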
Inside RoPE: Rotary Magic in Position Embeddings
Self-attention, the beating heart of Transformer architectures, treats its input as an unordered set. That mathematical elegance is also a curse: without extra signals, the model has no idea which ...
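A minimal sketch of the rotary idea, following the RoFormer formulation the article covers (the pairing of dimensions and the base of 10000 are the conventional defaults): each consecutive pair of query/key dimensions is rotated by a position-dependent angle, so dot products between queries and keys depend only on their relative offset.

```python
import numpy as np

def apply_rope(x: np.ndarray, base: float = 10000.0) -> np.ndarray:
    """Rotate pairs of feature dims by position-dependent angles (RoPE).

    x: (seq_len, d) query or key vectors, d even.
    The pair (x[2i], x[2i+1]) at position pos is rotated by
    theta_i = pos * base^(-2i / d).
    """
    seq_len, d = x.shape
    pos = np.arange(seq_len)[:, None]             # (seq_len, 1)
    inv_freq = base ** (-np.arange(0, d, 2) / d)  # (d/2,)
    theta = pos * inv_freq[None, :]               # (seq_len, d/2)

    x1, x2 = x[:, 0::2], x[:, 1::2]               # split into rotation pairs
    out = np.empty_like(x)
    out[:, 0::2] = x1 * np.cos(theta) - x2 * np.sin(theta)
    out[:, 1::2] = x1 * np.sin(theta) + x2 * np.cos(theta)
    return out

# Property check: the same q/k content at positions (2, 5) and (7, 10)
# (equal relative distance of 3) yields the same attention score.
rng = np.random.default_rng(0)
qv, kv = rng.normal(size=8), rng.normal(size=8)
Q = apply_rope(np.tile(qv, (16, 1)))
K = apply_rope(np.tile(kv, (16, 1)))
print(np.isclose(Q[2] @ K[5], Q[7] @ K[10]))  # True
```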
SmolLM3 Blueprint: SOTA 3B-Parameter LLM
In the evolving landscape of open-source language models, SmolLM3 emerges as a breakthrough: a 3-billion-parameter, decoder-only transformer that rivals larger 4-billion-parameter peers on many ...
Fine-Tuning AnomalyCLIP: Class-Agnostic Zero-Shot Anomaly Detection
Zero-shot anomaly detection (ZSAD) is a vital problem in computer vision, particularly in real-world scenarios where labeled anomalies are scarce or unavailable. Traditional vision-language models ...
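For a sense of what "zero-shot" means here, the sketch below scores an image with the vanilla CLIP baseline that methods like AnomalyCLIP improve on: compare the image against hand-written "normal" vs. "damaged" text prompts. This is not AnomalyCLIP's actual method (which learns object-agnostic prompt embeddings); the prompts and file name are illustrative.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Vanilla CLIP zero-shot anomaly scoring: no anomaly labels are used,
# only natural-language descriptions of normal vs. anomalous states.
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

prompts = ["a photo of a flawless object", "a photo of a damaged object"]
image = Image.open("part.png")  # hypothetical input image

inputs = processor(text=prompts, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(**inputs).logits_per_image  # (1, 2) image-text similarities

anomaly_score = logits.softmax(dim=-1)[0, 1].item()  # prob. of "damaged"
print(f"anomaly score: {anomaly_score:.3f}")
```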
Nanonets-OCR-s: Enabling Rich, Structured Markdown for Document Understanding
Traditional Optical Character Recognition (OCR) systems are primarily designed to extract plain text from scanned documents or images. While useful, such systems often ignore semantic structure, ...
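A sketch of the structured-markdown idea via the Hugging Face transformers chat-template flow for image-text-to-text models; the Hub ID follows the public model card, but the prompt wording, file name, and generation settings are illustrative assumptions, not the card's exact recipe.

```python
import torch
from PIL import Image
from transformers import AutoProcessor, AutoModelForImageTextToText

model_id = "nanonets/Nanonets-OCR-s"
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForImageTextToText.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

image = Image.open("scanned_invoice.png")  # hypothetical input page
messages = [{
    "role": "user",
    "content": [
        {"type": "image"},
        {"type": "text", "text": "Extract this document as structured markdown."},
    ],
}]
prompt = processor.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
inputs = processor(text=[prompt], images=[image], return_tensors="pt").to(model.device)

with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=2048)
# Decode only the newly generated tokens: markdown with headings and tables
# rather than a flat stream of plain text.
markdown = processor.batch_decode(
    out[:, inputs["input_ids"].shape[1]:], skip_special_tokens=True
)[0]
print(markdown)
```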
Fine-Tuning Grounding DINO: Open-Vocabulary Object Detection
Object detection has traditionally been a closed-set problem: a model is trained on a fixed list of classes and cannot recognize anything outside it. Grounding DINO breaks this mold as an open-set, ...
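To illustrate open-set detection in practice, here is a short sketch using the Grounding DINO checkpoints shipped with Hugging Face transformers; the image path, prompt phrases, and thresholds are illustrative (the documented convention is lowercase phrases separated by periods).

```python
import torch
from PIL import Image
from transformers import AutoProcessor, AutoModelForZeroShotObjectDetection

model_id = "IDEA-Research/grounding-dino-tiny"
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForZeroShotObjectDetection.from_pretrained(model_id)

image = Image.open("street.jpg")            # hypothetical input image
text = "a traffic cone. a delivery robot."  # classes never seen at training time

inputs = processor(images=image, text=text, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Map raw logits back to boxes in pixel coordinates, keeping only
# detections above the box/text confidence thresholds.
results = processor.post_process_grounded_object_detection(
    outputs,
    inputs.input_ids,
    box_threshold=0.4,
    text_threshold=0.3,
    target_sizes=[image.size[::-1]],
)[0]
for box, score, label in zip(results["boxes"], results["scores"], results["labels"]):
    print(label, f"{score:.2f}", box.tolist())
```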