Last week we released Atom V1 4B and Atom V1 8B, our first two preview releases in Project Atom from VANTA Research - a larger effort to refine and scale the Atom persona from 4B to 400B+.
Today we are excited to share Atom V1 12B! This preview release is built on Google's Gemma 3 12B architecture and fine-tuned for exploratory and collaborative interaction.
Atom is intentionally trained not simply to be an informational source, but to be a partner in thought. As such, the model regularly and consistently closes its responses with thoughtful questions. This is designed not only to keep the interaction engaging, but also to encourage deeper thought and exploration between the user and the model.
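If you want to try the preview locally, here is a minimal sketch using the Hugging Face transformers chat pipeline. The repo id below is a placeholder assumption for illustration; check the model card for the actual name.

```python
# Minimal sketch: chatting with Atom V1 12B via the transformers text-generation pipeline.
# The repo id is an assumed placeholder, not necessarily the published name.
from transformers import pipeline

chat = pipeline(
    "text-generation",
    model="VANTA-Research/atom-v1-12b-preview",  # hypothetical repo id
    device_map="auto",
    torch_dtype="auto",
)

messages = [
    {"role": "user", "content": "What trade-offs come with scaling a persona from 4B to 12B?"},
]
out = chat(messages, max_new_tokens=256)
print(out[0]["generated_text"][-1]["content"])  # last message is the model's reply
```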
As always, feedback is welcome as we continue to refine our approach to Project Atom, and human-AI collaboration in general.
Since Yann LeCun and Randall Balestriero released a new paper on JEPA (Joint-Embedding Predictive Architecture), laying out its theory and introducing an efficient practical version called LeJEPA, we figured you might need even more JEPA. Here are 7 recent JEPA variants plus 5 iconic ones:
6. TS-JEPA (Time Series JEPA) → Joint Embeddings Go Temporal (2509.25449): adapts JEPA to time series by learning latent self-supervised representations and predicting future latents for robustness to noise and confounders.
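For readers new to the family, here is a minimal, hedged PyTorch sketch of the shared recipe behind these variants: a context encoder plus predictor regress the latents of an EMA target encoder, with stop-gradient on the targets (in TS-JEPA's case the two "views" would be time-series windows). This is an illustrative simplification, not any specific paper's implementation.

```python
# Illustrative JEPA-style training step: predict target-encoder latents from a
# context view, with no gradient through the targets and an EMA target encoder.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

encoder = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 64))
target_encoder = copy.deepcopy(encoder)          # EMA copy, never backpropagated
predictor = nn.Sequential(nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 64))
opt = torch.optim.AdamW(list(encoder.parameters()) + list(predictor.parameters()), lr=1e-4)

def jepa_step(context_view, target_view, ema_decay=0.996):
    # Predict the target view's latent from the context view's latent.
    pred = predictor(encoder(context_view))
    with torch.no_grad():                        # stop-gradient on targets
        tgt = target_encoder(target_view)
    loss = F.mse_loss(pred, tgt)

    opt.zero_grad()
    loss.backward()
    opt.step()

    # Update the target encoder as an exponential moving average of the encoder.
    with torch.no_grad():
        for p, tp in zip(encoder.parameters(), target_encoder.parameters()):
            tp.mul_(ema_decay).add_(p, alpha=1 - ema_decay)
    return loss.item()

# Toy usage: two views of the same sample (e.g. earlier vs. later windows of a series).
x_context, x_target = torch.randn(32, 128), torch.randn(32, 128)
print(jepa_step(x_context, x_target))
```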
Instead of an architectural upgrade, each major model drop nowadays perfects a regional innovation. What Kimi brought to the spotlight this time is quantization-aware training (QAT). I wrote an article explaining it and why it matters for reasoning models.
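For context, QAT simulates low-precision arithmetic during training so the weights adapt to the quantization grid before deployment. Below is a minimal, hedged PyTorch sketch using fake int8 weight quantization with a straight-through estimator; it illustrates the general idea, not Kimi's actual training setup.

```python
# Minimal sketch of quantization-aware training (QAT): "fake" int8 weight
# quantization in the forward pass, straight-through estimator in the backward pass.
import torch
import torch.nn as nn
import torch.nn.functional as F

def fake_quantize(w, num_bits=8):
    # Symmetric per-tensor fake quantization: round weights to the int grid in
    # the forward pass, pass gradients straight through in the backward pass.
    qmax = 2 ** (num_bits - 1) - 1
    scale = w.abs().max().clamp(min=1e-8) / qmax
    w_q = torch.clamp(torch.round(w / scale), -qmax - 1, qmax) * scale
    return w + (w_q - w).detach()   # STE: forward uses w_q, gradient sees identity

class QATLinear(nn.Module):
    def __init__(self, in_features, out_features):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features) * 0.02)
        self.bias = nn.Parameter(torch.zeros(out_features))

    def forward(self, x):
        return F.linear(x, fake_quantize(self.weight), self.bias)

# Toy usage: the layer trains in full precision but "sees" quantization error,
# so the learned weights remain accurate after real int8 conversion.
layer = QATLinear(64, 64)
opt = torch.optim.AdamW(layer.parameters(), lr=1e-3)
x, y = torch.randn(16, 64), torch.randn(16, 64)
loss = F.mse_loss(layer(x), y)
loss.backward()
opt.step()
```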