moving-plantain

A LoRA adapter on FLUX.2 Klein (4B) for single-step future-frame prediction. Tests whether the latent physics priors of an image generator can be surfaced through the instruction-tuning recipe of Image Generators are Generalist Vision Learners (Gabeur et al., 2026; arXiv:2604.20329).

Thesis

Vision Banana (Gabeur et al., 2026) argues that image-generation pretraining produces a generalist vision learner. moving-plantain extends that argument to dynamics. A model that can render a physically coherent t=1 frame conditioned on a t=0 frame and a free-form intervention prompt — "the ball rolls left", "the cup tips over", "the cloth falls" — implicitly carries a forward physics simulator in its weights. Recovering that simulator under parameter-efficient adaptation is the empirical test of whether generative vision pretraining encodes object permanence, gravity, contact dynamics, and other physical structure beyond static appearance.

Method

Input: a single RGB frame at t=0 and an intervention prompt describing the change. Output: the predicted RGB frame at t=1. Training pairs are drawn from natural video datasets, with intervention prompts derived from optical-flow-based motion descriptions between consecutive frames. The loss is the standard diffusion objective applied to the t=1 target.
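The objective above can be sketched in a few lines of PyTorch. This is a minimal, self-contained illustration of a rectified-flow-style diffusion loss on the t=1 target latent, conditioned on the t=0 latent and the prompt embedding. `DenoiserStub`, `future_frame_loss`, and all shapes below are illustrative stand-ins, not the real FLUX.2 Klein API, and whether the adapter uses exactly this flow-matching parameterization is an assumption.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DenoiserStub(nn.Module):
    """Stand-in for the LoRA-adapted denoiser (hypothetical, not the real model)."""
    def __init__(self, latent_dim=16, cond_dim=32):
        super().__init__()
        # Input: noisy t=1 latent + t=0 latent + prompt embedding + timestep scalar.
        self.net = nn.Sequential(
            nn.Linear(latent_dim * 2 + cond_dim + 1, 64),
            nn.SiLU(),
            nn.Linear(64, latent_dim),
        )

    def forward(self, noisy_t1, frame_t0, prompt_emb, t):
        # Concatenation stands in for the real cross-attention conditioning.
        return self.net(torch.cat([noisy_t1, frame_t0, prompt_emb, t], dim=-1))

def future_frame_loss(model, frame_t0, frame_t1, prompt_emb):
    """Rectified-flow loss on the t=1 target latent (illustrative schedule)."""
    b = frame_t1.shape[0]
    t = torch.rand(b, 1)                        # random diffusion time in [0, 1)
    noise = torch.randn_like(frame_t1)
    noisy_t1 = (1 - t) * frame_t1 + t * noise   # linear interpolation toward noise
    velocity_target = noise - frame_t1          # flow-matching velocity target
    pred = model(noisy_t1, frame_t0, prompt_emb, t)
    return F.mse_loss(pred, velocity_target)

model = DenoiserStub()
frame_t0 = torch.randn(4, 16)    # t=0 frame latents (batch of 4)
frame_t1 = torch.randn(4, 16)    # t=1 target latents
prompt_emb = torch.randn(4, 32)  # intervention prompt embeddings
loss = future_frame_loss(model, frame_t0, frame_t1, prompt_emb)
```

The velocity-prediction target is used here because FLUX-family models are trained in the flow-matching family; in the real setup the latents come from the VAE encoder and the conditioning goes through the transformer's attention layers rather than concatenation.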

Status

Placeholder. Weights and training data forthcoming.

License

Apache 2.0, matching the base FLUX.2 Klein 4B license.

References

Gabeur et al. "Image Generators are Generalist Vision Learners." arXiv:2604.20329, 2026.
