Instructions for using q-future/one-align with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use q-future/one-align with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("zero-shot-image-classification", model="q-future/one-align", trust_remote_code=True)
pipe(
    "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/parrots.png",
    candidate_labels=["animals", "humans", "landscape"],
)
```

```python
# Load model directly
from transformers import AutoModel

model = AutoModel.from_pretrained("q-future/one-align", trust_remote_code=True, dtype="auto")
```

- Notebooks
- Google Colab
- Kaggle
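A zero-shot-image-classification pipeline returns a list of `{"label", "score"}` dictionaries, one per candidate label, ordered by score. A minimal sketch of picking the top label follows; the result values below are illustrative placeholders, not actual model output.

```python
# Illustrative pipeline result: one dict per candidate label.
# These scores are placeholders, not real output from q-future/one-align.
result = [
    {"label": "animals", "score": 0.92},
    {"label": "landscape", "score": 0.05},
    {"label": "humans", "score": 0.03},
]

# Select the highest-scoring candidate label.
top = max(result, key=lambda r: r["score"])
print(top["label"])
```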