2 GPUs
How can I run diffusers/FLUX.2-dev-bnb-4bit on two 16 GB GPUs (RTX 4060 Ti), or on Kaggle?
No idea — I don't use Kaggle. This model is big, and running it on two GPUs is for very advanced users, which you don't seem to be if you're asking. You should probably rent a bigger instance, or use group offloading if Kaggle has the RAM needed. In the future we will provide some kind of guide for using multiple GPUs, but I repeat: that is for advanced users.
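For reference, group offloading (mentioned above) keeps the weights in system RAM and streams blocks onto one GPU as they are needed. A minimal sketch, assuming a recent diffusers release where models expose `enable_group_offload` — check the diffusers offloading docs for your installed version, as the entry point and parameters may differ:

```python
import torch
from diffusers import Flux2Pipeline

# Load the 4-bit checkpoint without moving it to the GPU;
# the weights stay in system RAM (this is why Kaggle's RAM matters).
pipe = Flux2Pipeline.from_pretrained(
    "diffusers/FLUX.2-dev-bnb-4bit",
    torch_dtype=torch.float16,
)

# Group offloading: groups of layers are copied to the GPU only while
# they execute, so peak VRAM stays far below the full model size.
pipe.transformer.enable_group_offload(
    onload_device=torch.device("cuda"),
    offload_device=torch.device("cpu"),
    offload_type="leaf_level",
    use_stream=True,  # overlap weight transfers with compute via CUDA streams
)

image = pipe(
    "A futuristic city at night with flying cars",
    num_inference_steps=28,
).images[0]
image.save("offloaded.png")
```

Note that with offloading a second GPU is not required at all — this trades speed for VRAM on a single device.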
diffusers/FLUX.2-dev-bnb-4bit on Kaggle: 2×16 GB T4s, 30 GB RAM.
In general, how can the model diffusers/FLUX.2-dev-bnb-4bit work on two GPUs? With group offloading?
On Google Colab Pro with 2 T4 GPUs:

import torch
from diffusers import Flux2Pipeline, ContextParallelConfig
from diffusers.hooks import apply_context_parallel

# Check the number of devices
print(f"Number of GPUs: {torch.cuda.device_count()}")

# Requires at least two devices
if torch.cuda.device_count() >= 2:
    config = ContextParallelConfig(
        ring_degree=2,
        convert_to_fp32=True,
    )
    pipe = Flux2Pipeline.from_pretrained(
        "diffusers/FLUX.2-dev-bnb-4bit",
        torch_dtype=torch.float16,  # T4 does not support bf16
    )
    # Apply parallelism
    apply_context_parallel(pipe.transformer, config)
    # Run
    prompt = "A futuristic city at night with flying cars"
    image = pipe(prompt, num_inference_steps=28).images[0]
    image.save("colab_ring_parallel.png")
else:
    print("⚠️ This code requires at least two devices")
RuntimeError: Failed to import diffusers.pipelines.flux2.pipeline_flux2 because of the following error (look up to see its traceback):
Could not import module 'AutoProcessor'. Are this object's requirements defined correctly?
On Kaggle with 2×T4 — what code should I use?
https://huggingface.co/docs/diffusers/main/en/api/parallel
https://huggingface.co/diffusers/FLUX.2-dev-bnb-4bit/discussions/3#69d4c4443f29c59da80cb052
With parallelism — how?
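Per the parallel docs linked above, context parallelism needs one process per GPU, launched with torchrun — running the snippet in a single notebook cell (as above) will not distribute anything, and is likely why it fails. A hedged sketch, assuming a diffusers version that exposes `ContextParallelConfig` and `enable_parallelism` (verify the exact API against the linked docs for your installed version):

```python
# Save as cp_flux2.py and launch with one process per GPU:
#   torchrun --nproc_per_node=2 cp_flux2.py
import torch
from diffusers import Flux2Pipeline, ContextParallelConfig

try:
    # One process per GPU; NCCL handles the cross-GPU communication
    torch.distributed.init_process_group("nccl")
    rank = torch.distributed.get_rank()
    device = torch.device("cuda", rank % torch.cuda.device_count())
    torch.cuda.set_device(device)

    pipe = Flux2Pipeline.from_pretrained(
        "diffusers/FLUX.2-dev-bnb-4bit",
        torch_dtype=torch.float16,  # T4 does not support bf16
    )
    pipe.to(device)

    # Ring attention: the sequence is split across the 2 ranks,
    # so each GPU holds only part of the attention computation.
    pipe.transformer.enable_parallelism(
        config=ContextParallelConfig(ring_degree=2)
    )

    image = pipe(
        "A futuristic city at night with flying cars",
        num_inference_steps=28,
    ).images[0]
    if rank == 0:  # only one process writes the result
        image.save("ring_parallel.png")
finally:
    if torch.distributed.is_initialized():
        torch.distributed.destroy_process_group()
```

The `AutoProcessor` import error reported above is a separate issue: it usually means the installed transformers version is too old for this pipeline, so upgrade transformers before retrying.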