Clarification on BAAI/Aquila-VL-2B-llava-qwen training origin
#5
by dqdw - opened
Dear [Developer/Team],
Thanks for releasing BAAI/Aquila-VL-2B-llava-qwen. I've been experimenting with it and it works well.
Since I plan to build on this model, I would like to clarify its relationship to Qwen/Qwen2.5-1.5B-Instruct, based on the model tree shown on Hugging Face:
Direct Fine-tuning: Is BAAI/Aquila-VL-2B-llava-qwen a direct fine-tuned version of Qwen/Qwen2.5-1.5B-Instruct, or were there intermediate models/checkpoints involved?
Inheritance: Does it retain the same architecture as Qwen/Qwen2.5-1.5B-Instruct, and were its language-model weights initialized from that checkpoint?
This clarification would help me avoid compatibility issues.
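In the meantime, one way I've been sanity-checking the architecture question myself is by diffing the architecture-relevant config fields of the two models. A minimal sketch of that comparison is below; the dicts are illustrative placeholders, not the actual configs (in practice you would load each with `transformers.AutoConfig.from_pretrained(...).to_dict()`):

```python
def config_diff(base, derived, keys):
    """Return the architecture-relevant keys whose values differ
    between two config dicts, as {key: (base_value, derived_value)}."""
    return {
        k: (base.get(k), derived.get(k))
        for k in keys
        if base.get(k) != derived.get(k)
    }

# Illustrative values only -- replace with the real configs, e.g.:
#   AutoConfig.from_pretrained("Qwen/Qwen2.5-1.5B-Instruct").to_dict()
#   AutoConfig.from_pretrained("BAAI/Aquila-VL-2B-llava-qwen").to_dict()
base_cfg = {"hidden_size": 1536, "num_hidden_layers": 28, "num_attention_heads": 12}
vlm_cfg  = {"hidden_size": 1536, "num_hidden_layers": 28, "num_attention_heads": 12}

arch_keys = ["hidden_size", "num_hidden_layers", "num_attention_heads"]
print(config_diff(base_cfg, vlm_cfg, arch_keys))  # empty dict => same backbone shape
```

An empty diff on the backbone fields would suggest the language model inherits the Qwen2.5 architecture, but it can't confirm where the weights came from, hence my question above.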
Thank you for your time and support!