Suggestion: Dense versions of Qwen 3.5 at multiple scales

#31
by zletpm - opened

Hi Qwen Team,

I hope you will consider developing dense versions of Qwen 3.5 at multiple scales—such as 32B, 24B, 16B, 8B, 4B, and 1B.

Having these intermediate sizes would better support the broad community of RTX and AMD GPU users, ensuring that people with all types of consumer hardware (and VRAM limits) can benefit from efficient, high-quality local AI models.

Thank you guys for all your hard work!

Rumor is a 9B dense and a 35B-A3B MoE are coming soon.

I hope we get a 32B dense version: the smartest model that fits on a 24 GB VRAM GPU.
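For context, a rough back-of-the-envelope sketch of why 32B is about the ceiling for a 24 GB card (weights only; the quantization levels here are my assumptions, not anything announced by the Qwen team):

```python
# Approximate weight memory: params * bits_per_weight / 8 bytes.
# Excludes KV cache, activations, and runtime overhead, which add several GB.
def weight_gb(params_b: float, bits: int) -> float:
    return params_b * 1e9 * bits / 8 / 1e9  # decimal gigabytes

for params in (9, 32):
    for bits in (16, 8, 4):
        print(f"{params}B @ {bits}-bit ~= {weight_gb(params, bits):.0f} GB")
```

At 4-bit, a 32B model's weights come to roughly 16 GB, leaving headroom for KV cache and context on a 24 GB GPU; at 8-bit it already spills over.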
