Pipeline: original F16 model → text-only model → GGUF exports in F16 and Q4 quantization
- techwithsergiu/Qwen3.5-text-0.8B-GGUF (Text Generation, 0.8B)
- techwithsergiu/Qwen3.5-text-2B-GGUF (Text Generation, 2B)
- techwithsergiu/Qwen3.5-text-4B-GGUF (Text Generation, 4B)
- techwithsergiu/Qwen3.5-text-9B-GGUF (Text Generation, 9B)
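The F16 → GGUF → Q4 pipeline above can be sketched with the llama.cpp toolchain. This is an assumption: the source does not name the tools actually used, and the model directory and file names below are placeholders.

```shell
# Hypothetical sketch, assuming the llama.cpp toolchain is checked out
# and built; paths and model names are placeholders.

# 1. Convert the original F16 Hugging Face checkpoint to a GGUF file in F16.
python convert_hf_to_gguf.py ./Qwen3.5-text-0.8B \
    --outtype f16 \
    --outfile Qwen3.5-text-0.8B-F16.gguf

# 2. Quantize the F16 GGUF down to a 4-bit variant (Q4_K_M shown here).
./llama-quantize Qwen3.5-text-0.8B-F16.gguf \
    Qwen3.5-text-0.8B-Q4_K_M.gguf Q4_K_M
```

Each repository in the list would then hold the resulting F16 and Q4 GGUF files for its size variant.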