My experience with this model - bad results on code generation
hi
i've tried IQuest-Coder-V1-40B-Instruct.Q6_K.gguf and it is slow (~20.30 tok/sec on 2x RTX 3090, 24 GB each = 48 GB total); it used 32.65 GB of VRAM,
and it never produced working results on the first shot (or even the second or third):
-prompt 1: make an HTML Pong game with a score; the opponent is the computer.
-prompt 2: make an HTML-based Minesweeper game
-prompt 3: make an HTML-based Snake game
In clear contrast, the same (easy) tasks are solved on the first shot by:
-devstral-small-2-24b-instruct-2512
-qwen3-coder-30b-a3b-instruct
-qwen3-next-80b-a3b-instruct@iq4_xs
I've used LM Studio and Open WebUI, and also tried Claude Code (with CC Router) to generate a Playwright test, but it failed at the very first step.
So my conclusion: it is a slow and bad model.
It's clearly benchmaxxed.
Same here. I tried it with simple C++ tasks and it wasn't great; it didn't even obey requests for naming conventions, which 7B models were able to follow.