I'd like to deploy the model locally on my Mac, but there is currently no .gguf file available that llama.cpp can load directly. Would it be possible to provide the model in this format in the future?