[FEEDBACK] Local apps
Please share your feedback about the Local Apps integration in model pages.
On compatible models, you'll be offered the option to launch them in local apps:
In your settings, you can configure the list of apps and their order:
The list of available local apps is defined in https://github.com/huggingface/huggingface.js/blob/main/packages/tasks/src/local-apps.ts
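For anyone curious what such an entry roughly involves, here is a purely hypothetical sketch. The interface and field names (`name`, `docsUrl`, `buildCommand`) are my assumptions for illustration, not the actual schema — check local-apps.ts for the real definitions:

```typescript
// Hypothetical shape of a local-app entry; field names are assumptions,
// not the actual schema used in local-apps.ts.
interface LocalApp {
  name: string;                             // label shown on the model page
  docsUrl: string;                          // link to the app's documentation
  // builds the command or deeplink a user runs for a given model id
  buildCommand: (modelId: string) => string;
}

const exampleApp: LocalApp = {
  name: "Example App",
  docsUrl: "https://example.com/docs",
  buildCommand: (modelId) => `example-app run ${modelId}`,
};

console.log(exampleApp.buildCommand("TheOrg/some-model"));
// → "example-app run TheOrg/some-model"
```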
I think the tensor-core fp16 FLOPS should be used for GPUs supporting that. I note that V100 counts as way less than the theoretical 125 TFLOPS, listed e.g. here: https://images.nvidia.com/content/technologies/volta/pdf/tesla-volta-v100-datasheet-letter-fnl-web.pdf
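To make that 125 TFLOPS figure concrete, here is a back-of-envelope calculation using the commonly cited V100 specs (80 SMs, 8 tensor cores per SM, 64 FMA ops per tensor core per clock, ~1530 MHz boost clock — treat these as assumptions from public datasheets):

```typescript
// Back-of-envelope: V100 tensor-core FP16 peak throughput.
const smCount = 80;             // streaming multiprocessors on V100
const tensorCoresPerSm = 8;     // tensor cores per SM
const fmaPerCorePerClock = 64;  // 4x4x4 matrix FMA per tensor core per clock
const flopsPerFma = 2;          // each FMA = one multiply + one add
const boostClockHz = 1.53e9;    // ~1530 MHz boost clock

const peakTflops =
  (smCount * tensorCoresPerSm * fmaPerCorePerClock * flopsPerFma * boostClockHz) / 1e12;

console.log(peakTflops.toFixed(1)); // → "125.3"
```

That lines up with the datasheet's 125 TFLOPS, which is why counting only the non-tensor-core FP16 rate undercounts these GPUs.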
Hey! Have you guys heard of LangFlow? It is a neat solution for developing AI-powered apps as well!
The GPU list is missing the RTX A4000 (16GB)
Would be nice to get Ollama integration
I suggest adding Ollama as a local app to run LLMs
I use GPT4All and it is not listed here
Hey team, Merry Christmas!
I get a tiny bug on /settings/local-apps
- Hardware list doesn't persist after adding devices
- Page shows my hardware correctly but doesn't save to profile
Expected: auto-save on hardware addition
Actual: changes lost on page reload

Need VRAM estimation when browsing models!

EDIT: OK, I have the solution: delete everything and re-enter all the information...
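Regarding the VRAM-estimation request above: a rough rule of thumb is weights ≈ parameter count × bytes per parameter, plus some headroom for KV cache and activations. A minimal sketch, where the 20% overhead factor is my assumption and real usage varies with context length and runtime:

```typescript
// Rough, hypothetical VRAM estimate for running an LLM locally.
// weights = params × bytes/param; the +20% overhead for KV cache and
// activations is an assumption, not a measured figure.
function estimateVramGb(paramsBillions: number, bytesPerParam: number): number {
  const weightsGb = (paramsBillions * 1e9 * bytesPerParam) / 1e9;
  return weightsGb * 1.2; // +20% headroom
}

console.log(estimateVramGb(7, 2));    // 7B model in fp16  → ≈ 16.8 GB
console.log(estimateVramGb(70, 0.5)); // 70B model at 4-bit → ≈ 42 GB
```

Even a crude estimate like this next to each model would beat doing the math on your fingers.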
Same here. Trying to add my Christmas presents to see if I can run a larger model, but I am left to do the math on my fingers again. Happy new year!
Same workaround, had to remove everything and add it again.
> Hardware list doesn't persist after adding devices
The issue should be fixed now, for everybody
I think you should also consider people who can't afford the super-high-end equipment but have pretty decent systems. I turned my GMKtec M7 Pro into a beast of a system because I knew, even if not on paper or through official channels, that it could do a lot more than advertised. I took out the three M.2 SSDs and the WiFi card, added two OCuLink adapters, and connected a total of three GPUs via OCuLink plus a Coral dual TPU: 2× 3060 and 1× 3090. I also added 128 GB of DDR5 and a few external HDDs (3× 2 TB), and on this thing I run 100B models. But since I am really into the agentic stuff, I backed it down to 70B. So even though it wouldn't be anything compared to some of your systems, I think it holds up quite well. You should have the option to add custom hardware, which could really make a difference in some cases.
