AI & ML interests

The AI community building the future.

Recent Activity

alvarobartt updated a dataset about 2 hours ago: huggingface/DEH-image-scan-data
dlouapre updated a Space about 7 hours ago: huggingface/eleusis-benchmark
sayakpaul updated a dataset about 21 hours ago: huggingface/diffusers-metadata

Posts

evalstate posted an update 4 days ago
Hugging Face MCP Server v0.3.2
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

- Replaced model_search and dataset_search with a combined hub_repo_search tool.
- Made the description for hf_doc_search less distracting.
- model_search and dataset_search tool calls still function (planned for removal in the next release).
albertvillanova posted an update 9 days ago
Five years already working on democratizing AI 🤗
Grateful to be part of such an awesome team making it happen every day.
sergiopaniego posted an update 12 days ago
If you're looking for a good first issue to start your open-source journey, you could contribute to this TRL issue by documenting one impactful paper in the docs.

We have a broad list to cover!! 🧐

https://github.com/huggingface/trl/issues/4407
sergiopaniego posted an update 22 days ago
Meet the Post-Training Toolkit (PTT), which integrates with TRL via a single callback (sketched below), by Aditya Challapally (@microsoft):

🔍 Detects training issues early
🛠 Lets you intervene safely
📊 Keeps long training runs stable, auditable & efficient

Microsoft blog: https://devblogs.microsoft.com/engineering-at-microsoft/diagnosing-instability-in-production-scale-agent-rl/

Integration guide: https://huggingface.co/docs/trl/main/en/ptt_integration

Code: https://github.com/microsoft/post-training-toolkit
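For a feel of the single-callback wiring, here is a minimal sketch; PTTCallback and its import path are assumptions for illustration, so defer to the integration guide above for the real names:

from datasets import load_dataset
from trl import SFTConfig, SFTTrainer
# Hypothetical import: the actual class name and path are in the PTT integration guide.
from post_training_toolkit import PTTCallback

dataset = load_dataset("trl-lib/Capybara", split="train")

trainer = SFTTrainer(
    model="Qwen/Qwen2.5-0.5B",              # illustrative small model
    args=SFTConfig(output_dir="ptt-demo"),
    train_dataset=dataset,
    callbacks=[PTTCallback()],              # the single-callback hook into TRL
)
trainer.train()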
juanjucm posted an update 23 days ago
Last week, zai-org dropped zai-org/GLM-4.7-Flash. Now, we bring it to Microsoft Foundry!

- 🏆 30B-A3B MoE, the strongest model in the 30B class. It excels at coding tasks, agentic workflows, and reasoning.
- 🤏🏻 A lighter version of its 358B big brother, balancing performance and efficiency.

Not light enough for you? We are also adding unsloth/GLM-4.7-Flash-GGUF to the catalog, with GPU and CPU support powered by llama.cpp 🔥

Go join the hype and deploy them from the Hugging Face collection on Microsoft Foundry!
alvarobartt posted an update 23 days ago
💥 hf-mem v0.4.1 now also estimates KV cache memory requirements for any context length and batch size with the --experimental flag!

uvx hf-mem --model-id ... --experimental will automatically pull the required information from the Hugging Face Hub to include the KV cache estimation, when applicable.

💡 Alternatively, you can also set the --max-model-len, --batch-size and --kv-cache-dtype arguments (à la vLLM) manually if preferred.
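For intuition on what that estimate covers, here is a back-of-the-envelope Python sketch of KV cache sizing with illustrative, Llama-3.1-8B-like figures (the config values and dtype are assumptions, not hf-mem internals):

# KV cache bytes = 2 (K and V) x layers x kv_heads x head_dim
#                  x context length x batch size x bytes per element
num_layers    = 32
num_kv_heads  = 8        # grouped-query attention
head_dim      = 128
max_model_len = 8192     # context length
batch_size    = 1
dtype_bytes   = 2        # fp16/bf16

kv_bytes = (2 * num_layers * num_kv_heads * head_dim
            * max_model_len * batch_size * dtype_bytes)
print(f"{kv_bytes / 1024**3:.2f} GiB")   # 1.00 GiB for this config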
evalstate posted an update 23 days ago
Hugging Face MCP Server v0.3.1
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

- Streamable HTTP now used for Gradio connectivity
- SSE transport (as server) removed
- Proxy configuration added for launching sub-agent tools

IlyasMoutawwakil posted an update 25 days ago
Transformers v5 just landed! 🚀
It significantly unifies and reduces modeling code across architectures, while opening the door to a whole new class of performance optimizations.

My favorite new feature? 🤔
The new dynamic weight loader + converter. Here’s why 👇

Over the last few months, the core Transformers maintainers built an incredibly fast weight loader, capable of converting tensors on the fly while loading them in parallel threads. This means we’re no longer constrained by how parameters are laid out inside the safetensors weight files.

In practice, this unlocks two big things:
- Much more modular modeling code. You can now clearly see how architectures build on top of each other (DeepSeek v2 → v3, Qwen v2 → v3 → MoE, etc.). This makes shared bottlenecks obvious and lets us optimize the right building blocks once, for all model families.
- Performance optimizations beyond what torch.compile can do alone. torch.compile operates on the computation graph, but it can’t change parameter layouts. With the new loader, we can restructure weights at load time: fusing MoE expert projections, merging attention QKV projections, and enabling more compute-dense kernels that simply weren’t possible before (see the toy sketch below).
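To make the layout idea concrete, here is a toy sketch of merging Q/K/V projection weights at load time (shapes are illustrative; this is not the actual Transformers loader code):

import torch

hidden = 256
q_w = torch.randn(hidden, hidden)   # per-projection weights as stored on disk
k_w = torch.randn(hidden, hidden)
v_w = torch.randn(hidden, hidden)

# Load-time restructuring: one fused weight, so one matmul replaces three.
fused_qkv = torch.cat([q_w, k_w, v_w], dim=0)     # (3*hidden, hidden)

x = torch.randn(2, hidden)                        # a batch of activations
q, k, v = (x @ fused_qkv.T).split(hidden, dim=-1)

# Numerically equivalent to the three separate projections:
assert torch.allclose(q, x @ q_w.T, atol=1e-4)
assert torch.allclose(v, x @ v_w.T, atol=1e-4)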

Personally, I'm honored to have contributed in this direction, including the work on optimizing MoE implementations and making modeling code more torch-exportable, so these optimizations can be ported cleanly across runtimes.

Overall, Transformers v5 is a strong signal of where the community and industry are converging: Modularity and Performance, without sacrificing Flexibility.

Transformers v5 makes its signature from_pretrained an entrypoint where you can mix and match (see the sketch after this list):
- Parallelism
- Quantization
- Custom kernels
- Flash/Paged attention
- Continuous batching
- ...
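As a rough illustration of that entrypoint, a minimal sketch follows; the model id is arbitrary and the exact v5 argument names should be checked against the release notes below:

from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-7B-Instruct",               # illustrative model id
    dtype="bfloat16",                          # weight dtype
    device_map="auto",                         # automatic device placement
    attn_implementation="flash_attention_2",   # attention backend
)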

Kudos to everyone involved! I highly recommend checking out:
- Release notes: https://github.com/huggingface/transformers/releases/tag/v5.0.0
- Blog post: https://huggingface.co/blog/transformers-v5