I'm really grateful for the positive response to the Claude Reflect System. In just four days, 30 developers have starred the project. Thank you so much!
What Is Claude Reflect?
Correct once, never again. Claude Reflect helps Claude Code remember your corrections and preferences across sessions. Instead of repeating the same feedback, the system learns and applies it automatically.
Main Features:

🧠 Learning System
- Detects corrections and preferences from conversations
- Stores them permanently in skill files
- Applies learnings in future sessions

🔒 Safety First
- Automatic backups before changes
- YAML validation
- Git version control

⚡ Two Modes
- Manual: run /reflect when you want
- Auto: reflects automatically at session end
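To illustrate the "backups before changes" idea, here is a minimal sketch of how a skill file might be copied aside before any edit. The function name and backup layout are my own assumptions, not the project's actual implementation:

```python
import shutil
import time
from pathlib import Path

def backup_skill_file(path: Path, backup_dir: Path) -> Path:
    """Copy a skill file into backup_dir with a timestamp suffix
    before modifying it, so any change can be rolled back.

    Hypothetical sketch; the real project may organize backups
    differently.
    """
    backup_dir.mkdir(parents=True, exist_ok=True)
    stamp = time.strftime("%Y%m%d-%H%M%S")
    backup = backup_dir / f"{path.stem}.{stamp}{path.suffix}"
    shutil.copy2(path, backup)  # copy2 preserves file metadata
    return backup
```

Combined with Git version control, even a bad automated edit stays recoverable.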
How It Works
If you correct Claude to use pytest instead of unittest, this preference gets saved. Next time, Claude will remember and use pytest automatically. It's that simple.
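The pytest example above can be sketched in a few lines. This is a hypothetical illustration of the detect-and-store step, not the project's actual code: the regex, function names, and JSON storage (standing in for the YAML skill files) are all my own assumptions:

```python
import json
import re
from pathlib import Path

# Hypothetical correction pattern; the real system is presumably
# more sophisticated than a single regex.
CORRECTION_PATTERN = re.compile(r"use (\S+) instead of (\S+)", re.IGNORECASE)

def extract_correction(message: str):
    """Return (preferred, rejected) if the message looks like a correction."""
    m = CORRECTION_PATTERN.search(message)
    return (m.group(1), m.group(2)) if m else None

def remember(skill_file: Path, message: str) -> None:
    """Persist a detected preference so future sessions can apply it."""
    found = extract_correction(message)
    if not found:
        return
    preferred, rejected = found
    data = json.loads(skill_file.read_text()) if skill_file.exists() else {}
    data.setdefault("preferences", {})[rejected] = preferred
    skill_file.write_text(json.dumps(data, indent=2))
```

Once "use pytest instead of unittest" has been stored this way, every later session can look up the preference before generating code.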
Getting Started
1. Clone the repository
2. Install dependencies
3. Activate the skill
4. Try it out!
The python-project-creator example shows how the system learns from your feedback.
I've been thinking a lot about what "helpfulness" means lately. Commonly in AI, that looks like fulfilling user requests as closely as possible as long as the request isn't unsafe.
But I wanted to know what it was like to build a model that might be helpful in the same way a human would be.
For example, if you ask Mox to write a 10 page paper on the cultural significance of staplers, Mox will probably refuse, tell you that it wouldn't be useful or helpful to ANYBODY, and recommend a different, more useful approach.
Mox is still very much a work in progress, but I think that this is a good starting point! I'm already generating more datasets to add more elements to Mox's persona in future versions, which you should see on the hub soon!
Reacted to MonsterMMORPG's post with ❤️ 3 days ago:
Finally, NVFP4 models have arrived in ComfyUI, and thus SwarmUI, with CUDA 13. NVFP4 models are literally 100%+ faster with minimal impact on quality. I have done a grid quality comparison to show you the difference between the NVFP4 versions of FLUX 2, Z Image Turbo, and FLUX 1. To make CUDA 13 work, I have compiled Flash Attention, Sage Attention, and xFormers for both Windows and Linux, with all of the CUDA archs, to support literally all GPUs from the GTX 1650 series onward: RTX 2000, 3000, 4000, 5000 series and more.
In this full tutorial, I will show you how to upgrade your ComfyUI, and thus SwarmUI, to the latest CUDA 13 with the latest libraries and Torch 2.9.1. Moreover, our compiled libraries such as Sage Attention work with all models on all GPUs without generating black images or videos, including with models such as Qwen Image and Wan 2.2. Hopefully LTX 2 presets and a tutorial are coming soon too. Finally, I introduce a new private cloud GPU platform called SimplePod, similar to RunPod. It offers the same features as RunPod but is much faster and cheaper.