prithivMLmods committed
Commit 9e26a4f · verified · 1 Parent(s): 71204b0

Update README.md

Files changed (1)
README.md +1 -1
README.md CHANGED
@@ -7,7 +7,7 @@ sdk: static
  pinned: false
  ---
 
- ![asddf.png](https://cdn-uploads.huggingface.co/production/uploads/65bb837dbfb878f46c77de4c/yq8DFLdgUVVX7dQ-aMaR4.png)
+ ![OPO'.png](https://cdn-uploads.huggingface.co/production/uploads/65bb837dbfb878f46c77de4c/cCWXMAl30EqsKqFXPIfC9.png)
 
  Stranger Guard specializes in building strict content moderation models, with a core focus on advanced computer vision tasks. Our team develops precision-driven AI systems capable of detecting, classifying, and moderating visual content at scale. We are dedicated to safeguarding digital platforms through responsible AI, leveraging both deep learning and domain-specific datasets to fine-tune models for real-world moderation challenges. We craft robust, adaptable tools tailored to identify sensitive, explicit, or harmful content—supporting efforts in online safety, regulatory compliance, and ethical media distribution. Our models are engineered with realism, accuracy, and reliability at the forefront, ensuring trust in automation for content integrity. Stay updated on our work via [@prithivMLmods](https://huggingface.co/prithivMLmods), where we share fine-tuned models, dataset insights, and cutting-edge techniques in adapter-based learning and visual moderation.