Update index.html
index.html (+38 -0)

@@ -698,6 +698,44 @@
 <p>Fine-tune the strength-quality trade-off based on your needs: subtle for sharing, aggressive for legal protection.</p>
 </div>
 </div>
+
+<h3 style="margin-top: 50px; margin-bottom: 20px; font-size: 1.8em;">How We Compare to Other Methods</h3>
+<table class="comparison-table">
+<thead>
+<tr>
+<th>Method</th>
+<th>Primary Goal</th>
+<th>Mechanism</th>
+<th>Key Differentiator</th>
+</tr>
+</thead>
+<tbody>
+<tr>
+<td class="highlight"><strong>Poisonous Shield for Images</strong></td>
+<td class="highlight">Disrupt ML training convergence; make training economically unviable.</td>
+<td class="highlight">Targets the image's frequency spectrum to disrupt feature extraction and gradient learning, and pairs this with cryptographic provenance.</td>
+<td class="highlight"><strong>Targets the training process itself, not just the output. Verifiable via cryptographic metadata.</strong></td>
+</tr>
+<tr>
+<td><strong>Glaze</strong></td>
+<td>Prevent style mimicry by AI models.</td>
+<td>Adds perturbations that mislead models about an artist's specific style, making it difficult to replicate.</td>
+<td>Defensive tool focused on protecting artistic style.</td>
+</tr>
+<tr>
+<td><strong>Nightshade</strong></td>
+<td>"Poison" the model's understanding of concepts.</td>
+<td>A data poisoning attack that manipulates training data to teach the model incorrect associations (e.g., that images labeled "dog" depict cats).</td>
+<td>Offensive tool that corrupts specific concepts within the model.</td>
+</tr>
+<tr>
+<td><strong>Metadata/Watermarking</strong></td>
+<td>Add copyright info or visible marks.</td>
+<td>Embeds text in file metadata (EXIF) or places a visible/invisible overlay on the image.</td>
+<td>Easily stripped by social media platforms and basic image editing. Offers no protection against training.</td>
+</tr>
+</tbody>
+</table>
 </div>
 </div>
 
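The "Mechanism" cell for Poisonous Shield for Images describes perturbing the image's frequency spectrum to disrupt feature extraction. As a rough illustration of what a frequency-targeted perturbation can look like, here is a minimal sketch assuming an FFT-based approach; the function name, band limits, and strength parameter are illustrative assumptions, not the project's actual implementation.

# Illustrative sketch only: one way to perturb a chosen frequency band,
# assuming "targets frequencies" means an FFT-domain modification.
import numpy as np

def perturb_mid_frequencies(image, strength=0.05, band=(0.1, 0.4)):
    """Add noise to a mid-frequency band of a grayscale image (H x W floats in [0, 1])."""
    # Move to the frequency domain, with the zero frequency centered.
    spectrum = np.fft.fftshift(np.fft.fft2(image))

    # Radial distance of each coefficient from the spectrum's center,
    # normalized so 0 is the DC term and ~1 is the highest frequency.
    h, w = image.shape
    yy, xx = np.mgrid[0:h, 0:w]
    radius = np.hypot(yy - h / 2, xx - w / 2) / (min(h, w) / 2)
    mask = (radius >= band[0]) & (radius <= band[1])

    # Complex noise scaled relative to the spectrum's average magnitude,
    # added only inside the selected band.
    rng = np.random.default_rng(0)
    noise = rng.standard_normal(spectrum.shape) + 1j * rng.standard_normal(spectrum.shape)
    spectrum[mask] += strength * np.abs(spectrum).mean() * noise[mask]

    # Back to pixels; any imaginary residue is numerical noise.
    out = np.fft.ifft2(np.fft.ifftshift(spectrum)).real
    return np.clip(out, 0.0, 1.0)

# Usage: protected = perturb_mid_frequencies(np.random.rand(256, 256))

A mid-frequency band is a plausible target for this kind of scheme because the lowest frequencies carry the coarse structure a viewer notices, while the band above them can still shift the statistics a feature extractor sees.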
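The last row's claim that EXIF metadata is easily stripped is simple to demonstrate: Pillow, like most re-encoding pipelines, does not carry EXIF over on a plain save unless the caller passes it explicitly. The file names below are placeholders.

# Demonstrates the "easily stripped" claim: a plain re-encode drops EXIF.
from PIL import Image

with Image.open("protected.jpg") as im:          # placeholder file name
    print(bool(im.getexif()))                    # True if the original carries EXIF tags
    im.save("reencoded.jpg")                     # pixel data only; EXIF is not passed through

with Image.open("reencoded.jpg") as im:
    print(bool(im.getexif()))                    # False: the copyright metadata is gone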