Update index.html
index.html (+21 -16)
@@ -656,7 +656,7 @@
 <strong>What if you could poison your images to disrupt AI training while keeping them visually appealing to humans?</strong>
 </p>
 <p style="font-size: 1.1em; line-height: 1.6; margin: 20px 0 0 0; text-align: center; color: var(--text-secondary);">
-Poisonous Shield for Images achieves <strong>3.4x training convergence degradation</strong> (343% slower convergence) — among the highest reported for low-distortion image protection — while maintaining excellent visual quality (SSIM 0.98+ at strength 1.5).
+Poisonous Shield for Images achieves <strong>3.4x training convergence degradation</strong> (convergence takes about 3.4x as long) — among the highest reported for low-distortion image protection — while maintaining excellent visual quality (SSIM 0.98+ at strength 1.5). Frequency-domain protection embedded in image structure survives common transforms and resists casual removal attempts.
 </p>
 </div>
 </div>
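The SSIM figure in the hunk above is independently checkable if you hold both versions of an image. A minimal sketch with scikit-image, assuming 8-bit RGB arrays; the function name is illustrative, not part of the product:

```python
import numpy as np
from skimage.metrics import structural_similarity

def visual_quality(clean: np.ndarray, armored: np.ndarray) -> float:
    """SSIM between the clean and armored image, the metric behind the
    'SSIM 0.98+ at strength 1.5' claim. Assumes 8-bit RGB arrays."""
    return structural_similarity(clean, armored, data_range=255, channel_axis=-1)
```

An SSIM of 1.0 means the images are identical, so 0.98+ leaves the perturbation very little perceptual footprint.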
@@ -669,8 +669,8 @@
 <div class="section-content">
 <div style="display: grid; grid-template-columns: repeat(auto-fit, minmax(280px, 1fr)); gap: 20px; margin: 20px 0;">
 <div style="background: rgba(44, 140, 132, 0.1); padding: 20px; border-radius: 8px; border-left: 3px solid var(--accent-color);">
-<h4>🎯 Frequency-Domain Targeting
-<p>Protection embedded in the mathematical structure of images, targeting the frequencies that ML models rely on for training.
+<h4>🎯 Frequency-Domain Targeting</h4>
+<p>Protection embedded in the mathematical structure of images, targeting the frequencies (0.10-0.40 normalized radius) that ML models rely on for training. Disrupts neural network convergence while maintaining visual quality.</p>
 </div>
 
 <div style="background: rgba(44, 140, 132, 0.1); padding: 20px; border-radius: 8px; border-left: 3px solid var(--accent-color);">
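The 0.10-0.40 normalized-radius band in the completed card above is concrete enough to sketch. A minimal numpy illustration, assuming "normalized radius" means distance from the DC component with each axis scaled by its length (the product's actual convention is not stated in this diff):

```python
import numpy as np

def midband_mask(h: int, w: int, r_lo: float = 0.10, r_hi: float = 0.40) -> np.ndarray:
    """Boolean annulus over a centered 2-D spectrum selecting the mid band.
    The normalization (each axis scaled by its length, so 0.5 falls at the
    Nyquist edge) is an assumption made for illustration."""
    yy, xx = np.mgrid[0:h, 0:w]
    r = np.hypot((yy - h // 2) / h, (xx - w // 2) / w)
    return (r >= r_lo) & (r <= r_hi)

def midband_energy_fraction(gray: np.ndarray) -> float:
    """Share of spectral power inside the mid band, the kind of measure
    behind the '81-91% mid-band concentration' figures quoted later."""
    power = np.abs(np.fft.fftshift(np.fft.fft2(gray))) ** 2
    mask = midband_mask(*gray.shape)
    return float(power[mask].sum() / power.sum())
```

Concentrating a perturbation in this band aims at the edge- and texture-scale structure that convolutional feature extractors respond to, while avoiding the highest frequencies, which JPEG quantization discards first.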
@@ -679,8 +679,8 @@
 </div>
 
 <div style="background: rgba(44, 140, 132, 0.1); padding: 20px; border-radius: 8px; border-left: 3px solid var(--accent-color);">
-<h4>🛡️
-<p>Not a surface watermark—protection is woven into the
+<h4>🛡️ Transform-Resistant</h4>
+<p>Not a surface watermark—protection is woven into the frequency-domain structure. Survives JPEG compression (Q75-95), resizing (0.75x-1.25x), blur, and format conversion with 57-71% armor retention across transforms.</p>
 </div>
 
 <div style="background: rgba(44, 140, 132, 0.1); padding: 20px; border-radius: 8px; border-left: 3px solid var(--accent-color);">
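The retention numbers in this card can likewise be spot-checked when both versions of an image are available. A rough sketch of one plausible metric, normalized correlation between the injected perturbation and what remains of it after a JPEG round trip (the page does not define how its 57-71% is computed):

```python
import io
import numpy as np
from PIL import Image

def jpeg_roundtrip(arr: np.ndarray, quality: int) -> np.ndarray:
    """Re-encode an 8-bit RGB array through JPEG at the given quality."""
    buf = io.BytesIO()
    Image.fromarray(arr).save(buf, format="JPEG", quality=quality)
    return np.asarray(Image.open(io.BytesIO(buf.getvalue())).convert("RGB"))

def retention(clean: np.ndarray, armored: np.ndarray, quality: int) -> float:
    """Fraction of the protective perturbation surviving recompression,
    measured as normalized correlation between the original perturbation
    and the post-transform residual. Illustrative metric only."""
    pert = armored.astype(np.float64) - clean.astype(np.float64)
    residual = (jpeg_roundtrip(armored, quality).astype(np.float64)
                - jpeg_roundtrip(clean, quality).astype(np.float64))
    denom = np.linalg.norm(pert) * np.linalg.norm(residual)
    return float((pert * residual).sum() / denom) if denom else 0.0
```

Sweeping quality over the quoted 75-95 range, and substituting resize or blur for the JPEG round trip, yields a transform-robustness profile comparable to the 57-71% figure.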
@@ -710,12 +710,6 @@
 
 <p><strong>Poisonous Shield for Images is different:</strong> Mathematical protection embedded in the frequency-domain structure of images—subtle to humans, toxic to AI models.</p>
 
-<div class="warning-box" style="margin-top: 25px;">
-<h4>🔒 Defense Against Advanced Removal Attacks</h4>
-<p>Recent research (LightShed, USENIX 2025) demonstrated autoencoder-based attacks that can learn and remove deterministic protection patterns from images. Poisonous Shield for Images incorporates proprietary stochastic defense mechanisms that introduce non-deterministic perturbation components—making the protection pattern impossible for machine learning models to fully learn or predict.</p>
-<p style="margin-top: 12px;"><strong>Result:</strong> Even if attackers collect thousands of protected images and train sophisticated autoencoders, they cannot extract the complete protection pattern. The non-learnable components remain intact, ensuring your images stay poisoned against unauthorized AI training.</p>
-</div>
-
 <div class="info-box" style="margin-top: 20px;">
 <h4>🧪 Live Demonstration: Test It Yourself</h4>
 <p><strong>Download this protected image and test it against real watermark removal tools:</strong></p>
@@ -905,6 +899,17 @@
 <p style="margin-top: 10px; font-size: 1.1em;">0 out of 4 test images had protection removed</p>
 <p style="margin-top: 15px; font-style: italic; color: var(--text-secondary);">Traditional watermark removers detect surface patterns. Poisonous Shield for Images embeds protection in the frequency-domain structure—it appears as natural image content.</p>
 </div>
+
+<div class="warning-box" style="margin-top: 40px;">
+<h4>⚠️ Honest Assessment: The Goal is Economic Disruption, Not Perfect Unbreakability</h4>
+<p>Recent research (LightShed, USENIX 2025) demonstrated autoencoder-based attacks that can learn to remove protection patterns when trained on large, paired clean/armored image datasets. <strong>When an attacker has access to both the original and protected versions of many images, Poisonous Shield for Images can be removed.</strong></p>
+<p style="margin-top: 12px;"><strong>The Economic Hurdle Strategy:</strong> The primary goal of Poisonous Shield for Images is to make unauthorized AI training prohibitively expensive and time-consuming. We achieve this in two ways:</p>
+<ul style="margin: 15px 0 15px 20px; line-height: 1.7;">
+<li><strong>Cost of Removal:</strong> To train a removal model, attackers must acquire thousands of paired (clean, protected) images. This forces them either to license or purchase original content from creators, or to use our service to generate armored versions—both creating significant financial and logistical barriers.</li>
+<li><strong>Cost of Training:</strong> If attackers choose to train on poisoned images instead, the <strong>2-3.4x training degradation</strong> means they must spend significantly more on compute (time and money) to achieve the same results. This directly impacts their bottom line.</li>
+</ul>
+<p style="margin-top: 12px;"><strong>Primary Value:</strong> The core strength of Poisonous Shield for Images lies in creating a powerful economic disincentive against unauthorized data scraping, forcing model creators to either pay for clean data or pay more for training on poisoned data. It is not designed to be an unbreakable shield against a determined adversary with unlimited resources and paired training data.</p>
+</div>
 </div>
 </div>
 
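The "Cost of Training" bullet in the new warning box above is, at bottom, simple arithmetic, and making it explicit shows how the disincentive scales. A toy model assuming compute cost grows linearly with convergence time; the dollar figure is invented for illustration:

```python
def extra_training_cost(base_cost_usd: float, slowdown: float) -> float:
    """Additional compute spend needed to reach baseline model quality when
    training on poisoned data, under a linear cost-vs-time assumption."""
    return base_cost_usd * (slowdown - 1.0)

# A hypothetical $500k training run at the quoted 2x and 3.4x degradation:
print(extra_training_cost(500_000, 2.0))  # 500000.0  -> an extra $0.5M
print(extra_training_cost(500_000, 3.4))  # ~1200000  -> an extra $1.2M
```

The attacker's break-even question is then whether that extra compute costs more than licensing clean data, which is exactly the trade-off this section describes.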
@@ -923,10 +928,10 @@
 <h3 style="color: var(--accent-color); margin-bottom: 15px;">⚡ Layer 1: AI Poisoning</h3>
 <ul style="line-height: 1.6; margin: 0; padding-left: 20px;">
 <li>2-3.4x ML training slowdown</li>
-<li>Frequency-domain protection (81-91% mid-band)</li>
-<li>
-<li>
-<li>
+<li>Frequency-domain protection (81-91% mid-band concentration)</li>
+<li>Transform-resistant (57-71% survival through JPEG/resize/blur)</li>
+<li>Content-aware perceptual masking</li>
+<li>Cryptographically-keyed deterministic generation</li>
 <li>Maintains visual quality (SSIM 0.74-0.99)</li>
 </ul>
 </div>
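Two bullets added here, content-aware perceptual masking and cryptographically-keyed deterministic generation, name techniques without showing them. The keyed generation at least has a standard shape; a minimal sketch assuming an HMAC-derived seed (the product's actual construction is not disclosed in this diff):

```python
import hashlib
import hmac
import numpy as np

def keyed_perturbation(key: bytes, image_id: str, shape: tuple) -> np.ndarray:
    """Deterministic, key-dependent perturbation field: the same key and
    image id always reproduce the same pattern (useful for later
    verification), while without the key the pattern is unpredictable.
    Hypothetical construction, not the product's disclosed scheme."""
    digest = hmac.new(key, image_id.encode("utf-8"), hashlib.sha256).digest()
    rng = np.random.default_rng(int.from_bytes(digest[:8], "big"))
    return rng.standard_normal(shape)
```

A perceptual mask would then attenuate this field pixel by pixel, pushing its energy into textured regions where human vision is least sensitive.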
@@ -1022,7 +1027,7 @@
 </div>
 
 <div style="background: rgba(44, 140, 132, 0.1); padding: 20px; border-radius: 12px; margin-top: 25px; border: 2px solid rgba(44, 140, 132, 0.3);">
-<p style="font-size: 1.1em; line-height: 1.7; margin: 0; text-align: center;"><strong>Current Stage:</strong> Proven technology with validated results (2-3.4x ML training slowdown, 81-91% mid-band concentration, 100% watermark removal resistance, 57-71% robustness across transforms
+<p style="font-size: 1.1em; line-height: 1.7; margin: 0; text-align: center;"><strong>Current Stage:</strong> Proven technology with validated results (2-3.4x ML training slowdown, 81-91% mid-band concentration, 100% resistance to casual watermark removal, 57-71% robustness across transforms). <strong>Limitation:</strong> vulnerable to sophisticated autoencoder removal when attackers possess paired training data. Seeking Series A funding and enterprise partnerships to scale from prototype to a production-grade platform and explore advanced defenses.</p>
 </div>
 </div>
 </div>