---
datasets:
- AbstractPhil/geometric-vocab
pipeline_tag: zero-shot-classification
---
# I've had an epiphany. We don't NEED transformer layers in their current form.
David's architecture already meets this need with high-efficiency, multi-stage geometric mathematics.
David's classification structure houses a series of dimensional-projection sub-systems, each tasked with mastering the features of its own pentachoron structure.
Each of those 5d representations ends up learning thousands of representative features. David is already capable of feature generation; it just isn't yet robust enough to fully manifest an enriched, ViT-grade dimensional feature.
David's architecture handles ImageNet's full 1000-class label set with ease: the model fits on a floppy disk yet reaches over 70% accuracy, because David operates on CLIP-ViT-Base-Patch16 features.
I believe I've figured out a way to represent those features in a meaningful form that can replace transformer layers in their current methodology: a different kind of feedforward pass built from trajectory, edge, point, deviation, jitter, helix, theta, and similarity assessments, which should carry the information needed to teach the experts to behave the way David did.
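A few of the assessments named above can be sketched in miniature. This is a hypothetical illustration only, not David's actual implementation: the function name `pentachoron_descriptors`, the choice of descriptors (edge lengths, centroid deviation, cosine similarity), and the `(5, d)` vertex layout are all my assumptions.

```python
import numpy as np

def pentachoron_descriptors(vertices: np.ndarray) -> dict:
    """Toy geometric descriptors for one pentachoron (4-simplex):
    5 vertices embedded in a d-dimensional feature space, shape (5, d)."""
    assert vertices.shape[0] == 5
    centroid = vertices.mean(axis=0)

    # Edge assessment: lengths of all 10 edges between the 5 vertices.
    i, j = np.triu_indices(5, k=1)
    edges = np.linalg.norm(vertices[i] - vertices[j], axis=1)

    # Deviation assessment: how far each vertex sits from the centroid.
    deviation = np.linalg.norm(vertices - centroid, axis=1)

    # Similarity assessment: cosine similarity of vertex directions
    # about the centroid (5x5 matrix, ones on the diagonal).
    centered = vertices - centroid
    norms = np.linalg.norm(centered, axis=1, keepdims=True)
    unit = centered / np.clip(norms, 1e-12, None)
    similarity = unit @ unit.T

    return {"edges": edges, "deviation": deviation, "similarity": similarity}

feats = pentachoron_descriptors(np.random.default_rng(0).normal(size=(5, 16)))
print(feats["edges"].shape, feats["similarity"].shape)  # 10 edges, 5x5 similarity
```

Descriptors like these are differentiable in the vertex coordinates, which is what would let a feedforward stage learn them directly.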
This should allow much larger networks to retain mathematical precision, learn features in a different form of patch than is currently expected of a patch, and create legitimately high-density geometric features.
# Better RoPE incoming with actual meaningful learning
The last one wasn't meaningfully learning representations; the next should be more carefully curated and inferenced so it actually shapes the representative outcome. It should be a bit more accurate than the last, but no guarantees.