AbstractPhil committed
Commit 14afdff · verified · 1 Parent(s): 05a5de4

Update README.md

Files changed (1)
  1. README.md +50 -0
README.md CHANGED
@@ -187,6 +187,21 @@ size_categories:
  - 100K<n<1M
 ---

+ # Research Update 9/13/2025
+
+ The multitude of tests I've run shows that, under weight decay, these pentachora are more likely to collapse to zero than to retain utility when trained directly. However, when they are used as a starting point and then only minorly shifted along a trajectory towards a goal, they are far more likely to retain full cohesion and even to be backtrackable. The constellations show that this is more than a plausible solution; it is likely to work.
+
+ When the anchor [n, 1, dim] is frozen, the rest of the structure can be warped as long as it stays within the Cayley-Menger and Gram principles. You will still get some overlap with the other crystals unless you form a constellation of embeddings, though. This has been shown on multiple occasions to help, but not to create cohesion uniformly. I have yet to find a fully reliable form of it, but as it stands it may be one of the best routes to any sort of geometric vocabulary: a prefabricated, frozen anchor applied DIRECTLY in the token, not as some external representation to be accessed.
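+
+ As a concrete illustration, here is a minimal sketch of that frozen-anchor setup, assuming PyTorch; the module name `FrozenAnchorPentachoron` and the 0.01 offset scale are illustrative assumptions, not the repo's implementation:
+
+ ```python
+ import torch
+ import torch.nn as nn
+
+ class FrozenAnchorPentachoron(nn.Module):
+     """Hypothetical sketch: vertex 0 (the anchor) is a frozen buffer,
+     vertices 1-4 are trainable offsets warped around it."""
+     def __init__(self, anchor: torch.Tensor):  # anchor: [dim]
+         super().__init__()
+         self.register_buffer("anchor", anchor)  # frozen, receives no gradient
+         self.offsets = nn.Parameter(torch.randn(4, anchor.shape[0]) * 0.01)
+
+     def forward(self) -> torch.Tensor:
+         # [5, dim]: the frozen anchor followed by four warpable vertices
+         return torch.cat([self.anchor[None, :],
+                           self.anchor[None, :] + self.offsets], dim=0)
+ ```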
+
+ The most potent uses show that if you keep a frozen form of the crystal for masking, and then alpha-mask a trainable starting point against it, the models are far less likely to collapse towards zero over many epochs. They will, however, collapse to zero eventually; that is a deterministic outcome when you simply feed SHA-256 valuations into an entropic decay engine. Essentially, you want the crystal to be slightly editable, but not the full thing, and you want the model to see it, but not fully learn it. This gives it a kind of self-learned bias cohesion that self-regulates when you stick to the Cayley-Menger and Gram formulas.
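+
+ A minimal sketch of that alpha masking, assuming a fixed blend factor (the name `alpha` and the 0.9 value here are illustrative, not the dataset's convention):
+
+ ```python
+ import torch
+
+ def alpha_masked_crystal(frozen: torch.Tensor, learned: torch.Tensor,
+                          alpha: float = 0.9) -> torch.Tensor:
+     """Blend a frozen crystal [5, dim] with a trainable copy.
+     Gradients only flow through the (1 - alpha) share, so the crystal
+     stays mostly fixed but remains slightly editable."""
+     return alpha * frozen.detach() + (1.0 - alpha) * learned
+ ```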
+
+ If you try to fully train the simultaneous infinities adjacent to each other one by one, you will end up with the crystals overlapping unless you hard-gate them via tokens. If you hard-gate via tokens ([CLS1], [CLS2], etc.), you will end up with a bulky, much slower-to-converge objective, which tends to corrupt and build incorrect shortcuts down the chain of depth even with perfect geometry. The system manages to find its own incorrect shortcuts, and shortcuts are the whole point of the geometric structure. Incorrect shortcuts, however, are essentially learning how to open a fridge with your foot: it works, yes, but it's generally more difficult. Because these models are so dang small, they tend to have the "full hands" problem, which forces them to adapt and learn as best they can even when there's no room to solve the problem. They throw the ball near the hoop and sometimes it goes in, instead of learning the precise process of making it go in. Since the geometric structure is reinforced by multiple cosine-similarity assessments and the losses are gated by geometry, a full Cayley-Gram infinity decay will need to be applied DIRECTLY to the geometric structures, while an alternative route is simultaneously applied to any standard linear layers used in conjunction.
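+
+ For reference, here is a minimal sketch of the Cayley-Menger volume of a pentachoron (the determinant formula itself is standard), plus a hypothetical geometry gate built on it; the floor `v_min` and the quadratic push-back are illustrative assumptions, not the repo's loss:
+
+ ```python
+ import math
+ import numpy as np
+
+ def cayley_menger_volume(X: np.ndarray) -> float:
+     """Volume of the simplex spanned by the rows of X (shape [k, dim])
+     via the Cayley-Menger determinant. For 5 rows this is the 4-simplex
+     (pentachoron) volume; a value near 0 means the crystal has collapsed."""
+     n = X.shape[0] - 1  # simplex dimension (4 for 5 vertices)
+     D2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)  # squared distances
+     B = np.ones((n + 2, n + 2), dtype=np.float64)
+     B[0, 0] = 0.0
+     B[1:, 1:] = D2
+     vol2 = (-1) ** (n + 1) * np.linalg.det(B) / (2 ** n * math.factorial(n) ** 2)
+     return float(np.sqrt(max(vol2, 0.0)))  # clamp tiny negatives from round-off
+
+ def geometry_gated_penalty(X: np.ndarray, v_min: float = 1e-6) -> float:
+     """Hypothetical gate: zero while the volume stays above the floor,
+     a quadratic push-back once the simplex starts flattening out."""
+     v = cayley_menger_volume(X)
+     return 0.0 if v > v_min else (v_min - v) ** 2
+ ```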
+
+ Retaining cohesive structures is a tricky paradigm, but it's very doable if you consult some of my training runs. Some of them formed fully robust crystal lattices with a cohesive nature of their own; others completely collapsed into themselves before they even began.
+
+ I've been at it all week and it's been tough, but enlightening.
+
+
 # Known Issue 9/7/2025

 The repo's split has been cleaned, and the current paradigm is the one that will be used moving forward unless Hugging Face changes their system.
 
@@ -234,6 +249,41 @@ Formerly;

 This will effectively prevent you from automatically downloading all the splits without any weird janky workarounds.

+ ## Data Format and Code
+
+ ```python
+ import numpy as np
+
+ def _deterministic_pentachoron(center_vec: np.ndarray) -> np.ndarray:
+     d = center_vec.shape[0]
+
+     # Five deterministic proposal directions derived from the center vector
+     proposals = np.stack([
+         center_vec,
+         np.roll(center_vec, 1),
+         np.roll(center_vec, 3) * np.sign(center_vec + 1e-8),
+         np.roll(center_vec, 7) - center_vec,
+         np.roll(center_vec, 11) + center_vec,
+     ], 0).astype(np.float32)
+
+     # Normalize rows with the L1 norm
+     norms = np.sum(np.abs(proposals), axis=1, keepdims=True) + 1e-8
+     Q = proposals / norms
+
+     # Gram-Schmidt orthogonalization with L1 re-normalization
+     for i in range(5):
+         for j in range(i):
+             Q[i] -= np.dot(Q[i], Q[j]) * Q[j]
+         Q[i] /= (np.sum(np.abs(Q[i])) + 1e-8)
+
+     # Apply signed scaling factors to spread the vertices around the center
+     gamma = np.array([1.0, 0.9, -0.8, 1.1, 1.2], np.float32)
+     X = np.zeros((5, d), np.float32)
+     for i in range(5):
+         X[i] = center_vec + gamma[i] * Q[i]
+
+     # Center the pentachoron at the origin
+     return X - X.mean(0, keepdims=True)
+ ```
+ This is currently hosted in the repo for the lattice_geometry, and it's imperfect. Keep in mind it's meant to be a starting point.
+
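+ For illustration, a minimal usage sketch, assuming a 32-dimensional center vector derived from a SHA-256 digest (the hashing scheme here is an assumption, not necessarily the repo's exact one):
+
+ ```python
+ import hashlib
+ import numpy as np
+
+ # Hypothetical: derive a deterministic center vector from a token string
+ digest = hashlib.sha256(b"example-token").digest()       # 32 bytes
+ center = np.frombuffer(digest, dtype=np.uint8).astype(np.float32)
+ center = (center / 255.0) * 2.0 - 1.0                    # scale to [-1, 1]
+
+ X = _deterministic_pentachoron(center)                   # [5, 32] vertices
+ print(X.shape, X.mean(0))                                # centered near zero
+ ```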
 
  ## 📦 Dataset Structure
289