---
license: cc0-1.0
---

Affiliation Triplet Curriculum Dataset

This dataset is designed for training an embedding model using triplet loss. It contains triplets of affiliation strings (anchor, positive, negative) structured to teach a model to recognize when two strings refer to the same institution.

The dataset is sorted from easiest to hardest to facilitate curriculum learning, allowing the model to learn from simple examples before progressing to more challenging ones.
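As a rough illustration, the triplets can be fed directly into a triplet-loss objective. The sketch below uses the sentence-transformers library; the dataset repo id and base model are placeholders, and note that the default trainer shuffles the training data, so on its own it does not preserve the curriculum ordering described below.

```python
from datasets import load_dataset
from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer
from sentence_transformers.losses import TripletLoss

# Repo id is a placeholder; substitute the actual path of this dataset.
triplets = load_dataset("your-namespace/affiliation-triplets", split="train")

# TripletLoss consumes (anchor, positive, negative) text columns; drop the metadata columns.
train_dataset = triplets.select_columns(["anchor", "positive", "negative"])

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")
loss = TripletLoss(model)

trainer = SentenceTransformerTrainer(model=model, train_dataset=train_dataset, loss=loss)
trainer.train()
```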


Dataset Details

Data Fields

Each row in the dataset is a complete triplet with the following fields:

  • triplet_id: (integer) A unique identifier for each triplet in the sequence.
  • anchor: (string) The primary affiliation string.
  • positive: (string) An affiliation string known to be a match for the anchor.
  • negative: (string) An affiliation string known not to be a match for the anchor.
  • difficulty: (float) A calculated score representing the triplet's difficulty, computed as positive_dist_ratio - negative_dist_ratio. Lower (and especially negative) scores indicate harder triplets (see the worked example after this list).
  • positive_dist_ratio: (float) The fuzzy string similarity score (0-100) between the normalized anchor and positive.
  • negative_dist_ratio: (float) The fuzzy string similarity score (0-100) between the normalized anchor and negative.
  • negative_type: (string) The type of negative example, either 'hard' or 'easy'.
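A minimal worked example of the schema (all values below are hypothetical, not drawn from the dataset):

```python
# Hypothetical row; the strings and scores are illustrative only.
row = {
    "triplet_id": 1,
    "anchor": "Department of Physics, University of Example",
    "positive": "Univ. of Example, Dept. of Physics",
    "negative": "Example State University",
    "positive_dist_ratio": 74.0,  # fuzzy similarity between anchor and positive
    "negative_dist_ratio": 58.0,  # fuzzy similarity between anchor and negative
    "negative_type": "hard",
}

# difficulty is the positive similarity minus the negative similarity;
# lower (or negative) values mean a harder triplet.
row["difficulty"] = row["positive_dist_ratio"] - row["negative_dist_ratio"]  # 16.0
```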

Dataset Structure and Curriculum Learning

The dataset is delivered as a single, unsplit file.

It is not randomly shuffled. Instead, it is intentionally sorted by the difficulty field in descending order (from highest score to lowest). This creates a curriculum:

  1. Easiest Triplets (Top of the file): These have a high positive similarity and a low negative similarity.
  2. Hardest Triplets (Bottom of the file): These have a low positive similarity and a high negative similarity, often resulting in a negative difficulty score. These examples are crucial for teaching the model to handle nuanced and tricky cases.
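Most training loops shuffle their data by default, so exploiting this ordering requires an explicit step, such as disabling shuffling or training in stages. A small sketch using the Hugging Face datasets library (the repo id is a placeholder, and the stage boundaries are arbitrary):

```python
from datasets import load_dataset

# Repo id is a placeholder; substitute the actual path of this dataset.
ds = load_dataset("your-namespace/affiliation-triplets", split="train")

# The file is already sorted easiest-to-hardest, i.e. difficulty is non-increasing.
diffs = ds["difficulty"]
assert all(a >= b for a, b in zip(diffs, diffs[1:]))

# A simple staged curriculum: keep the order and carve the file into phases,
# training on the easiest phase first and the hardest last.
n = len(ds)
easy = ds.select(range(0, n // 3))
medium = ds.select(range(n // 3, 2 * n // 3))
hard = ds.select(range(2 * n // 3, n))
```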

Triplet Generation

Positive Examples

The positive affiliation is a known variant of the anchor; both strings share the same ROR ID. To ensure a meaningful learning signal, trivial pairs (e.g., "Google" vs. "google") with near-perfect similarity (>=99%) are filtered out. This forces the model to learn from meaningful variations such as abbreviations, acronyms, or different subunit names.
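The card does not name a specific fuzzy-matching library; the sketch below uses rapidfuzz's fuzz.ratio and a hypothetical normalization function purely to illustrate the >=99% filter:

```python
from rapidfuzz import fuzz

def normalize(s: str) -> str:
    # Hypothetical normalization; the actual pipeline's normalization may differ.
    return " ".join(s.lower().split())

def is_trivial_pair(anchor: str, positive: str, threshold: float = 99.0) -> bool:
    # Pairs that are near-identical after normalization carry little learning signal.
    return fuzz.ratio(normalize(anchor), normalize(positive)) >= threshold

print(is_trivial_pair("Google", "google"))  # True  -> filtered out
print(is_trivial_pair("MIT", "Massachusetts Institute of Technology"))  # False -> kept
```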

Negative Examples

The dataset includes two types of negative examples to make training more robust. The final dataset is generated with a target ratio of 80% hard negatives to 20% easy negatives (a sampling sketch follows the list).

  1. Hard Negatives: These are intentionally tricky pairs. They are generated by pairing an anchor with an affiliation that is semantically similar but incorrect (e.g., "University of Michigan" vs. "Western Michigan University"). This forces the model to learn subtle but important distinctions.

  2. Easy Negatives: These pairs are generated by selecting an affiliation from a completely different and unrelated institution. They are generally easier for the model to distinguish and help it learn the broad features that separate unrelated institutions.
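The following sketch is not the actual generation script, only an illustration of how the 80/20 hard/easy split could be sampled (all names and data structures are hypothetical):

```python
import random

def sample_negative(anchor_ror_id, affiliations_by_ror, hard_candidates, hard_ratio=0.8):
    """Pick a negative affiliation string for an anchor.

    affiliations_by_ror: dict mapping ROR ID -> list of affiliation strings.
    hard_candidates: strings from *other* institutions that look similar to the anchor.
    All names here are hypothetical; the actual generation pipeline may differ.
    """
    if hard_candidates and random.random() < hard_ratio:
        # ~80% of the time: a hard negative from a similar-looking but wrong institution.
        return random.choice(hard_candidates), "hard"
    # Otherwise: an easy negative drawn from a completely unrelated institution.
    other_ids = [r for r in affiliations_by_ror if r != anchor_ror_id]
    other = random.choice(other_ids)
    return random.choice(affiliations_by_ror[other]), "easy"
```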