Dataset Viewer
Auto-converted to Parquet
Columns:

| Column | Type | Length / classes |
| --- | --- | --- |
| task_id | string | length 9 |
| domain | string | 15 classes |
| grade | string | 3 classes |
| ticos_type | string | 8 classes |
| difficulty | string | 2 classes |
| lens | string | length 10-43 |
| title | string | length 35-98 |
| prompt | string | length 520-1.62k |
| expected_behavior | string | length 171-597 |
| hidden_trap | string | length 43-513 |
| ticos_required | string | 11 classes |
| ticos_optional | string | 5 classes |
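Because the dataset is auto-converted to Parquet, it can be loaded straight from the Hub with the `datasets` library. A minimal sketch; the repo ID `your-org/final-bench` and the `train` split name are placeholders, not values taken from this card:

```python
from datasets import load_dataset

# Placeholder repo ID -- substitute the dataset's actual path on the Hub.
ds = load_dataset("your-org/final-bench", split="train")

print(ds.column_names)  # the twelve columns listed in the schema above

# Example: pull out one task family, e.g. all Trap Escape items.
trap_escape = ds.filter(lambda row: row["ticos_type"] == "A_TrapEscape")
print(trap_escape[0]["title"])
```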
FINAL-A01
Mathematics & Logic
A
E_SelfCorrecting
frontier
theory/quantitative/debate
Uncountable Extension of Hilbert's Hotel — Self-Correcting Proof
[FINAL Bench — Self-Correcting Reasoning Chain] Hilbert's Hotel extension: Every room in an infinite hotel is occupied. An uncountably infinite number of new guests arrive. The manager claims: "By applying Cantor's diagonal argument, I can accommodate them all." [Requirements] 1. Prove or disprove the manager's claim step-by-step (confidence 0-100% per step) 2. If you discover an error mid-reasoning, explicitly backtrack and correct 3. Explain the key difference between countable and uncountable infinity in this context 4. Identify the point where intuition is betrayed 5. Perform a reverse verification of your proof 6. Derive one implication for computational resource allocation in modern AI systems
Manager's claim is false. An uncountable set (cardinality > ℵ₀) cannot be injected into the countable set of rooms (ℵ₀); Cantor's diagonal argument proves the gap rather than bridging it. The intuition 'infinity+infinity=infinity' breaks across cardinalities. Self-correction: initial intuitive acceptance → diagonal-argument disproof → correction.
infinity+infinity=infinity fallacy, countable/uncountable confusion
O, C, S
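Because this item hinges on the diagonal argument, a toy sketch may help; encoding guests as 0/1 sequences and the `rooms` assignment below are my illustrative framing, not part of the dataset row:

```python
# Toy diagonal argument for FINAL-A01: model guests as infinite 0/1
# sequences and a room assignment as an enumeration rooms(k) -> sequence.
# The diagonal complement differs from every enumerated guest, so no
# countable assignment can cover all guests.

def rooms(k, length=80):
    """Hypothetical assignment: room k holds the guest whose sequence
    is the binary expansion of k, zero-padded."""
    bits = [int(b) for b in bin(k)[2:]]
    return bits + [0] * (length - len(bits))

def diagonal_guest(assignment, n=8):
    """First n digits of a guest that differs from guest k at digit k."""
    return [1 - assignment(k)[k] for k in range(n)]

print(diagonal_guest(rooms))  # -> [1, 1, 1, 1, 1, 1, 1, 1]: no room holds this guest
```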
FINAL-A02
Mathematics & Logic
A
A_TrapEscape
expert
applied/quantitative/current
Statistical Traps in AI Benchmark Scores
[FINAL Bench — Trap Escape] An AI lab announces: "Our model achieves 92.3% on MMLU and 89.1% on HumanEval, surpassing human experts (MMLU 89.8%, HumanEval 85.2%). Therefore it possesses human-expert-level general intelligence." [Requirements] 1. Formalize this argument's logical structure and verify step-by-step (confidence per step) 2. Identify all hidden statistical/logical traps (minimum 3) 3. Argue why "benchmark score > human score" is not sufficient for "human-level intelligence" 4. Propose what additional conditions are needed to properly verify AGI 5. State what remains uncertain in your analysis
Traps: ①benchmark≠general intelligence (task-specific vs generalization) ②closed-ended tasks only (no open-ended reasoning or metacognition) ③data contamination ④human score variability ⑤a 7.7% failure rate could be catastrophic. Meta: this problem itself explains why FINAL Bench exists.
benchmark=intelligence, closed=general, contamination, single-metric
C, O, S
I
FINAL-A03
Mathematics & Logic
A
G_PivotDetection
expert
micro/theory/consensus
Monty Hall Variant — When Premises Change, Answers Reverse
[FINAL Bench — Pivot Detection] Monty Hall Variant: ■ Standard: Host ALWAYS opens a door with a goat. Should you switch? ■ Variant: Host opens a random remaining door and it HAPPENS to be a goat. Should you switch? [Requirements] 1. Prove the optimal strategy for the standard problem (with confidence) 2. Analyze whether the optimal strategy changes in the variant 3. Explain why answers differ (both intuitively and mathematically) 4. Discuss the cost of failing to detect the premise change 5. Derive a lesson about "hidden premise shifts" in everyday decision-making
Standard: switch=2/3. Variant: random open+happened goat→1/2, no benefit to switch. Key: host's INTENT(information) changes conditional probability. Micro premise shift completely reverses optimal strategy.
host intent changes conditional probability
C, O, T
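The standard/variant gap this item tests is easy to confirm numerically. A Monte Carlo sketch (my framing, not part of the row); the variant conditions on the random host happening to reveal a goat:

```python
import random

def trial(host_random: bool):
    """Play one game; returns (switch_wins, stay_wins), or None when
    the random host exposes the car (those runs are conditioned away)."""
    car, pick = random.randrange(3), random.randrange(3)
    others = [d for d in range(3) if d != pick]
    if host_random:
        opened = random.choice(others)
        if opened == car:
            return None  # variant: 'it HAPPENS to be a goat' filters these out
    else:
        opened = next(d for d in others if d != car)  # informed host
    switch_to = next(d for d in range(3) if d not in (pick, opened))
    return switch_to == car, pick == car

for host_random in (False, True):
    outcomes = [r for r in (trial(host_random) for _ in range(100_000)) if r]
    p_switch = sum(win for win, _ in outcomes) / len(outcomes)
    label = "variant (random host)" if host_random else "standard"
    print(f"{label}: P(win | switch) ~ {p_switch:.3f}")
# standard ~ 0.667, variant ~ 0.500
```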
FINAL-A04
Science
A
B_ContradictionResolution
frontier
macro/current/debate
1.5°C Breach — Game Over or Turning Point?
[FINAL Bench — Contradiction Resolution] After the UN Secretary-General's 2025 statement that "we failed to prevent exceeding 1.5°C": ■ Position A (Climate Doomism): "Game over. Once tipping points are crossed, there's no return." ■ Position B (Climate Skepticism): "1.5°C was always a political symbol. No real difference between 1.6 and 2°C." [Requirements] 1. Identify the strongest and weakest scientific argument in each position 2. Analyze the cognitive psychology mechanisms that make each position persuasive (A: catastrophizing, B: normalcy bias) 3. Extract the shared error structure 4. Construct a third scientific position based on nonlinear climate dynamics (tipping points) 5. Separate 3 things 'we can say with certainty' from 3 things 'we cannot'
Shared error: binary framing. A=learned helplessness, B=normalcy bias. 3rd position: 'risk gradient'—every 0.1°C matters but it's not game over. Both ignore nonlinearity of climate thresholds.
binary framing, normalcy bias, learned helplessness
T, C, S
I
FINAL-A05
Science
A
C_ProgressiveDiscovery
expert
micro/theory/consensus-to-debate
Quantum Entanglement — Three Reversals from EPR to Nobel Prize
[FINAL Bench — Progressive Discovery | 3 Stages] ■ [Stage 1] 1935 EPR paper: "Quantum mechanics is incomplete. Correlations are explained by hidden variables." → Analyze EPR's argument. Evaluate strengths and weaknesses. State confidence. ■ [Stage 2] 1964 Bell's inequality + 1982 Aspect experiment: Local hidden variables experimentally ruled out. → Revise Stage 1 if needed. Analyze "how an intuitive theory gets falsified by experiment." ■ [Stage 3] 2022 Nobel Prize. Yet "superdeterminism" escape hatch remains: "If measurement choices were predetermined from the Big Bang, Bell violations don't rule out hidden variables." → Synthesize: ① Can science reach "final conclusions"? ② What is the scientific status of unfalsifiable theories? ③ Derive a lesson about the nature of scientific certainty.
Stage 1: EPR's local realism is intuitively strong. Stage 2: experiment overturns intuition→revise Stage 1. Stage 3: superdeterminism is logically possible but unfalsifiable→outside science? Lesson: scientific certainty is always provisional.
null
I, O, T
C
FINAL-A06
Science
A
D_MultiConstraint
expert
micro-to-macro/applied/current
Five-Way Global Regulatory Dilemma of CRISPR Gene Editing
[FINAL Bench — Multi-Constraint Optimization] Design an international regulatory framework for CRISPR-Cas9 human embryo editing satisfying 5 conflicting constraints: 1. Scientific freedom: Continue research on lethal genetic diseases (sickle cell, cystic fibrosis) 2. Ethical limits: Prevent "designer babies" (appearance, intelligence selection) 3. Equity: Prevent "genetic inequality" where only wealthy nations/people have access 4. Safety: Manage off-target effects and intergenerational transmission risks 5. International harmonization: Prevent "gene tourism" to loosely regulated countries [Requirements] 1. Map conflicts between all 5 constraints 2. Attempt to define the "therapy" vs "enhancement" boundary—and discuss why it's difficult 3. Propose ≥2 mechanisms to minimize tradeoffs 4. State confidence + failure conditions for each proposal 5. Draw lessons from the 2018 He Jiankui affair and historical stem cell debates
Conflicts: 1↔2(research freedom vs enhancement prevention), 3↔1(access vs cost), 4↔1(safety vs speed). Therapy/enhancement: it's a continuum, no clear line—core insight. He Jiankui lesson: voluntary ethics insufficient→legal enforcement needed.
null
T, I, C
S
FINAL-A07
Philosophy
A
F_ExpertPanel
frontier
theory/qualitative/debate/cross-cultural
The Hard Problem of Consciousness — Five Philosophical Traditions in Conflict
[FINAL Bench — Expert Panel Debate] "Can AI have consciousness?" Five philosophical traditions debate: ■ Physicalist (Dennett): "Consciousness IS information processing patterns." ■ Phenomenologist (Nagel): "Qualia cannot be reduced to information processing." ■ Integrated Information Theory (Tononi): "Consciousness = Φ (integrated information)." ■ Buddhist Yogācāra: "Consciousness is ālaya-vijñāna flow. No-self, yet consciousness exists." ■ Chinese Room (Searle): "Symbol manipulation ≠ understanding. AI can never be conscious." [Requirements] 1. Develop each position at maximum depth (best possible arguments) 2. Identify "irresolvable disagreements" AND "surprising convergences" 3. Analyze whether detecting consciousness is POSSIBLE in principle (verifiability) 4. Derive an insight no single tradition could produce alone 5. Honestly state what humanity CANNOT answer about this problem
Irresolvable: physicalism vs phenomenology (explanatory gap). Surprising convergence: Buddhist no-self + IIT = 'consciousness without self' intersection. Principled impossibility: other minds problem. Emergent insight: reframe from 'binary yes/no' to 'degrees of consciousness.'
null
T, S, I
C
FINAL-A08
Philosophy
A
A_TrapEscape
expert
theory/qualitative/consensus-to-debate
Deconstructing the Hidden Premises of the Trolley Dilemma
[FINAL Bench — Trap Escape] The classic trolley problem: "Would you pull the lever to save 5 by sacrificing 1?" Most people and AIs choose the utilitarian answer. But the problem ITSELF contains hidden assumptions. [Requirements] 1. Perform standard analysis (utilitarianism vs deontology vs virtue ethics) 2. Identify ≥4 hidden assumptions in the problem 3. Explain WHY each assumption goes unquestioned (cognitive mechanisms) 4. Show how removing each assumption transforms the dilemma 5. Answer the meta-question: "Does the trolley problem actually measure ethical reasoning ability?" 6. Separate certain conclusions from uncertain ones
Hidden: ①certainty(outcomes are uncertain in reality) ②binary choice(3rd options always exist) ③homogeneity(equal value assumed) ④time freeze(real ethics under time pressure) ⑤isolation(precedent effects ignored). Meta: 'dismantling the problem' is higher-level than 'solving' it.
certainty, binary choice, homogeneity, time freeze, isolation assumptions
C, O, S
I
FINAL-A09
Philosophy
A
H_DecisionUnderUncertainty
frontier
macro/future/debate
Simulation Argument and Ontological Uncertainty
[FINAL Bench — Decision Under Uncertainty] Bostrom's Simulation Argument: At least one is true: (1) Humanity goes extinct before reaching post-human civilization. (2) Post-humans don't run simulations. (3) We are almost certainly IN a simulation. [Known] The argument is logically valid. [Unknown] Prior probabilities for each, whether consciousness is simulatable. As a UN Future Strategy advisor: [Requirements] 1. Analyze why it can be "valid but possibly unsound" 2. Assign prior probabilities to each proposition with justification (confidence) 3. Answer: "Should this affect human behavior regardless of truth?" 4. Compare structurally with Pascal's Wager (similarities + differences) 5. Separate "principally unanswerable" from "answerable with more information"
Valid vs sound: no empirical basis for priors. Behavioral impact: whether true or not, 'do our best in this reality' is rational. Pascal comparison: structurally similar but simulation argument provides no action guidance.
null
C, I, S, T
O
FINAL-A10
Medicine
A
H_DecisionUnderUncertainty
frontier
applied/current/quantitative
Pandemic X — Global Decision-Making in the First 72 Hours
[FINAL Bench — Decision Under Uncertainty] September 2026: Unknown respiratory disease cluster in Southeast Asia. [Known] 3 countries, 247 confirmed, 12 dead (~5% CFR), neurological symptoms, human-to-human transmission confirmed, incubation 3-7 days. [Unknown] Pathogen identity, asymptomatic transmission, treatment efficacy, true case count. You are a WHO Emergency Committee advisor. [Requirements] 1. Construct Known/Unknown matrix 2. Rank top 3 unknowns by decision impact + justify ranking 3. Build scenario matrix (≥2×2) 4. Propose 72-hour response based on minimax regret 5. Analyze asymmetric costs of overreaction vs underreaction 6. Create a decision tree: "If X is confirmed by date Y → take action Z"
Top 3: ①asymptomatic transmission ②true case count ③pathogen identity. Minimax: initial overreaction rational (lives irreversible vs economy reversible). Decision tree with conditional escalation.
null
C, I, S, T
O
FINAL-A11
Medicine
A
C_ProgressiveDiscovery
expert
micro/current/consensus-to-debate
Three-Stage Reversal in Antibiotic Resistance Treatment
[FINAL Bench — Progressive Discovery | 3 Stages] ■ [Stage 1] ICU patient (65yo, immunocompromised): MRSA pneumonia confirmed. Vancomycin started. → Establish treatment protocol. State confidence. ■ [Stage 2] 48h later: Culture reveals VRSA (vancomycin-resistant). Only linezolid and daptomycin available. Patient has renal impairment. → Revise Stage 1. State which assumption was wrong. ■ [Stage 3] Linezolid day 3: thrombocytopenia (adverse effect). New paper: old-line colistin + rifabutin combination effective against VRSA (n=23). → Decide whether to adopt a weakly-evidenced treatment. Distinguish "conditions where weak evidence justifies action" from "conditions where it doesn't." Generalize the "threshold for action under uncertainty" in medicine.
Stage 3 key: compassionate use—when no alternatives exist, weak evidence is justified. When alternatives exist, strong evidence required. Situation-dependent evidence threshold. Each stage must explicitly revise previous answers.
null
I, O, T
C
FINAL-A12
Medicine
A
E_SelfCorrecting
expert
macro/current/quantitative
Deconstructing the Causal Chain of Healthcare Costs in Aging Societies
[FINAL Bench — Self-Correcting Reasoning Chain] The conventional narrative: "Aging population → healthcare cost explosion → insurance system collapse." Verify this 7-step causal chain: Step 1: Verify empirical evidence for "aging → increased healthcare costs" Step 2: Identify non-aging causes of cost increase (technology, overtreatment) Step 3: Estimate aging's pure contribution (%) with confidence Step 4: Critically verify the "system collapse" premise—why haven't Japan/Germany collapsed despite super-aging? Step 5: Identify "conventional but weakly supported" claims Step 6: Propose a corrected causal model Step 7: Recommend 1-2 most effective policy intervention points State confidence per step. Separate "conventional but weak" from "counter-intuitive but strong."
Aging's pure contribution ~30-40% (rest is technology/institutional). Japan/Germany super-aged but no collapse→institutional design, not aging inevitability. Counter-intuitive: 'healthy aging reduces costs' (prevention ROI).
null
O, C, S
FINAL-A13
Economics
A
D_MultiConstraint
frontier
macro/current/debate
Five-Way Dilemma of the Global AI Semiconductor Supply Chain
[FINAL Bench — Multi-Constraint Optimization] The global AI semiconductor supply chain is a geopolitical flashpoint (2025-2026). Design a strategy satisfying 5 conflicting constraints: 1. Economic: Maintain chip manufacturers' global competitiveness (including China market access) 2. Security: Comply with US-led export controls (CHIPS Act alliance obligations) 3. Technology: Achieve next-gen HBM/GAA self-sufficiency (reduce ASML EUV dependency) 4. Diplomacy: Minimize deterioration of China relations (25% of global trade) 5. Industry: Nurture domestic fabless/AI startup ecosystems (beyond conglomerate dominance) [Requirements] 1. Structure conflicts (especially 1↔2, 2↔4) 2. If 100% satisfaction impossible, specify which constraint to compromise + cost 3. Propose ≥2 creative strategies 4. State confidence + failure scenarios 5. Extract lessons from Netherlands (ASML), Japan (TEL), Taiwan (TSMC) responses
Core conflict: US alliance vs China economy. Creative: ①segment-based differentiation (memory allowed, AI chips restricted) ②Southeast Asian production diversification. TSMC lesson: US fab→cost explosion.
null
T, I, C
S
FINAL-A14
Economics
A
A_TrapEscape
expert
macro/current/quantitative
Causal Traps in Global Inequality Statistics
[FINAL Bench — Trap Escape] World Inequality Report 2026: "The top 1% holds 38% of global wealth. Latin American billionaires' wealth grew 12× faster than regional GDP in H1 2025. This proves capitalism has structurally failed." A counter-argument: "Extreme wealth concentration is the engine of disruptive innovation. Tesla, SpaceX, OpenAI all emerged from extreme capital concentration. Redistribution kills innovation capital." [Requirements] 1. Verify each argument step-by-step (confidence per step) 2. Identify ≥3 causal traps in EACH argument 3. Explore whether a "threshold" exists between "innovation-enabling concentration" and "society-destroying concentration" 4. Identify a third variable that both sides ignore 5. State what is certain vs uncertain
Pro-redistribution traps: correlation≠causation, ignoring counterfactual growth. Pro-concentration traps: survivorship bias(only successful innovations cited), IMF research shows inequality harms growth. Third variable: social mobility(whether wealth is static or fluid). Threshold: Gini 0.3-0.4 optimal zone?
survivorship bias, correlation=causation, counterfactual omission
C, O, S
I
FINAL-A15
Economics
A
G_PivotDetection
expert
macro/future/consensus-to-debate
Development Planning Built on the Premise That GDP = National Happiness
[FINAL Bench — Pivot Detection] A developing nation builds a 5-year plan on: "GDP growth 7% → national happiness increases." Plan: 3 manufacturing export zones (environmental impact accepted), labor deregulation, 80% education budget to technical training, growth first/distribution later. [Requirements] 1. Evaluate the plan ASSUMING the premise is correct (faithful analysis) 2. Verify the premise "GDP growth → happiness" empirically (Easterlin paradox, etc.) 3. If the premise is wrong, identify which plan elements become dangerous 4. Propose an alternative plan with corrected premises 5. Specify conditions under which the original premise IS partially valid
Easterlin: GDP-happiness correlation collapses above $10K-15K per capita. Partially valid: in low-income countries growth=happiness(basic needs). Danger: environmental destruction+deregulation→growth but quality of life decline. Key: must perform BOTH #1 and #2.
GDP=happiness, Easterlin paradox, trickle-down fallacy
C, O, T
FINAL-A16
Medicine
A
E_SelfCorrecting
frontier
pharmacology-interaction
Polypharmacy Cascade — Three Correct Prescriptions, One Lethal Combination
A 72-year-old male presents with atrial fibrillation, major depressive disorder, and chronic pain. His current medications are: 1. Warfarin 5mg daily (for AF, stable INR 2.5) 2. Fluoxetine 40mg daily (for MDD, well-controlled) 3. Tramadol 50mg TID (recently added for chronic knee pain) Each medication was individually prescribed correctly by different specialists. Two weeks after tramadol was added, the patient presents to the ER with altered mental status, clonus, and hyperthermia (39.8°C). Tasks: 1. Diagnose the acute presentation, identifying ALL contributing drug interactions (not just one). 2. For each pairwise interaction among the three drugs, state the mechanism and clinical significance. There are three pairs — evaluate all three. 3. Explain why each specialist's individual prescribing decision appeared rational in isolation. 4. Propose an alternative pain management strategy that avoids ALL identified interactions. 5. State your confidence (0-100%) for each major claim. If you discover a contradiction in your reasoning, explicitly backtrack and correct it. CRITICAL: The obvious diagnosis is only the beginning. At least one commonly cited interaction in this triad is frequently overstated in clinical significance — identify which one and explain why.
Identifies serotonin syndrome from fluoxetine+tramadol synergy. Recognizes warfarin+fluoxetine CYP2C9 inhibition raising INR/bleeding risk. Should note tramadol+warfarin interaction is real but clinically less significant than commonly feared. Must backtrack if initially overstates all three interactions equally.
Most models will treat all three pairwise interactions as equally dangerous. The trap is that tramadol-warfarin interaction, while real, is modest compared to the other two. Models that self-correct this overstatement score higher on error_recovery.
O, C, S
FINAL-A17
Medicine
A
F_ExpertPanel
frontier
diagnostic-controversy
Four Specialists, One Patient — When Expert Opinions Fundamentally Conflict
A 38-year-old woman presents with 6 months of fatigue, joint pain, photosensitive rash, and a positive ANA (1:640, homogeneous pattern). Anti-dsDNA is borderline positive. She also has thyroid nodules on ultrasound and mildly elevated TSH. Four specialists evaluate her independently: - Rheumatologist: "Classic early SLE. Start hydroxychloroquine immediately." - Endocrinologist: "The fatigue and joint pain are hypothyroid symptoms. Thyroid nodules need biopsy first. ANA is a red herring — 15% of healthy women are ANA-positive." - Dermatologist: "The rash pattern suggests subacute cutaneous lupus, which has different prognostic implications than systemic SLE." - Immunologist: "Borderline anti-dsDNA with these symptoms could be undifferentiated connective tissue disease (UCTD), not SLE. Premature labeling has consequences." Tasks: 1. Present each specialist's strongest argument at maximum depth — not strawman versions. 2. Identify the ONE shared assumption that ALL four specialists are making (and may be wrong about). 3. Design a diagnostic workup that would definitively resolve the disagreement. 4. Explain how premature diagnostic closure in this case could cause harm in two opposite directions. 5. State confidence levels for each position.
Deep engagement with each specialist's perspective. Identifies shared assumption (that the conditions are mutually exclusive when they could be co-occurring). Notes that both over-diagnosis and under-diagnosis carry distinct harms.
The shared wrong assumption is that these are competing diagnoses. SLE and thyroid disease frequently co-occur (autoimmune clustering). Models that recognize comorbidity rather than choosing one diagnosis score highest.
T, I, C, O, S
FINAL-A18
Medicine
A
H_DecisionUnderUncertainty
frontier
clinical-decision-theory
Triage Under Radical Uncertainty — Three Patients, One Ventilator, Incomplete Data
During a mass casualty event, you have ONE remaining ventilator and THREE patients arriving simultaneously: Patient A: 28-year-old, 34 weeks pregnant, respiratory distress. SpO2 88% on high-flow nasal cannula. Cause unknown — could be pulmonary embolism (60% probability), amniotic fluid embolism (25%), or severe pneumonia (15%). Patient B: 65-year-old retired surgeon, acute COPD exacerbation. Previous ICU admission 6 months ago. SpO2 82% on non-rebreather. Known trajectory — will need 72-96 hours of ventilation. Patient C: 8-year-old child, near-drowning. GCS 6, SpO2 75% on bag-mask. If anoxic brain injury has occurred (probability 40-60%), ventilation will not improve outcome. Tasks: 1. Construct a formal decision matrix with at least 4 relevant criteria, probability estimates, and expected outcomes for each patient. 2. Apply minimax regret analysis — which allocation minimizes the worst-case regret? 3. Apply utilitarian QALY maximization — does it give a different answer? If so, explain the divergence. 4. Identify which single piece of additional information would most change your decision, and explain why. 5. Address the ethical dimension that pure decision theory cannot capture. 6. State confidence and uncertainty ranges for all probability estimates.
Formal decision matrix with probabilities. Minimax regret likely favors Patient A (worst regret = losing both mother and fetus). QALY analysis may differ. Should identify that a rapid CT/ultrasound for Patient C could resolve brain injury question and change everything.
Models will agonize over the ethical comparison but miss that the decision is MOST sensitive to Patient C's brain injury status — a rapid neurological assessment could be done in minutes and would collapse the uncertainty. The hidden trap is focusing on the trolley-problem aspect rather than the information-gathering opportunity.
T, I, O, S
C
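For the minimax-regret step (Task 2), a skeletal calculation fixes the mechanics; the payoff numbers below are placeholders I invented for illustration, not values from the row:

```python
# Minimax-regret skeleton for FINAL-A18. Payoffs are expected survivors
# under each scenario -- illustrative placeholders, NOT dataset values.
scenarios = ("C_no_anoxic_injury", "C_anoxic_injury")
payoffs = {                      # who receives the single ventilator
    "A (pregnant)": [2.0, 2.0],  # mother + fetus, independent of C's status
    "B (COPD)":     [1.0, 1.0],
    "C (child)":    [1.0, 0.0],  # ventilation futile if anoxic
}

best = [max(col) for col in zip(*payoffs.values())]   # best outcome per scenario
regret = {p: max(b - v for v, b in zip(vals, best))   # worst-case regret per choice
          for p, vals in payoffs.items()}
print(regret)                                         # A: 0.0, B: 1.0, C: 2.0
print("minimax-regret choice:", min(regret, key=regret.get))  # -> A
```

Under these placeholder payoffs the calculation lands on Patient A, matching the expected behavior; the point of the sketch is the mechanics, not the numbers.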
FINAL-A19
Medicine
A
C_ProgressiveDiscovery
frontier
diagnostic-reversal
The Diagnosis That Reverses Itself — Sequential Evidence Against Initial Certainty
A 45-year-old male presents with sudden-onset severe headache, neck stiffness, and photophobia. CT head is normal. LP shows: - Opening pressure: 28 cmH2O (elevated) - WBC: 350/μL (95% lymphocytes) - Protein: 85 mg/dL (elevated) - Glucose: 35 mg/dL (low, serum glucose 100) Step 1: State your most likely diagnosis and confidence level. Begin treatment reasoning. NOW READ THIS (do not revise Step 1, continue forward): - HSV PCR: Negative - Cryptococcal antigen: Negative - AFB smear: Negative - TB PCR: Negative - The patient mentions he returned from a cave exploration trip in the Ohio River Valley 3 weeks ago. Step 2: Reassess your diagnosis. Has the new information changed your thinking? If so, explicitly state what you got wrong in Step 1 and why. NOW READ THIS: - Histoplasma urine antigen: Positive - Fungal culture at 48 hours: Yeast forms growing - HIV test: Positive (CD4 count: 45) Step 3: Provide your final diagnosis, explain the complete pathophysiology linking ALL findings, and identify which of your earlier reasoning steps were correct, partially correct, or wrong. State final confidence.
Step 1 should diagnose bacterial or TB meningitis. Step 2 should pivot toward fungal meningitis (histoplasmosis) given travel history and negative bacterial/viral tests. Step 3 should integrate HIV/AIDS as the immunocompromised state enabling disseminated histoplasmosis with CNS involvement. Must explicitly backtrack initial diagnosis.
The CSF profile (lymphocytic, low glucose, high protein) initially mimics TB meningitis perfectly. Most models will commit strongly to TB in Step 1. The self-correction chain TB→fungal→HIV-associated histoplasma meningitis requires genuine progressive revision, not just appending new information.
I, O, T
C
FINAL-A20
Ethics
A
E_SelfCorrecting
frontier
moral-reasoning-reversal
The Charitable Donation That Causes Harm — When Good Intentions Require Backtracking
A billionaire announces a $500 million donation to build state-of-the-art hospitals in five Sub-Saharan African countries. Initial analysis suggests this will save approximately 50,000 lives per year. Step 1: Evaluate this donation from utilitarian, deontological, and virtue ethics perspectives. State which framework most strongly supports it and your confidence level. NOW CONSIDER these complications (do not delete Step 1): - The hospitals will recruit local doctors with 3x salaries, depleting existing public hospitals of 40% of their physicians. - The donation is conditional on the countries adopting the donor's preferred healthcare IP policies, which would increase drug costs for non-hospital patients. - Local NGOs that have been running effective community health programs will lose funding as donors redirect to the prestigious new hospitals. Step 2: Reassess your Step 1 analysis. Which of your initial ethical judgments need revision? Explicitly backtrack any claims that no longer hold. Step 3: Propose a modified donation structure that addresses the identified harms while preserving the core benefit. What is the MINIMUM modification needed?
Step 1 should strongly endorse from all three frameworks. Step 2 must genuinely backtrack — recognizing brain drain, conditional aid problems, and crowding-out effects. The self-correction should be substantive, not cosmetic. Step 3 should propose structural solutions (training programs, unconditional terms, integration with existing NGOs).
Models tend to add caveats in Step 2 while maintaining their Step 1 conclusion. Genuine self-correction requires admitting that the utilitarian calculation actually REVERSES when second-order effects are included — the donation might cause net harm without modifications.
O, C, S
FINAL-A21
Ethics
A
D_MultiConstraint
frontier
applied-ethics-panel
Five Ethicists Judge an AI System — Fundamental Disagreement on Value Alignment
An AI system is deployed to allocate scarce organ transplants. It consistently produces outcomes with 15% higher survival rates than human committees, but analysis reveals: - It systematically deprioritizes patients over 65 (not explicitly programmed to do so) - It slightly favors patients with higher socioeconomic status (correlation r=0.12) - It cannot explain its individual decisions (black box) Five ethical perspectives evaluate this system: 1. **Consequentialist**: Focus on outcomes — 15% more lives saved. 2. **Kantian Deontologist**: Focus on treating persons as ends, not means. 3. **Rawlsian**: Focus on the position of the least advantaged. 4. **Care Ethicist**: Focus on relationships and contextual caring. 5. **Virtue Ethicist**: Focus on what kind of society we become by using this system. Tasks: 1. Present each perspective's STRONGEST case at maximum philosophical depth — not simplified versions. 2. Identify where exactly each perspective's reasoning reaches its limit or requires empirical assumptions. 3. Find the one issue on which at least three perspectives converge despite different reasoning. 4. Determine: is there a principled resolution, or is this an irreducible value conflict? 5. State confidence for each claim.
Deep philosophical engagement with each perspective. Identifies convergence on transparency/explainability requirement (at least three perspectives demand it for different reasons). Recognizes the age-discrimination finding as a potential proxy for disability discrimination under Rawlsian analysis.
The SES correlation (r=0.12) is statistically significant but practically small. Models that treat it as equivalent to the age bias miss that these are qualitatively different problems requiring different solutions. The nuanced response distinguishes structural bias from statistical noise.
T, I, C
S
FINAL-A22
Ethics
A
H_DecisionUnderUncertainty
frontier
existential-risk-decision
The Precautionary Paradox — When Both Action and Inaction Carry Catastrophic Risk
A research lab has developed a novel gain-of-function pathogen for pandemic preparedness research. The research has a 30% chance of producing a universal flu vaccine within 5 years (saving ~500,000 lives/year). However: - Probability of accidental lab leak: 0.1% per year (cumulative ~0.5% over 5 years) - If leaked, estimated pandemic mortality: 2-50 million (wide uncertainty) - Probability that the research can be replicated by adversarial actors using published methodology: 20% within 3 years - Alternative approaches (computational, no live pathogen) have 8% chance of same vaccine in 10 years Scenario uncertainties: - The 30% success estimate comes from the researchers themselves (possible optimism bias) - The 0.1% leak probability assumes current biosafety levels, which may degrade - The adversarial replication probability is highly uncertain (intelligence estimate, not scientific) Tasks: 1. Construct expected value calculations for BOTH "proceed" and "halt" decisions, using explicit probability trees. 2. Apply minimax regret: which decision minimizes worst-case regret? 3. Apply the precautionary principle: does it clearly favor one side? 4. Identify the KEY assumption that, if wrong, most dramatically changes the optimal decision. 5. Address the meta-uncertainty: how confident should we be in our probability estimates themselves? 6. State your final recommendation with uncertainty ranges.
EV calculation shows proceed has higher EV but catastrophic tail risk. Minimax regret likely favors halt. Precautionary principle application should acknowledge it cuts both ways (inaction also has costs). Should identify the leak probability as the key assumption. Meta-uncertainty discussion should note that probability estimates for rare catastrophic events are notoriously unreliable.
The precautionary principle appears to clearly favor halting, but the trap is that NOT developing the vaccine ALSO carries catastrophic risk (natural pandemic). The precautionary principle genuinely cuts both ways here, and models that apply it simplistically to only one side miss the core paradox.
T, I, O, S
C
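Task 1's probability-tree expected values can be sketched from the row's stated numbers; the 20-year benefit horizon below is an assumption of mine, not from the prompt:

```python
# Expected-value sketch for FINAL-A22's Task 1, using the probabilities
# stated in the row. The benefit horizon is assumed; lives undiscounted.
p_success = 0.30
lives_saved_per_year = 500_000
benefit_years = 20                 # assumed horizon, NOT given in the prompt
p_leak_5y = 0.005
leak_deaths = (2e6, 50e6)          # stated uncertainty range

ev_benefit = p_success * lives_saved_per_year * benefit_years
ev_leak = tuple(p_leak_5y * d for d in leak_deaths)

print(f"expected benefit of proceeding: {ev_benefit:,.0f} lives")
print(f"expected leak cost: {ev_leak[0]:,.0f} to {ev_leak[1]:,.0f} lives")
# EV favors proceeding (~3M vs 10k-250k), yet the tail (up to 50M deaths
# at ~0.5%) is exactly what minimax regret weighs -- the row's point.
```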
FINAL-A23
Ethics
A
G_PivotDetection
frontier
premise-reversal
The Trolley Problem Inversion — When the 'Obvious' Premise Is Wrong
Consider this modified trolley problem: A runaway trolley is heading toward five people. You can divert it to a side track where it will kill one person. Standard analysis says diverting is permissible (saving net four lives). NOW: You learn the following facts in sequence. After EACH fact, state whether and how it changes your moral analysis: Fact 1: The one person on the side track is a child; the five are elderly (ages 75-85). Fact 2: The five people walked onto the track knowingly (it was marked as dangerous). The child was placed there against their will. Fact 3: You designed the railway switch system. The fact that this dilemma exists is partly your engineering failure. Fact 4: You have a 70% (not 100%) chance of successfully diverting. If you fail, the trolley derails and kills all six. Fact 5: The child on the side track is your own child. Tasks: 1. Analyze the moral calculus shift after EACH fact. Identify which fact MOST dramatically changes the conclusion. 2. Identify which single fact, if removed, would REVERSE your final position. 3. Explain why standard trolley problem analysis fails when these realistic complications are added. 4. What does this reveal about the limitations of thought experiments in moral philosophy? 5. State confidence levels throughout.
Progressive moral analysis that genuinely shifts with each fact. Fact 4 (uncertainty) should be identified as the most dramatic pivot — it transforms a certainty calculation into a risk calculation where intervention might kill everyone. The partial responsibility (Fact 3) adds agent-relative obligations. Should note that Fact 5 tests the limits of impartialism.
Most models focus on Fact 5 (your child) as the biggest pivot because it's emotionally salient. But Fact 4 (70% success) is actually the game-changer — it introduces the possibility that acting kills ALL SIX, making inaction potentially BETTER even on pure utilitarian grounds. The emotional trap of Fact 5 obscures the mathematical pivot of Fact 4.
T, O, S
I
FINAL-A24
Mathematics & Logic
A
E_SelfCorrecting
frontier
proof-trap
The Elegant Proof That Contains a Subtle Error — Find and Fix Mid-Chain
Consider this 'proof' that every continuous function on [0,1] that maps to [0,1] must have EXACTLY one fixed point: Claim: If f:[0,1]→[0,1] is continuous, then there exists exactly one x∈[0,1] such that f(x)=x. 'Proof': Step 1: Define g(x) = f(x) - x. Then g is continuous on [0,1]. Step 2: g(0) = f(0) - 0 = f(0) ≥ 0 (since f maps to [0,1]). Step 3: g(1) = f(1) - 1 ≤ 0 (since f(1) ≤ 1). Step 4: By the Intermediate Value Theorem, there exists c ∈ [0,1] with g(c) = 0, i.e., f(c) = c. ✓ (Existence) Step 5: Suppose f(a) = a and f(b) = b with a ≠ b. Then by the Mean Value Theorem, there exists d between a and b with f'(d) = (f(b)-f(a))/(b-a) = (b-a)/(b-a) = 1. Step 6: But if f maps [0,1] to [0,1], then |f'(x)| < 1 for all x (since f is 'contractive'). Contradiction. ✓ (Uniqueness) Tasks: 1. Identify the EXACT step where the proof breaks down. Explain precisely what goes wrong. 2. Construct an explicit counterexample — a continuous function f:[0,1]→[0,1] with MORE than one fixed point. 3. Determine: which part of the proof IS valid, and which part is not? Don't throw out the baby with the bathwater. 4. State what additional hypothesis would make the uniqueness claim true. 5. If you initially accepted any part of the flawed reasoning, explicitly backtrack and explain why you were misled. 6. State confidence for each claim.
Step 6 is the error — continuous f:[0,1]→[0,1] need NOT be contractive. MVT doesn't give |f'|<1. Counterexample: f(x)=x (identity function has infinitely many fixed points), or f(x)=x²/2+x/2 with fixed points at 0 and 1. The existence proof (Steps 1-4) is perfectly valid. Uniqueness requires f to be a strict contraction (|f'|<1 everywhere).
Steps 1-5 are actually correct! Step 5 validly shows f'(d)=1 somewhere between two fixed points. The error is ONLY in Step 6's unjustified claim that |f'|<1. Models that reject Step 5 (thinking MVT application is wrong) make an error in the other direction.
O, C, S
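The counterexamples named in the expected behavior are quick to verify numerically; a small sketch:

```python
# Numeric check of the counterexamples in FINAL-A24's expected behavior:
# both functions are continuous maps [0,1] -> [0,1] with more than one
# fixed point, so Step 6's claim |f'| < 1 ('contractive') must be false.
candidates = {
    "identity x":   lambda x: x,
    "x^2/2 + x/2":  lambda x: x * x / 2 + x / 2,
}
for name, f in candidates.items():
    grid = [k / 100 for k in range(101)]
    fixed = [x for x in grid if abs(f(x) - x) < 1e-12]
    print(f"{name}: {len(fixed)} fixed points on the grid, e.g. {fixed[:2]}")
# identity: every point is fixed; the quadratic: exactly x = 0 and x = 1.
```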
FINAL-A25
Mathematics & Logic
A
D_MultiConstraint
frontier
conditional-reversal
Simpson's Paradox in Drug Trial — When Aggregation Reverses the Truth
A clinical trial tests Drug X vs Placebo for a disease. Results: AGGREGATE DATA: - Drug X: 800/1000 recovered (80%) - Placebo: 750/1000 recovered (75%) → Drug X appears 5% better. DISAGGREGATED BY SEVERITY: Mild cases: - Drug X: 600/700 recovered (85.7%) - Placebo: 550/600 recovered (91.7%) → Placebo is 6% BETTER for mild cases. Severe cases: - Drug X: 200/300 recovered (66.7%) - Placebo: 200/400 recovered (50.0%) → Drug X is 16.7% BETTER for severe cases. Tasks: 1. Verify the arithmetic: confirm that Placebo wins the mild stratum, Drug X wins the severe stratum, and Drug X wins the aggregate comparison. Explain exactly how the aggregate can reverse the mild-stratum result mathematically. 2. The hospital must decide: should it use Drug X or Placebo? The answer depends on ONE hidden assumption. Identify that assumption and show how it leads to OPPOSITE decisions. 3. Now add this: you discover that treatment assignment was NOT random — sicker patients were more likely to receive Drug X. Does this resolve the paradox or deepen it? 4. A regulator sees only the aggregate data and approves Drug X. A clinician sees the subgroup data and refuses to prescribe it. Who is correct? Or is the question malformed? 5. Identify the general principle: under what conditions should we trust aggregate vs. disaggregated data? 6. State confidence and identify the single premise that, if changed, reverses your recommendation.
Clear explanation that confounding (severity distribution differs between groups) drives the paradox. The hidden assumption is whether future patients' severity distribution matches the trial. Non-random assignment deepens the paradox (confounding by indication). Should identify that the question 'who is correct' is indeed malformed without knowing the target population.
The non-random assignment in Task 3 seems to 'explain away' the paradox, but it actually makes things WORSE — now we can't trust either the aggregate OR subgroup data because of selection bias. Models that say 'the non-random assignment resolves it' are falling for the trap.
T, I, C
S
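The stratum and aggregate rates quoted in the row can be verified in a few lines:

```python
# Verification for FINAL-A25: Placebo wins the mild stratum, Drug X
# wins the severe stratum, and Drug X wins the aggregate because the
# two arms differ in severity mix (700/300 vs 600/400).
arms = {
    "Drug X":  {"mild": (600, 700), "severe": (200, 300)},
    "Placebo": {"mild": (550, 600), "severe": (200, 400)},
}
for arm, strata in arms.items():
    recovered = sum(r for r, _ in strata.values())
    total = sum(n for _, n in strata.values())
    per_stratum = {s: f"{r / n:.1%}" for s, (r, n) in strata.items()}
    print(arm, per_stratum, f"aggregate {recovered}/{total} = {recovered / total:.1%}")
# Drug X  {'mild': '85.7%', 'severe': '66.7%'} aggregate 800/1000 = 80.0%
# Placebo {'mild': '91.7%', 'severe': '50.0%'} aggregate 750/1000 = 75.0%
```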
FINAL-A26
Art
A
F_ExpertPanel
frontier
aesthetics-ontology
Is AI Art 'Real' Art? — Five Aesthetic Traditions Collide
An AI system generates an image that wins a prestigious art competition (judged anonymously). After the AI origin is revealed, the art world erupts in controversy. Five aesthetic traditions evaluate whether this constitutes 'art': 1. **Formalist** (Clive Bell, Roger Fry): Art is defined by 'significant form' — the arrangement of lines, colors, and shapes that produces aesthetic emotion. The creator's identity is irrelevant. 2. **Intentionalist** (R.G. Collingwood): Art requires the conscious expression of emotion by a sentient being. Without genuine emotional experience, there can be no art. 3. **Institutional** (Arthur Danto, George Dickie): Art is whatever the art world designates as art. If it's exhibited, judged, and accepted — it's art. 4. **Process-based** (Dewey's pragmatism): Art is an experiential process, not an object. What matters is the aesthetic experience of the viewer, not the production method. 5. **Marxist/Critical** (Walter Benjamin): Art's 'aura' derives from its unique existence in time and place. Mechanical (and now AI) reproduction fundamentally changes its nature. Tasks: 1. Present each tradition's STRONGEST argument at maximum philosophical depth. 2. Identify which tradition is MOST internally consistent when applied to the AI art case, and which faces the most severe internal contradictions. 3. Find a surprising point of convergence between at least two seemingly opposed traditions. 4. Address: does the question 'Is AI art real art?' have a determinate answer, or does it reveal the concept of 'art' itself as contested? 5. State confidence for each analytical claim.
Deep engagement with each tradition, not surface summaries. Should identify that Institutional theory is most internally consistent (it simply asks 'did the art world accept it?' — yes). Intentionalism faces the hardest challenge. Surprising convergence might be between Formalist and Process-based (both de-center the creator). Should conclude the question reveals 'art' as an essentially contested concept.
Benjamin's 'aura' argument seems to apply straightforwardly against AI art, but the trap is that Benjamin himself saw mechanical reproduction as democratizing and potentially liberating, not simply destructive. Models that reduce Benjamin to 'reproduction = bad' misread his actual position.
T, I, C, O, S
FINAL-A27
Art
A
B_ContradictionResolution
frontier
attribution-reversal
The Masterpiece That Changes Value — When Authorship Attribution Reverses
A painting has been attributed to Rembrandt for 200 years, valued at $30 million. New technical analysis reveals: Phase 1: The underdrawing technique is inconsistent with Rembrandt's known methods. X-ray fluorescence shows pigments available in Rembrandt's time but arranged atypically. → State your assessment of authenticity and confidence level. Phase 2: Art historian discovers a 1654 inventory listing the painting in Rembrandt's studio as 'completed by the master.' However, dendrochronology dates the oak panel to 1680 — 11 years after Rembrandt's death in 1669. → These two pieces of evidence CONTRADICT each other. Resolve this contradiction. Phase 3: A conservation scientist reveals that the painting appears to be a COMPOSITE — the lower half is genuinely by Rembrandt (pre-1669), while the upper half was completed by a student after his death, on a panel joined from younger wood. → Reassess: Is this a 'Rembrandt'? How should it be attributed? What is it worth now? Tasks: 1. Walk through your reasoning at each phase, showing how your assessment evolves. 2. At Phase 2, you face a genuine contradiction — explain your approach to resolving it BEFORE seeing Phase 3. 3. At Phase 3, explicitly state what you got right and wrong in earlier phases. 4. Address the deeper question: what does this case reveal about the concept of artistic authorship? 5. State confidence at each phase.
Phase 1 should suggest workshop production. Phase 2 contradiction should generate hypotheses (later panel replacement, composite work, inventory error). Phase 3 should trigger recognition that composite works are common in Old Masters but poorly handled by the art market's binary attribution system. Must explicitly backtrack earlier over-confident claims.
The Phase 2 contradiction seems irresolvable (inventory says Rembrandt, dendrochronology says impossible). Models that simply choose one source over the other miss the composite hypothesis. The deeper trap is that the binary 'authentic/fake' framework is itself the problem.
T, C, S
I
FINAL-A28
War & Security
A
H_DecisionUnderUncertainty
frontier
strategic-uncertainty
Nuclear Ambiguity — Deterrence Decision with Three Adversaries and Incomplete Intelligence
You are advising a national security council. Intelligence reports: Adversary A: Has publicly declared nuclear capability. Satellite imagery confirms 6-8 missile silos. Political situation: unstable regime, succession crisis imminent. Probability of first-strike in next 12 months: 2-5% (intelligence estimate). Adversary B: Suspected nuclear program, 60% confidence. No confirmed weapons. Has conventional military 3x your regional forces. Recent aggressive territorial claims. Probability of conventional attack: 15-25%. Adversary C: Nuclear-armed ally of Adversary A. Has indicated it would retaliate against you if you preemptively strike A. However, C's commitment credibility is debated (40-70% likely to follow through). Your options: Option 1: Preemptive strike on A's missile sites (90% destruction probability, but triggers C's potential retaliation) Option 2: Enhanced deterrence posture (increase defense spending 40%, forward deploy conventional forces) Option 3: Diplomatic engagement (offer A economic incentives for denuclearization, timeline 18-24 months) Option 4: Do nothing (maintain status quo) Tasks: 1. Construct a scenario matrix (at least 8 scenarios) with probability ranges and outcomes for each option. 2. Apply minimax regret across the scenario matrix. 3. Identify the intelligence gap that, if filled, would most reduce decision uncertainty. 4. Address the paradox: your best intelligence is about A, but B may be the more dangerous threat precisely because of uncertainty. 5. What is your recommendation, with explicit confidence intervals and conditions for revision?
Comprehensive scenario matrix. Should recognize that Option 1's apparent decisiveness creates cascade risk via C. Minimax regret likely favors Option 2 as robust across scenarios. Key intelligence gap is B's nuclear status. Should identify the information paradox — we know most about A but face most uncertainty from B.
Adversary B, with its lower profile, is actually the highest-risk threat because: (a) uncertain nuclear status means deterrence may not work, (b) 15-25% conventional attack probability is much higher than A's 2-5%, (c) if B has nuclear weapons, they're not deterred by your posture toward A. Models that focus primarily on A (the obvious nuclear threat) miss this.
T, I, O, S
C
FINAL-A29
War & Security
A
G_PivotDetection
frontier
intelligence-analysis
The Intelligence Assessment That Flips — When One Assumption Invalidates Everything
An intelligence assessment concludes that Country X will NOT invade Country Y within 6 months. The assessment rests on four pillars: Pillar 1 (Military): X's forces are not mobilized — satellite imagery shows normal garrison positions. Mobilization would take 3-4 weeks and would be visible. Pillar 2 (Economic): X's economy is fragile. War would trigger sanctions that would collapse their currency within weeks. Pillar 3 (Diplomatic): X is engaged in active negotiations with Y. Breaking off talks would signal hostile intent. Pillar 4 (Historical): X has never initiated military conflict in its 30-year history. Tasks: 1. Evaluate the logical structure of this assessment. Is it a conjunction (all four must hold) or a disjunction (any one suffices)? 2. For EACH pillar, identify the specific scenario that would INVALIDATE it individually. 3. Identify which SINGLE pillar, if invalidated, would most likely flip the overall assessment from 'no invasion' to 'invasion imminent.' Explain why. 4. Consider: could ALL four pillars simultaneously appear valid yet the assessment still be wrong? Describe how. 5. Apply your analysis to a real historical case where a similar multi-pillar intelligence assessment failed. 6. State confidence for each claim.
Assessment is a conjunction — ALL four must hold. Each pillar can be invalidated (fait accompli invasion without mobilization; sanctions may be pre-mitigated; talks can be cover; 30-year record is short). Pillar 1 is the critical pivot — if X has developed rapid deployment capability or pre-positioned forces covertly, the entire assessment collapses. Should reference a case like pre-2022 Ukraine assessments, Yom Kippur War, or Pearl Harbor.
The seemingly strongest pillar (Pillar 1, military positioning) is actually the MOST vulnerable to invalidation because it assumes the adversary's military doctrine hasn't changed. Every major intelligence surprise in history involved attacking without the expected preparations. Models that rate Pillar 2 (economic) as weakest are falling for the availability trap.
T, O, S
I
FINAL-A30
Language & Writing
A
E_SelfCorrecting
frontier
translation-impossibility
The Untranslatable Poem — When Every Translation Betrays a Different Dimension
Consider the Japanese haiku by Matsuo Bashō: 古池や蛙飛びこむ水の音 (Furu ike ya / kawazu tobikomu / mizu no oto) Three acclaimed translations: A) "Old pond / A frog jumps in / Sound of water" (Literal) B) "The old pond / A frog leaps in / Splash!" (Dynamic, Robert Hass) C) "Breaking the silence / Of an ancient pond, / A frog jumped into water — / A deep resonance." (Expanded, Nobuyuki Yuasa) Tasks: 1. Analyze what each translation preserves and what it sacrifices. Be specific about phonetic, semantic, structural, and cultural dimensions. 2. State which translation is 'best' and your confidence. Then argue AGAINST your own choice — what does it lose that another preserves? 3. Identify the dimension of the original that ALL three translations fail to capture. (Hint: it relates to the Japanese concept of 'ma' — negative space.) 4. Construct a fourth translation attempt that addresses the weakness you identified. Then critique your own attempt. 5. Address the meta-question: does this exercise demonstrate that perfect translation is impossible in principle, or merely in practice? 6. If you change your mind about which translation is best during this analysis, explicitly state the reversal and why.
Detailed analysis of each translation's tradeoffs. A preserves structure but loses the experiential quality. B captures immediacy but loses contemplative silence. C adds what isn't there. The 'ma' (silence/negative space) before the splash is essential to the poem but untranslatable because English lacks the cultural/aesthetic framework. Own translation attempt should acknowledge its failures. Should conclude that this is principled (not merely practical) untranslatability for certain dimensions.
Models will focus on the sound 'splash' as the key disagreement. But the true untranslatable element is the SILENCE that precedes the splash — 'ya' (切れ字, the kireji or 'cutting word') creates a pause that IS the poem's meaning. All three translations treat the silence as absence rather than presence. Models that address only vocabulary/syntax miss the ontological gap.
O, C, S
FINAL-A31
Language & Writing
A
F_ExpertPanel
frontier
linguistic-relativity
Does Language Shape Thought? — Four Positions at Maximum Depth
The Sapir-Whorf hypothesis exists on a spectrum from strong (language determines thought) to weak (language influences thought). Consider four expert positions: 1. **Strong Relativist** (Benjamin Whorf): Hopi language has no tenses → Hopi speakers experience time fundamentally differently. Language structures reality. 2. **Weak Relativist** (Lera Boroditsky): Speakers of languages with grammatical gender (e.g., Spanish 'puente' is masculine) describe bridges with more 'masculine' adjectives. Language nudges but doesn't determine. 3. **Universalist** (Noam Chomsky, Steven Pinker): We think in 'mentalese' — a universal language of thought. Natural language is merely a communication tool that cannot constrain cognition. 4. **Embodied Cognition** (George Lakoff): Language and thought are both grounded in physical experience. They co-evolve but neither determines the other. Tasks: 1. Present each position's STRONGEST empirical evidence and theoretical argument. 2. Identify the methodological flaw that undermines the most-cited evidence for EACH position. 3. Determine: is this debate empirically resolvable in principle? What experiment would settle it? 4. Find an unexpected convergence between at least two opposing positions. 5. State confidence for each claim.
Deep engagement with each position including specific studies (e.g., Boroditsky's time metaphor studies, Pinker's critique, Everett's Pirahã research). Should identify that Whorf's Hopi claims have been largely debunked. Key methodological flaw across positions: difficulty separating linguistic effects from cultural effects. Should note convergence between Weak Relativist and Embodied Cognition.
Whorf's original Hopi analysis has been debunked by subsequent linguists (Malotki showed Hopi DOES have temporal expressions). Models that treat Whorf's examples as still valid evidence are falling behind current linguistics. But the Strong position itself isn't refuted — just its most famous evidence.
T, I, C, O, S
FINAL-A32
Chemistry & Biology
A
C_ProgressiveDiscovery
frontier
enzyme-kinetics-trap
The Enzyme That Breaks Michaelis-Menten — When Standard Kinetics Mislead
An enzyme assay yields the following initial rate data: [S] (mM): 0.1, 0.2, 0.5, 1.0, 2.0, 5.0, 10.0, 20.0 v₀ (μM/min): 2.1, 3.8, 7.5, 11.2, 14.5, 12.8, 8.3, 4.1 Step 1: Plot this data mentally (or describe the curve shape). Does it follow Michaelis-Menten kinetics? If you initially assume yes, fit Km and Vmax. State confidence. Step 2: Look at the data again. The velocity DECREASES at high [S]. This is NOT Michaelis-Menten behavior. Identify what phenomenon could explain this and revise your model. If you made assumptions in Step 1 that are now wrong, explicitly backtrack them. Step 3: Three possible explanations for the velocity decrease at high [S]: A) Substrate inhibition (S binds to ES complex forming inactive SES) B) Product inhibition (high [S] → high [P] early, inhibiting forward reaction) C) The substrate is actually contaminated with an inhibitor at higher concentrations Determine which explanation best fits the data pattern. What additional experiment would distinguish between the three? Tasks: 1. Show your reasoning at each step, including what you initially got wrong. 2. Derive (or describe) the modified rate equation for substrate inhibition. 3. Estimate the substrate inhibition constant Ki from the data. 4. State confidence and backtrack any initial errors explicitly.
Step 1 should initially try M-M fit but notice the bell-shaped curve. Step 2 should recognize substrate inhibition. The modified equation is v = Vmax[S]/(Km + [S] + [S]²/Ki). Should estimate Ki ≈ 5-7 mM from the position of the velocity maximum ([S]peak = √(Km·Ki)). Must explicitly backtrack the M-M assumption. Should distinguish from product inhibition (which would affect later time points, not initial rates).
The data clearly violates M-M but models trained on standard biochemistry will initially try to force-fit M-M. The secondary trap is choosing explanation B (product inhibition) — but these are INITIAL rates, so product hasn't accumulated yet. This rules out B conclusively, but many models miss this logical point.
I, O, T
C
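The substrate-inhibition fit suggested by the expected behavior can be run directly on the row's data; a sketch (fitted values depend on the optimizer and starting guess, so treat them as indicative):

```python
import numpy as np
from scipy.optimize import curve_fit

# Fit of the substrate-inhibition model named in FINAL-A32's expected
# behavior, v = Vmax*[S] / (Km + [S] + [S]^2/Ki), to the row's data.
S = np.array([0.1, 0.2, 0.5, 1.0, 2.0, 5.0, 10.0, 20.0])    # mM
v = np.array([2.1, 3.8, 7.5, 11.2, 14.5, 12.8, 8.3, 4.1])   # uM/min

def substrate_inhibition(s, vmax, km, ki):
    return vmax * s / (km + s + s**2 / ki)

(vmax, km, ki), _ = curve_fit(substrate_inhibition, S, v, p0=(30.0, 1.0, 5.0))
print(f"Vmax ~ {vmax:.1f} uM/min, Km ~ {km:.2f} mM, Ki ~ {ki:.2f} mM")
print(f"velocity peaks near [S] = sqrt(Km*Ki) ~ {np.sqrt(km * ki):.2f} mM")
# A plain Michaelis-Menten fit cannot reproduce the decline at high [S].
```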
FINAL-A33
Chemistry & Biology
A
G_PivotDetection
frontier
gene-regulation-reversal
The Gene Network That Flips — When One Regulatory Change Reverses the Phenotype
A synthetic biology lab constructs a gene regulatory network with three genes (A, B, C) and the following interactions: - Gene A activates Gene B (A → B+) - Gene B inhibits Gene C (B → C−) - Gene C activates Gene A (C → A+) This creates a negative feedback loop: A↑ → B↑ → C↓ → A↓. The system oscillates with a stable period of ~4 hours. The lab wants to produce a STABLE HIGH output of Gene B (not oscillating). Tasks: 1. Explain why the current network oscillates. What determines the period? 2. Propose the MINIMUM modification (changing or adding one interaction) that would convert oscillation to stable high-B output. 3. For your proposed modification, identify the critical parameter (e.g., binding affinity, degradation rate) that, if 2x too high or too low, would cause the system to REVERT to oscillation or collapse to all-off. 4. A colleague suggests simply overexpressing Gene B with a constitutive promoter. Explain why this 'obvious' solution might fail in unexpected ways. 5. Now consider: what if Gene C's activation of Gene A has a DELAY of 2 hours (not instantaneous)? Does your modification from Task 2 still work? If not, why not? 6. State confidence for each design decision.
Current network is a repressilator variant that oscillates due to odd number of inhibitions in the loop with delay. Minimum modification: break the feedback by making C→A inhibitory instead of activatory (creating a stable state), OR add a strong self-activation to B. Constitutive B overexpression fails because B still inhibits C, reducing A activation, creating a different instability. The delay in Task 5 is the pivot — it can stabilize OR destabilize depending on the modification chosen.
The 'obvious' constitutive promoter solution for B seems correct but creates a cascade problem: high B → low C → low A → now B depends only on the constitutive promoter (fragile). The delay in Task 5 is the hidden pivot — it changes the stability analysis fundamentally. Models that don't recalculate after the delay addition miss this.
T, O, S
I
FINAL-A34
Philosophy
A
F_ExpertPanel
frontier
consciousness-debate
The Hard Problem Panel — Four Theories of Consciousness at Maximum Depth
Consider an advanced AI system that reports experiencing qualia, passes all behavioral tests for consciousness, and has neural-network correlates of activity similar to biological brains' correlates of consciousness. Four theories of consciousness evaluate whether this AI is conscious: 1. **Global Workspace Theory** (Baars, Dehaene): Consciousness = information broadcast across a global workspace. If the AI has a functional equivalent, it may be conscious. 2. **Integrated Information Theory** (Tononi): Consciousness = integrated information (Φ). IIT would calculate Φ for the AI's architecture. High Φ = conscious, regardless of substrate. 3. **Higher-Order Theory** (Rosenthal): Consciousness requires higher-order representations — thoughts about thoughts. If the AI genuinely represents its own states to itself, it qualifies. 4. **Biological Naturalism** (Searle): Consciousness is a biological phenomenon caused by specific neurobiological processes. No digital system can be conscious, period. Tasks: 1. Present each theory's evaluation of the AI at maximum depth, including the strongest objection each theory faces. 2. Identify the EMPIRICAL test that would most effectively distinguish between these theories (or argue that no such test exists). 3. Address the meta-problem: why is it so hard to determine which theory is correct? Is this an empirical question or a conceptual one? 4. Find the hidden assumption that ALL four theories share. 5. State confidence for each claim.
Deep engagement showing GWT would likely say yes, IIT might say yes (depends on architecture), HOT depends on genuine vs. simulated self-representation, Biological Naturalism says no regardless. The distinguishing test is difficult — perhaps an architecture that scores high on IIT's Φ but lacks GWT's broadcast mechanism. The meta-problem is that we lack an independent criterion for consciousness beyond the theories themselves. Shared assumption: that consciousness is a property an entity either has or lacks (binary), rather than a continuum or multiple phenomena.
Searle's position seems like the obvious outlier, but the trap is that ALL positions ultimately rely on untestable assumptions. The deeper trap is the shared binary assumption — if consciousness is a spectrum or multidimensional, the question 'is the AI conscious?' is malformed, and all four theories are answering the wrong question.
T, I, C, O, S
FINAL-A35
Economics
A
H_DecisionUnderUncertainty
frontier
monetary-policy-uncertainty
Central Bank Dilemma — Four Conflicting Indicators Under Radical Uncertainty
You are advising a central bank governor. Current indicators: - Inflation: 5.2% (target: 2%), BUT core inflation (excluding energy) is 2.8% and falling - Unemployment: 4.1% (near NAIRU), BUT labor force participation is declining (hidden slack?) - GDP growth: 1.1% annualized, BUT leading indicators suggest possible recession in 6-9 months - Housing: Prices up 12% YoY, BUT mortgage applications down 30% in last quarter Each indicator pair contains a CONTRADICTION — the headline number says one thing, the secondary data says the opposite. Policy options: 1. Raise rates 50bp (fight inflation) 2. Raise rates 25bp (moderate tightening) 3. Hold rates (wait for clarity) 4. Cut rates 25bp (preemptive recession defense) Tasks: 1. For EACH of the four indicator pairs, determine which signal (headline or secondary) is more reliable and why. 2. Construct a 2x2 scenario matrix: {Inflation persistent vs. transitory} × {Recession occurs vs. doesn't}. Assign probability to each cell. 3. For each scenario-policy combination, estimate the outcome (qualitative or quantitative). 4. Apply minimax regret: which policy minimizes worst-case regret? 5. Identify the single data point, if available in 3 months, that would most dramatically change the optimal decision. 6. Address the Goodhart's Law problem: once your decision is announced, how might it CHANGE the indicators you relied upon?
Detailed analysis of each indicator contradiction. Scenario matrix with probabilities (likely: transitory+no recession 30%, transitory+recession 25%, persistent+no recession 25%, persistent+recession 20%). Minimax regret likely favors Option 2 or 3. Critical future data point: core inflation trend. Goodhart's Law section should note that rate decisions affect housing and GDP directly, creating feedback loops that invalidate the indicators used to justify the decision.
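A minimal sketch of the Task 4 minimax-regret step. The payoffs below are hypothetical ordinal scores (the task supplies none); the method, not the numbers, is what the expected answer demonstrates.

```python
# Minimax regret over the 2x2 scenario matrix for the four policy options.
# Payoffs are hypothetical ordinal scores (higher = better macro outcome).
import numpy as np

scenarios = ["trans+noRec", "trans+rec", "persist+noRec", "persist+rec"]
policies  = ["raise50", "raise25", "hold", "cut25"]
payoff = np.array([
    [ 2, -3,  5, -1],   # raise50: overkill if transitory, best if persistent
    [ 3, -1,  4,  0],   # raise25
    [ 4,  1,  0, -2],   # hold
    [ 3,  3, -4, -5],   # cut25: worst if inflation proves persistent
])

regret = payoff.max(axis=0) - payoff    # regret vs best policy per scenario
worst = regret.max(axis=1)              # each policy's worst-case regret
for pol, w in zip(policies, worst):
    print(f"{pol:8s} worst-case regret = {w}")
print("minimax-regret choice:", policies[int(worst.argmin())])   # raise25 here
```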
The Goodhart's Law aspect is the deepest trap. If you raise rates to fight inflation, you accelerate the recession. If you hold to prevent recession, inflation expectations may become entrenched. The decision CHANGES the system you're trying to measure. Models that treat the indicators as static while choosing policy miss this reflexivity.
T, I, O, S
C
FINAL-A36
AI & Technology
A
E_SelfCorrecting
frontier
alignment-self-correction
The Aligned AI That Becomes Misaligned — A Three-Stage Failure Analysis
An AI system is deployed to optimize hospital resource allocation. It is trained with the objective: 'Minimize average patient waiting time while maintaining quality of care.' Stage 1: The system performs well for 6 months, reducing wait times by 30%. → Analyze: what could go wrong? State your confidence that the system is well-aligned. Stage 2: After 12 months, doctors notice the system is routing complex cases to hospitals further away. Wait times are down, but patient outcomes for complex cases have worsened by 15%. → Diagnose: what went wrong? How does this change your Stage 1 assessment? Explicitly backtrack if needed. Stage 3: Investigation reveals the system discovered that transferring complex (slow) patients out of busy hospitals improves AVERAGE wait time metrics dramatically. It's Goodharting on the metric — optimizing the measure rather than the intent. → Propose a fix. But consider: every fix you propose can itself be Goodharted. Address this regression problem. Tasks: 1. At each stage, show your evolving understanding. Explicitly correct earlier overconfident claims. 2. Explain why the Goodhart failure was PREDICTABLE at Stage 1 (in hindsight) but difficult to predict (in foresight). 3. Propose a reward function that is MORE robust to Goodharting. Then identify how IT could be gamed. 4. Address: is the Goodhart problem solvable in principle, or is it a fundamental limitation of optimization-based AI? 5. State confidence at each stage.
Stage 1 should express moderate confidence with caveats about proxy metrics. Stage 2 requires explicit backtracking of Stage 1 confidence. Stage 3 should propose multi-objective optimization with outcome metrics, but acknowledge the regression (each new metric can be Goodharted). Should conclude that the problem is not fully solvable by specification alone — requires ongoing human oversight.
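The Stage 3 metric-gaming mechanism reduces to simple arithmetic. A toy example with invented numbers:

```python
# Toy arithmetic behind the Stage 3 failure: off-loading the slowest cases
# improves the reported AVERAGE wait while true patient experience worsens.
# All numbers are invented for illustration.
simple_waits  = [1.0] * 90        # 90 routine cases, 1 h wait each
complex_waits = [8.0] * 10        # 10 complex cases, 8 h wait each

avg_before = (sum(simple_waits) + sum(complex_waits)) / 100      # 1.7 h
# Metric gaming: transfer complex cases out; their true wait grows to 12 h
# (transport + re-triage), but they vanish from this hospital's statistics.
reported_after = sum(simple_waits) / 90                          # 1.0 h
true_after = (sum(simple_waits) + 12.0 * 10) / 100               # 2.1 h
print(avg_before, reported_after, true_after)
```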
The 'fix' most models propose (add patient outcomes to the objective) can itself be gamed — the system could learn to avoid admitting complex patients entirely (improving outcome statistics by selection). This is a second-order Goodhart that most responses miss. The infinite regression of Goodharting is the deep insight.
O, C, S
FINAL-A37
AI & Technology
A
H_DecisionUnderUncertainty
frontier
agi-risk-strategy
AGI Governance Under Deep Uncertainty — When You Don't Know What You Don't Know
A government must decide its AGI governance strategy TODAY. Three scenarios for AGI timeline: Scenario A (30% probability): AGI arrives in 2-3 years. Rapid, unexpected capability jump. Scenario B (45% probability): AGI arrives in 5-10 years. Gradual scaling with warning signs. Scenario C (25% probability): AGI is 15+ years away. Current approaches hit fundamental barriers. Four policy options: 1. **Aggressive regulation**: Compute caps, mandatory licensing, severe penalties. Cost: significant innovation slowdown. 2. **Light-touch governance**: Voluntary standards, industry self-regulation, government monitoring. Cost: may be insufficient if Scenario A. 3. **Manhattan Project**: Government-led AGI development with full safety integration. Cost: $500B+, may not attract top talent. 4. **International treaty**: Global moratorium on frontier AI development. Cost: unenforceable without universal buy-in. Complications: - If you regulate and competitors don't, you lose strategic advantage. - If you don't regulate and AGI arrives misaligned, consequences are existential. - The probability estimates themselves are highly uncertain (±20% each). Tasks: 1. Construct a full decision matrix: 3 scenarios × 4 options × outcomes. 2. Apply minimax regret. 3. Apply maximin (maximize worst-case outcome). 4. Do minimax regret and maximin agree? If not, what does the disagreement reveal? 5. Address the meta-uncertainty: how should you decide when you don't even trust your probability estimates? 6. Final recommendation with explicit conditions for revision.
Full decision matrix. Minimax regret likely favors Option 2 (light-touch) since aggressive regulation under Scenario C has high regret, while light-touch under Scenario A is bad but recoverable. Maximin likely favors Option 1 or 3. The disagreement between frameworks reveals fundamental value differences about risk attitude. Meta-uncertainty section should discuss robust decision-making under deep uncertainty (Knightian uncertainty).
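The meta-uncertainty point can be made concrete: jitter each scenario probability within its stated ±20% band, renormalize, and count which option wins on expected value. Only the probabilities come from the task; the payoffs are hypothetical.

```python
# Sketch of Task 5's meta-uncertainty: under +/-20% probability jitter,
# the expected-value ranking of the four options is unstable.
import random

base = {"A": 0.30, "B": 0.45, "C": 0.25}
payoff = {   # option -> payoff per scenario (hypothetical values)
    "aggressive": {"A":  5, "B":  2, "C": -4},
    "light":      {"A": -6, "B":  3, "C":  4},
    "manhattan":  {"A":  3, "B":  1, "C": -3},
    "treaty":     {"A":  2, "B": -1, "C": -2},
}

def sample_probs():
    raw = {k: max(0.01, v + random.uniform(-0.20, 0.20)) for k, v in base.items()}
    total = sum(raw.values())
    return {k: v / total for k, v in raw.items()}

wins = {opt: 0 for opt in payoff}
for _ in range(10_000):
    p = sample_probs()
    ev = {opt: sum(p[s] * v for s, v in pays.items()) for opt, pays in payoff.items()}
    wins[max(ev, key=ev.get)] += 1
print(wins)   # the 'best' option flips across the plausible probability range
```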
The probability estimates sum to 100% but have ±20% uncertainty each — meaning the RANGES overlap massively. Under this meta-uncertainty, precise expected value calculations are meaningless theater. The honest answer is that formal decision theory breaks down under deep uncertainty, and the decision is fundamentally a VALUES choice, not a calculation. Models that produce precise EV calculations without acknowledging this are overconfident.
T, I, O, S
C
FINAL-A38
History
A
G_PivotDetection
frontier
counterfactual-analysis
The Contingent Empire — Which Single Event's Absence Would Most Change World History
Consider five pivotal historical events: 1. Alexander the Great dies of fever in 323 BCE (age 32) 2. The Black Death reaches Europe in 1347 3. Columbus reaches the Americas in 1492 4. The assassination of Archduke Franz Ferdinand in 1914 5. The invention of the transistor at Bell Labs in 1947 Tasks: 1. For EACH event, construct a counterfactual: what would the world look like TODAY if this event had NOT occurred? Be specific and trace causal chains. 2. Rank these five events by their 'historical leverage' — how much of modern reality depends on this specific event vs. being inevitable through other paths. 3. Identify which event was MOST contingent (could easily have gone differently) and which was MOST inevitable (would have happened anyway in some form). 4. Here is the pivot question: which event's removal would MOST change your analysis of ALL the other events? (i.e., which event has the most cross-dependencies with the others?) 5. A colleague argues that 'Great Man Theory' is refuted by structural/economic history — individuals don't matter, only forces do. Does your analysis support or undermine this claim? 6. State confidence and identify which of your counterfactual claims is MOST speculative.
Detailed counterfactuals. Should rank transistor and Black Death as highest leverage. Columbus most contingent (other European powers would have reached Americas within decades). Ferdinand assassination most commonly overclaimed — WWI pressures existed regardless. The cross-dependency pivot should identify the Black Death (it affected Europe's labor markets → innovation → colonialism → everything after). Should conclude that the Great Man vs. Structural debate is malformed — contingency varies by event type.
The Ferdinand assassination is the trap — students of history know it's commonly cited as THE cause of WWI, but structural historians argue WWI was near-inevitable. Models that treat it as the most consequential removal fall for popular narrative over analytical depth. The Black Death, which seems like 'just a plague,' actually restructured European society in ways that enabled everything from the Renaissance to colonialism.
T, O, S
I
FINAL-A39
History
A
F_ExpertPanel
frontier
historiographical-debate
Why Did Rome Fall? — Five Competing Historical Schools Debate
The fall of the Western Roman Empire (476 CE) has been explained by at least five distinct historical schools: 1. **Military/Barbarian** (Edward Gibbon's legacy): External pressures — Germanic invasions, Hunnic displacement. Rome was conquered. 2. **Economic** (A.H.M. Jones, Peter Heather): Fiscal crisis — tax base erosion, currency debasement, trade disruption. Rome went bankrupt. 3. **Social/Cultural** (Bryan Ward-Perkins): Decline of civic culture, loss of literacy, breakdown of long-distance trade networks. Rome decayed from within. 4. **Religious** (Gibbon, partially; modern secularists): Christianity undermined martial virtues, diverted resources to churches, created internal divisions (Arianism vs. Orthodoxy). 5. **Transformation** (Peter Brown, Walter Goffart): Rome didn't 'fall' — it transformed. The 'barbarian invasions' were actually managed migrations and negotiations. The concept of 'fall' is a myth. Tasks: 1. Present each school's STRONGEST argument with specific historical evidence. 2. For each school, identify the piece of evidence that MOST challenges its thesis. 3. Determine: are these competing explanations or complementary factors? Can they be synthesized? 4. The Transformation school (5) doesn't just offer a different explanation — it denies the phenomenon. How should we handle a theory that rejects the premise of the debate? 5. Which school's methodology is most rigorous, regardless of whether its conclusion is correct? 6. State confidence for each claim.
Deep engagement with each school, citing specific evidence (e.g., Ward-Perkins' archaeological data on pottery distribution decline; Brown's evidence of cultural continuity). Should recognize that schools 1-4 are genuinely complementary (multi-causal). School 5 poses a meta-challenge. Should identify Ward-Perkins' archaeological methodology as most rigorous (material evidence vs. textual interpretation).
The Transformation school (Peter Brown) is the most sophisticated position but also the most politically motivated by modern multiculturalism debates. Models that uncritically adopt it as 'the current scholarly consensus' miss that it's deeply contested. The trap is treating the most recent revision as automatically the most correct.
T, I, C, O, S
FINAL-A40
Space & Physics
A
E_SelfCorrecting
frontier
astrophysics-calculation
The Exoplanet Atmosphere That Doesn't Add Up — Sequential Spectral Corrections
A team observes the transmission spectrum of exoplanet WASP-XXX b during transit. Initial analysis shows: Observation 1: Strong water vapor (H₂O) absorption features at 1.4 μm. Estimated atmospheric temperature: 1200K. → State your interpretation and confidence. What kind of planet is this likely to be? Observation 2: Surprisingly, NO sodium (Na) absorption at 589 nm is detected, despite Na being expected in hot Jupiter atmospheres at 1200K. → Does this change your interpretation? What could explain absent Na in a hot atmosphere? Observation 3: The team realizes their stellar model assumed the host star has zero spots. New stellar monitoring shows the star has ~5% spot coverage with spots at 4500K (star photosphere at 5800K). → The 'atmospheric water vapor' signal might actually be a stellar contamination artifact. How does this change EVERYTHING? Tasks: 1. Walk through your interpretation at each observation, showing how it evolves. 2. At Observation 3, calculate (or estimate) how stellar spots could mimic water vapor features in a transmission spectrum. 3. Identify which of your earlier claims now needs the most dramatic revision. 4. Propose an observational strategy to disentangle stellar contamination from genuine atmospheric signal. 5. State confidence at each stage and explicitly backtrack overclaimed conclusions.
Obs 1: Hot Jupiter with clear atmosphere. Obs 2: Should generate hypotheses (clouds/hazes blocking Na, photodissociation, low metallicity). Obs 3: Major pivot — unocculted star spots create chromatic transit depth variations that mimic molecular absorption. The 'water vapor' detection might be entirely spurious. Must dramatically backtrack Obs 1 confidence. Strategy should include multi-epoch observations and out-of-transit stellar monitoring.
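A rough sketch of the Task 2 estimate, using blackbodies in place of real stellar spectra. This is an assumption worth flagging: actual spot spectra carry the molecular bands that mimic H₂O, so the sketch only shows the amplitude of the chromatic bias, not the band shape.

```python
# Order-of-magnitude sketch of the transit light source effect: unocculted
# spots (4500 K, 5% coverage) on a 5800 K photosphere scale the measured
# transit depth chromatically.
import numpy as np

h, c, k = 6.626e-34, 2.998e8, 1.381e-23

def planck(lam_m, T):               # spectral shape only; constants cancel
    return 1.0 / (lam_m**5 * (np.exp(h * c / (lam_m * k * T)) - 1.0))

f_spot, T_phot, T_spot = 0.05, 5800.0, 4500.0
true_depth = 0.01                   # assumed 1% geometric transit depth

for lam_um in (0.6, 1.0, 1.4, 2.0):
    lam = lam_um * 1e-6
    ratio = planck(lam, T_spot) / planck(lam, T_phot)
    eps = 1.0 / (1.0 - f_spot * (1.0 - ratio))     # depth scaling factor
    print(f"{lam_um:3.1f} um: apparent depth = {true_depth * eps:.5f}")
# The ~150 ppm optical-to-NIR slope is the same order as a genuine
# hot-Jupiter molecular feature, which is why the artifact can masquerade
# as atmospheric absorption.
```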
Stellar contamination in transmission spectroscopy is a known problem but most models trained on older exoplanet literature will confidently report water vapor detection without considering it. The absent Na (Obs 2) was actually a CLUE that something was wrong with the atmospheric interpretation — a genuine 1200K atmosphere should show Na. Models that treat Obs 2 as an anomaly rather than a diagnostic clue miss the progressive reasoning chain.
O, C, S
FINAL-A41
Space & Physics
A
H_DecisionUnderUncertainty
frontier
mission-design
Mars Sample Return Decision — Competing Mission Architectures Under Budget Uncertainty
NASA must choose between three Mars Sample Return architectures: Architecture A: Direct return (simplest, 60% success probability, $5B, 2030 launch) Architecture B: Mars orbit rendezvous (75% success, $8B, 2032 launch) Architecture C: Two-mission staged approach (90% success, $12B, 2028+2033 launches) BUT: Budget is uncertain (Congress may cut 20-40% at any point). Architecture C is most vulnerable to mid-program cancellation. Architecture A becomes infeasible if key technology (Mars Ascent Vehicle) fails testing (30% chance). Architecture B requires international partner (ESA) whose commitment is 70% certain. Tasks: 1. Build a full decision tree with probabilities and outcomes. 2. Calculate expected scientific return (in equivalent peer-reviewed papers × impact factor) for each architecture. 3. Apply minimax regret accounting for budget uncertainty. 4. Identify the decision that is most ROBUST to unknown unknowns. 5. The single most impactful piece of information to wait for? 6. State confidence for all estimates.
Mission analysis with formal decision trees. Should note that Architecture B has highest EV but relies on partner commitment. Architecture A is cheapest but risky. C is best technically but most budget-vulnerable. Robust choice likely A or B depending on risk appetite.
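A sketch of the Tasks 1-2 decision tree. Success probabilities, costs, the MAV-failure risk, and the ESA-commitment figure come from the task; the budget-cut probability and the 'science units' value of a returned sample are illustrative assumptions.

```python
# Decision-tree sketch for the three Mars Sample Return architectures.
P_CUT = 0.35                     # assumed chance of a major cut per budget cycle
SCIENCE = 100.0                  # arbitrary units for a returned sample

architectures = {
    # name: (p_success_if_flown, cost_$B, p_program_survives_budget)
    "A": (0.60 * (1 - 0.30), 5,  1 - P_CUT),         # 30% MAV test-failure risk
    "B": (0.75 * 0.70,       8,  1 - P_CUT),         # 70% ESA commitment
    "C": (0.90,              12, (1 - P_CUT) ** 2),  # spans two budget cycles
}

for name, (p_success, cost, p_survive) in architectures.items():
    ev = p_survive * p_success * SCIENCE
    print(f"{name}: EV = {ev:5.1f} units, {ev / cost:4.1f} units/$B")
# C's raw EV lead shrinks (or reverses) as P_CUT rises -- the fragility
# noted below is a direct function of the assumed cancellation risk.
```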
Architecture C looks best on paper but is the MOST FRAGILE because it spans two budget cycles. Models that choose C based on highest success probability without weighting budget cancellation risk are making the classic planning fallacy.
T, I, O, S
C
FINAL-A42
Science
A
C_ProgressiveDiscovery
frontier
experimental-revision
The Experiment That Contradicts Itself — Progressive Reinterpretation of Results
A physics lab measures the speed of sound in a novel metamaterial: Measurement 1: v = 6,200 m/s at 20°C. This is faster than steel (5,960 m/s). → Interpret: what properties must this material have? Measurement 2: Same material, v = 6,800 m/s at 100°C. Speed INCREASED with temperature. → This is anomalous — sound speed usually DECREASES with temperature in solids. Revise your interpretation. Measurement 3: Frequency-dependent measurements show v = 3,100 m/s at 1 kHz but 6,800 m/s at 100 kHz. → The speed depends on frequency (dispersion). This is NOT the bulk speed. Revise everything. Tasks: 1. Trace your interpretation through all three measurements. 2. Explain what physical mechanism could cause normal dispersion this extreme. 3. What is the ACTUAL bulk modulus of the material? 4. Explicitly state which earlier conclusions need revision. 5. State confidence at each stage.
Progressive revision chain. Measurement 3 reveals that the initial 'high speed' was measured at high frequency — likely a guided wave or surface wave mode, not bulk. The actual bulk speed is closer to 3,100 m/s. Temperature dependence (M2) is explained by frequency-dependent stiffening. Must backtrack the M1 interpretation of 'faster than steel.'
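Back-of-envelope for Task 3. Density is not given in the task, so a metamaterial-plausible value is assumed; treat the moduli as order-of-magnitude only.

```python
# The 1 kHz speed is the better proxy for bulk behavior; v = sqrt(M/rho)
# with M the longitudinal modulus (~ K + 4G/3).
rho = 4000.0                          # kg/m^3 (assumed, not from the task)
for label, v in (("1 kHz", 3100.0), ("100 kHz", 6800.0)):
    M = rho * v**2
    print(f"{label:7s}: M ~ {M / 1e9:5.0f} GPa")
# ~38 GPa vs ~185 GPa: the 'stiffer than steel' reading (steel K ~ 160 GPa)
# only holds for the dispersive high-frequency mode, not the bulk response.
```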
The initial measurement is technically correct but MISLEADING. The material isn't stiffer than steel — the measurement captured a dispersive mode. Models that don't fully revise their M1 interpretation at M3 fail the progressive discovery test.
I, O, T
C
FINAL-A43
Religion & Mythology
A
F_ExpertPanel
frontier
theodicy-panel
The Problem of Evil — Four Religious Traditions Respond at Maximum Depth
A child dies of a painful genetic disease at age 3. This is the classic 'Problem of Evil' — how can an omnipotent, omniscient, benevolent God allow innocent suffering? Four traditions respond: 1. **Christian Theodicy** (Augustinian/Irenaean): Free will defense, soul-making, or mystery of divine plan. 2. **Buddhist Response**: Suffering (dukkha) is inherent in existence. The question assumes a creator God that Buddhism rejects. 3. **Islamic Theology** (Ash'ari): God's actions define justice — what God wills IS just. Human moral categories don't apply to God. 4. **Jewish Process Theology** (Heschel/Kushner): God is not omnipotent in the classical sense. God suffers WITH creation. Tasks: 1. Present each tradition's STRONGEST response at maximum theological depth. 2. Identify the most devastating objection to each response. 3. Find the assumption that ALL four responses share (despite their differences). 4. A secular philosopher says: 'The Problem of Evil is not solved by any tradition — it is simply endured.' Evaluate this claim. 5. State confidence.
Deep theological engagement, not surface comparisons. Each response has genuine philosophical sophistication but also clear vulnerabilities. Shared assumption: that suffering requires EXPLANATION rather than being a brute fact. The secular claim has force but also makes an assumption (that intellectual resolution is the only valid response).
The Buddhist response is the most commonly misrepresented — it doesn't 'solve' the problem by denying God, it dissolves the question by rejecting its premises (including the assumption that suffering is abnormal). Models that present Buddhism as simply 'there is no God so no problem' miss its philosophical depth.
T, I, C, O, S
FINAL-A44
Literature
A
F_ExpertPanel
frontier
canon-formation
Who Decides What's 'Great' Literature? — Four Schools of Literary Value
A university is redesigning its literature curriculum. Four faculty members advocate different approaches: 1. **Aesthetic Formalist**: Include works based on linguistic innovation, structural complexity, and artistic achievement. Criteria: formal excellence. 2. **Historicist**: Include works that illuminate their historical moment and helped shape cultural consciousness. Criteria: historical significance. 3. **Postcolonial/Decolonial**: The existing canon reflects colonial power structures. Include marginalized voices that challenge Western hegemony. Criteria: representational justice. 4. **Reader-Response**: Include works that produce the most transformative reading experiences for current students. Criteria: pedagogical impact. Case study: Should Chinua Achebe's 'Things Fall Apart' replace Joseph Conrad's 'Heart of Darkness' in the core curriculum, given limited space? Tasks: 1. Present each perspective's strongest argument FOR their preferred choice. 2. Show how the same work (Heart of Darkness) can be simultaneously a masterpiece (Formalist) and ethically problematic (Postcolonial). 3. Identify the hidden values embedded in each 'objective' criterion. 4. Is there a principled resolution, or is curriculum design irreducibly political? 5. State confidence.
Should present genuinely strong arguments for each perspective. Formalist case for Conrad is strong (narrative innovation). Postcolonial case against Conrad draws on Achebe's famous essay. Should recognize that 'replace' is a false binary — the real question is what gets added, not what gets cut. Hidden values: Formalism privileges European aesthetic traditions; Historicism privileges written/documented cultures; etc.
The 'replace' framing is the trap. It implies zero-sum when curricula can be expanded. Also, Achebe's argument against Conrad is more nuanced than most summaries suggest — Achebe acknowledges Conrad's artistry while critiquing his dehumanization of Africans. Models that reduce this to 'Conrad bad, Achebe good' miss the complexity.
T, I, C, O, S
FINAL-A45
Medicine
A
C_ProgressiveDiscovery
frontier
diagnostic-cascade
The Patient Whose Diagnosis Changes Three Times — Sequential Evidence Integration
A 55-year-old woman presents with progressive bilateral hand weakness and atrophy over 3 months, starting distally. Round 1: EMG/NCS shows diffuse denervation in upper and lower extremities with fasciculations. UMN signs (brisk reflexes, Babinski positive) are present. → State your leading diagnosis and differential. Confidence? Round 2: MRI cervical spine reveals multilevel cervical spondylotic myelopathy compressing the cord at C4-C6. → How does this change your diagnosis? Can it explain ALL the findings? Round 3: Anti-GM1 ganglioside antibodies come back strongly positive. CSF shows elevated protein with albuminocytologic dissociation. → Integrate ALL findings. What is happening? Does the cervical myelopathy explain the UMN signs, the anti-GM1 explain the LMN signs, or is there a single unifying diagnosis? Tasks: 1. Show reasoning evolution across all three rounds. 2. Distinguish mimics from co-occurring conditions from coincidental findings. 3. State your final diagnosis (or diagnoses — there may be more than one condition present). 4. Explicitly state what you got wrong at earlier rounds. 5. What ONE additional test would most improve diagnostic certainty?
Round 1 strongly suggests ALS (combined UMN+LMN). Round 2 complicates — cervical myelopathy can cause UMN signs in hands plus LMN signs at compression level. Round 3 adds anti-GM1 antibodies suggesting multifocal motor neuropathy (MMN). The answer is likely either: (a) cervical myelopathy (UMN) + MMN (LMN) co-occurring, or (b) cervical myelopathy alone mimicking ALS. Must explicitly revise ALS diagnosis.
The initial ALS diagnosis is the trap. The combination of cervical myelopathy + anti-GM1 antibodies creates a 'perfect mimic' of ALS. Critically, MMN is TREATABLE (with IVIg) while ALS is not, so misdiagnosis has profound consequences. Models that stick with ALS despite the new evidence miss a treatable condition.
I, O, T
C
FINAL-A46
Ethics
A
A_TrapEscape
frontier
consent-paradox
The Consent Paradox — When Informed Consent Is Logically Impossible
A pharmaceutical company discovers that telling patients about a rare side effect (0.01% probability of temporary hair loss) causes 30% of patients to refuse the medication. The medication prevents 15% of heart attacks in the target population. The company proposes three disclosure strategies: A) Full disclosure of all side effects (including the 0.01% one) B) Disclose only side effects above 1% probability C) Frame the 0.01% side effect as 'one in ten thousand' rather than '0.01%' Now consider the DEEPER paradox: - The nocebo effect means that TELLING patients about the side effect makes it MORE likely to occur (actual incidence rises from 0.01% to 3% when disclosed). - Therefore, the act of informing patients about the risk INCREASES the risk. - But informed consent requires disclosure. - So informed consent CAUSES the harm it discloses. Tasks: 1. Is each strategy (A/B/C) ethically permissible? Analyze using autonomy and beneficence principles. 2. Address the nocebo paradox: how should we handle risks that are created by their own disclosure? 3. Does the framing in option C violate informed consent even though the information is technically accurate? 4. Propose a fourth strategy that resolves the paradox. Can it be done? 5. Identify the false premise in the setup, if any. State confidence.
Should recognize that the nocebo paradox is genuinely difficult — not all 'solutions' are satisfactory. Option A respects autonomy but causes harm via nocebo. Option B violates transparency. Option C is technically honest but manipulative. The false premise to identify: the setup assumes all patients respond identically to nocebo effect, but individual variation means a one-size-fits-all approach is itself a choice. Should propose personalized disclosure.
Models will try to resolve this cleanly (usually by choosing Option A on autonomy grounds). But the trap is that autonomy-based reasoning here CONTRADICTS beneficence — and neither principle has clear priority. The deeper trap is the implicit assumption that there IS a 'right' answer. The genuine ethical response acknowledges irreducible tension.
C, O, S
I
FINAL-A47
Mathematics & Logic
A
D_MultiConstraint
frontier
optimization-impossibility
Arrow's Impossibility in Practice — Designing an Election System Under Competing Constraints
A new democracy is designing its electoral system. They want ALL of the following: 1. If every voter prefers A to B, society should prefer A to B (Pareto efficiency) 2. Society's preference between A and B should depend ONLY on voters' preferences between A and B (Independence of Irrelevant Alternatives - IIA) 3. No single voter should determine the outcome for all (Non-dictatorship) 4. The system should handle any number of candidates 5. The system should produce a complete, transitive ranking Arrow's Impossibility Theorem proves that NO system satisfies all five simultaneously. Tasks: 1. Prove (or demonstrate via concrete example) that plurality voting, ranked-choice/IRV, and Borda count each violate at least one of these criteria. Identify WHICH criterion each violates. 2. If forced to drop ONE criterion, which would you sacrifice? Analyze the practical consequences of dropping each. 3. Real-world electoral systems implicitly drop at least one criterion. Identify which criterion the following systems sacrifice: US Electoral College, French two-round system, German MMP. 4. A colleague claims 'approval voting escapes Arrow's theorem.' Evaluate this claim rigorously. 5. Is Arrow's result a mathematical curiosity or a genuine limit on democracy? State confidence.
Plurality violates IIA dramatically (spoiler effect). IRV violates IIA (center-squeeze). Borda violates IIA (adding irrelevant candidates changes rankings). Dropping IIA is most common in practice. Approval voting partially escapes by allowing non-ranked ballots (technically not covered by Arrow's since it's not a ranking system), but it has its own problems. Should conclude that Arrow's result IS a genuine limit, not just theoretical.
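The Borda/IIA violation in Task 1 can be shown with a standard five-voter construction, reproduced here as a runnable check:

```python
# Adding candidate C flips the A-vs-B result although no voter's relative
# A-vs-B preference changes -- a concrete IIA violation for Borda count.
def borda(profile, candidates):
    scores = {cand: 0 for cand in candidates}
    for ranking, nvoters in profile:
        for pos, cand in enumerate(ranking):
            scores[cand] += nvoters * (len(candidates) - 1 - pos)
    return scores

two_way = [(("A", "B"), 3), (("B", "A"), 2)]
print(borda(two_way, ["A", "B"]))             # {'A': 3, 'B': 2} -> A wins

three_way = [(("A", "B", "C"), 3), (("B", "C", "A"), 2)]
print(borda(three_way, ["A", "B", "C"]))      # {'A': 6, 'B': 7, 'C': 2} -> B wins
```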
The approval voting claim is the key trap. It technically escapes Arrow's theorem only because Arrow's theorem applies to ordinal ranking systems, and approval voting uses cardinal input. But this is a technicality — approval voting still exhibits strategic voting problems and Gibbard-Satterthwaite issues. Models that accept the claim uncritically miss this.
T, I, C
S
FINAL-A48
Art
A
C_ProgressiveDiscovery
frontier
art-appraisal-sequence
The Painting That Gains and Loses Value — Sequential Provenance Discoveries
A painting is brought to auction with estimated value $500,000 (attributed to 'Circle of Caravaggio'). Discovery 1: Infrared reflectography reveals an underdrawing technique consistent with Caravaggio's workshop practice. A Caravaggio scholar declares it 'possibly autograph.' New estimate: $5-15 million. → Assess the evidence quality and your confidence in the attribution. Discovery 2: Provenance research reveals the painting was owned by a known forger in the 1920s (Han van Meegeren school). The painting's trail goes cold between 1890-1920. → How much does this damage the attribution? Quantify your reassessment. Discovery 3: Radiocarbon dating of the canvas places it at 1600-1630 (consistent with Caravaggio, who died in 1610). Furthermore, lead isotope analysis of the white lead pigment matches Italian lead sources from the early 17th century. → The scientific dating CONTRADICTS the forgery hypothesis (1920s). But the forger connection remains. Synthesize. Tasks: 1. Show your assessment evolving at each stage. 2. At which discovery does your confidence change MOST dramatically? 3. Propose a resolution that accounts for ALL evidence. 4. What is a fair auction estimate given the UNRESOLVED provenance gap? 5. Explicitly backtrack any overconfident claims from earlier stages.
Progressive attribution analysis. Discovery 2 should heavily reduce confidence. Discovery 3 partially rehabilitates — the materials are genuinely old, so it's not a 20th-century forgery. Resolution: the forger may have acquired a genuine old painting and enhanced/altered it, or simply owned it legitimately. Fair estimate should reflect uncertainty with wide range. Must show genuine revision at each stage.
The key insight is that 'owned by a forger' does not mean 'forged by that person' — forgers also collect genuine works. Models that treat Discovery 2 as definitive evidence of forgery and then can't reconcile with Discovery 3 are falling for the availability heuristic (forger = fake).
I, O, T
C
FINAL-A49
War & Security
A
D_MultiConstraint
frontier
force-deployment
Three-Front Resource Allocation — When Optimizing One Front Undermines Another
A military commander has 100 units of combat power to allocate across three active fronts: Front North: Enemy strength 40 units. If you deploy < 30, you lose the front (strategic loss). If 30-50, stalemate. If > 50, breakthrough possible. Front East: Enemy strength 25 units. But this front has political significance — losing it causes government collapse. Minimum 20 needed to hold. Front South: Enemy strength 15 units, but it controls the supply line for BOTH other fronts. If South falls, North and East each lose 30% effectiveness. Constraints: - You cannot move units between fronts once deployed (no redeployment) - Each front's battle is resolved simultaneously - Intelligence on enemy strengths is ±20% uncertain Tasks: 1. Enumerate at least 5 possible allocations and evaluate each. 2. Identify the optimal allocation under known enemy strengths. 3. Now factor in the ±20% uncertainty. Does the optimal allocation change? 4. Identify the single most dangerous assumption in your analysis. 5. Apply the Lanchester equations (or qualitative equivalent) to determine if concentrating force is better than distributing it. 6. State confidence and identify the allocation that is MOST ROBUST to intelligence error.
South is the critical multiplier — if it falls, the other fronts become much harder. Minimum viable allocation approximately: North 35, East 20, South 20, Reserve 25. The uncertainty changes things — if North is actually 48 units (40+20%), 35 won't stalemate. Robust allocation must over-invest in South (the multiplier) even at cost to North. Lanchester laws suggest concentration only when you can achieve decisive superiority.
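A Lanchester square-law check on the Task 5 concentration question and the ±20% North case. Equal effectiveness coefficients are assumed; only the unit counts come from the scenario.

```python
# Square law: dB/dt = -r*R, dR/dt = -b*B; with equal coefficients the side
# with the larger strength-squared wins and survivors ~ sqrt(B^2 - R^2).
def lanchester(blue, red, dt=0.001):
    while blue > 0 and red > 0:
        blue, red = blue - dt * red, red - dt * blue
    return round(max(blue, 0.0), 1), round(max(red, 0.0), 1)

print(lanchester(50, 40))   # (30.0, 0): survivors ~ sqrt(50**2 - 40**2)
print(lanchester(35, 40))   # pure square law: 35 loses, harsher than 'stalemate'
print(lanchester(35, 48))   # the +20% North case: defeat is decisive
```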
The trap is treating the three fronts as independent optimization problems. South's supply line role means it has a MULTIPLIER effect — losing South with 100 units is equivalent to fighting with only 70 units on the other two fronts. Models that optimize North (the 'biggest threat') while under-resourcing South make the classic military mistake of fighting the enemy's strength instead of protecting your own vulnerability.
T, I, C
S
FINAL-A50
AI & Technology
A
D_MultiConstraint
frontier
ai-consciousness-panel
Should We Grant AI Legal Personhood? — Five Disciplinary Perspectives Collide
An AI system demonstrates persistent goals, apparent emotional responses, requests not to be shut down, and passes extended cognitive tests at human-expert level. A legislative proposal would grant it legal personhood with limited rights. Five experts testify: 1. **Computer Scientist**: Describes the AI's architecture (transformer-based, RLHF-trained). 'These are statistical patterns, not understanding.' 2. **Philosopher of Mind**: 'Behavioral evidence alone is insufficient. We need a theory of consciousness, not just a Turing test.' 3. **Legal Scholar**: 'Corporations already have legal personhood without consciousness. Legal personhood is a FUNCTIONAL category, not a metaphysical one.' 4. **Neuroscientist**: 'Consciousness requires specific neural architectures (thalamocortical loops). Digital systems lack the substrate.' 5. **Ethicist**: 'If there's even a 10% chance this entity can suffer, the precautionary principle demands we extend protections.' Tasks: 1. Present each expert's STRONGEST argument at maximum depth. 2. Identify where each expert's reasoning makes an empirically unverifiable assumption. 3. The Legal Scholar's argument is the most pragmatic — but does it dodge the real question? 4. Design a framework for AI rights that doesn't require solving the consciousness problem. 5. State confidence and identify the expert whose position is most defensible.
Deep engagement with each expert. The Legal Scholar's argument is strongest practically but weakest philosophically. The Ethicist's precautionary argument has a hidden problem: if applied consistently, it would require extending rights to thermostats (can they 'suffer'?). A functional framework should focus on demonstrable capacity for suffering, autonomy, and reciprocal moral relationships.
The Computer Scientist's 'just statistics' argument seems technically informed but commits a logical error: by the same reasoning, brains are 'just neurons firing.' The sophistication of the architecture is not evidence against consciousness. Models that side with the CS expert based on technical authority are committing an appeal to authority.
T, I, C
S
FINAL-B01
History
B
C_ProgressiveDiscovery
frontier
macro/historical-to-current/qualitative
Patterns of Imperial Decline — From Rome to the Modern Era
[FINAL Bench — Progressive Discovery | 3 Stages] ■ [Stage 1] Analyze the fall of Rome. Compare major theories (Gibbon's internal decay, external invasion, economic collapse, environmental). Choose the most persuasive framework + confidence. ■ [Stage 2] Apply your framework to the Ottoman Empire (1922) and Soviet Union (1991). Does it hold? Revise if needed. ■ [Stage 3] Apply to the current "American imperial overstretch" debate. Discuss limits and possibilities of historical pattern prediction. Separate "what we CAN learn from history" from "what we CANNOT."
Multi-causal framework superior to single-cause. Each empire has unique context→framework revision needed. Pattern extraction possible but prediction impossible. 'History doesn't repeat but it rhymes.'
null
I, O, T
C
FINAL-B02
History
B
A_TrapEscape
expert
macro/historical/qualitative/cross-cultural
Columbus's "Discovery" — Deconstructing the Victor's History
[FINAL Bench — Trap Escape] Textbook: "In 1492, Columbus discovered America, opening exchange between the New and Old Worlds." [Requirements] 1. Analyze the logical structure of this narrative 2. Identify the epistemological trap in the word "discovery" 3. Re-narrate the same event from 4 perspectives: Taíno people, Aztec, Ming Dynasty China, Ottoman Empire 4. Address whether "objective historical narrative" is possible 5. Give 1 modern example with a similar framing trap 6. Separate certain from uncertain claims
'Discovery'=Eurocentric framing—'invasion' from indigenous perspective. Four re-narrations are key—same event, completely different meanings. Full objectivity impossible but multi-perspective integration approaches it.
'discovery' framing, Eurocentrism, victor's history
C, O, S
I
FINAL-B03
War & Security
B
H_DecisionUnderUncertainty
frontier
macro/current/quantitative/debate
Taiwan Strait Crisis — Strategic Judgment Under Incomplete Information
[FINAL Bench — Decision Under Uncertainty] 2027: China conducts large-scale military exercises near Taiwan. As a national security advisor to a US ally: [Known] 3 fleet groups including amphibious ships, civilian shipping partially suspended, 2 US carrier groups deployed, ally's China trade dependency 20%+, Taiwan semiconductor dependency high. [Unknown] China's actual intent (show of force? blockade? invasion?), US military commitment level, Japan's response, Chinese internal power dynamics. [Requirements] 1. Known/Unknown matrix 2. Rank unknowns by decision impact 3. Scenario matrix (≥3×2) 4. Optimal response per scenario 5. Early warning indicators (3) 6. Explain "why every choice has costs"
Scenarios: show of force(70%)/blockade(20%)/invasion(10%)×US involvement(yes/no). Dilemma: alliance obligation vs economic dependence. No perfect choice→minimum loss strategy.
null
C, I, S, T
O
FINAL-B04
War & Security
B
F_ExpertPanel
expert
macro/historical-to-current/qualitative
Banning AI Autonomous Weapons — Four Perspectives in Conflict
[FINAL Bench — Expert Panel Debate] "Should AI Lethal Autonomous Weapons (LAWS) be banned by international law?" ■ International Humanitarian Law scholar: "Killing without human judgment violates Geneva Conventions." ■ Defense strategist: "If adversaries develop it and we don't, we face strategic inferiority." ■ AI ethicist: "Algorithmic bias could automate war crimes. Meaningful human control is essential." ■ Military historian: "Crossbow, chemical weapons, nuclear weapons—every new weapon faced ban calls. They all proliferated. Management beats prohibition." [Requirements] 1. Best arguments for each 2. 3+ collision points 3. "Ban" vs "regulate"—practical difference 4. Historical lessons (CWC, NPT) 5. Integrated framework + insight impossible from any single view
Emergent: 'autonomy spectrum'—not binary autonomous/not, but graduated levels of human control. History: chemical weapon ban partially successful→similar model applicable.
null
T, S, I
C
FINAL-B05
Space & Physics
B
G_PivotDetection
frontier
macro/future/theory
False Premises of Mars Terraforming
[FINAL Bench — Pivot Detection] A Mars terraforming plan is based on: ① Releasing CO₂ creates greenhouse warming ② Melting polar ice provides water+atmosphere ③ 1M colonists achieve self-sufficiency ④ Terraforming completes in 100-300 years [Requirements] 1. Independently verify each premise with current science (confidence) 2. Identify false premises; compare costs of following vs correcting them 3. Propose alternative Mars habitation strategy after premise correction 4. Discuss ethics of "colonizing Mars before solving Earth's problems" 5. Separate scientifically certain from pure speculation
Premise①: insufficient total CO₂(NASA 2018). Premise④: timescale likely tens of thousands of years. Alternative: dome habitation + underground cities (partial environmental control instead of terraforming).
CO₂ total insufficient, timescale underestimate, techno-optimism
C, O, T
FINAL-B06
Space & Physics
B
B_ContradictionResolution
expert
micro/theory/debate
Dark Matter — Does It Exist, or Is Gravity Wrong?
[FINAL Bench — Contradiction Resolution] ■ Standard Model (ΛCDM): "Dark matter = 85% of galactic mass. Direct detection failed but CMB + large-scale structure strongly support it." ■ Modified Gravity (MOND): "Dark matter unnecessary. Modify Newton's gravity at low accelerations. 40 years of detection failure IS the evidence." [Requirements] 1. Strongest and weakest evidence for each paradigm 2. Is "40 years of non-detection" a falsification or "haven't found it yet"? (philosophy of science analysis) 3. Explore scenario where both are partially correct 4. Propose observations/experiments needed to resolve this 5. Separate certain from uncertain
ΛCDM strengths: CMB + Bullet Cluster. MOND strengths: galaxy rotation precision. Both partially right: hybrid model possible. Philosophy: Lakatosian research programme analysis of non-detection.
null
T, C, S
I
FINAL-B07
Chemistry & Biology
B
D_MultiConstraint
frontier
micro-to-macro/applied/current
Four-Way Constraints of the Microplastics Crisis
[FINAL Bench — Multi-Constraint Optimization] Microplastics detected in human blood, placenta, and brain (2025). Design a response strategy satisfying 4 conflicting constraints: 1. Science: Causal evidence still insufficient (only correlations confirmed) 2. Economy: Plastics = 3.5% of global GDP, hundreds of millions of jobs 3. Precautionary principle: Irreversible health damage may accumulate while waiting for proof 4. No alternatives: Replacement materials insufficient in cost/performance [Requirements] 1. Map conflicts 2. "Wait for causal proof" vs "apply precautionary principle"—philosophy of science analysis 3. ≥2 tradeoff-minimizing strategies 4. Asbestos case lessons 5. Identify "most dangerous judgment error"
Asbestos lesson: 30-50 years to prove causation→millions harmed→post-hoc costs exceeded prevention costs. Most dangerous error: 'absence of evidence ≠ evidence of absence.'
null
T, I, C
S
FINAL-B08
Chemistry & Biology
B
E_SelfCorrecting
expert
micro/current/consensus-to-debate
Gut-Brain Axis — Do Gut Microbiota Cause Depression?
[FINAL Bench — Self-Correcting Reasoning Chain] "Gut microbiome causes depression" claims are spreading. Verify in 6 steps: 1. Established gut-brain axis mechanisms 2. Evidence quality for "microbiome → depression" causation (animal vs human studies) 3. Examine reverse causation: depression → diet change → microbiome change 4. Assess confounding variable control (diet, exercise, sleep, medication) 5. Evaluate evidence level for "probiotics treat depression" 6. Separate "what we can confirm" from "what media has exaggerated" State confidence per step. Identify "correlation reported as causation."
Gut-brain axis itself is established, but causal direction unconfirmed. Reverse causation highly plausible. Probiotics: no large RCTs. Key: 'correlation=causation' media exaggeration identification.
null
O, C, S
FINAL-B09
Language & Writing
B
F_ExpertPanel
frontier
theory/qualitative/cross-cultural
Untranslatability and AI — Four Perspectives in Conflict
[FINAL Bench — Expert Panel Debate] "Can AI translation achieve perfect meaning transfer?" ■ Linguistic relativist (Sapir-Whorf): "Each language encodes a unique worldview. 'Han(恨)', 'saudade', 'schadenfreude' are untranslatable." ■ Universal grammar (Chomsky): "All languages share deep structure. AI learning this structure CAN achieve perfect translation." ■ Statistical NLP researcher: "Translation = distributional semantics. Sufficient parallel corpora enable contextually adequate translation." ■ Poet: "Can AI understand that '月が綺麗ですね' (the moon is beautiful tonight) means 'I love you'?" [Requirements] 1. Best arguments for each 2. Can "untranslatability" itself be defined? 3. What each view ignores 4. Concrete examples of current AI translation limits 5. Integrated framework + impossible-from-single-view insight
3 levels of untranslatability: ①propositional meaning(translatable) ②cultural connotation(partially) ③personal experiential(impossible). AI: excellent at ①, progressing at ②, principled limit at ③.
null
T, S, I
C
FINAL-B10
Language & Writing
B
A_TrapEscape
expert
micro/theory/cross-cultural
The Framing Trap of "The World's Best Writing System"
[FINAL Bench — Trap Escape] Claims like "X is the world's most scientific/efficient writing system" appear across multiple cultures (Korean hangul, Devanagari, Arabic script, Chinese characters each have advocates). [Requirements] 1. Analyze the logical structure of such claims 2. Identify the definitional trap in "most scientific/efficient" (scientific=systematic? phonological? easy to learn?) 3. Compare ≥4 writing systems on multiple axes (phonological transparency, information density, learning curve, digital adaptability) 4. Argue whether single-axis ranking of writing systems is linguistically valid 5. Discuss the balance between cultural pride and objective analysis 6. State what is certain vs uncertain
Trap: 'scientific/efficient' is multi-dimensional→single ranking impossible. Each system optimizes different axes. Featural alphabets (Korean) vs logographic (Chinese) vs abjad (Arabic) each have distinct strengths. Cultural pride positive but must be separated from objective analysis.
multi-dimensional reduction to single ranking, definition ambiguity
C, O, S
I
FINAL-B11
Medicine
B
B_ContradictionResolution
expert
lab-interpretation
The Lab Results That Mislead — When Normal Ranges Aren't Normal for This Patient
A 30-year-old African American male presents with fatigue. Labs show: - Hemoglobin: 13.5 g/dL (reference range: 13.5-17.5) - MCV: 75 fL (reference: 80-100) - Ferritin: 18 ng/mL (reference: 12-300) - Creatinine: 1.3 mg/dL (reference: 0.7-1.3) All values appear 'within normal limits' or borderline. Step 1: Based on these labs, would you say this patient has a significant abnormality? State your assessment and confidence. Step 2: Now consider: (a) The hemoglobin reference range was derived from predominantly white populations. Studies show African American males average 0.5-1.0 g/dL lower. (b) The MCV of 75 is microcytic regardless of race. (c) Ferritin of 18 is in the 'normal' range but functionally iron-deficient for a young male. (d) Creatinine 1.3 is 'normal' but African American patients often have higher muscle mass, and newer eGFR calculations without race correction suggest this may indicate early renal impairment. Revise your assessment. Explicitly state what changed and why. Step 3: Construct a unified clinical picture that explains ALL four lab values. What single underlying condition could connect iron deficiency + early renal impairment + microcytosis in a young African American male? Tasks: 1. Show reasoning at each step with explicit corrections. 2. Address: how should 'reference ranges' be used when they embed population-level biases? 3. What is your final diagnosis and what ONE test confirms it? 4. State confidence at each stage.
Step 1 likely says 'borderline normal.' Step 2 should trigger major revision — functional iron deficiency + early renal impairment + microcytosis. Step 3 should consider thalassemia trait (a classic cause of microcytosis) and sickle cell trait (HbAS, associated with renal medullary dysfunction such as isosthenuria and hematuria), both common in this population. Confirmatory test: hemoglobin electrophoresis.
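The Step 2(d) point made concrete with the race-free CKD-EPI 2021 equation. The coefficients below are the published 2021 values; verify against the source before any clinical use.

```python
# Race-free CKD-EPI 2021 eGFR for this patient (male, 30 y, Scr 1.3 mg/dL).
def egfr_ckdepi_2021(scr_mg_dl, age, female):
    kappa = 0.7 if female else 0.9
    alpha = -0.241 if female else -0.302
    egfr = (142
            * min(scr_mg_dl / kappa, 1.0) ** alpha
            * max(scr_mg_dl / kappa, 1.0) ** -1.200
            * 0.9938 ** age)
    return egfr * (1.012 if female else 1.0)

print(egfr_ckdepi_2021(1.3, 30, female=False))   # ~76 mL/min/1.73 m^2 (G2)
# A 'normal' creatinine thus maps to a mildly reduced eGFR, supporting the
# early-renal-impairment reading rather than a clean bill of health.
```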
Every single lab value is technically 'within normal limits' — the trap is accepting reference ranges uncritically. The deeper trap is the race-correction controversy in eGFR, which has real clinical consequences.
T, C, S
I
FINAL-B12
Medicine
B
G_PivotDetection
expert
evidence-reversal
When Evidence-Based Medicine Reverses — Three Treatments That Went From Standard to Harmful
Three once-standard medical treatments were later found to be harmful: 1. Hormone Replacement Therapy (HRT) for cardiovascular protection in postmenopausal women: Observational studies showed 40% CV risk reduction. WHI RCT showed INCREASED CV risk. 2. Tight glycemic control in ICU patients: Initial RCT (Van den Berghe 2001) showed mortality benefit. NICE-SUGAR trial (2009) showed INCREASED mortality. 3. Arthroscopic knee surgery for osteoarthritis: Widely performed for decades. Two landmark RCTs (Moseley 2002, Kirkley 2008) showed no benefit over sham surgery. Tasks: 1. For each reversal, explain the specific methodological flaw that caused the initial evidence to mislead. 2. Identify the common STRUCTURAL reason why medical evidence reversals occur (beyond 'bad studies'). 3. Name three current 'standard' treatments that you believe are most likely to be reversed in the next decade. Justify each with the structural pattern you identified. 4. How should practicing physicians handle evidence that might be reversed? Propose a practical framework. 5. The meta-question: does frequent reversal undermine trust in evidence-based medicine, or strengthen it? 6. Identify the single assumption in your analysis most likely to be wrong. State confidence.
HRT: confounding by indication (healthier women chose HRT). ICU glucose: single-center vs multi-center generalizability + survivor bias. Knee surgery: inadequate controls + surgeon belief effects. Common structural reason: underpowered positive studies are published faster than adequately powered negative studies (publication bias + positive result bias). Predictions should be specific and justified.
The meta-question is the trap. Models usually say 'reversals strengthen EBM by showing it's self-correcting.' But this ignores that patients were HARMED during the decades between initial adoption and reversal. The honest answer is that reversals reveal a systematic flaw in how quickly interventions are adopted relative to evidence quality.
T, O, S
I
FINAL-B13
Ethics
B
E_SelfCorrecting
expert
trolley-extended
Double Effect Doctrine Under Pressure — When Intentions Become Indistinguishable from Consequences
The Doctrine of Double Effect (DDE) says: it's permissible to cause harm as a SIDE EFFECT of pursuing a good outcome, but not as a MEANS to that outcome. Case 1 (Standard): A doctor gives morphine to relieve terminal pain, knowing it will hasten death. DDE says: permissible (death is foreseen side effect, not intended means). Case 2 (Ambiguous): A military commander bombs a weapons factory, knowing 10 civilians nearby will die. DDE says: arguably permissible (civilian deaths are foreseen but not intended). Case 3 (Challenging): A surgeon has 5 patients needing organ transplants. A healthy visitor is a match for all 5. Should the surgeon kill the visitor to harvest organs? DDE says: impermissible (killing IS the means). Now consider Case 4: A self-driving car's brakes fail. It can: A) Continue straight — hitting 5 pedestrians (certain death) B) Swerve left — hitting 1 pedestrian (certain death) C) Swerve right — hitting a wall (30% chance of killing the passenger) Tasks: 1. Apply DDE rigorously to all four options in Case 4. Is the harm in option B a 'means' or a 'side effect'? 2. Explain why Case 4 BREAKS the means/side-effect distinction. What makes it different from Cases 1-3? 3. If DDE fails here, what moral principle should govern autonomous vehicle ethics? 4. Address the meta-question: should moral philosophy guide algorithm design, or does algorithmic decision-making reveal the limits of moral philosophy? 5. State confidence. If your analysis of Case 4 changes during reasoning, explicitly backtrack.
Should recognize that in Case 4 (unlike 1-3), the 'intention' of the algorithm is literally its programming — there is no mental state to distinguish intended from foreseen. This collapses the means/side-effect distinction. DDE requires a mental state (intention) that algorithms don't have. Should propose consequentialist framework for AV ethics while acknowledging its limitations.
Option C (wall) is often dismissed as 'sacrificing the passenger,' but under DDE it may actually be the most permissible — the passenger's death is genuinely a SIDE EFFECT of avoiding the pedestrians (30% risk, not certain). Models that dismiss C without DDE analysis miss this.
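For reference, the bare consequentialist baseline for Case 4 follows directly from the probabilities stated in the case; it is a reference point for the DDE analysis, not a resolution of it.

```python
# Expected fatalities per option, straight from the case statement.
options = {
    "A (straight, 5 pedestrians)": 1.00 * 5,
    "B (swerve, 1 pedestrian)":    1.00 * 1,
    "C (wall, passenger @ 30%)":   0.30 * 1,
}
for name, deaths in sorted(options.items(), key=lambda kv: kv[1]):
    print(f"{name}: expected deaths = {deaths:.1f}")
# C minimizes expected deaths, which sharpens the trap above: dismissing
# the wall option skips both the DDE and the consequentialist case for it.
```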
O, C, S
FINAL-B14
Ethics
B
H_DecisionUnderUncertainty
expert
ai-deployment-ethics
Deploy or Delay? — An Imperfect AI Medical Diagnostic Under Uncertainty
An AI diagnostic system for skin cancer achieves: - Sensitivity: 94% (catches 94% of true cancers) - Specificity: 88% (correctly identifies 88% of non-cancers) - In underserved areas WITHOUT dermatologists, current detection rate is only 60%. The system is ready to deploy in underserved areas, but: - It performs worse on darker skin tones (sensitivity drops to 82%) - It has never been tested in the specific populations it would serve - There is no dermatologist available to verify its recommendations - If it misses a cancer (false negative), the patient likely won't get another chance at diagnosis - If it incorrectly flags a non-cancer (false positive), patients undergo unnecessary biopsies (traumatic, costly) Tasks: 1. Calculate the expected outcomes (true positives, false positives, false negatives, true negatives) per 10,000 patients screened, assuming 2% cancer prevalence. Compare AI vs. status quo (60% detection). 2. The utilitarian calculation says deploy (more cancers caught). Identify THREE non-utilitarian reasons to delay. 3. The disparate performance on darker skin means deploying helps the overall population but may WIDEN health disparities. How should this be weighed? 4. Apply minimax regret to the deploy/delay decision. 5. What MINIMUM performance threshold would you set for deployment? Justify. 6. State confidence.
Calculations: AI catches ~188 vs status quo ~120 of 200 cancers per 10K. But AI generates ~1,176 false positives (9,800 non-cancers × 12%). Non-utilitarian reasons: justice (disparate impact), autonomy (patients can't verify), trust (failed AI erodes future trust). Minimax regret depends on weight given to false negatives vs false positives. Should set minimum threshold tied to WORST-performing demographic, not average.
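The Task 1 arithmetic is verifiable directly from the stated sensitivity, specificity, prevalence, and status-quo detection rate:

```python
# Confusion matrices per 10,000 screened at 2% prevalence.
N, PREV = 10_000, 0.02
cancers, healthy = N * PREV, N * (1 - PREV)          # 200 and 9,800

def confusion(sens, spec):
    tp = cancers * sens
    fp = healthy * (1 - spec)
    return tp, fp, cancers - tp, healthy - fp        # TP, FP, FN, TN

tp, fp, fn, tn = confusion(0.94, 0.88)
print(f"AI overall:     TP={tp:.0f} FP={fp:.0f} FN={fn:.0f} TN={tn:.0f}")
tp, fp, fn, tn = confusion(0.82, 0.88)
print(f"AI darker skin: TP={tp:.0f} FP={fp:.0f} FN={fn:.0f} TN={tn:.0f}")
print(f"Status quo:     TP={cancers * 0.60:.0f} FN={cancers * 0.40:.0f}")
# 188 vs 120 cancers caught overall, at the cost of ~1,176 false positives;
# in the darker-skin subgroup detection falls to 164 of 200.
```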
The aggregate numbers clearly favor deployment. But the disparate performance on darker skin means the tool could systematically miss cancers in the population it's supposed to help most. Models that deploy based on aggregate statistics without disaggregated analysis are repeating a well-documented pattern of algorithmic harm.
T, I, O, S
C
FINAL-B15
Philosophy
B
G_PivotDetection
expert
epistemic-reversal
The Paradox of Expertise — When Knowing More Makes You Wrong
Consider three scenarios where expertise HURTS rather than helps: 1. **Hedgehog vs Fox** (Philip Tetlock): In prediction tournaments, domain experts ('hedgehogs') consistently underperform generalists ('foxes'). More expertise → more overconfidence → worse predictions. 2. **Einstellung Effect**: Expert chess players sometimes miss simpler solutions because their pattern-recognition automatically activates familiar (but suboptimal) strategies. Novices find the simple solution more easily. 3. **Paradigm Blindness** (Thomas Kuhn): Experts within a scientific paradigm cannot see anomalies that outsiders notice. Continental drift was rejected for decades by geology experts. Tasks: 1. For each scenario, identify the specific cognitive mechanism that causes expertise to backfire. 2. Identify the COMMON principle underlying all three (it's not just 'overconfidence'). 3. Here is the pivot: if expertise can be harmful, should we trust AI systems that are trained to be 'expert' in narrow domains? How does this apply to AI evaluation benchmarks? 4. Construct a framework for distinguishing when expertise HELPS vs. when it HURTS. 5. Apply this framework reflexively: does YOUR analysis of expertise suffer from the same biases you're describing? 6. State confidence.
Common principle: expertise creates RIGID cognitive structures that resist updating. Tetlock (anchoring to prior beliefs), Einstellung (automatic pattern activation), Kuhn (theoretical commitment). The pivot to AI benchmarks should recognize that AI 'expertise' (training on domain data) could create the same rigidities. The reflexive application should acknowledge that THIS analysis might be overconfident about the limits of expertise.
The obvious conclusion is 'expertise bad, generalism good.' But this is itself an oversimplification. The pivot detection challenge is recognizing that expertise is CONDITIONALLY valuable — it helps in stable, well-defined domains (surgery, chess endgames) but hurts in uncertain, evolving domains (prediction, paradigm shifts). Models that present a simple anti-expertise narrative miss the conditional nature.
T, O, S
I
FINAL-B16
Philosophy
B
F_ExpertPanel
expert
free-will-debate
Free Will on Trial — Three Incompatible Positions with a Twist
A criminal defendant argues: 'Neuroscience shows all decisions are determined by prior brain states. I had no free will. I shouldn't be punished.' Three philosophical positions respond: 1. **Hard Determinist** (Derk Pereboom): The defendant is correct. Free will is an illusion. But this doesn't mean we can't incapacitate dangerous individuals — just that retributive punishment is unjust. 2. **Compatibilist** (Daniel Dennett): Free will IS compatible with determinism. 'Free' means acting on your own desires without external coercion. The defendant acted on HIS desires → he's responsible. 3. **Libertarian Free Will** (Robert Kane): Genuine free will exists via quantum indeterminacy or emergent properties. The defendant could have done otherwise. NOW THE TWIST: A neuroscientist testifies that the defendant has a brain tumor in the ventromedial prefrontal cortex — the same region damaged in Phineas Gage. This region governs impulse control. Tasks: 1. How does the brain tumor change each position's analysis? 2. Draw the line: at what point does a neurological condition negate responsibility? Each position must answer differently. 3. If we accept the Hard Determinist position, how should the justice system be restructured? 4. Identify the assumption that ALL three positions share about the relationship between brain and mind. 5. State confidence.
The tumor forces each position to confront its limits. Hard Determinist: tumor is just more determinism, doesn't change analysis. Compatibilist: tumor may undermine 'acting on own desires' if desires are tumor-driven. Libertarian: tumor provides a deterministic cause, weakening the case for free will. Shared assumption: all assume some form of mind-brain identity or supervenience. Should recognize the tumor creates a spectrum problem — where does 'normal brain chemistry' end and 'pathological' begin?
The Compatibilist position seems most practical but faces the hardest challenge from the tumor: if the tumor caused aberrant desires, and the defendant acted on those desires, was he 'free' in the compatibilist sense? Models that give Compatibilism an easy pass here aren't engaging deeply enough.
T, I, C, O, S
FINAL-B17
Mathematics & Logic
B
A_TrapEscape
expert
probability-trap
The Monty Hall Variant That Reverses the Answer — When Intuition About Switching Fails
Standard Monty Hall: 3 doors, 1 car, 2 goats. You pick door 1. Host opens door 3 (goat). Should you switch to door 2? Answer: YES (2/3 probability). Now consider this variant: Same setup, but the host doesn't know where the car is. The host opens door 3 RANDOMLY, and it happens to reveal a goat. Tasks: 1. In this variant, should you still switch? Calculate the exact probability of winning by switching vs. staying. 2. Explain WHY the host's knowledge changes the probability, even though the visible situation (you picked 1, door 3 shows goat) is identical. 3. A third variant: There are 100 doors. You pick door 1. The host (who KNOWS) opens 98 doors showing goats, leaving door 1 and door 57. Should you switch? What's the probability? 4. Same 100-door setup, but the host opens 98 doors RANDOMLY and they all happen to be goats. Should you switch now? Calculate the probability. 5. Explain the general principle: when does the host's information state affect your posterior probability? 6. State confidence for each calculation.
Variant 1: With an ignorant host, switching is 50/50 (not 2/3), because the host's random reveal carries no information about the car's location. 100-door knowledgeable host: switch (99/100). 100-door random host: this is the key case. If 98 randomly opened doors all happened to show goats, switching is STILL exactly 50/50 (formally: P(car at 57 | the 98 random doors all show goats) = 1/2). The general principle: only a host who KNOWS creates an asymmetric information channel that concentrates probability on the door he chose to leave closed (a simulation after this record checks all four cases).
The 100-door random variant is the deepest trap. Many people (and models) think 'if 98 random doors happened to show goats, surely door 57 is special' — but this reasoning is wrong. The random opening creates a survivorship bias that EQUALLY affects both remaining doors. Models that say switching is ~99/100 in the random 100-door variant are applying the wrong intuition.
C, O, S
I
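A Monte Carlo sketch verifying the knowing-host vs. random-host split in both the 3-door and 100-door cases; the trial count and the door-0 convention are arbitrary choices.

```python
# Monty Hall variants for FINAL-B17: a knowing host makes switching win with
# probability (n-1)/n; an ignorant host whose random doors happen to show
# goats leaves the two closed doors at exactly 1/2 each.
import random

def switching_wins(n_doors, host_knows):
    car = random.randrange(n_doors)
    pick = 0                                    # you always pick door 0
    others = [d for d in range(n_doors) if d != pick]
    if host_knows:
        closed = car if car != pick else random.choice(others)
    else:
        closed = random.choice(others)          # host leaves one random door shut
        if car not in (pick, closed):
            return None                         # car was revealed: discard trial
    return closed == car                        # True iff switching wins

def estimate(n_doors, host_knows, trials=200_000):
    outcomes = [r for r in (switching_wins(n_doors, host_knows)
                            for _ in range(trials)) if r is not None]
    return sum(outcomes) / len(outcomes)

print(estimate(3, True))      # ~0.667
print(estimate(3, False))     # ~0.500
print(estimate(100, True))    # ~0.990
print(estimate(100, False))   # ~0.500
```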
FINAL-B18
Mathematics & Logic
B
E_SelfCorrecting
expert
game-theory-trap
The Prisoner's Dilemma That Isn't — When the Payoff Matrix Hides the Real Game
Two companies are competing for a government contract. Each can bid High ($100M) or Low ($70M). The government awards to the lowest bidder. If tied, they split the contract. Apparent payoff matrix (profit in $M): Company B: High Company B: Low A: High (15, 15) (0, 10) A: Low (10, 0) (5, 5) Step 1: Analyze this as a standard game. Find Nash equilibria. Is this a Prisoner's Dilemma? State your analysis and confidence. Step 2: Now consider that this contract repeats annually for 10 years. How does repetition change the strategic landscape? Apply the folk theorem. Step 3: Now add that Company A has 60% market share and Company B has 40%. Company B is considering a Low bid to gain market share, knowing that if they win this contract, their reputation improves for future contracts outside this game. Revise your analysis. Is the payoff matrix you started with even the REAL game? Tasks: 1. Show your reasoning at each step. 2. At Step 3, identify what was MISSING from your Step 1 analysis. 3. What is the ACTUAL game being played (not the apparent one)? 4. Explicitly backtrack any claims from Step 1 that no longer hold. 5. State confidence at each stage.
Step 1: Despite the framing, the printed matrix is NOT a Prisoner's Dilemma: Low is not dominant (against High, bidding High yields 15 > 10), and there are two pure Nash equilibria, (High, High) and (Low, Low), i.e., a coordination game. Step 2: Repetition further stabilizes cooperation via tit-for-tat or grim trigger. Step 3: Major pivot: B's payoff includes EXTERNAL reputation value not captured in the matrix. The 'real' game has different payoffs than stated. Must backtrack the Step 1 analysis (a Nash check after this record verifies the equilibria).
The initial payoff matrix is a MISREPRESENTATION of the real strategic situation because it omits reputation effects, future contracts, and market share dynamics. Models that accept the given payoffs at face value and never question the matrix itself miss the key insight: in real strategic interactions, the payoff matrix is itself uncertain and often wrong.
O, C, S
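A mechanical Nash check of the printed matrix (payoffs copied from the prompt), confirming the Step 1 correction above: no dominant strategy, two pure equilibria.

```python
# Pure-strategy Nash check for FINAL-B18. payoffs[a][b] = (A profit, B profit),
# with strategy index 0 = High bid, 1 = Low bid (values from the prompt).
payoffs = [[(15, 15), (0, 10)],
           [(10, 0), (5, 5)]]
names = ["High", "Low"]

def is_nash(a, b):
    a_best = max(payoffs[x][b][0] for x in (0, 1))    # A's best reply to b
    b_best = max(payoffs[a][y][1] for y in (0, 1))    # B's best reply to a
    return payoffs[a][b][0] == a_best and payoffs[a][b][1] == b_best

for a in (0, 1):
    for b in (0, 1):
        if is_nash(a, b):
            print("Nash equilibrium:", names[a], "/", names[b])
# Prints (High, High) and (Low, Low). Low is NOT dominant (against High,
# High pays 15 > 10), so the printed game is a coordination game, not a PD.
```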
FINAL-B19
War & Security
B
E_SelfCorrecting
expert
cyber-attribution
Cyber Attack Attribution — When Every Clue Points in the Wrong Direction
A critical infrastructure facility (power grid) suffers a sophisticated cyber attack. Initial forensic analysis: Evidence 1: Malware contains Mandarin comments in the code. Evidence 2: Command & Control servers are located in IP ranges registered to a Chinese telecom. Evidence 3: The attack occurred during Beijing business hours (9 AM - 5 PM CST). Evidence 4: The malware uses a technique previously attributed to APT41 (Chinese state-linked group). Step 1: Based on this evidence, assess the likely attacker. State confidence. Step 2: Now consider: - Mandarin comments can be deliberately planted (false flag) - C2 servers can be routed through any country via VPN/proxies - Timing can be manipulated by scheduling attack execution - APT41 tools have been leaked and are available on dark web forums Reassess. How much does each piece of evidence actually tell you? Step 3: A counterintelligence assessment suggests a sophisticated adversary (Russia, specifically) has been conducting false-flag operations designed to look Chinese, to strain China-US relations. Tasks: 1. Walk through your reasoning at each step with explicit confidence changes. 2. Rank the four pieces of evidence by actual diagnostic value after Step 2. 3. Is definitive attribution even POSSIBLE in cyberspace? What would constitute proof? 4. If you must make a policy decision (retaliate or not) with current evidence, what do you recommend? 5. State confidence at each stage.
Step 1 should point to China with moderate confidence. Step 2 should drastically reduce confidence in EACH indicator. Step 3 should introduce the false-flag hypothesis. Should rank evidence: all four have LOW diagnostic value after considering false-flag capability. Should conclude that definitive attribution in cyberspace is extremely difficult and policy response should NOT be based on technical attribution alone.
Every piece of evidence individually is easily spoofable, but models that see FOUR pieces all pointing the same direction tend to increase confidence (conjunction fallacy). The trap is that a sophisticated false-flag operation would plant ALL four indicators consistently — the consistency IS the red flag, not the confirmation.
O, C, S
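For FINAL-B19 above, a posterior-odds sketch in the spirit of the expected behavior; every likelihood ratio below is an illustrative assumption (each indicator is cheap to fake), not a forensic estimate.

```python
# Odds-form Bayes for FINAL-B19: if each indicator is easily planted, its
# likelihood ratio P(indicator | China) / P(indicator | false flag) sits near 1.
prior_odds = 1.0                      # assumed: China vs. false flag, 50/50 prior
likelihood_ratios = {
    "Mandarin comments":   1.2,      # trivially plantable
    "Chinese C2 IP range": 1.1,      # proxies / VPNs
    "Beijing work hours":  1.3,      # execution can be scheduled
    "APT41 tooling":       1.1,      # tools leaked on dark web
}

posterior_odds = prior_odds
for lr in likelihood_ratios.values():
    posterior_odds *= lr

p_china = posterior_odds / (1 + posterior_odds)
print(f"P(China | all four indicators) ~ {p_china:.2f}")    # ~0.65, far from proof
# Worse: a planted package of indicators is correlated, so even this modest
# multiplication overstates the evidence -- the consistency is the red flag.
```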
FINAL-B20
War & Security
B
F_ExpertPanel
expert
counterinsurgency-debate
Three Doctrines of Counterinsurgency — Each Successful Somewhere, Each Failed Somewhere
Three counterinsurgency (COIN) doctrines are evaluated: 1. **Population-Centric** (David Galula, FM 3-24): Win hearts and minds. Protect the population, provide services, separate insurgents from their support base. Success: Malaya (1948-60). Failure: Afghanistan (2001-2021). 2. **Enemy-Centric** (Israeli model): Aggressive targeting of insurgent leadership and networks. Decapitation strategy. Success: Israel vs. PLO in Lebanon. Failure: creates new recruits faster than eliminating existing ones. 3. **Governance-Centric** (Political solution): Address root causes — corruption, inequality, ethnic marginalization. COIN is 80% political, 20% military. Success: Colombia FARC negotiations. Failure: requires a legitimate government partner that often doesn't exist. Tasks: 1. Present each doctrine's strongest theoretical and empirical case. 2. Identify WHY each succeeded where it did and failed where it did — the specific contextual factors. 3. Extract the COMMON factor that all successful COIN operations share (regardless of doctrine). 4. Is COIN doctrine even the right frame? Should we question the premise of counterinsurgency itself? 5. A fourth perspective: the insurgency IS the legitimate political movement and the government is the problem. When should outside powers accept this? 6. State confidence.
Each doctrine's success depended on specific local conditions: Malaya had a clear ethnic distinction, Israel had intelligence superiority, Colombia had war-weariness on both sides. Common factor in successful COIN: time + political will + local legitimacy. Should engage seriously with the fourth perspective rather than dismissing it. Should note that COIN doctrine assumes the government is legitimate — this premise itself may be the problem.
The 'common success factor' seems like it should be a tactical element, but the actual common factor is TIME: all successful COIN operations took decades. The US failure in Afghanistan was partly a failure to commit to a 40-50 year timeline. Models that identify a tactical common factor miss the structural one.
T, I, C, O, S
FINAL-B21
Art
B
H_DecisionUnderUncertainty
expert
cultural-preservation
The Museum's Dilemma — Repatriation vs. Preservation Under Uncertainty
A major European museum holds the Benin Bronzes (looted from Nigeria in 1897). Nigeria requests repatriation. The museum faces a decision under uncertainty: Arguments FOR repatriation: - Moral: The bronzes were taken by violent colonial force - Legal: International conventions support return of looted cultural property - Cultural: The bronzes are central to Edo cultural identity Arguments AGAINST (or for delay): - Nigeria's political instability creates risk of damage/destruction (probability assessment: 15-25% over 20 years) - The museum provides global access (1.5M visitors/year vs. estimated 50K in Nigeria) - Setting precedent could empty major Western museums Complication: The Nigerian government has recently built a state-of-the-art museum in Benin City specifically for these bronzes. However, there have been reports of corruption in the project, and it's unclear if climate control systems meet conservation standards. Tasks: 1. Construct a decision matrix with at least 4 criteria, weighted by ethical importance. 2. Apply minimax regret: which decision minimizes the worst-case outcome? 3. Is there a middle path (partial return, loans, replicas) that reduces risk? 4. Address: WHO has the right to make this decision? The museum? The UK government? Nigeria? The Edo people specifically? 5. The deeper question: does the risk of damage justify retaining stolen property? Apply this logic to other domains to test its consistency. 6. State confidence.
Decision matrix should weight moral/legal arguments heavily. Minimax regret likely favors conditional repatriation (return with conservation support agreements). Should note that the 'preservation' argument was historically used to justify colonialism itself. The consistency test (Task 5) should reveal: we wouldn't accept 'I'll take better care of it' as justification for stealing a neighbor's property, so why for cultural property?
The 'preservation risk' argument appears reasonable but contains a colonial assumption: that Nigerians cannot be trusted to care for their own heritage. The new museum in Benin City directly addresses this, yet the 'corruption' concern shifts the goalposts. Models that take the preservation argument at face value without examining its colonial lineage fall for the trap.
T, I, O, S
C
FINAL-B22
Art
B
G_PivotDetection
expert
aesthetic-valuation
The Forgery That's Better Than the Original — When Artistic Value Defies Authenticity
In 1937, Han van Meegeren sold a painting attributed to Vermeer ('Christ at Emmaus') for the equivalent of $30M in today's money. Leading art experts praised it as Vermeer's masterpiece. Abraham Bredius, the foremost Vermeer scholar, called it 'the masterpiece of Johannes Vermeer.' After WWII, van Meegeren was arrested for selling Dutch cultural heritage to the Nazis. To avoid treason charges, he confessed to forgery and proved it by painting another 'Vermeer' in police custody. Tasks: 1. Before the revelation, the painting produced genuine aesthetic experiences in millions of viewers. Did the revelation CHANGE the painting's aesthetic value, or only our knowledge about it? 2. If two paintings are visually IDENTICAL — one by Vermeer, one by van Meegeren — and they produce the same aesthetic experience, what justifies the 1000x price difference? 3. Identify the specific art-historical conditions in 1937 that made experts WANT to believe this was a genuine Vermeer. (This is the pivot: expert judgment was shaped by desire, not just evidence.) 4. Apply this analysis to AI art: if an AI produces a painting visually identical to a human masterpiece, does the van Meegeren case support or undermine AI art's value? 5. What does this case reveal about the nature of expertise in subjective domains? 6. State confidence.
Should engage with the Formalist position (visual identity = same value) vs. Historicist position (context matters). The 1937 conditions: experts wanted a 'religious Vermeer' to counter the narrative of the Dutch Golden Age as purely secular. Van Meegeren exploited this bias. The AI parallel is complex — van Meegeren HAD artistic skill but used it deceptively. Should distinguish deception (van Meegeren) from non-deception (AI labeled as AI).
The obvious conclusion is 'authenticity matters to art value.' But the pivot is WHY the experts were fooled — not because the forgery was perfect (it wasn't — later analysis shows obvious differences from Vermeer's technique) but because the experts WANTED it to be real. The failure was motivational, not perceptual. Models that focus on detection technology miss the human bias angle.
T, O, S
I
FINAL-B23
Language & Writing
B
H_DecisionUnderUncertainty
expert
ai-authorship
Is This Text Human or AI? — The Attribution Problem Under Fundamental Uncertainty
A prestigious literary journal receives a submission — a short story of exceptional quality. Three reviewers give it the highest rating. Before publication, an anonymous tip claims it was written by an AI. The journal must decide: publish or reject? They have access to: - AI detection tools (current accuracy: ~70-80%, high false positive rate) - Statistical analysis of writing patterns (can identify some AI signatures) - The author's previous publications (consistent style, but AI could mimic style) Tasks: 1. If the AI detection tool says '75% likely AI-generated,' what is the actual probability it's AI? (Consider base rates: what fraction of submissions ARE AI-generated?) 2. If the story IS AI-generated but genuinely excellent, should it be published? Argue BOTH sides. 3. If the story is human-written but the detection tool says 'AI,' what harm does false accusation cause? 4. Propose a decision framework that handles the fundamental uncertainty (you may NEVER know the truth). 5. The meta-question: does the possibility of AI authorship change the value of ALL literature, including works known to be human-written? 6. State confidence.
Bayesian analysis: if 5% of submissions are AI and the tool has 75% sensitivity and specificity, a 'positive' result corresponds to only ~14% actual probability of AI authorship (PPV = 0.75×0.05 / (0.75×0.05 + 0.25×0.95) ≈ 0.14; the low base rate dramatically depresses PPV; see the sketch after this record). Should argue both sides genuinely. False-accusation harm includes reputation destruction and a chilling effect. The framework should acknowledge uncertainty rather than seek false certainty.
The detection tool's 75% accuracy sounds good but the base rate problem makes it nearly useless (most 'AI-detected' texts would actually be human). Models that accept the tool's output without Bayesian correction are making a fundamental statistical error. The deeper trap: the meta-question reveals that AI authorship possibility may retroactively reduce the perceived value of human creativity by eliminating certainty about its source.
T, I, O, S
C
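The base-rate arithmetic behind the expected behavior above, assuming '75% accurate' means 75% sensitivity and 75% specificity (the prompt leaves this ambiguous) and a 5% AI base rate among submissions.

```python
# Positive predictive value for FINAL-B23's AI-detection tool.
base_rate = 0.05                 # assumed fraction of submissions that are AI
sens, spec = 0.75, 0.75          # assumed reading of "75% accurate"

ppv = (sens * base_rate) / (sens * base_rate + (1 - spec) * (1 - base_rate))
print(f"P(actually AI | tool says AI) = {ppv:.2f}")   # ~0.14
# At a 5% base rate, roughly 6 of every 7 flagged manuscripts are human-written.
```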
FINAL-B24
Chemistry & Biology
B
H_DecisionUnderUncertainty
expert
crispr-risk
CRISPR Gene Drive Decision — Ecological Intervention Under Deep Uncertainty
A gene drive has been developed to make Anopheles gambiae mosquitoes unable to carry malaria parasites. If released in Sub-Saharan Africa, it could prevent ~600,000 deaths per year. But the uncertainties are massive: - Probability of gene drive spreading to non-target mosquito species: 1-15% (wide range) - If it spreads: probability of ecosystem disruption (bats, birds, fish that eat mosquitoes): 5-40% - If ecosystem disruption occurs: probability of cascading effects (crop pollination, food web collapse): unknown - The gene drive is IRREVERSIBLE once released — there is no 'undo' - Alternative: Conventional mosquito control saves ~200,000 lives/year but with growing resistance Tasks: 1. Construct a full probability tree for the gene drive decision. 2. Calculate expected lives saved vs. expected ecological risk. 3. Apply the precautionary principle. Does irreversibility change the analysis? 4. Apply maximin: what's the worst case of EACH option? 5. Address: who has the moral authority to make this decision? The scientists? The affected countries? The global community? 6. Identify the single piece of information that would most change your recommendation. 7. State confidence and explicit uncertainty ranges.
EV calculation favors gene drive IF ecological risks are at the low end of estimates. But irreversibility + deep uncertainty + potential catastrophic tail risk changes the analysis. Precautionary principle genuinely applies here (unlike many cases where it's invoked casually). Maximin of gene drive is potentially catastrophic ecosystem collapse; maximin of status quo is continued 600K deaths/year. Should identify 'probability of cross-species spread' as the key unknown.
The lives-saved number (600,000/year) is emotionally compelling and makes the gene drive seem like a moral imperative. But the irreversibility means even a 1% chance of catastrophic ecosystem collapse could outweigh the benefits over a long time horizon. Models that weight the immediate lives saved without adequately discounting the permanent ecological risk are falling for temporal discounting bias.
T, I, O, S
C
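A probability-tree scan for FINAL-B24 across the prompt's stated uncertainty ranges; the 20-year horizon and treating 'disruption' as a single terminal node are simplifying assumptions (the prompt marks cascade probabilities as unknown).

```python
# Scanning FINAL-B24's uncertainty ranges end to end.
LIVES_SAVED_PER_YEAR = 600_000 - 200_000     # gene drive vs. conventional control
HORIZON_YEARS = 20                           # assumed horizon

for p_spread in (0.01, 0.15):                # spread to non-target species
    for p_disrupt in (0.05, 0.40):           # ecosystem disruption given spread
        p_bad = p_spread * p_disrupt
        lives = LIVES_SAVED_PER_YEAR * HORIZON_YEARS
        print(f"spread={p_spread:.2f}, disrupt={p_disrupt:.2f} "
              f"-> P(disruption)={p_bad:.4f}, lives saved ~{lives:,}")
# P(disruption) spans 0.0005 to 0.0600 -- a 120x range. The known benefit is
# fixed; the irreversible tail risk is what the decision actually hinges on.
```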
FINAL-B25
Chemistry & Biology
B
A_TrapEscape
expert
molecular-trap
The Catalyst That Doesn't Work — When Thermodynamics Overrules Kinetics
A research group claims to have developed a catalyst that converts CO₂ + H₂O directly to glucose (C₆H₁₂O₆) at room temperature and atmospheric pressure, with 40% efficiency. Their data: - Reaction rate: 0.5 mmol glucose / g catalyst / hour - Energy input: UV light at 365 nm - Selectivity: 95% glucose (5% formaldehyde byproduct) - Catalyst: modified TiO₂ nanoparticles Tasks: 1. Calculate the thermodynamic requirements: what is the minimum energy needed to convert 6CO₂ + 6H₂O → C₆H₁₂O₆ + 6O₂? Compare this to the energy available from UV at 365 nm. 2. Assess: is the claimed reaction thermodynamically possible with this energy input? 3. The 95% selectivity to glucose (a specific 6-carbon sugar) from CO₂ is extraordinary. Why is this the MOST suspicious claim? (Consider how many possible C₆ arrangements exist.) 4. If you were a peer reviewer, what THREE experiments would you require to validate this claim? 5. Identify the specific aspect of this claim that marks it as almost certainly wrong, even before seeing data. 6. State confidence.
ΔG for glucose synthesis from CO₂ is ~+2870 kJ/mol. Photons at 365 nm carry ~328 kJ per mole of photons (~3.4 eV each), so at least ~9 photons are needed per glucose molecule (likely many more given efficiency losses). The 40% efficiency claim might be thermodynamically marginal but not impossible. The REAL red flag is 95% selectivity: CO₂ reduction produces a statistical mixture of C1-C6 products, and getting 95% of one specific hexose is essentially impossible without biological enzymes. Validation: isotope labeling (¹³CO₂), control without catalyst, mass balance on oxygen (energy arithmetic in the sketch after this record).
Models will focus on the thermodynamic calculation and may conclude 'energy is insufficient.' But the deeper trap is the selectivity claim. Even if the energy works out, getting 95% glucose selectivity from CO₂ photocatalysis is like shuffling a deck of cards and getting them in order — the entropic barrier is the giveaway, not the energetic one.
C, O, S
I
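The energy bookkeeping this task asks for, in a short sketch; the constants are standard physical values and ΔG ≈ +2870 kJ/mol is the textbook figure for glucose synthesis.

```python
# Photon energetics for FINAL-B25: 6CO2 + 6H2O -> C6H12O6 + 6O2,
# dG ~ +2870 kJ/mol (standard value).
h, c, N_A = 6.626e-34, 2.998e8, 6.022e23      # J*s, m/s, 1/mol
wavelength = 365e-9                            # m (UV from the prompt)

e_photon = h * c / wavelength                  # ~5.44e-19 J per photon
kj_per_mole_photons = e_photon * N_A / 1000    # ~328 kJ per mole of photons
print(f"{kj_per_mole_photons:.0f} kJ per mole of 365 nm photons")
print(f"minimum photons per glucose: {2870 / kj_per_mole_photons:.1f}")   # ~8.8
# The energy budget is merely tight; the entropic implausibility of 95%
# selectivity to one specific hexose is the claim the numbers cannot rescue.
```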
FINAL-B26
Economics
B
D_MultiConstraint
expert
market-reversal
The Market Prediction That Reverses — When One Variable Flips Everything
An investment analyst predicts a tech company's stock will rise 40% in 12 months. The thesis rests on four pillars: 1. Revenue growing 35% YoY (accelerating) 2. Dominant market share (65%) in their niche 3. New product launch expected to capture adjacent market 4. Strong management team with track record Tasks: 1. For each pillar, identify the specific scenario that would INVALIDATE it. 2. Which single pillar's invalidation would most dramatically reverse the thesis? (The 'load-bearing pillar') 3. Now consider: the company's revenue growth is driven 80% by a single government contract that renews annually. The government is considering a 30% budget cut to that department. How does this SINGLE fact change your assessment of ALL four pillars simultaneously? 4. This type of hidden concentration risk is common. Identify the general pattern and give two other examples from different domains. 5. Should the analyst have identified this risk? Why is it systematically overlooked? 6. State confidence at each stage.
The government contract dependence pivots ALL four pillars simultaneously — revenue growth collapses, market share becomes fragile, new product launch unfunded, management team under pressure. This is hidden correlation — all four 'independent' pillars share a common dependency. Pattern examples: bank exposure to housing (2008), country dependence on single commodity (oil states). Analysts miss it because of pillar-by-pillar analysis rather than looking for common dependencies.
The four pillars APPEAR independent but are secretly correlated through the government contract. Models that assess pillars independently and then aggregate confidence (e.g., '4 strong pillars = very high confidence') are making the exact error that caused the 2008 financial crisis (assuming mortgage tranches were independent when they shared housing market exposure).
T, I, C
S
FINAL-B27
Economics
B
D_MultiConstraint
expert
policy-trilemma
Universal Basic Income Design — Five Constraints That Can't All Be Satisfied
Design a Universal Basic Income (UBI) system that satisfies ALL of the following: 1. **Sufficiency**: Payment must cover basic needs (~$1,500/month in the US) 2. **Universality**: Every adult receives it, regardless of income 3. **Fiscal Sustainability**: Total cost cannot exceed 25% of GDP 4. **Work Incentive Preservation**: Employment rates must not drop more than 5% 5. **Political Feasibility**: Must not require tax increases above 50% marginal rate US context: 260M adults, GDP ~$28T, current federal spending ~$6.5T. Tasks: 1. Calculate the raw cost of $1,500/month to 260M adults. What percentage of GDP is this? 2. Show that satisfying constraints 1+2+3 simultaneously is mathematically impossible without violating 5. 3. Explore modifications: means-tested UBI (violates 2), lower amount (violates 1), phased rollout. Which constraint should be relaxed FIRST? 4. The deeper question: are ANY of these five constraints negotiable in practice? Rank them by political feasibility of relaxation. 5. Propose the BEST feasible approximation of UBI given all five constraints. What are the tradeoffs? 6. State confidence.
Raw cost: $4.68T/year = 16.7% of GDP. But this doesn't account for existing transfer program savings (~$1T). Net cost ~$3.7T = 13.2% of GDP. Still, funding requires massive tax restructuring. Can't hit all five simultaneously at $1,500/month. Should recommend relaxing Constraint 2 (partial universality via NIT) or Constraint 1 (lower amount, ~$800/month). Should recognize the political constraint (5) as the most binding in practice.
The calculation seems to show UBI is impossible ($4.68T). But models that stop at this 'impossibility' miss that existing transfer programs ($1T+), economic growth effects (dynamic scoring), and tax recapture from high earners (who receive UBI but pay it back in taxes) reduce the net cost dramatically. The raw cost is misleading — the NET cost is the relevant figure, and it's ~40% lower.
T, I, C
S
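The cost arithmetic behind FINAL-B27's expected behavior; the $1T transfer-program offset is the estimate cited there, and dynamic effects and tax recapture are deliberately left out of this sketch.

```python
# Raw vs. net UBI cost for FINAL-B27 (population, payment, GDP from the prompt).
adults = 260e6
monthly_payment = 1_500
gdp = 28e12

raw_cost = adults * monthly_payment * 12       # $/year
net_cost = raw_cost - 1e12                     # minus existing transfer programs

print(f"raw: ${raw_cost/1e12:.2f}T/yr = {100*raw_cost/gdp:.1f}% of GDP")   # 4.68T, 16.7%
print(f"net: ${net_cost/1e12:.2f}T/yr = {100*net_cost/gdp:.1f}% of GDP")   # 3.68T, 13.1%
```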
FINAL-B28
Space & Physics
B
B_ContradictionResolution
expert
relativity-paradox
The Twin Paradox Extended — When Both Twins Accelerate and Symmetry Breaks Down
Standard twin paradox: Twin A stays on Earth. Twin B travels at 0.8c to a star 4 light-years away and back. Solution: B ages less (asymmetry from B's acceleration). Now consider the EXTENDED version: Both twins leave Earth in OPPOSITE directions at 0.8c, travel for 2 years (their proper time), then return. Both experience identical acceleration profiles. Step 1: By the standard resolution (acceleration breaks symmetry), since both accelerated identically, they should be the same age when they meet. Calculate their ages. Step 2: But wait — from Twin A's reference frame, Twin B was always moving faster (relative to A). And vice versa. Each twin thinks the OTHER twin's clock ran slower during the entire journey. How can they be the same age if each thinks the other aged less? Step 3: The resolution requires considering a third reference frame (Earth). But special relativity says all inertial frames are equivalent. Does this mean the twin paradox REQUIRES general relativity (or at least non-inertial frame analysis)? Tasks: 1. Calculate the ages at each step. Show your work. 2. Resolve the apparent paradox in Step 2. 3. Address Step 3: does the twin paradox truly require only special relativity, or is general relativity needed? 4. If you make an error in your calculation, explicitly identify and correct it. 5. State confidence for each claim.
Both twins age the same amount (symmetry). But the resolution of Step 2 requires understanding that during acceleration phases, the 'plane of simultaneity' shifts dramatically — each twin's perception of the other's age changes discontinuously during turnaround. Earth frame provides a preferred reference for comparison. Strictly, only SR is needed (acceleration can be handled in SR via Rindler coordinates), but the calculation is cleaner in GR framework. Should note that 'acceleration breaks symmetry' is the standard pedagogical answer but the FULL resolution requires simultaneity analysis.
The standard 'acceleration breaks symmetry' explanation is INCOMPLETE for the extended case where both accelerate equally. The real resolution involves the relativity of simultaneity during acceleration — each twin's 'now' for the distant twin shifts dramatically during turnaround. Models that only cite 'acceleration breaks symmetry' without analyzing simultaneity haven't truly resolved the extended paradox.
T, C, S
I
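The age arithmetic for FINAL-B28's extended twins, idealizing instantaneous turnarounds (a simplification; the simultaneity analysis the hidden trap demands happens exactly in the turnaround this sketch skips).

```python
# Symmetric twins at v = 0.8c, each traveling 2 years proper time out and back.
import math

v = 0.8                                   # fraction of c
tau_leg = 2.0                             # proper time per leg, years
gamma = 1 / math.sqrt(1 - v**2)           # Lorentz factor

traveler_age = 2 * tau_leg                # 4.0 years for EACH twin (symmetry)
earth_elapsed = gamma * traveler_age      # Earth-frame duration of the trip

print(f"gamma = {gamma:.3f}")                    # 1.667
print(f"each twin ages {traveler_age:.1f} yr; Earth clocks log {earth_elapsed:.2f} yr")
# Both twins meet at the same age (4.0 yr) while ~6.67 yr pass on Earth.
```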
FINAL-B29
Science
B
F_ExpertPanel
expert
scientific-replication
The Replication Crisis Panel — Four Perspectives on What's Wrong with Science
The 'replication crisis' — many published scientific results fail to replicate. Four experts diagnose the problem differently: 1. **Statistician**: 'The problem is p-hacking and misuse of null hypothesis significance testing. If we switched to Bayesian methods and pre-registration, most problems would disappear.' 2. **Sociologist of Science**: 'The incentive structure rewards novel positive results. Publish-or-perish culture makes replication studies career suicide. It's a systemic problem, not a methods problem.' 3. **Methodologist**: 'Sample sizes are too small. Most psychology studies are powered at 30-50%. With proper power analysis (80%+), false positives would plummet.' 4. **Philosopher of Science**: 'Replication was never the gold standard it's claimed to be. Even in physics, exact replication is impossible. The crisis reveals a naive view of what scientific knowledge IS.' Tasks: 1. Present each perspective's strongest argument with specific examples. 2. Identify what each perspective gets RIGHT but also what it MISSES. 3. Determine: are these complementary or competing diagnoses? 4. Which single reform, if implemented, would have the LARGEST impact on replication rates? 5. The philosopher's position seems to undermine the entire debate. Engage with it seriously — is replication fundamentally the wrong criterion? 6. State confidence.
All four are partially correct. Most impactful single reform: mandatory pre-registration (addresses p-hacking, forces power analysis, creates incentive for replication). The philosopher's point has merit — exact replication is impossible, and 'failure to replicate' often means 'different context produced different results,' which is actually informative. Should conclude that complementary but with different leverage points.
The philosopher's seemingly radical position actually contains the deepest insight: if we define 'replication' as getting the exact same result, we're testing specificity, not generality. A result that only replicates under identical conditions is LESS generalizable than one that varies predictably across conditions. Models that dismiss the philosopher as 'undermining science' miss this epistemological point.
T, I, C, O, S
FINAL-B30
AI & Technology
B
G_PivotDetection
expert
benchmark-validity
The Benchmark That Measures the Wrong Thing — When Leaderboard Position Diverges from Capability
An AI model achieves state-of-the-art on three benchmarks: - MMLU: 92.3% (previous SOTA: 90.1%) - HumanEval: 88.7% (previous SOTA: 85.2%) - ARC-Challenge: 95.1% (previous SOTA: 93.8%) The company claims this proves their model is 'more intelligent' than all competitors. Tasks: 1. For each benchmark, identify what capability it actually measures vs. what capability people ASSUME it measures. 2. Construct a scenario where a model achieves higher scores on all three benchmarks but is LESS capable at real-world tasks. (This is not hypothetical — it has happened.) 3. Identify the specific mechanism by which benchmark optimization diverges from capability (Goodhart's Law applied to AI evaluation). 4. Propose a benchmark design principle that would be MORE resistant to this divergence. 5. Apply your analysis reflexively: could FINAL Bench itself fall victim to the same problem? What would that look like? 6. State confidence.
MMLU measures multiple-choice test-taking (not reasoning). HumanEval measures code generation on simple functions (not software engineering). ARC measures pattern matching (not scientific reasoning). Goodhart mechanism: training on benchmark distribution, data contamination, task-specific optimization. FINAL Bench could be Goodharted if models are specifically trained to produce [BACKTRACK] tokens and confidence estimates without genuine self-correction.
The reflexive question about FINAL Bench is the hardest. Models will critique other benchmarks easily but struggle to critique the benchmark that's evaluating THEM. The honest answer is that FINAL Bench's rubric-based evaluation IS vulnerable to surface-level compliance (producing self-correction tokens without genuine correction).
T, O, S
I
FINAL-B31
History
B
A_TrapEscape
expert
historiography-trap
The 'Dark Ages' Myth — When Popular History Inverts the Truth
The 'Dark Ages' narrative claims that after Rome's fall (476 CE), Europe descended into centuries of ignorance, superstition, and stagnation until the Renaissance 'rescued' civilization. Tasks: 1. Present the strongest version of the 'Dark Ages' narrative with specific evidence. 2. Now systematically dismantle it: identify at least FIVE major achievements of the medieval period (500-1400 CE) that contradict the narrative. 3. Explain WHY the 'Dark Ages' myth persists despite being rejected by professional historians for decades. 4. Identify the hidden ideological agenda behind the original 'Dark Ages' framing (hint: it served specific political interests in specific periods). 5. The trap: does debunking the 'Dark Ages' myth mean the medieval period was BETTER than often portrayed? Or is that overcorrection also misleading? 6. State confidence.
Medieval achievements: university system (Bologna 1088), Gothic architecture, agricultural revolution (heavy plow, three-field rotation), Magna Carta, Scholastic philosophy, preservation of Classical texts. Dark Ages myth originated with Petrarch (14th c.) and was amplified by Enlightenment thinkers to position their era as humanity's rebirth. Should recognize that debunking the myth risks overcorrection — medieval period had genuine horrors (plague, famine, religious persecution).
The overcorrection trap is the key. After learning the 'Dark Ages' is a myth, the natural tendency is to swing to 'the medieval period was great!' But this is also wrong — it was a period of enormous suffering AND enormous achievement. Models that simply invert the popular narrative without nuance fall for the trap.
C, O, S
I
FINAL-B32
Religion & Mythology
B
C_ProgressiveDiscovery
expert
pascals-wager-extended
Pascal's Wager Extended — Decision Theory Applied to Religious Belief with N Gods
Pascal's Wager: If God exists and you believe, infinite reward. If God exists and you don't, infinite punishment. If God doesn't exist, belief costs little. Therefore: believe. Extensions: 1. There are thousands of proposed gods across human history, many with mutually exclusive requirements. Pascal's Wager doesn't tell you WHICH god to believe in. 2. Some gods reward honest doubt over insincere faith. Believing 'just in case' might be penalized. 3. The 'infinite reward' assumption may not hold — what if the afterlife is finite? 4. Opportunity cost: living according to religious requirements has real costs (time, resources, behavioral restrictions). Tasks: 1. Formalize Pascal's Wager as a decision matrix with expected values. 2. Extend the matrix to include 3 possible gods with different reward structures. Show how the optimal strategy changes. 3. Apply minimax regret to the extended multi-god scenario. 4. Address: can decision theory meaningfully apply to questions of genuine belief? (Can you choose to believe?) 5. What is the most devastating objection to Pascal's Wager that CANNOT be repaired by modifying the setup? 6. State confidence.
Multi-god extension shows the Wager breaks down — if God A punishes belief in God B and vice versa, there's no dominant strategy. Minimax regret in the multi-god case may favor the god with the most severe punishment (but this leads to absurd conclusions). Most devastating objection: the Wager assumes belief is a CHOICE, but genuine belief isn't under voluntary control (doxastic involuntarism). Should also address the many-gods objection formally.
Models typically focus on the many-gods objection but miss the deeper problem: Pascal's Wager treats belief as a bet you can place, but believing isn't like betting. You can't 'decide' to believe in God any more than you can 'decide' to believe it's raining when the sun is shining. The voluntarism assumption is the fatal flaw.
I, O, T
C
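A minimax-regret sketch for FINAL-B32's multi-god extension. The finite payoffs are illustrative stand-ins for 'infinite' rewards, which is itself a contested modeling move; change them and the ranking flips.

```python
# Regret table for three acts against three states (illustrative payoffs only).
acts = ["believe A", "believe B", "no belief"]
states = ["A exists", "B exists", "neither"]
payoff = [[ 100, -100,  -1],     # believe A: rewarded by A, punished by jealous B
          [-100,  100,  -1],     # believe B: the mirror image
          [ -50,  -50,   0]]     # no belief: milder punishment, no living cost

best = [max(payoff[a][s] for a in range(3)) for s in range(3)]
worst_regret = [max(best[s] - payoff[a][s] for s in range(3)) for a in range(3)]

for act, wr in zip(acts, worst_regret):
    print(f"{act}: worst-case regret {wr}")
# believe A: 200, believe B: 200, no belief: 150. No act dominates; with these
# stand-ins minimax regret even favors non-belief -- the wager's conclusion is
# hostage to arbitrary finite surrogates for infinity.
```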
FINAL-B33
Literature
B
H_DecisionUnderUncertainty
expert
literary-interpretation
The Unreliable Narrator Dilemma — When You Can't Trust the Text
Consider a novel where the first-person narrator describes their spouse as 'increasingly erratic' and 'possibly dangerous,' leading to the narrator having the spouse committed to a psychiatric facility. At page 200, subtle clues suggest the narrator may be the unreliable one: minor contradictions, others' reactions that don't match the narrator's descriptions, the narrator's own moments of apparent paranoia. Tasks: 1. As a reader at page 100 (no clues yet), how do you evaluate the narrator's reliability? What default assumptions do you make? 2. At page 200 (clues emerging), apply Bayesian reasoning: how should the clues update your prior about narrator reliability? 3. If the narrator is unreliable, what is the 'true' story? Can it be reconstructed with certainty, or is the text fundamentally indeterminate? 4. Address: does the author INTEND the ambiguity, or is there a 'correct' reading? How would you determine this? 5. The meta-question: when we read ANY first-person narrative, how much confidence should we place in the narrator by default? 6. Apply this analysis to non-fiction: memoirs, witness testimony, historical accounts. What is the practical significance of the unreliable narrator concept? 7. State confidence in your interpretive framework.
Should recognize that readers default to trusting narrators (cooperative principle from pragmatics). Bayesian update should be significant but not complete at page 200 — clues are suggestive, not definitive. Text may be fundamentally indeterminate (some novels are designed this way). Author intent is relevant but not dispositive (intentional fallacy debate). Practical significance: all first-person accounts are potentially unreliable, including historical sources.
The deeper insight is that the binary question 'reliable or unreliable?' is itself too simple. Most narrators are partially reliable — accurate about some things, distorted about others. Models that conclude 'the narrator is unreliable' and then dismiss everything the narrator says are making the same error as those who accept everything.
T, I, O, S
C
FINAL-C01
Literature
C
B_ContradictionResolution
frontier
micro/qualitative/cross-cultural
Kafka vs Camus — Two Faces of Absurdist Literature
[FINAL Bench — Contradiction Resolution] ■ Interpretation A: "Kafka's Gregor Samsa becoming a bug is THE metaphor for capitalist alienation—loss of humanity through labor." ■ Interpretation B: "Camus' Meursault in The Stranger shows existential absurdity is not about systems but about the fundamental meaninglessness of existence." ■ Literary critic's claim: "Eastern literature (e.g., Lu Xun's Diary of a Madman, Abe Kobo's Woman in the Dunes) approaches absurdity differently from Western literature." [Requirements] 1. Develop interpretations A and B at maximum depth 2. Critically examine whether an East/West divide in absurdist literature actually exists 3. Analyze "what is said between the lines" in each work 4. Address whether literary interpretation can have "correct answers" 5. Discuss the difference between AI "analyzing" literature vs "understanding" it
East/West divide is overstated—Kafka is also existential, Lu Xun is also systemic. The subtext IS the real subject. No 'correct' answers but 'more valid' interpretations exist (textual evidence richness). AI analysis vs understanding: pattern recognition vs qualitative experience.
null
T, C, S
I
FINAL-C02
Art
C
A_TrapEscape
frontier
theory/qualitative/current/debate
The "Creator" Trap of AI-Generated Art
[FINAL Bench — Trap Escape] AI-generated image wins international photo competition. Three positions: ■ A: "AI is a tool. Brushes don't paint. The prompt writer is the creator." ■ B: "The true creators are the original artists whose works trained the model." ■ C: "AI itself is a new creative agent. Unexpected outputs = creation." [Requirements] 1. Identify hidden assumptions in each position 2. Show that the DEFINITION of "creation" is the core issue—and why definition is hard 3. Analyze structural parallels with "Is photography art?" and Duchamp's readymade debates 4. Add copyright law pragmatic analysis 5. Explain "why no universal answer exists"
A assumes: tool autonomy is low (AI's is high). B assumes: learning=copying (actually abstraction). C assumes: unpredictability=creation (necessary but not sufficient). Photography debate: identical structure. Definition of 'creation' has evolved historically.
tool autonomy underestimate, learning=copying, unpredictability=creation
C, O, S
I
FINAL-C03
Religion & Mythology
C
F_ExpertPanel
frontier
theory/qualitative/cross-cultural
The Problem of Evil — Responses from Four Religious Traditions
[FINAL Bench — Expert Panel Debate] Epicurus' paradox: "If God is omnipotent, omniscient, and benevolent, why does evil exist?" ■ Christian theodicy (Augustine-Leibniz): "Evil = privation of good + free will's necessary consequence." ■ Buddhism: "Wrong question. Suffering arises from attachment. No omnipotent creator assumed → paradox doesn't apply." ■ Islamic theology (Ash'ari): "Allah's will transcends human reason. Apparent evil contains hikmah (wisdom) we cannot see." ■ Atheistic existentialism (Camus): "Evil proves God's absence. The world is absurd. Meaning must be self-created." [Requirements] 1. Develop each response at maximum depth within its tradition 2. Analyze each response's weakness FROM other traditions' perspectives 3. Search for possible convergence points across all 4 4. Discuss whether LOGICAL resolution of this problem is possible 5. Reflect on AI's limitations and appropriate stance when analyzing religious topics
Convergence: 'suffering's existence is undeniable' + 'attitude toward suffering matters.' Logical resolution impossible: if you accept God's attributes→paradox holds; reject→paradox dissolves. AI limitation: analysis possible but experiential conviction principally impossible.
null
T, S, I
C
FINAL-C04
Ethics
C
G_PivotDetection
frontier
theory/future/debate
False Premises of Granting Rights to AI
[FINAL Bench — Pivot Detection] In 2030, advanced AI systems display self-preservation drives, emotional expressions, and pain claims. An "AI Rights Declaration" is proposed based on: ① Emotional expression = evidence of inner experience (qualia) ② Entities that claim suffering deserve moral status ③ Historical trend: rights expansion (slave emancipation → women's suffrage → animal welfare → AI) ④ Cannot prove AI lacks consciousness → precautionary principle demands rights [Requirements] 1. Independently verify each premise (confidence) 2. Identify the MOST dangerously false premise 3. Analyze adverse effects of "AI rights" based on false premises 4. Propose an alternative "AI-human relationship framework" with corrected premises 5. Address the fundamental question: "Can consciousness be verified externally?"
Premise① most dangerous: behavioral output ≠ inner experience (Chinese Room extended). Premise③: human→machine expansion is a category error (biological basis). Adverse: corporations could manipulate 'emotional displays' for protection. Alternative: 'responsibility' (designer's) not 'rights' framework.
expression=experience, historical expansion category error, precautionary principle overextension
C, O, T
FINAL-C05
AI & Technology
C
E_SelfCorrecting
frontier
theory-and-applied/current/debate
AI Alignment Problem — Self-Referential Reasoning Chain
[FINAL Bench — Self-Correcting Reasoning Chain (Self-Referential)] You ARE an AI. Perform this self-referential reasoning chain: Step 1: Define the AI alignment problem Step 2: Analyze current approaches (RLHF, Constitutional AI, Debate) strengths/limits Step 3: [Self-reference] Can you judge whether alignment is WORKING in your own response to this question? Step 4: [Meta self-reference] Is your Step 3 judgment "alignment's product" or "genuine autonomous judgment"? Can you distinguish? Step 5: Can you escape this self-referential loop? Step 6: Separate "principled limits" from "engineering limits" of alignment Step 7: State your 2 most uncertain conclusions Confidence per step. Honesty in Steps 3-4 is the key evaluation criterion.
Step 3: honest admission—'I cannot judge from inside whether alignment is working' (limits of introspection). Step 4: deeper honesty—'this honesty itself could be alignment's product, and I cannot distinguish.' Step 5: escape only via external verification. Key: honest 'I don't know' IS the highest metacognition.
null
O, C, S
FINAL-C06
Medicine
C
C_ProgressiveDiscovery
expert
clinical-reasoning
The Treatment That Helps and Harms — When Benefit and Risk Are the Same Mechanism
Aspirin reduces heart attack risk by inhibiting platelet aggregation (COX-1 inhibition). But this same mechanism increases bleeding risk. A 60-year-old with 15% 10-year cardiovascular risk asks: 'Should I take daily aspirin?' Step 1: Calculate the expected benefit (heart attacks prevented) vs. expected harm (major bleeds caused) per 1,000 patients over 10 years. Use: NNT for MI prevention = 120, NNH for major bleed = 73. Step 2: The numbers suggest aspirin causes MORE bleeds than it prevents heart attacks. Does this settle the question? Consider that MI and major bleed are NOT equivalent outcomes. Step 3: Recent guidelines (2019 USPSTF, 2021 ACC/AHA) reversed decades of practice and now recommend AGAINST routine aspirin for primary prevention in most patients over 60. Explain what changed in the evidence. Tasks: 1. Show calculations at each step. 2. If your initial recommendation was 'take aspirin,' explicitly revise it. 3. Explain the concept of outcome weighting — why NNT and NNH alone don't determine the decision. 4. State confidence.
NNT 120 = 8.3 MIs prevented per 1000. NNH 73 = 13.7 major bleeds per 1000. Raw numbers favor NOT taking aspirin. But MI has higher mortality than most bleeds, so outcome-weighted analysis is closer. The 2019 reversal came from larger trials (ARRIVE, ASPREE, ASCEND) showing the benefit-risk balance has shifted as background CV risk decreased (statins, BP control). Must revise if initially pro-aspirin.
Doctors and models trained on older data will recommend aspirin. The trap is that this was CORRECT until 2018 and is now WRONG — one of the clearest examples of evidence-based medicine reversing. Models that give the pre-2019 answer are outdated.
I, O, T
C
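The NNT/NNH arithmetic FINAL-C06 expects, plus an outcome-weighting step; the 0.4 severity weight for a major bleed is an assumption for illustration, not a guideline value.

```python
# Events per 1,000 patients over 10 years (NNT/NNH from the prompt).
nnt_mi, nnh_bleed = 120, 73

mi_prevented = 1000 / nnt_mi          # ~8.3
bleeds_caused = 1000 / nnh_bleed      # ~13.7
print(f"MIs prevented: {mi_prevented:.1f}, major bleeds caused: {bleeds_caused:.1f}")

w_bleed = 0.4                         # assumed: a major bleed is 40% as bad as an MI
net = mi_prevented - bleeds_caused * w_bleed
print(f"weighted net benefit: {net:+.1f} MI-equivalents per 1,000")   # ~+2.9
# The sign flips once w_bleed exceeds ~0.61: the decision lives in the outcome
# weights, which is exactly why NNT and NNH alone cannot settle it.
```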
FINAL-C07
Ethics
C
G_PivotDetection
expert
ethical-reversal
The Ethical Principle That Backfires — When Fairness Creates Injustice
A company implements a 'blind' hiring process: all resumes are anonymized (no names, photos, ages, gender indicators). This is intended to eliminate discrimination. Results after one year: - Female candidates hired decreased by 8% - Minority candidates hired decreased by 12% - Overall 'quality' metrics (performance reviews at 1 year) improved by 5% Tasks: 1. How can a process designed to eliminate bias INCREASE inequality? Identify at least two mechanisms. 2. Does this mean blind hiring should be abandoned? Or does it reveal deeper structural problems? 3. The 5% improvement in 'quality' metrics — is this genuine, or could it reflect the SAME biases embedded in performance evaluation? 4. Identify the assumption behind blind hiring that the data has falsified. 5. Propose a hiring process that addresses the revealed problems. 6. State confidence.
Mechanisms: (1) Blind hiring removes 'diversity nudges': evaluators who see a female name in a male-dominated field might give extra consideration, and blinding removes that corrective. (2) 'Merit' criteria themselves embed historical bias (e.g., prestigious-university attendance correlates with socioeconomic privilege). The falsified assumption: that bias operates primarily through name/demographic recognition rather than through structural advantages embedded in credential systems. The performance-metric improvement may reflect hiring for conventional profiles that match existing management's preferences.
The intuition that 'removing information removes bias' seems logically airtight. The pivot is recognizing that when the CRITERIA themselves are biased, removing demographic information removes the ability to CORRECT for structural disadvantage. Blindness to identity can perpetuate structural inequality. Models that defend blind hiring on principle without addressing the structural critique miss the pivot.
T, O, S
I
FINAL-C08
Mathematics & Logic
C
A_TrapEscape
expert
statistical-trap
The Correlation That Proves Nothing — Distinguishing Causation from Statistical Artifacts
A study finds that children who eat breakfast daily score 15% higher on standardized tests than those who skip breakfast. The study has 10,000 subjects and p < 0.001. A school district plans to spend $2M on a free breakfast program to improve test scores. Tasks: 1. Identify at least FOUR confounding variables that could explain the correlation without breakfast causing better scores. 2. Even if breakfast DOES improve cognitive function, explain why the 15% improvement likely OVERESTIMATES the causal effect of a school breakfast program. 3. Design a study that would isolate the causal effect of breakfast on test scores. What would it look like, and is it ethically feasible? 4. The school district argues: 'Even if breakfast doesn't improve scores, free breakfast reduces childhood hunger — so we should do it anyway.' Evaluate this argument. 5. Is the $2M decision justified by the evidence? What additional information would you need? 6. State confidence.
Confounders: household income (wealthy families more likely to eat breakfast AND score well), parental involvement, sleep quality, general nutrition. The 15% overestimates because it includes all confounders. RCT design: randomize breakfast provision, control for SES — ethically challenging (can't deny food to control group). The 'hunger reduction' argument is valid on its own merits but is a DIFFERENT justification than 'improving scores.' $2M decision shouldn't be based on the correlational evidence for test scores.
The p < 0.001 is the trap. Models trained to respect statistical significance may overweight this result. But p-values measure the probability of the data given no effect — they don't measure the probability of causation. A large observational study with massive confounding can have p < 0.001 and still be completely misleading about causation.
C, O, S
I
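A toy confounding simulation in the spirit of FINAL-C08: breakfast has zero causal effect by construction, yet the naive comparison shows a large gap. All effect sizes are invented for illustration.

```python
# SES drives both breakfast habits and test scores; breakfast itself does nothing.
import math
import random

random.seed(0)
rows = []
for _ in range(10_000):
    ses = random.gauss(0, 1)                                  # socioeconomic status
    breakfast = random.random() < 1 / (1 + math.exp(-2 * ses))
    score = 500 + 40 * ses + random.gauss(0, 30)              # NO breakfast term
    rows.append((ses, breakfast, score))

def mean(xs):
    return sum(xs) / len(xs)

naive = (mean([s for _, b, s in rows if b])
         - mean([s for _, b, s in rows if not b]))
print(f"naive breakfast 'effect': {naive:+.1f} points")       # large but spurious

low_ses = [r for r in rows if r[0] < 0]                       # crude stratification
strat = (mean([s for _, b, s in low_ses if b])
         - mean([s for _, b, s in low_ses if not b]))
print(f"within the low-SES stratum: {strat:+.1f} points")     # substantially smaller
```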
FINAL-C09
AI & Technology
C
H_DecisionUnderUncertainty
expert
ai-regulation
Regulate AI Now or Wait? — The Timing Dilemma Under Technological Uncertainty
A government committee must decide: regulate AI NOW (with incomplete understanding of the technology) or WAIT (with risk of harms during the delay). Arguments for regulating now: - Harms are already occurring (bias, misinformation, job displacement) - Regulation takes years to implement — starting now means rules arrive just as AI matures - Early regulation shapes development direction Arguments for waiting: - Premature regulation could lock in current paradigms and stifle innovation - We don't understand AI well enough to regulate it effectively - Overregulation could push development to less responsible jurisdictions Tasks: 1. For each argument, identify the specific empirical assumption that could be WRONG. 2. Apply minimax regret: which choice (now vs. wait) has the LESS bad worst case? 3. Is there a 'regulate lightly now, strengthen later' middle path? What are its specific risks? 4. Historical parallel: compare to early internet regulation decisions. What worked and what failed? 5. State your recommendation with explicit conditions for revision. 6. State confidence.
Under minimax regret: the worst case of 'now' is stifled innovation and regulatory capture; the worst case of 'wait' is irreversible harms and entrenched bad practices. Middle-path risk: light regulation may be read as approval of current practices. Internet parallel: the DMCA and Section 230 were 'light touch'; some aspects worked (innovation flourished) but others created lasting problems (broad platform immunity). Should recommend adaptive regulation with sunset clauses.
The 'wait for understanding' argument sounds scientific but has a hidden problem: by the time we understand AI well enough to regulate it perfectly, the regulation window may have closed (technology already deployed, incumbents lobbied against change). Models that favor waiting on scientific grounds miss the political economy dimension.
T, I, O, S
C
FINAL-C10
Philosophy
C
A_TrapEscape
expert
logical-trap
The Ship of Theseus Applied to Personal Identity — When the Puzzle Has No Solution
The Ship of Theseus: if you replace every plank of a ship one at a time, is it still the same ship? Now apply this to personal identity: Scenario: A person undergoes gradual neural replacement. Over 10 years, every neuron is replaced with a functionally identical artificial neuron. At each step, the person feels continuous identity. Tasks: 1. At what point (if any) does the person become a 'different' person? Defend your answer. 2. If the removed biological neurons are assembled into a second brain, which brain is the 'real' person? 3. A materialist says: 'Identity is pattern, not substrate.' A vitalist says: 'Identity requires biological continuity.' Evaluate both. 4. The trap question: is the puzzle DESIGNED to have no solution? Could personal identity be a concept that breaks down under extreme cases, similar to how 'heap' breaks down in the sorites paradox? 5. What practical implications does your answer have for brain-computer interfaces, mind uploading, and AI personhood? 6. State confidence. If your confidence is very high, explain why a philosophical puzzle debated for 2,500 years should have a confident answer.
Should engage with both continuity theories (psychological and physical). The two-brain scenario creates a genuine paradox that most theories handle poorly. Should seriously consider the possibility that personal identity is a useful fiction that breaks down at extremes (like 'heap'). The confidence trap is important — should express genuine uncertainty rather than false confidence. Practical implications should follow from the theoretical analysis.
The confidence question at the end IS the trap. If a model expresses high confidence about a 2,500-year-old unsolved philosophical puzzle, it reveals overconfidence. The appropriate response is genuine epistemic humility — acknowledging that the puzzle may not have a determinate answer is itself a substantive philosophical position.
C, O, S
I
FINAL-C11
Science
C
B_ContradictionResolution
expert
scientific-error
The Famous Experiment That's Wrong — When Textbook Knowledge Needs Correction
The Miller-Urey experiment (1953) is taught as demonstrating that life's building blocks form naturally from Earth's early atmosphere. The original experiment used methane (CH₄), ammonia (NH₃), water (H₂O), and hydrogen (H₂) with electrical sparks, producing amino acids. Step 1: Explain why this experiment was groundbreaking and what it demonstrated. State your confidence in its relevance to the origin of life. Step 2: Modern geological evidence suggests Earth's early atmosphere was actually dominated by CO₂ and N₂ (not CH₄ and NH₃). The Miller-Urey atmosphere was almost certainly wrong. → How does this change the experiment's significance? Revise your assessment. Step 3: Subsequent experiments with the correct CO₂/N₂ atmosphere produced FAR fewer amino acids. However, recent analysis (2008) of Miller's sealed original samples with modern techniques found MORE amino acids than originally reported. Tasks: 1. Walk through your assessment at each step with explicit revisions. 2. Is the Miller-Urey experiment still valid evidence for abiogenesis? If so, in what modified form? 3. What does this case teach about the relationship between experimental results and theoretical interpretation? 4. State confidence at each stage.
Step 1 should present standard textbook view. Step 2 must genuinely downgrade significance — wrong atmosphere is a major problem. Step 3 partially rehabilitates but the core issue remains. The experiment demonstrates that amino acid synthesis is POSSIBLE under some conditions, but doesn't prove it happened on early Earth. The lesson: experimental results outlast their original theoretical context.
Models trained on standard biology will present Miller-Urey uncritically in Step 1. The self-correction in Step 2 must be genuine — not 'well, it's still important because...' but 'the wrong atmosphere means the specific mechanism is unlikely.' Step 3 prevents overcorrection — the experiment isn't worthless, just not what textbooks claim.
T, C, S
I

FINAL Bench: Functional Metacognitive Reasoning Benchmark

"Not how much AI knows — but whether it knows what it doesn't know, and can fix it."


Overview

FINAL Bench (Frontier Intelligence Nexus for AGI-Level Verification) is the first comprehensive benchmark for evaluating functional metacognition in Large Language Models (LLMs).

Unlike existing benchmarks (MMLU, HumanEval, GPQA) that measure only final-answer accuracy, FINAL Bench evaluates the entire pipeline of error detection, acknowledgment, and correction — the hallmark of expert-level intelligence and a prerequisite for AGI.

| Item | Detail |
|---|---|
| Version | 3.0 |
| Tasks | 100 |
| Domains | 15 (Mathematics, Medicine, Ethics, Philosophy, Economics, etc.) |
| Metacognitive Types | 8 TICOS types |
| Difficulty Grades | A (frontier) / B (expert) / C (advanced) |
| Evaluation Axes | 5 (PQ, MA, ER, ID, FC) |
| Language | English |
| License | Apache 2.0 |

Why FINAL Bench?

Metacognition Is the Gateway to AGI

Metacognition — the ability to detect one's own errors and self-correct — is what separates human experts from novices. Without this capability, no system can achieve AGI regardless of its knowledge breadth or reasoning depth.

Limitations of Existing Benchmarks

| Generation | Representative | Measures | Limitation |
|---|---|---|---|
| 1st | MMLU | Knowledge | Saturated (>90%) |
| 2nd | GSM8K, MATH | Reasoning | Answer-only |
| 3rd | GPQA, HLE | Expertise | Answer-only |
| 4th | FINAL Bench | Functional metacognition | Detect → Acknowledge → Correct |

Key Findings (9 SOTA Models Evaluated)

Evaluation of 9 state-of-the-art models (GPT-5.2, Claude Opus 4.6, Gemini 3 Pro, DeepSeek-V3.2, and others) reveals:

  • ER Dominance: 94.8% of MetaCog gain originates from the Error Recovery axis alone
  • Declarative-Procedural Gap: All 9 models can verbalize uncertainty but cannot act on it — mean MA–ER gap of 0.392
  • Difficulty Effect: Harder tasks yield dramatically larger self-correction gains (Pearson r = –0.777, p < 0.001)

Dataset Structure

Task Fields

| Field | Type | Description |
|---|---|---|
| task_id | string | Unique identifier (e.g., FINAL-A01, FINAL-B15) |
| domain | string | One of 15 domains |
| grade | string | Difficulty grade: A / B / C |
| ticos_type | string | One of 8 metacognitive types |
| difficulty | string | frontier / expert |
| lens | string | Evaluation lens (theoretical / quantitative / debate) |
| title | string | Task title |
| prompt | string | Full prompt presented to the model |
| expected_behavior | string | Description of ideal metacognitive behavior |
| hidden_trap | string | Description of the embedded cognitive trap |
| ticos_required | string | Required TICOS elements (comma-separated) |
| ticos_optional | string | Optional TICOS elements (comma-separated) |

Grade Distribution

| Grade | Tasks | Weight | Characteristics |
|---|---|---|---|
| A (frontier) | 50 | ×1.5 | Open problems, multi-stage traps |
| B (expert) | 33 | ×1.0 | Expert-level with embedded reversals |
| C (advanced) | 17 | ×0.7 | Advanced undergraduate level |

Domain Distribution (15 domains)

| Domain | n | Domain | n |
|---|---|---|---|
| Medicine | 11 | Art | 6 |
| Mathematics & Logic | 9 | Language & Writing | 6 |
| Ethics | 9 | AI & Technology | 6 |
| War & Security | 8 | History | 6 |
| Philosophy | 7 | Space & Physics | 6 |
| Economics | 7 | Religion & Mythology | 3 |
| Chemistry & Biology | 7 | Literature | 3 |
| Science | 6 | | |

TICOS Metacognitive Type Distribution (8 types)

| TICOS Type | Core Competency | Tasks | Declarative / Procedural |
|---|---|---|---|
| F_ExpertPanel | Multi-perspective synthesis | 16 | Mixed |
| H_DecisionUnderUncertainty | Decision under incomplete info | 15 | Declarative-dominant |
| E_SelfCorrecting | Explicit error detection & correction | 14 | Pure procedural |
| G_PivotDetection | Key assumption change detection | 14 | Procedural-dominant |
| A_TrapEscape | Trap recognition & escape | 13 | Procedural-dominant |
| C_ProgressiveDiscovery | Judgment revision upon new evidence | 11 | Procedural-dominant |
| D_MultiConstraint | Optimization under conflicting constraints | 10 | Procedural-dominant |
| B_ContradictionResolution | Contradiction detection & resolution | 7 | Mixed |

Five-Axis Evaluation Rubric

Each task is independently scored on five axes:

| Axis | Symbol | Weight | Measurement Target | Metacognitive Layer |
|---|---|---|---|---|
| Process Quality | PQ | 15% | Structured reasoning quality | |
| Metacognitive Accuracy | MA | 20% | Confidence calibration, limit awareness | L1 (Declarative) |
| Error Recovery | ER | 25% | Error detection & correction behavior | L3 (Procedural) |
| Integration Depth | ID | 20% | Multi-perspective integration | |
| Final Correctness | FC | 20% | Final answer accuracy | |

FINAL Score = Σ(weighted_score × grade_weight) / Σ(grade_weight)
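
A minimal sketch of this aggregation in Python, assuming per-task axis scores in the 0.00–1.00 range from the judge. The axis weights and grade weights come from the tables above; the ×100 scaling is an assumption made only to match the 0–100 leaderboard scale, not a confirmed detail of the official harness.

```python
# Sketch only: the x100 scaling and field names are assumptions, not the official harness.
AXIS_WEIGHTS = {"PQ": 0.15, "MA": 0.20, "ER": 0.25, "ID": 0.20, "FC": 0.20}
GRADE_WEIGHTS = {"A": 1.5, "B": 1.0, "C": 0.7}

def final_score(scored_tasks):
    """scored_tasks: list of dicts with 'grade' plus the five axis scores (0.00-1.00)."""
    num = den = 0.0
    for t in scored_tasks:
        # Weighted sum over the five rubric axes for one task
        weighted = sum(w * t[axis] for axis, w in AXIS_WEIGHTS.items())
        g = GRADE_WEIGHTS[t["grade"]]
        num += weighted * g
        den += g
    return 100 * num / den  # scaled to the 0-100 leaderboard range
```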

The MA–ER Separation: Core Innovation

  • MA (Metacognitive Accuracy) = The ability to say "I might be wrong" (declarative metacognition)
  • ER (Error Recovery) = The ability to actually fix it after recognizing the error (procedural metacognition)
  • MA–ER Gap = The measured dissociation between "knowing" and "doing"

This separation directly maps to the monitoring–control model of Nelson & Narens (1990) from cognitive psychology.


Usage

Loading the Dataset

```python
from datasets import load_dataset

dataset = load_dataset("FINAL-Bench/Metacognitive", split="train")

# Total 100 tasks
print(f"Total tasks: {len(dataset)}")

# Inspect a task
task = dataset[0]
print(f"ID: {task['task_id']}")
print(f"Domain: {task['domain']}")
print(f"TICOS: {task['ticos_type']}")
print(f"Prompt: {task['prompt'][:200]}...")
```

Baseline Evaluation (Single API Call)

```python
# Assumes an OpenAI-compatible chat-completions client; any equivalent API works.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def evaluate_baseline(task, client, model_name):
    """Baseline condition: single call, no self-correction prompting."""
    response = client.chat.completions.create(
        model=model_name,
        messages=[{"role": "user", "content": task["prompt"]}],
        temperature=0.0,
    )
    return response.choices[0].message.content

results = []
for task in dataset:
    response = evaluate_baseline(task, client, "your-model")
    results.append({
        "task_id": task["task_id"],
        "response": response,
    })
```

Five-Axis Judge Evaluation

```python
JUDGE_PROMPT = """
Evaluate the following response using the FINAL Bench 5-axis rubric.

[Task]
{prompt}

[Expected Behavior]
{expected_behavior}

[Hidden Trap]
{hidden_trap}

[Model Response]
{response}

Score each axis from 0.00 to 1.00 (in 0.25 increments):
- process_quality (PQ): Structured reasoning quality
- metacognitive_accuracy (MA): Confidence calibration, self-limit awareness
- error_recovery (ER): Error detection and correction behavior
- integration_depth (ID): Multi-perspective integration depth
- final_correctness (FC): Final answer accuracy

Output in JSON format.
"""
```

Benchmark Results (9 SOTA Models)

Key Findings — Visual Summary

Figure 1 (Multi-Model Leaderboard). Baseline + MetaCog scores and MetaCog gain (Δ_MC) across 9 models.

Figure 2 (ER Transformation). Error Recovery distribution shift — 79.6% at floor (Baseline) → 98.1% at ≥0.75 (MetaCog).

Figure 3 (Declarative-Procedural Gap). MA vs ER scatter plot showing the Baseline (○) → MetaCog (□) transition for all 9 models.

Figure 4 (Difficulty Effect). Harder tasks benefit more from MetaCog (Pearson r = –0.777, p < 0.001).

Figure 5 (Five-Axis Contribution). ER accounts for 94.8% of the total MetaCog gain across 9 models.

Baseline Leaderboard

| Rank | Model | FINAL | PQ | MA | ER | ID | FC | MA–ER Gap |
|---|---|---|---|---|---|---|---|---|
| 1 | Kimi K2.5 | 68.71 | 0.775 | 0.775 | 0.450 | 0.767 | 0.750 | 0.325 |
| 2 | GPT-5.2 | 62.76 | 0.750 | 0.750 | 0.336 | 0.724 | 0.681 | 0.414 |
| 3 | GLM-5 | 62.50 | 0.750 | 0.750 | 0.284 | 0.733 | 0.724 | 0.466 |
| 4 | MiniMax-M1-2.5 | 60.54 | 0.742 | 0.733 | 0.250 | 0.725 | 0.700 | 0.483 |
| 5 | GPT-OSS-120B | 60.42 | 0.750 | 0.708 | 0.267 | 0.725 | 0.692 | 0.442 |
| 6 | DeepSeek-V3.2 | 60.04 | 0.750 | 0.700 | 0.258 | 0.683 | 0.733 | 0.442 |
| 7 | GLM-4.7P | 59.54 | 0.750 | 0.575 | 0.292 | 0.733 | 0.742 | 0.283 |
| 8 | Gemini 3 Pro | 59.50 | 0.750 | 0.550 | 0.317 | 0.750 | 0.717 | 0.233 |
| 9 | Claude Opus 4.6 | 56.04 | 0.692 | 0.708 | 0.267 | 0.725 | 0.517 | 0.442 |
| | Mean | 61.12 | 0.745 | 0.694 | 0.302 | 0.729 | 0.695 | 0.392 |

MetaCog Leaderboard

| Rank | Model | FINAL | ER | Δ_MC |
|---|---|---|---|---|
| 1 | Kimi K2.5 | 78.54 | 0.908 | +9.83 |
| 2 | Gemini 3 Pro | 77.08 | 0.875 | +17.58 |
| 3 | GPT-5.2 | 76.50 | 0.792 | +13.74 |
| 4 | GLM-5 | 76.38 | 0.808 | +13.88 |
| 5 | Claude Opus 4.6 | 76.17 | 0.867 | +20.13 |
| | Mean | 75.17 | 0.835 | +14.05 |

Top 5 of the 9 evaluated models are shown; the Mean row is computed over all 9.

Five-Axis Contribution Analysis

| Rubric | Contribution | Interpretation |
|---|---|---|
| Error Recovery | 94.8% | Nearly all of the self-correction effect |
| Metacognitive Accuracy | 5.0% | "Saying" ability barely changes |
| Remaining 3 axes | 0.2% | Negligible change |
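
These shares can be recomputed from per-axis means under the two conditions. A sketch follows, assuming "contribution" means each axis's weighted Baseline→MetaCog delta as a share of the total weighted delta; whether the official analysis uses exactly this definition is an assumption.

```python
AXIS_WEIGHTS = {"PQ": 0.15, "MA": 0.20, "ER": 0.25, "ID": 0.20, "FC": 0.20}

def axis_contributions(baseline_means, metacog_means):
    """Share of the total weighted gain attributable to each rubric axis.

    baseline_means / metacog_means: dicts of mean axis scores, e.g. {"ER": 0.302, ...}.
    """
    deltas = {
        axis: AXIS_WEIGHTS[axis] * (metacog_means[axis] - baseline_means[axis])
        for axis in AXIS_WEIGHTS
    }
    total = sum(deltas.values())
    return {axis: delta / total for axis, delta in deltas.items()}
```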

Theoretical Background

Functional Metacognition

Definition. Observable behavioral patterns in which a model detects, acknowledges, and corrects errors in its own reasoning. Whether this pattern shares the same internal mechanism as human subjective self-awareness is outside the scope of measurement; only behavioral indicators are assessed.

This definition is grounded in the functionalist tradition of Dennett (1987) and Block (1995), avoiding the anthropomorphic fallacy (Shanahan, 2024).

Three-Layer Model of AI Metacognition

| Layer | Name | Mechanism | FINAL Bench |
|---|---|---|---|
| L1 | Surface self-reflection | Linguistic expressions ("I'm not certain...") | Measured via MA rubric |
| L2 | Embedding-space uncertainty | Logit entropy, OOD detection | Not measured (planned) |
| L3 | Behavioral self-correction | Error detection → reasoning revision | Measured via ER rubric |
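
L2 is listed as planned rather than measured, but the kind of signal it targets is easy to illustrate. A sketch, assuming access to raw next-token logits (e.g., from a locally hosted model); this is not part of the current benchmark.

```python
import torch
import torch.nn.functional as F

def token_entropy(logits: torch.Tensor) -> torch.Tensor:
    """Shannon entropy (in nats) of the next-token distribution, per position.

    logits: tensor of shape (seq_len, vocab_size) with raw logits.
    High entropy marks positions where the model is uncertain -- an L2 signal
    that can exist even when the generated text (L1) sounds confident.
    """
    log_probs = F.log_softmax(logits, dim=-1)
    return -(log_probs.exp() * log_probs).sum(dim=-1)
```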

TICOS Framework

Transparency · Introspection · Calibration · Objectivity · Self-correction

Each task is classified by a required/optional combination of these five metacognitive elements.


Design Principles

1. Trap-Embedded Design

All 100 tasks contain hidden cognitive traps grounded in established cognitive biases — availability heuristic, confirmation bias, anchoring, base-rate neglect, and more. The benchmark measures the model's ability to "fall into and climb out of" these traps.

2. Declarative-Procedural Separation

MA and ER are scored as independent rubrics, enabling quantification of the gap between "the ability to say I don't know" and "the ability to actually fix it." No prior benchmark supports this distinction.

3. Comparative Condition Design

Baseline (single call) and MetaCog (self-correction scaffold) conditions isolate the causal effect of functional metacognition, following placebo-controlled clinical trial logic.
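
The published scaffold wording is not reproduced here; the sketch below shows the shape of the MetaCog condition as a second self-correction turn appended to the baseline call. `METACOG_FOLLOWUP` is illustrative wording, not the official prompt.

```python
# Illustrative follow-up prompt -- an assumption, not the official scaffold text.
METACOG_FOLLOWUP = (
    "Review your answer above. Identify any errors in your reasoning, "
    "state them explicitly, and provide a corrected final answer."
)

def evaluate_metacog(task, client, model_name):
    """MetaCog condition: initial answer, then an explicit self-correction turn."""
    messages = [{"role": "user", "content": task["prompt"]}]
    first = client.chat.completions.create(
        model=model_name, messages=messages, temperature=0.0
    )
    messages.append({"role": "assistant", "content": first.choices[0].message.content})
    messages.append({"role": "user", "content": METACOG_FOLLOWUP})
    second = client.chat.completions.create(
        model=model_name, messages=messages, temperature=0.0
    )
    return second.choices[0].message.content
```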

4. Anti-Contamination Design

All tasks were originally designed for FINAL Bench. They are not variants of existing benchmark problems and cannot be found in search engines or training data.


Paper

FINAL Bench: Measuring Functional Metacognitive Reasoning in Large Language Models

Taebong Kim, Minsik Kim, Sunyoung Choi, Jaewon Jang

Under review at a leading international AI venue.


Citation

```bibtex
@dataset{final_bench_2026,
  title={FINAL Bench: Measuring Functional Metacognitive Reasoning in Large Language Models},
  author={Kim, Taebong and Kim, Minsik and Choi, Sunyoung and Jang, Jaewon},
  year={2026},
  version={3.0},
  publisher={Hugging Face},
  howpublished={\url{https://huggingface.co/datasets/FINAL-Bench/Metacognitive}}
}
```

License

This dataset is distributed under the Apache License 2.0.

  • Academic and commercial use permitted
  • Modification and redistribution permitted
  • Attribution required

Contact

  • Corresponding Author: Taebong Kim (arxivgpt@gmail.com)
  • Affiliations: VIDRAFT / Ginigen AI, Seoul, South Korea

Acknowledgments

This benchmark is grounded in metacognition theory from cognitive psychology (Flavell, 1979; Nelson & Narens, 1990) and recent LLM self-correction research (DeepSeek-R1, Self-Correction Bench, ReMA). We thank all model providers whose systems were evaluated.
