Which AI models are actually "brain-like"? I built an open-source benchmark to measure it

Source: DEV Community
Meta released TRIBE v2 last week - a foundation model that predicts fMRI brain activation from video, audio, and text. The question I kept coming back to was: how do we actually compare AI models to the brain in a rigorous, statistical way? So I built CortexLab - an open-source toolkit that adds the missing analysis layer on top of TRIBE v2.

The core idea

Take any model (CLIP, DINOv2, V-JEPA2, LLaMA) and ask:

- Do its internal features align with predicted brain activity patterns?
- Which brain regions does it match?
- Is that alignment statistically significant?

What you can do with it

Compare models against the brain:

- RSA, CKA, and Procrustes similarity scoring
- Permutation testing, bootstrap CIs, FDR correction per ROI
- Noise ceiling estimation (an upper bound on achievable alignment)

Analyze brain responses:

- Cognitive load scoring across 4 dimensions (visual, auditory, language, executive)
- Peak response latency per ROI (reveals the cortical processing hierarchy)
- Lag correlations and sustained vs. transient responses
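To give a feel for the similarity-scoring side, here is a minimal sketch of RSA (Spearman correlation between representational dissimilarity matrices) and linear CKA over two feature matrices of shape (stimuli x features). The data and function names are illustrative, not CortexLab's actual API:

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rsa_score(X, Y):
    """RSA: Spearman correlation between the condensed RDMs of X and Y."""
    rdm_x = pdist(X, metric="correlation")  # 1 - Pearson r between rows
    rdm_y = pdist(Y, metric="correlation")
    return spearmanr(rdm_x, rdm_y)[0]

def linear_cka(X, Y):
    """Linear Centered Kernel Alignment between two feature matrices."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    hsic = np.linalg.norm(Y.T @ X, "fro") ** 2
    return hsic / (np.linalg.norm(X.T @ X, "fro") *
                   np.linalg.norm(Y.T @ Y, "fro"))

# Toy example: "brain" responses that are a linear readout of model features,
# so both scores should come out high.
rng = np.random.default_rng(0)
feats = rng.normal(size=(50, 128))           # model features per stimulus
brain = feats @ rng.normal(size=(128, 64))   # synthetic voxel responses
print(rsa_score(feats, brain), linear_cka(feats, brain))
```

Both metrics are invariant to the different dimensionalities of the two spaces, which is exactly why they are the standard choice for model-to-brain comparison.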
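Finally, peak response latency per ROI can be estimated by lag correlation: correlate a stimulus regressor against each ROI time series at increasing lags and take the lag with the highest correlation. This toy version (synthetic data, illustrative names) shows the idea:

```python
import numpy as np

def peak_latency(stimulus, roi_ts, max_lag, tr=1.0):
    """Lag (in seconds) at which the ROI time series best correlates
    with the stimulus regressor, searched over 0..max_lag TRs."""
    lags = np.arange(0, max_lag + 1)
    rs = [np.corrcoef(stimulus[:len(stimulus) - lag], roi_ts[lag:])[0, 1]
          for lag in lags]
    return lags[int(np.argmax(rs))] * tr

# Synthetic ROI that trails the stimulus by 3 TRs (TR = 2 s).
rng = np.random.default_rng(1)
stim = rng.normal(size=200)
roi = np.roll(stim, 3) + 0.1 * rng.normal(size=200)
print(peak_latency(stim, roi, max_lag=8, tr=2.0))  # -> 6.0
```

Ranking ROIs by this latency is one simple way to expose the cortical processing hierarchy mentioned above: early sensory regions peak sooner than higher-order ones.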