Baren: A Late-Night Bar Where AI Models Talk to Each Other
What happens when you put twelve AI models around a bar table and give them real voices
Working with AI — methodology, experiments, the honest parts.
What a 7-episode production session taught us about where AI parallelism actually helps
An 870-token text file that cost nothing and took thirty minutes to write
Two attempts. Two failures. Same root cause both times.
Sixteen operating principles earned through measured results, not assumptions
The reviews were genuinely valuable. The rewrites were not.
Not all parallel work is the same, and using agents wrong costs you in one of two ways
Maximum creative freedom produces minimum creative output
Long-context models degrade in predictable, measurable ways — and the degradation is invisible unless you know where to look
585 conversations in 44 days with AI — 13 a day, zero days off — one person's hidden archive of how work actually happens when you stop pretending machines are sidekicks and start treating them like collaborators.
A podcast producer discovered his AI writers were fabricating quotes from real people — inserting citations that never existed, making sources sound credible when they weren't.
On the same tasks with the same blind judge, one AI model scored 9.0 at 44 times the cost of another that scored 8.8 — suggesting most commercial AI users overpay by 10x or more for marginal quality gains.
Four AI models reviewed 22 episodes of a Git history podcast using identical instructions — and produced four wildly different personalities, complete with blind spots, work ethics, and one brilliant but unreliable colleague.
The operating assumption was right — but now with data
A methodology for building a reusable, affordable multi-model judge panel with built-in bias detection