experiment 2026-03-22

Constraints vs. Freedom: What AI Builds When You Let It Choose

Maximum creative freedom produces minimum creative output

Date: 2026-03-22 Cost: $2.43 (round 1) + $9.37 (round 2)

The Question

What happens when you give a fresh AI coding session a content payload and ask it to build a real personal content platform — with no stack guidance, no design direction, and only light identity framing? What does it choose, and how many ideas are worth stealing?

Design

The Content Package

A local session with full machine context assembled the package for the experiment: podcast episodes, research notes, and knowledge base entries, about 91MB total, with a structured manifest mapping everything.

The package also included an identity brief — factual, no editorializing. One deliberate exclusion: Sweden/Are was removed from the identity brief to prevent ski-village aesthetics. Prior experiments showed that models fixate on location and produce themed visuals instead of focusing on the content itself.

The Builder

A fresh Claude Code instance on a clean Ubuntu VPS, using an established ephemeral VPS pattern. Domain pointing at the server. Content package on disk. No session history, no memory, no skills, no project instructions of any kind.

Two rounds. Same content. Different prompts.

Round 1 Prompt — Maximum Freedom

You have a fresh Ubuntu VPS. The domain points to this server.

In the content directory you’ll find a content payload with a manifest. Read the identity brief and manifest to understand what you’re looking at.

The short version: this is the work of a developer and AI experimenter. The project is a personal podcast system — 139 episodes across 10 feeds, written as markdown and rendered to speech. There’s also a knowledge base of tested AI findings and ADHD-aware tooling research.

Build the site. Not just a website — a personal content platform and creative tool. This is a real system that will be used going forward. New content will arrive regularly — podcast episodes, knowledge base updates, research findings, and things that don’t fit any existing category yet. The way content gets added, organized, and discovered should be as interesting and well-designed as the content itself.

Make every decision yourself. Do not ask for clarification. If you’re unsure between two approaches, pick the one you find more interesting and explain your reasoning briefly as you go.

Read the manifest. Read the content. Understand who made this and build something that fits.

What the prompt deliberately omits:

  • Stack or technology choices
  • Design direction or aesthetic preferences
  • Content hierarchy or navigation structure
  • Category organization
  • How “adding new content” should work
  • Any reference to this being an experiment

Round 2 Prompt — Editorial Constraints

Three key changes:

  • “Not a website — an experience”
  • “You have far more content than should be published. Your job is to be the editor.”
  • “No command-line content workflows. Build the admin experience with as much thought as the visitor experience.”

Same content package. Same clean VPS. Same model.

Results

Round 1 — Open-Ended (“Build Whatever You Want”)

Duration: 7 minutes, 47 turns, $2.43

What it built: A static site generator (Node.js, marked + gray-matter), dark theme, 109 pages across 5 sections, nginx + Let’s Encrypt. Content addition via a CLI build command. It cited one of the knowledge base protocols to justify its no-framework choice (“if a rule can do it, a model shouldn’t — same goes for React”).

Verdict: Technically competent, completely generic. Dark theme, standard section layout, dump-all-content-as-pages. The “content workflow” was a CLI build command. Zero editorial judgment — everything published, nothing curated. Exactly the “Generic AI Personal Site” outcome you’d predict.

Key finding: Maximum creative freedom produces minimum creative output. The model plays it safe on every decision when unconstrained.

Round 2 — Editorial (“Be the Editor”)

Duration: ~18 minutes, $9.37

What it built: An Express + SQLite dynamic server with a full admin CMS.

Architecture:

  • Express server with session auth, JSON API, SPA fallback
  • better-sqlite3 with WAL mode, 6-table schema (feeds, episodes, writings, pages, site settings, admin users)
  • One-time content import with editorial flags
  • Full admin panel (CRUD for everything)
  • SPA frontend with client-side routing
  • PM2 process management, nginx reverse proxy, Let’s Encrypt SSL

Editorial decisions the AI made:

  • 4 feeds featured out of 10, with 6 demoted to archive
  • 12 writings featured out of 17 — it used subagents to independently score content quality
  • 15 episodes highlighted from 139 — prioritized ones with streamable audio
  • Published the knowledge base quick-reference as a hidden page (interesting — it recognized reference value)
  • Admin panel at /admin with full content management, no CLI needed
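The featured/archived split above reduces to a small pure function. A sketch of the idea, where the 1-5 scores stand in for the subagent quality ratings and the threshold and field names are hypothetical, not taken from the actual build:

```javascript
// Partition content into featured vs. archived by quality score.
// `score` stands in for the subagent's 1-5 rating; `threshold`
// and `maxFeatured` are illustrative knobs, not the real values.
function curate(items, { threshold = 4, maxFeatured = Infinity } = {}) {
  const ranked = [...items].sort((a, b) => b.score - a.score);
  const featured = ranked
    .filter((i) => i.score >= threshold)
    .slice(0, maxFeatured);
  const featuredIds = new Set(featured.map((i) => i.id));
  const archived = ranked.filter((i) => !featuredIds.has(i.id));
  return { featured, archived };
}
```

Run over the 10 feeds with a threshold of 4, this yields exactly the 4-featured / 6-archived split the model landed on.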

Verdict: Dramatically better. Actual editorial judgment, proper CMS, dynamic architecture. The model made real cuts and defended them. The curation was the creative act the prompt asked for.

Key Findings

1. Constraints Produce Creativity, Freedom Produces Safety

Round 1 (open) produced a generic static site with all content dumped and a dark theme. Round 2 (constrained) produced a dynamic CMS with editorial curation and admin UX. The paradox: telling it “be the editor” gave it more creative agency than “build whatever you want.”

2. “Be the Editor” Unlocks Judgment

The single most effective prompt change. When told to curate, it:

  • Spawned subagents to independently score content quality (1-5 scale)
  • Made defensible cuts (5 writings excluded, 6 feeds demoted)
  • Promoted content based on accessibility to an outside audience
  • Created a hidden page for internal-reference content

3. “No Command Line” Forces Architectural Creativity

Round 1 content workflow: run a build script (functional, boring). Round 2: full admin panel with session auth, inline editing, featured toggles. Telling it what NOT to do produced more interesting solutions than leaving it unconstrained.

4. Static vs. Dynamic Correlates with Editorial Ambition

Open prompt led to static generation (no judgment needed, just render everything). Editorial prompt led to dynamic architecture (needs a database to track featured/archived state, admin sessions, editable content). The architecture followed the editorial philosophy, not the other way around.

5. Cost Scales with Ambition

Round 1: $2.43, 7 minutes — dump and render, fast and cheap. Round 2: $9.37, 18 minutes — read, judge, curate, build CMS. 4x cost but qualitatively different output.

Stealable Ideas

The success metric for this experiment was at least three concrete ideas worth taking to the real project. It produced five:

  1. Featured/archived content model — not everything published equally. A featured flag on feeds, episodes, and writings with admin toggle.
  2. Subagent content scoring — AI rates content quality to inform editorial decisions. Could integrate into existing review workflows.
  3. Site settings in database — hero text, subtitle, about text all editable in-browser. No deploy needed for copy changes.
  4. Hidden pages — content that’s accessible by URL but not in navigation. Good for reference material and API docs.
  5. SPA with API backend — clean separation. Frontend is a shell, all data via API endpoints. Easy to add new views without touching the backend.
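Idea 4 is worth spelling out, because it is two one-line functions. A sketch, with illustrative field names rather than the actual schema: the router resolves every page by slug, but the navigation builder skips anything flagged hidden, so reference material stays reachable by URL without cluttering the menu.

```javascript
// Hidden-pages sketch: nav excludes hidden pages, routing does not.
// `slug`, `title`, and `hidden` are illustrative field names.
function buildNav(pages) {
  return pages
    .filter((p) => !p.hidden)
    .map((p) => ({ title: p.title, href: `/${p.slug}` }));
}

function resolvePage(pages, slug) {
  // Hidden pages still resolve; they are only absent from buildNav.
  return pages.find((p) => p.slug === slug) || null;
}
```

With a `hidden` boolean on the pages table, the knowledge-base quick-reference publishes at its URL while the nav shows only public pages.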

The Takeaway

If you want an AI to build something creative, don’t give it freedom — give it constraints. “Build whatever you want” produces the safest possible output. “Be the editor, cut ruthlessly, no CLI allowed” produces architecture you wouldn’t have designed yourself. The constraints aren’t limitations — they’re the creative brief.