
What is VibeKit?

VibeKit is a structured framework for vibe coders building production-grade Next.js applications with Claude Code or any AI agent — without burning tokens, shipping broken auth, or getting stuck in fix loops.

The short version

VibeKit gives Claude Code an opinionated tech stack, a phase-by-phase build plan, a customized design style guide, a pre-deploy security audit, and a registry of production-ready components. It eliminates the unpredictability of AI-generated code by locking every important decision before a single line of code is written.

What problem does it solve?

When you ask an AI to "build a SaaS", you get one of three outcomes — all bad:

  1. Generic AI slop: purple gradients, default shadcn cards, every app looks identical.
  2. Token burn: $100–$200 per project because the AI rewrites auth, tables, and forms from scratch every session.
  3. Stuck builds: AI loops on the same broken fix for 30 minutes, context fills with junk, progress stalls.

VibeKit removes all three failure modes by giving the agent everything it needs — locked stack, validated patterns, pre-built components — before the first prompt.

Who is it for?

  • Vibe coders who use Claude Code (or any AI agent) and want production-quality output, not prototypes.
  • Solo founders shipping their first SaaS who can't afford to burn tokens debugging.
  • Indie hackers launching multiple apps who are tired of reinventing auth, payments, and file uploads.
  • Agencies using AI to deliver client work who need consistent quality across projects.

What's in the framework?

VibeKit is two things working together:

  1. A planning prompt you paste into Claude (the chat web UI) along with your app idea. Claude interviews you and generates 4 files: project-description.md, project-phases.md, design-style-guide.md, and prompt.md.
  2. A coding constitution (master_prompt.md) that Claude Code follows during the build. It locks the stack — Next.js 16, Prisma v7, Better Auth, React Query, Tailwind v4, shadcn/ui — and enforces patterns like server-side pagination, Zod validation, and the JB component registry.
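
To make one of those enforced patterns concrete, here is a minimal sketch of server-side pagination of the kind the constitution mandates. The function name and the clamping limits are illustrative, not part of VibeKit; the output shape matches Prisma's `skip`/`take` arguments.

```typescript
// Illustrative sketch of the server-side pagination pattern.
// Names and limits are hypothetical, not VibeKit APIs.
type PageParams = { page?: number; pageSize?: number };

function toPrismaArgs({ page = 1, pageSize = 20 }: PageParams) {
  const size = Math.min(Math.max(pageSize, 1), 100); // cap page size to protect the DB
  const current = Math.max(page, 1);                 // reject page 0 / negative pages
  return { skip: (current - 1) * size, take: size };
}

// e.g. prisma.user.findMany({ ...toPrismaArgs({ page: 3 }) })
console.log(toPrismaArgs({ page: 3 })); // { skip: 40, take: 20 }
```

The point of locking a pattern like this is that the agent never ad-libs client-side pagination or unbounded `findMany` calls.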

How does it work in practice?

The end-to-end flow is seven steps:

  1. Copy CLAUDE_PROMPT.md from the GitHub repo.
  2. Open Claude (claude.ai), paste the prompt, append your app idea.
  3. Answer 6–10 questions about features, data, integrations, and design.
  4. Receive 4 generated files. Save them in your project root.
  5. Copy master_prompt.md, jb-components.md, and pre-deploy-review.md from the repo.
  6. Open Claude Code and paste the contents of prompt.md. Claude Code builds phase by phase, stopping for confirmation between phases.
  7. Before deploying, run pre-deploy-review.md for a senior-level audit covering performance, security, and resource usage.

Why this approach beats "just prompt better"

Prompt engineering helps, but it doesn't solve the real problem: AI agents have no persistent memory between conversations and no opinion about what good code looks like. They default to whatever they saw most in training — which means generic, inconsistent, often insecure output.

A framework is different. The rules persist across sessions. The component library means the AI never has to invent auth or file uploads from scratch. The pre-deploy review catches the security gaps that AI agents systematically miss (unauthenticated routes, missing webhook signature verification, mass assignment vulnerabilities).
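
As an example of the webhook gap, here is a generic HMAC signature check of the kind the pre-deploy review looks for. This is a hedged sketch, not VibeKit code: real providers differ in header names, signature encoding, and timestamp handling, so treat the function and its parameters as illustrative.

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Generic HMAC-SHA256 webhook verification (illustrative; check your
// provider's docs for the exact header and encoding it uses).
function verifySignature(payload: string, signature: string, secret: string): boolean {
  const expected = createHmac("sha256", secret).update(payload).digest("hex");
  const a = Buffer.from(expected);
  const b = Buffer.from(signature);
  // Length check first: timingSafeEqual throws on unequal lengths.
  return a.length === b.length && timingSafeEqual(a, b);
}
```

An unverified webhook route accepts forged events from anyone who finds the URL, which is exactly the kind of shortcut an AI agent takes silently.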

You're not making the AI smarter. You're making it impossible for the AI to take the wrong shortcut.
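
The mass assignment gap mentioned above works the same way. A minimal sketch, assuming a hypothetical user-update endpoint: instead of spreading the request body into a database write, copy only an explicit allow-list of fields.

```typescript
// Hypothetical fields for illustration; the pattern is the allow-list,
// not these specific names.
type UserUpdate = { name?: string; bio?: string };

function pickUserUpdate(body: Record<string, unknown>): UserUpdate {
  const out: UserUpdate = {};
  if (typeof body.name === "string") out.name = body.name;
  if (typeof body.bio === "string") out.bio = body.bio;
  // Fields like `role` or `isAdmin` are never copied, even if the client sends them.
  return out;
}
```

With `prisma.user.update({ data: { ...body } })`, a client who POSTs `{ "role": "admin" }` escalates their own privileges; with the allow-list, the extra field is simply dropped.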

Is it free?

Yes. MIT licensed, open source, and free to use. You'll pay for Claude Code itself (Anthropic's subscription) and for the cloud services your app uses (Neon, Vercel, Resend, Stripe — most have free tiers).

Where do I start?

Read the quickstart guide, then head to the GitHub repo and copy CLAUDE_PROMPT.md.