How I Vibe-Coded MattOden.com — My Personal Website and AI System
Why a Sales Professional Built This Website and AI Tool
When you work in technology, understanding how products work matters as much as being able to talk about them.
As a sales leader and former AI company founder without software engineering training, I wanted to build something tangible. My goal was to create a professional website with an integrated AI system whose output and functionality I could fully control.
I built MattOden.com using AI tools, modern frameworks, and cloud services.
The website includes a private AI system that functions like a more powerful, customizable version of Google's NotebookLM. The system uses production-grade technology: vector databases, semantic search, and retrieval-augmented generation (RAG). Every blog post, note, and curated resource I add becomes instantly searchable and conversational through the integrated chatbot.
Unlike consumer tools that lock users into their ecosystems, this implementation provides full control over the data pipeline, search algorithms, and AI behavior. The system serves as a long-term knowledge repository that improves with each piece of content added.
This project was primarily "vibe coding"—building through conversation with AI tools rather than traditional programming. While some technical configuration was required (setting up databases, connecting APIs, debugging deployments), Claude Pro, Claude Code, Google, and ChatGPT guided me through each step. If you can describe what you want and follow instructions, you can build this.
The System Architecture
Modern websites work by connecting multiple specialized services through APIs. Each service performs one function well. MattOden.com uses the following services:
Note: Because this is a low-traffic website, I use primarily free tiers from the providers listed below. As traffic scales, costs increase proportionally.
GoDaddy handles domain registration and DNS ($12/year). DNS (Domain Name System) translates human-readable addresses like MattOden.com into the IP addresses that computers use to locate servers. When someone types MattOden.com, GoDaddy's DNS servers direct them to where the website is hosted.
Vercel hosts and deploys the website (Free tier: 100GB bandwidth). Hosting means storing your website files on servers that are always online and accessible to visitors. The platform stores site files on servers distributed globally (a CDN - Content Delivery Network), which reduces load times by serving files from locations geographically close to each visitor. When I update code, Vercel automatically rebuilds and deploys the new version.
Supabase provides the database (Free tier: 500MB). It stores chat conversations, contact form submissions, and user interactions. Supabase runs PostgreSQL, an enterprise-grade relational database.
Pinecone stores vector representations of my content (Free tier: 100K vectors). A vector database stores mathematical representations of text meaning rather than the text itself. This enables semantic search—finding information based on what it means rather than just matching keywords. When someone asks a question, Pinecone instantly retrieves the most relevant content by comparing the question's meaning to the stored content.
AI Model Providers (Groq - Free tier, Together.ai - Free tier, Cerebras - Free tier, OpenAI - Paid) generate the actual responses. These services host large language models (LLMs) that can understand questions and generate human-like text. I selected these providers specifically for their speed: they run inference infrastructure optimized for minimal latency and host the latest open-source models. Each provider offers different models with different performance characteristics, so users can choose based on their preference for speed versus response quality.
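As a rough illustration of how these providers are called, here is a minimal sketch against Groq's OpenAI-compatible chat endpoint; the model name and environment variable are assumptions for illustration, not my exact configuration:

```ts
// A minimal sketch of calling Groq's OpenAI-compatible chat endpoint.
// The model name is an illustrative assumption, not my exact setup.
async function askModel(question: string): Promise<string> {
  const res = await fetch("https://api.groq.com/openai/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.GROQ_API_KEY}`,
    },
    body: JSON.stringify({
      model: "llama-3.1-8b-instant", // one of Groq's hosted open-source models
      messages: [{ role: "user", content: question }],
    }),
  });
  const data = await res.json();
  return data.choices[0].message.content; // the generated reply
}
```

Together.ai and Cerebras expose similar OpenAI-compatible endpoints, which is what makes swapping providers practical.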
Resend handles email delivery from the contact form (Free tier: 100 emails/day). Sending emails programmatically requires managing email authentication protocols (SPF, DKIM, DMARC) that verify your emails are legitimate and not spam. Resend handles this complexity and ensures messages reach the inbox rather than spam folders.
Cloudflare Turnstile provides bot protection without requiring users to solve CAPTCHAs (Free). This prevents automated scripts from spamming the contact form or chat system while maintaining a seamless user experience.
Stage 1: Designing a Website Using Prompts
AI website builders let you describe what you want in plain language, then generate the code to build it. These tools differ in what they produce: some generate proprietary templates locked to their platform, while others generate standard code you can own and modify.
I tested several AI website builders—Base44, Lovable, and V0 by Vercel. I chose V0 because it generates actual Next.js code (a popular React framework) that I could own and modify, rather than a proprietary template.
What Next.js Is:
Next.js is a framework built on React (a JavaScript library for building user interfaces). It adds features like automatic routing, server-side rendering for better performance, and built-in optimization. Most modern web applications use frameworks like Next.js because they handle complex technical requirements automatically.
The process (V0 Pro: $20 for one month to export code):
- Signed up for V0
- Described what I wanted in plain language
- Iterated on the design through conversational prompts
- Exported the complete code package
The export included React components (reusable UI building blocks), Tailwind CSS styling (a utility-first CSS framework for designing interfaces), and a full Next.js application structure. This is production-ready code of the same kind that professional developers write for commercial applications.
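For a sense of what that exported code looks like, here is a minimal sketch of a Next.js page component styled with Tailwind utility classes; the content and class names are purely illustrative, not the actual exported files:

```tsx
// app/page.tsx: a minimal Next.js App Router page styled with Tailwind
// utilities. Illustrative only; the real exported components are larger.
export default function Home() {
  return (
    <main className="mx-auto max-w-2xl px-4 py-12">
      <h1 className="text-3xl font-bold">Matt Oden</h1>
      <p className="mt-4 text-gray-600">
        Sales leader, former AI founder, and builder of this site.
      </p>
    </main>
  );
}
```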
Stage 2: Setting Up a Professional Development Environment
After exporting the code, I wanted to work like a professional developer: build and test changes on my local machine, then push to the cloud when ready to go live. This requires three foundational tools that developers use worldwide.
What These Tools Do:
- Node.js: A runtime environment that allows your computer to execute JavaScript code. Web applications built with frameworks like Next.js require Node.js to run locally.
- Git: Version control software that tracks every change you make to your code. Think of it as an unlimited undo button that also lets you sync code between your computer and the cloud.
- VS Code: A code editor where you view and modify your project files. It provides syntax highlighting, file navigation, and integrations with other developer tools.
How I Set These Up:
I used a combination of Claude Pro and Claude Code to walk me through installing each tool. Claude Pro provided step-by-step instructions for downloading and installing Node.js, Git, and VS Code on my operating system. Claude Code, a command-line tool that can read and modify project files, later helped me configure settings and troubleshoot issues.
Installing Requirements:
- Downloaded Node.js (required to run the code)
- Installed Git for version control
- Downloaded VS Code as the code editor
Setting Up GitHub: GitHub is a cloud platform that hosts Git repositories. It serves as the central location where your code lives online and where services like Vercel pull from to deploy your site.
- Created a GitHub account (Free for public repositories)
- Created a new repository for the website
- Connected my local code to the GitHub repository using Git commands (Claude provided these commands)
Deploying to Vercel: Vercel is a hosting platform that automatically deploys your website whenever you push changes to GitHub. This automation is what professional developers call "continuous deployment."
- Signed up at Vercel using my GitHub account (Free tier: 100GB bandwidth)
- Imported the repository
- Clicked deploy
The site went live immediately at a vercel.app subdomain. After this initial deployment, pushing code changes to GitHub automatically triggers Vercel to rebuild and deploy.
Stage 3: Connecting the Custom Domain
By default, Vercel provides a free subdomain like yourproject.vercel.app. To use a custom domain like MattOden.com, you need to configure DNS records that point your domain to Vercel's servers.
How AI Helped:
Claude Pro explained what each DNS record does and why CNAME records are necessary. When the domain wasn't connecting properly, Claude helped me troubleshoot by checking the DNS configuration.
The Setup Process:
- Already owned the domain at GoDaddy (registered 20 years ago, ~$12/year renewal)
- In Vercel's project settings, added the custom domain
- In GoDaddy's DNS settings, added the CNAME records Vercel provided (CNAME records tell DNS servers that one hostname is an alias for another; Vercel's target is typically cname.vercel-dns.com)
- Waited for DNS propagation (typically 5-60 minutes for changes to spread across global DNS servers)
Vercel automatically configured SSL certificates (the technology that enables HTTPS and the padlock icon in browsers), which encrypt data between visitors and the server.
Stage 4: Setting Up Backend Services
Backend services are cloud-based platforms that handle specific functions: storing data, sending emails, protecting against bots, and running AI models. Each service requires creating an account, obtaining API keys (secure passwords for your application to access the service), and configuring environment variables (secure storage for these keys).
How AI Assisted This Process:
This was the most technical part of the project. Each service has different setup requirements, and I needed to:
- Run SQL queries in Supabase to create custom database tables for storing conversations and contact form submissions
- Add DNS records in GoDaddy to verify my domain with Resend
- Configure environment variables both locally and in Vercel's dashboard
- Debug connection issues when services weren't communicating properly
Claude Pro provided the SQL queries, explained what each DNS record does, and generated the code to connect each service. When things broke, I copied error logs from Chrome's developer console and Vercel's deployment logs into Claude Code, which identified the issues and fixed the code. For additional context, I searched Google for documentation and used ChatGPT to get alternative explanations of technical concepts.
Service Setup Overview:
Each backend service required creating an account, getting API keys, and adding those keys to environment variables (both locally and in Vercel's settings).
Supabase Setup:
- Created an account and new project (Free tier: 500MB database)
- Used the SQL editor to create database tables for conversations and contact submissions. SQL (Structured Query Language) is used to create and modify database structures. Claude provided the specific SQL queries to run, including table schemas with appropriate data types and relationships.
- Copied the project URL and API keys
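As an illustration of how the site talks to Supabase, here is a minimal sketch using the supabase-js client; the table name, columns, and the SQL schema in the comment are assumptions, not my exact setup:

```ts
import { createClient } from "@supabase/supabase-js";

// Illustrative sketch; table and column names are assumptions.
// The table itself would be created in Supabase's SQL editor with
// a query along these lines:
//   CREATE TABLE contact_submissions (
//     id uuid PRIMARY KEY DEFAULT gen_random_uuid(),
//     name text, email text, message text,
//     created_at timestamptz DEFAULT now()
//   );
const supabase = createClient(
  process.env.SUPABASE_URL!,
  process.env.SUPABASE_ANON_KEY!
);

export async function saveSubmission(
  name: string,
  email: string,
  message: string
) {
  const { error } = await supabase
    .from("contact_submissions")
    .insert({ name, email, message });
  if (error) throw error; // surface database errors to the caller
}
```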
Pinecone Setup:
- Created an account (Free tier: 100K vectors)
- Created a vector index with 1536 dimensions (matching OpenAI's embedding size—embeddings are mathematical representations of text meaning, and the dimension count must match between where vectors are created and where they're stored)
- Selected cosine similarity as the distance metric (the mathematical method for comparing how similar two vectors are)
- Copied the API key
Claude explained what dimensions and distance metrics are, and why the configuration must match OpenAI's embedding model specifications.
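For illustration, creating an index like this with Pinecone's Node SDK looks roughly like the sketch below; the index name and serverless region are assumptions, but the dimension and metric must match the embedding model:

```ts
import { Pinecone } from "@pinecone-database/pinecone";

// Illustrative sketch; index name and region are assumptions.
const pc = new Pinecone({ apiKey: process.env.PINECONE_API_KEY! });

await pc.createIndex({
  name: "site-content",
  dimension: 1536, // must equal the embedding model's output size
  metric: "cosine", // the distance metric used to compare vectors
  spec: { serverless: { cloud: "aws", region: "us-east-1" } },
});
```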
Resend Setup:
- Created an account (Free tier: 100 emails/day)
- Verified the domain by adding DNS records to GoDaddy. Domain verification proves you own the domain and allows Resend to send emails on your behalf. Claude explained which DNS records to add and where to find them in the GoDaddy dashboard.
- Created an API key for sending emails
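Here is a minimal sketch of sending a contact-form notification with Resend's Node SDK; the addresses are placeholders:

```ts
import { Resend } from "resend";

// Minimal sketch; the from address must be on the verified domain.
const resend = new Resend(process.env.RESEND_API_KEY!);

export async function forwardContactMessage(from: string, message: string) {
  await resend.emails.send({
    from: "contact@mattoden.com", // placeholder sender on the verified domain
    to: "me@example.com", // placeholder destination address
    subject: `New contact form message from ${from}`,
    text: message,
  });
}
```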
Cloudflare Turnstile Setup:
- Added the site to Cloudflare (Free)
- Enabled Turnstile
- Got the site key and secret key
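Verifying a Turnstile token happens server-side against Cloudflare's siteverify endpoint. A minimal sketch, with network error handling trimmed for brevity:

```ts
// Server-side check of a Turnstile token via Cloudflare's siteverify API.
export async function verifyTurnstile(token: string): Promise<boolean> {
  const res = await fetch(
    "https://challenges.cloudflare.com/turnstile/v0/siteverify",
    {
      method: "POST",
      body: new URLSearchParams({
        secret: process.env.TURNSTILE_SECRET_KEY ?? "",
        response: token, // the token the Turnstile widget adds to the form
      }),
    }
  );
  const data = await res.json();
  return data.success === true;
}
```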
AI Model Provider Setup:
- OpenAI: Created account and API key (Paid - pay-per-use)
- Groq: Created account and API key (Free tier available)
- Together.ai: Created account and API key (Free tier available)
- Cerebras: Created account and API key (Free tier available)
I stored all API keys in environment variables rather than directly in the code.
Configuring Environment Variables
After creating accounts and generating API keys for each service, I needed to manually configure environment variables in two places: locally on my machine and in Vercel's cloud environment. This ensures the application can securely connect to each service without exposing sensitive keys in the code.
What Environment Variables Are:
Environment variables are configuration values stored separately from your code. They contain sensitive information like API keys, database passwords, and service URLs. By keeping these values out of your code, you prevent accidentally exposing them when sharing code or pushing to GitHub.
The Manual Configuration Process:
For each service (Pinecone, Supabase, Resend, Cloudflare, OpenAI, Groq, Together.ai, Cerebras), I had to:
- Generate the API key: Navigate to the admin/settings section of each service and create an API key
- Copy the key: Copy the key value (usually a long string of random characters)
- Add to `.env.local`: Open the `.env.local` file in my project root directory and add a line like `PINECONE_API_KEY=your_key_here`
- Add to Vercel: Log into Vercel's dashboard, navigate to my project settings, find the Environment Variables section, and manually add each key-value pair
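For context on how these variables are consumed, here is a minimal sketch of reading them in application code; the requireEnv helper is hypothetical, but it shows why exact variable names matter:

```ts
// A hypothetical helper showing how the app reads these variables at
// runtime; failing loudly beats the silent failures described below.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) throw new Error(`Missing environment variable: ${name}`);
  return value;
}

// Names must match .env.local and Vercel's dashboard exactly.
const pineconeApiKey = requireEnv("PINECONE_API_KEY");
const supabaseUrl = requireEnv("SUPABASE_URL");
```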
Why This Takes Time:
This process required switching between multiple browser tabs, carefully copying and pasting keys without errors, and ensuring the variable names in `.env.local` matched exactly what the code expected. A single typo in a variable name or key value would cause the service to fail silently or throw cryptic error messages.
For this project, I configured approximately 15-20 environment variables across 8 different services. The entire process took roughly 45-60 minutes of careful copying, pasting, and verification.
How AI Helped:
Claude Pro provided a checklist of which environment variables I needed for each service and the exact format they should follow. When services weren't connecting, Claude Code helped me identify whether the issue was a missing variable, incorrect variable name, or malformed key value.
Stage 5: Building the AI Chat System
The AI system uses RAG (Retrieval-Augmented Generation), the same technology that powers tools like Perplexity and Google's NotebookLM. RAG works by first searching for relevant information, then using that information to generate accurate responses. This keeps responses grounded in your actual content and sharply reduces the chance of the AI making things up.
How AI Helped Build This:
Building a RAG system requires writing code to process documents, generate embeddings (mathematical representations of text meaning), search for similar content, and orchestrate the conversation flow. Claude Code wrote all of this code based on descriptions of what I wanted the system to do. When the system returned errors or unexpected results, I pasted error messages into Claude Code, which debugged and fixed the implementation. I also used Google to find documentation about OpenAI's embedding API and ChatGPT to understand vector database concepts.
How It Works:
Content Processing: Blog posts and other content are split into chunks. Each chunk is converted into a vector (a mathematical representation of meaning) using OpenAI's embedding model. These vectors are stored in Pinecone along with the original text.
Query Processing: When a user asks a question, the question is also converted into a vector using the same embedding model.
Similarity Search: Pinecone finds the vectors most similar to the question vector, retrieving the corresponding text chunks.
Response Generation: The retrieved text chunks are sent to an AI model (Groq, Together, Cerebras, or OpenAI) along with the user's question. The AI generates a response based on this context.
Storage: The conversation is saved to Supabase for record-keeping.
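To make that flow concrete, here is a hedged end-to-end sketch of the query path; the index name, model choices, and metadata shape are illustrative assumptions rather than my exact implementation:

```ts
import OpenAI from "openai";
import { Pinecone } from "@pinecone-database/pinecone";

// Sketch of the query path; names and models are assumptions.
const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });
const pc = new Pinecone({ apiKey: process.env.PINECONE_API_KEY! });

export async function answer(question: string): Promise<string> {
  // 1. Convert the question into a vector with the same embedding
  //    model used to index the content.
  const embedding = await openai.embeddings.create({
    model: "text-embedding-3-small",
    input: question,
  });

  // 2. Retrieve the most similar content chunks from Pinecone.
  const results = await pc.index("site-content").query({
    vector: embedding.data[0].embedding,
    topK: 5,
    includeMetadata: true,
  });
  const context = results.matches
    .map((m) => m.metadata?.text)
    .join("\n---\n");

  // 3. Ask the LLM to answer using only the retrieved context.
  const completion = await openai.chat.completions.create({
    model: "gpt-4o-mini",
    messages: [
      { role: "system", content: `Answer using only this context:\n${context}` },
      { role: "user", content: question },
    ],
  });
  return completion.choices[0].message.content ?? "";
}
```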
Implementation Steps:
Claude Code wrote the code for each of these steps based on descriptions of what I wanted:
- Wrote a script to process content files and generate embeddings (a minimal sketch follows this list)
- Created an API endpoint to handle chat requests
- Built the frontend chat interface
- Connected everything to the database for conversation logging
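Here is a minimal sketch of what the content-processing script from the first step might look like; the chunking strategy, chunk size, and index name are assumptions:

```ts
import OpenAI from "openai";
import { Pinecone } from "@pinecone-database/pinecone";

// Sketch of an ingestion script: chunk a post, embed each chunk,
// and upsert the vectors with their original text as metadata.
const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });
const pc = new Pinecone({ apiKey: process.env.PINECONE_API_KEY! });

export async function indexPost(slug: string, body: string) {
  // Naive fixed-size chunking; real pipelines often split on paragraphs.
  const chunks = body.match(/[\s\S]{1,1000}/g) ?? [];

  const embeddings = await openai.embeddings.create({
    model: "text-embedding-3-small",
    input: chunks,
  });

  await pc.index("site-content").upsert(
    chunks.map((text, i) => ({
      id: `${slug}-${i}`,
      values: embeddings.data[i].embedding,
      metadata: { text }, // store the original text alongside the vector
    }))
  );
}
```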
When components didn't work together properly, I described the issue or pasted error messages, and Claude Code identified where connections were failing and fixed the integration points.
My Current Development Workflow
Building the site was one thing; maintaining and improving it is another. Here's how I work on the site now:
Local Development Process:
- Open the project in VS Code: I navigate to my project folder and open it in VS Code
- Start the development server: Run `npm run dev` in the terminal (npm is the package manager that comes with Node.js), which launches a local version of the site at `localhost:3000`
- Make changes using Claude Code: I describe what I want to change or add in plain language to Claude Code. It reads the relevant files, makes the modifications, and I see the changes instantly refresh in my browser
- Test the changes: I interact with the feature to verify it works correctly
- Debug if needed: If something breaks, I copy error messages from Chrome's developer console into Claude Code, which identifies the issue and fixes it
Deploying Changes:
- Test thoroughly locally: Verify everything works on my local machine
- Commit changes to Git: Use Git commands to save the changes with a description of what I modified
- Push to GitHub: Send the changes from my computer to GitHub
- Automatic deployment: Vercel detects the changes in GitHub and automatically deploys the updated site within 60 seconds
When Deployments Fail:
Sometimes deployments fail due to environment variable issues, package version conflicts, or code errors that only appear in production. When this happens, I copy the error logs from Vercel's dashboard into Claude Code, which analyzes the logs and tells me exactly what to fix. For particularly complex issues, I also search Google for similar error messages and use ChatGPT to get alternative explanations or solutions.
Why This Workflow Works:
This professional developer setup means I never edit the live site directly. I test everything locally first, so untested changes never reach visitors. Git provides a complete history of every change, so I can revert mistakes. The automatic deployment from GitHub to Vercel means updates go live in seconds without manual intervention.
Mobile Optimization
Over 60% of visitors access the site from mobile devices. Responsive design ensures the site works well on screens of all sizes, from phones to tablets to desktop monitors.
The site uses Tailwind CSS's responsive design utilities, which allow specifying different layouts, font sizes, and spacing for different screen widths. Next.js automatically optimizes images by generating multiple sizes and serving the appropriate version based on the device's screen resolution. Images are also lazy-loaded, meaning they only download when a user scrolls to them, which improves initial page load speed.
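As a small illustration of those patterns (not the site's actual components), here is a sketch combining Tailwind's breakpoint prefixes with next/image; the file path and dimensions are placeholders:

```tsx
import Image from "next/image";

// Illustrative sketch: Tailwind breakpoint prefixes adjust layout by
// screen width, while next/image handles resizing and lazy loading.
export function Hero() {
  return (
    <section className="flex flex-col gap-6 p-4 md:flex-row md:p-8">
      <Image
        src="/headshot.jpg" // placeholder image path
        alt="Portrait of Matt Oden"
        width={320}
        height={320}
        className="rounded-lg"
      />
      <p className="text-base md:text-lg">
        Stacks vertically on phones, side by side from the md breakpoint up.
      </p>
    </section>
  );
}
```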
Costs
The cost structure for modern web applications differs from traditional hosting. Instead of paying a fixed monthly fee regardless of usage, most services charge based on actual consumption. For low-traffic sites, this means using primarily free tiers.
One-Time Costs:
- V0 Pro: $20 (one month to design and export code)
- Domain: Already owned (registered 20 years ago, ~$12/year renewal)
- Total: ~$20 in year one
Monthly Running Costs:
The services listed below offer free tiers that are sufficient for low-traffic websites:
- Vercel: Free (100GB bandwidth)
- Supabase: Free (500MB database)
- Pinecone: Free (100K vectors)
- Resend: Free (100 emails/day)
- Cloudflare: Free
AI API costs are pay-per-use. Because I rely primarily on free tiers from Groq, Together.ai, and Cerebras, costs average $3-5/month with moderate traffic. OpenAI embeddings for content processing incur minimal charges.
Traditional web hosting typically costs $20-100/month without any AI capabilities.
Tools and Resources
Core Infrastructure:
- Vercel - Hosting and deployment
- GitHub - Version control
- GoDaddy - Domain registration
- VS Code - Code editor
- Node.js - JavaScript runtime
- npm - Package manager
- Git - Version control software
AI and Development:
- Base44 - AI website builder
- Lovable - AI website builder
- V0.dev - AI website builder
- Claude Pro - AI assistant for guidance and explanations
- Claude Code - AI coding assistant
- ChatGPT - AI assistant for alternative explanations
- OpenAI - Embeddings
- Groq - Fast inference
- Together.ai - Open source models
- Cerebras - High-performance inference
Backend Services:
- Supabase - Database
- PostgreSQL - Database engine
- Pinecone - Vector database
- Resend - Email delivery
- Cloudflare Turnstile - Bot protection
Learning Resources:
- Next.js Documentation
- React Documentation
- Tailwind CSS Documentation
- Chrome DevTools
- Perplexity - AI-powered search
Final Thoughts
Building sophisticated web applications now requires less programming experience than before. AI tools like V0, Claude Pro, and Claude Code enable non-engineers to create production-quality websites through conversation and iteration. Modern cloud services simplify infrastructure through well-documented APIs and generous free tiers.
What AI Can't Do:
AI provides instructions and writes code, but you still need to:
- Follow setup procedures for each service
- Copy and paste API keys and configuration values
- Recognize when something isn't working and describe the problem
- Test features to verify they work as intended
- Debug by providing error messages to AI tools
What Makes This Achievable:
Each platform provides clear documentation. When that documentation isn't enough, AI assistants can interpret it, explain technical concepts, generate the necessary code, and debug issues. The combination makes previously inaccessible technical tasks manageable for non-programmers willing to follow instructions and troubleshoot methodically.
The Build Process:
Design with V0, set up a local development environment with Node.js and Git, deploy to Vercel, connect backend services (Supabase, Pinecone, Resend), and add AI capabilities through vector databases and language model APIs (Groq, Together.ai, Cerebras, OpenAI).
Let's Connect
Reach out through the contact form at MattOden.com, or connect with me on LinkedIn.