I recently built PinpointAI, an AI-powered newsletter SaaS that sends users personalized daily email digests based on their interests. You describe what you care about in plain English, and PinpointAI uses semantic search to find the most relevant content from across the web and delivers it to your inbox.
My personal goal for this project was to see how efficiently I could use AI tools to help build it.
What PinpointAI Does
The concept is simple: you sign up, add topics in natural language like “Rust programming and WebAssembly” or “startup fundraising strategies,” pick your preferred delivery time, and every day you get a curated email with the most relevant content matched to your interests.
Under the hood, it fetches content from sources like Hacker News, generates vector embeddings for both your topics and the content, then uses cosine similarity to rank and surface the best matches. No keyword matching or manual tagging – it actually understands what you’re interested in.
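Strip away the infrastructure and the ranking idea is small. Here’s a rough conceptual sketch in TypeScript – in the real app the same ranking happens inside Postgres via pgvector (more on that below), and the names here are illustrative:

```typescript
// Conceptual sketch only: score content items against a topic embedding
// with cosine similarity and keep the best matches.
type ContentItem = { title: string; url: string; embedding: number[] };

function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

function topMatches(topicEmbedding: number[], items: ContentItem[], limit = 10): ContentItem[] {
  return [...items]
    .sort(
      (x, y) =>
        cosineSimilarity(topicEmbedding, y.embedding) -
        cosineSimilarity(topicEmbedding, x.embedding),
    )
    .slice(0, limit);
}
```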
Planning
I started by brainstorming with ChatGPT to settle on a tech stack that could support the project.
This project touches a lot of different technologies. Here’s what I landed on:
Backend: NestJS with TypeScript. NestJS gave me a solid modular architecture out of the box – dependency injection, decorators, guards, scheduled tasks. The backend is organized into modules for auth, users, topics, content fetching, curation, email sending, and scheduling.
Frontend: Next.js 16 with React 19 and Tailwind CSS. The frontend lives in a /web directory and handles the dashboard where users manage their topics and delivery preferences.
Database: PostgreSQL with pgvector for vector similarity search. This is the core of the curation engine – storing 1536-dimension embeddings and running cosine similarity queries directly in the database. I used Neon for hosted Postgres in production and Docker with pgvector locally.
ORM: Prisma. Prisma’s schema-first approach made it straightforward to define the data model, including the vector columns using its Unsupported type for pgvector fields (there’s a schema sketch below).
Auth: Clerk. Clerk handles both the frontend authentication (sign-up, sign-in) and syncs user data to the backend via webhooks (sketched below). This saved me from building an entire auth system from scratch.
AI/Embeddings: OpenAI API using the text-embedding-3-small model. Every topic and content item gets an embedding generated, which powers the semantic matching.
Email: SendGrid for composing and sending the digest emails.
Scheduling: NestJS’s built-in @nestjs/schedule module with cron expressions. Content gets fetched hourly, and the email delivery queue runs every minute to check if any users are due for their digest (the cron wiring is sketched below).
Hosting: Railway for the backend, Vercel for the Next.js frontend.
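To make the pgvector setup concrete, here’s roughly what the vector columns look like in a Prisma schema. The models and field names below are illustrative rather than copied from the repo, but the Unsupported("vector(1536)") part is the trick that matters:

```prisma
// Illustrative models – names are hypothetical; the point is that Prisma
// can't express pgvector columns natively, so they go through Unsupported.
model Topic {
  id        String                       @id @default(cuid())
  userId    String
  text      String
  embedding Unsupported("vector(1536)")?
  createdAt DateTime                     @default(now())
}

model ContentItem {
  id          String                       @id @default(cuid())
  title       String
  url         String                       @unique
  source      String
  embedding   Unsupported("vector(1536)")?
  publishedAt DateTime?
}
```

Prisma Client leaves Unsupported fields out of its generated API, which is why the similarity queries later have to drop down to raw SQL.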
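The Clerk-to-backend sync mentioned above is a webhook handler on the NestJS side. This is a hedged sketch rather than the actual controller – Clerk signs its webhooks with Svix, so the payload gets verified before anything is trusted:

```typescript
// Hypothetical NestJS handler for Clerk webhooks (route, payload handling and
// the upsert step are illustrative). Assumes the app is created with
// NestFactory.create(AppModule, { rawBody: true }) so req.rawBody is available.
import { BadRequestException, Controller, Headers, Post, Req } from '@nestjs/common';
import type { RawBodyRequest } from '@nestjs/common';
import type { Request } from 'express';
import { Webhook } from 'svix';

@Controller('webhooks/clerk')
export class ClerkWebhookController {
  @Post()
  handle(@Req() req: RawBodyRequest<Request>, @Headers() headers: Record<string, string>) {
    const wh = new Webhook(process.env.CLERK_WEBHOOK_SECRET ?? '');
    let event: { type: string; data: { id: string; email_addresses: { email_address: string }[] } };
    try {
      event = wh.verify(req.rawBody!.toString(), {
        'svix-id': headers['svix-id'],
        'svix-timestamp': headers['svix-timestamp'],
        'svix-signature': headers['svix-signature'],
      }) as typeof event;
    } catch {
      throw new BadRequestException('Invalid webhook signature');
    }

    if (event.type === 'user.created') {
      // e.g. upsert the user record: { clerkId: event.data.id, email: ... }
    }
    return { received: true };
  }
}
```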
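And the scheduling wiring, which is the smallest piece of the stack – again a sketch with illustrative method names:

```typescript
// Illustrative NestJS scheduler using @nestjs/schedule's @Cron decorator.
import { Injectable } from '@nestjs/common';
import { Cron, CronExpression } from '@nestjs/schedule';

@Injectable()
export class SchedulerService {
  // Pull fresh content from the sources once an hour.
  @Cron(CronExpression.EVERY_HOUR)
  async fetchContent() {
    // fetch Hacker News items, store them, queue embedding generation
  }

  // Every minute, check whether any user is due for their daily digest.
  @Cron(CronExpression.EVERY_MINUTE)
  async processDeliveryQueue() {
    // find users whose delivery time just passed and send their digests
  }
}
```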
The Vibe Coding Process
This whole project was built by vibe coding with Claude. Rather than meticulously planning every implementation detail upfront, I worked conversationally – describing what I wanted, iterating on the output, and letting Claude handle the heavy lifting of writing the actual code.
The process looked something like this:
Starting with architecture. I described the product idea and Claude helped me think through the architecture – which services to use, how to structure the modules, what the data model should look like. The CLAUDE.md file in the repo is essentially the blueprint that came out of those early conversations, covering everything from the module breakdown to the curation flow to the MVP scope.
Building module by module. I worked through the app one module at a time: auth integration with Clerk, the content fetching pipeline for Hacker News, the embedding generation and curation engine (the embedding call is sketched below), the email sender. For each piece, I’d describe what I needed and Claude would generate the implementation, which I’d review and iterate on.
Handling the tricky parts. The pgvector integration was one of the more interesting challenges. Getting Prisma to work with vector columns, writing raw SQL for cosine similarity queries (sketched below), batching embedding generation for content items – these are the kinds of things that would normally require a lot of documentation reading and trial and error. Working with Claude made it significantly faster.
Wiring it all together. The last stretch was connecting the frontend to the backend, getting the scheduled jobs running correctly, and making sure the whole pipeline worked end-to-end: fetch content, generate embeddings, match against user topics, send emails at the right time.
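Two of the pieces mentioned above are worth sketching out. The embedding generation itself is only a few lines with the official OpenAI Node SDK – a sketch, with the batching simplified and the helper name made up:

```typescript
// Sketch of batch embedding generation with the official openai package.
import OpenAI from 'openai';

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

// Embed a batch of texts (topic descriptions or content titles) in one call.
async function embed(texts: string[]): Promise<number[][]> {
  const res = await openai.embeddings.create({
    model: 'text-embedding-3-small',
    input: texts,
  });
  return res.data.map((d) => d.embedding); // 1536-dimension vectors
}
```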
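And the cosine similarity query that Prisma can’t express natively, which goes through $queryRaw. The table and column names are assumptions to illustrate the pattern, not pulled from the repo – pgvector’s <=> operator returns cosine distance, so similarity is 1 minus that:

```typescript
// Sketch of a pgvector cosine-similarity query via Prisma's raw SQL API.
import { PrismaClient } from '@prisma/client';

const prisma = new PrismaClient();

async function matchContent(topicEmbedding: number[], limit = 10) {
  const vector = `[${topicEmbedding.join(',')}]`; // pgvector text literal
  return prisma.$queryRaw<{ id: string; title: string; url: string; similarity: number }[]>`
    SELECT id, title, url,
           1 - (embedding <=> ${vector}::vector) AS similarity
    FROM "ContentItem"
    WHERE embedding IS NOT NULL
    ORDER BY embedding <=> ${vector}::vector
    LIMIT ${limit};
  `;
}
```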
What I Learned
Vibe coding with Claude is particularly effective for projects like this where you’re integrating a lot of different services and libraries. Each individual piece isn’t necessarily hard, but the combinatorial complexity of getting Clerk + NestJS + Prisma + pgvector + OpenAI + SendGrid + Next.js all working together is where the time usually goes.
The key was maintaining a clear CLAUDE.md that served as the project’s source of truth. As the architecture evolved, that file kept Claude grounded in the decisions we’d already made, which meant less context-switching and fewer contradictory suggestions as the project grew.
Going from the initial commit to a working product took about two days.
You can check out the project on GitHub.