Welcome to the first issue. I'll keep these focused: one build, broken down, with everything you need to try it yourself.

Today: the win/loss analysis system I built to extract patterns from hundreds of sales calls and query them on demand.

A note before we start: I'm not a developer. I built this entire system using vibe coding — AI tools that write code for you while you focus on what to build, not how. More on that at the end.

The Problem

Win/loss analysis is one of the most valuable things a product marketer can do. Talk to enough customers and you'll find patterns that change how you position, what you build, and how you sell.

But it's brutal to do manually. I had thousands of Gong calls to get through. Using the built‑in summarize feature, going call by call, took weeks. I got valuable insights but couldn't sustain the process.

So I tried to automate it. That's where things went wrong, until it finally worked.

What Didn't Work

Bulk transcript uploads to ChatGPT

I uploaded transcripts directly and asked for insights. Two problems: I could only upload 10 files at a time, and the outputs were inconsistent. Same transcript, different insights on every run. Not usable.

One giant custom GPT

I put all transcripts into a single document and uploaded it as knowledge to a custom GPT. It hallucinated constantly. I'd ask about objections I knew were in the calls and it would tell me there were none. Too much unstructured data.

The core issue: I was asking AI to do too much at once without giving it structure.

What Worked: Version 1

I built a two‑layer system.

Layer 1: Extraction GPT

A custom GPT with strict output formatting. For each transcript I upload, it extracts:

  • Key pain points, organized by theme
  • Top objections and concerns
  • What resonated with the prospect
  • Desired outcomes and success criteria
  • Competitor mentions with context
  • Company size and industry
  • Personas with titles

The output is consistent every time. Each analysis gets saved to a Google Doc.
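
To make "strict output formatting" concrete, the GPT's instructions pin the response to a template along these lines. This is illustrative, not my verbatim instructions; adjust the fields to whatever your calls actually contain:

    Call: [prospect / date]
    Company: [size, industry]
    Personas: [titles on the call]
    Pain points (by theme):
      - [theme]: [supporting quote or paraphrase]
    Objections and concerns:
      - [objection]: [context and how it came up]
    What resonated: [features, messages, or outcomes the prospect reacted to]
    Desired outcomes: [success criteria in the prospect's words]
    Competitor mentions:
      - [competitor]: [context]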

Layer 2: Synthesis GPT

A second custom GPT that takes all the extracted summaries and identifies patterns across them. This is where the real insights come from.
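
The synthesis prompt doesn't need to be clever. Something along these lines works (again, illustrative wording, not my exact instructions):

"Here are extraction summaries from [N] calls. Identify the patterns that repeat across them: pain points, objections, what resonated, and desired outcomes. Group the findings by company size and industry, note roughly how often each pattern shows up, and flag anything that contradicts how we currently position the product."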

One finding changed how we positioned. We discovered that frontline industry buyers cared most about an outcome we weren't leading with in our messaging. The pattern only emerged across dozens of calls; I never would have connected those dots manually.

This system turned weeks of analysis into hours. But it had a limitation: I could only answer questions based on what I thought to extract initially. If a new question came up, I had to start over.

What I Built Next: The Full System

I rebuilt it from the ground up with three goals: automate extraction, make everything queryable, and surface insights without me having to ask.

Automated extraction via Gong API

No more downloading transcripts manually. The system pulls calls directly from Gong and processes them automatically.
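
If you want to see what "pulls calls directly from Gong" looks like, here's a minimal Python sketch using the requests library and Gong's access-key authentication. The endpoint paths and parameter names are my best reading of Gong's API reference, so treat them as assumptions to verify against your own workspace.

    import os
    import requests

    # Access key + secret come from Gong (Settings -> API).
    # Verify the endpoint paths and parameter names against Gong's API docs.
    GONG_BASE = "https://api.gong.io/v2"
    AUTH = (os.environ["GONG_ACCESS_KEY"], os.environ["GONG_ACCESS_KEY_SECRET"])

    def list_recent_calls(from_iso: str, to_iso: str) -> list:
        """Fetch call metadata for a date range (ISO-8601 timestamps)."""
        resp = requests.get(
            f"{GONG_BASE}/calls",
            auth=AUTH,
            params={"fromDateTime": from_iso, "toDateTime": to_iso},
            timeout=30,
        )
        resp.raise_for_status()
        return resp.json().get("calls", [])

    def get_transcripts(call_ids: list) -> list:
        """Fetch transcripts for a batch of call IDs."""
        resp = requests.post(
            f"{GONG_BASE}/calls/transcript",
            auth=AUTH,
            json={"filter": {"callIds": call_ids}},
            timeout=60,
        )
        resp.raise_for_status()
        return resp.json().get("callTranscripts", [])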

Vector database for instant search

All transcripts are converted into embeddings and stored in a vector database (such as Pinecone or Supabase with pgvector). This means I can query the full dataset, not just the pre‑extracted summaries.
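
Here's a rough sketch of the storage side, assuming OpenAI embeddings and Pinecone (Supabase with pgvector follows the same pattern). The index name and metadata fields are placeholders I made up for illustration.

    import os

    from openai import OpenAI
    from pinecone import Pinecone

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    pc = Pinecone(api_key=os.environ["PINECONE_API_KEY"])
    index = pc.Index("gong-transcripts")  # placeholder index name

    def store_transcript(call_id: str, text: str, metadata: dict) -> None:
        """Embed one transcript chunk and upsert it into the vector index.
        In practice you'd split long calls into chunks and store each one,
        since vector DBs cap how much metadata a single record can carry."""
        embedding = client.embeddings.create(
            model="text-embedding-3-small",
            input=text,
        ).data[0].embedding
        index.upsert(vectors=[{
            "id": call_id,
            "values": embedding,
            "metadata": {**metadata, "text": text},  # keep the text for retrieval
        }])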

Now I can ask things like:

  • “Find every mention of [specific feature] across lost deals.”
  • “What are the top objections raised by enterprise customers?”
  • “Create a monthly summary of won deal patterns.”

Questions I didn't think to ask upfront are now answerable instantly.
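
Under the hood, each of those questions is a retrieval step plus a generation step. Here's a hedged sketch that reuses the client and index objects from the storage example above; the model name and prompt wording are just illustrative choices.

    def answer_question(question: str, top_k: int = 8) -> str:
        """Embed the question, pull the closest transcript chunks from the index,
        and ask the model to answer using only that context."""
        q_emb = client.embeddings.create(
            model="text-embedding-3-small",
            input=question,
        ).data[0].embedding
        # To scope a query (e.g. only lost deals), add a metadata filter here.
        results = index.query(vector=q_emb, top_k=top_k, include_metadata=True)
        context = "\n\n".join(m.metadata.get("text", "") for m in results.matches)
        response = client.chat.completions.create(
            model="gpt-4o",
            messages=[
                {"role": "system",
                 "content": "Answer using only the call excerpts provided. "
                            "If the excerpts don't cover it, say so."},
                {"role": "user",
                 "content": f"Excerpts:\n{context}\n\nQuestion: {question}"},
            ],
        )
        return response.choices[0].message.content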

Chat interface

I layered a simple chat interface on top so I can ask questions of the dataset in plain English, with no query syntax to write.
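
The interface itself can be as simple as a loop around answer_question() from the sketch above; a Streamlit page or a small web app is just a friendlier wrapper around the same call.

    # Bare-bones command-line version. Swap the input/print pair for a web
    # or Streamlit front end when you want something nicer to look at.
    while True:
        question = input("Ask about your calls (or type 'quit'): ").strip()
        if question.lower() == "quit":
            break
        print(answer_question(question))
        print("-" * 40)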

Slack bot for automated insights

A Slack bot pushes weekly win/loss summaries automatically. I don't have to remember to check. The insights come to me.
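
The simplest version of this I know is a Slack incoming webhook plus whatever scheduler you already have (cron, GitHub Actions, a hosted job). The summary question and variable names below are illustrative, and the sketch reuses answer_question() from earlier.

    import os
    import requests

    # Incoming-webhook URL for the channel where the summary should land.
    SLACK_WEBHOOK_URL = os.environ["SLACK_WEBHOOK_URL"]

    def post_weekly_summary() -> None:
        """Build the weekly win/loss summary and push it to Slack.
        In practice you'd also filter the underlying vector query by call date."""
        summary = answer_question(
            "Summarize the most common objections, win themes, and competitor "
            "mentions from the past week's calls."
        )
        requests.post(SLACK_WEBHOOK_URL, json={"text": summary}, timeout=30)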

How I Built This Without Being a Developer

I used Cursor, an AI coding tool that writes code based on what you describe. There are other tools like Claude Code that do similar things. The key is understanding what these tools can do, not writing code yourself.

Start with a plan. Before writing any code, I ask Cursor to create a plan. I describe what I want the system to do, what APIs I have available, and any constraints. Then I ask it to propose an approach before building anything.

Build iteratively. I learned the hard way that asking the tool to do everything at once leads to errors. Now I build in small pieces. Get one thing working. Then add the next.

Here's an example of a starter prompt:

"Build me a Python script that connects to the Gong API, pulls transcripts daily, uses the OpenAI API to extract structured insights (pain points, objections, competitor mentions, personas), and saves them to a vector database. Start with the Gong connection first. We'll add the other pieces after that works."

Notice I'm specific about what I want but not how to implement it. The AI handles the code. I focus on the outcome.

Learn enough to ask for the right things. This is the real skill. I didn't know what a vector database was until I needed one. I was searching for a way to work with thousands of transcripts and discovered that vector databases store large volumes of text efficiently and make it searchable by meaning, not just keywords.

That one piece of knowledge changed everything. Instead of saying "save this to Google Drive," I could say "store this in a vector database." The AI knew what to do with that.

You don't need to know how to write a database query. But you need to know that vector databases exist and what they're good for. That's the difference between someone who can use AI coding tools effectively and someone who gets stuck.

Key Takeaways

  • Don't ask AI to analyze everything at once. Break it into extraction, then synthesis.
  • Structure is what makes it reliable. The custom GPT with strict output formatting solved the consistency problem.
  • Vector databases unlock questions you didn't think to ask. Pre‑extracted summaries are useful, but queryable transcripts are powerful.
  • Automate the boring parts. If you're manually downloading and uploading files, you'll stop doing it. Connect the APIs.
  • You don't need to code. You need to know what to ask for. Learn enough about the tools to describe what you want. AI handles the rest.

Try It Yourself

Start with the extraction GPT. Build a custom GPT with a strict output format for whatever source you're analyzing. Focus on consistency before scale. Once that works, explore vector databases for semantic search across the full transcripts.

If there's interest, I'll share the actual GPT instructions and Cursor prompts I use in a future issue.

Keep Going

If this was useful, send me a note on LinkedIn and tell me what you're building.