Building Paveway: A 90-Day AI Product Experiment

Day 1 of 90: Starting a 90-day journey to build an AI-powered career development platform from scratch. Exploring responsible AI design, safety considerations, and human-AI interaction while documenting every decision and learning in public.

Tags: AI Product Development · Career Development · Learning in Public · Constitutional AI · Claude · TypeScript

Day 1 of 90

Today I’m starting a 90-day experiment: building an AI-powered career development platform from scratch, documenting every decision, and learning in public.

This is about exploring what it takes to build responsible, reliable AI products—not just wrapping APIs, but thinking deeply about the design, safety, and human implications of AI systems.

Why This Matters

I’ve been building software for over 20 years. I’ve seen technologies come and go, hype cycles rise and fall. But AI feels different. Not because of what it can do today, but because of the questions it forces us to ask.

How do we build AI systems that are genuinely helpful? How do we handle uncertainty and edge cases when outputs are unpredictable? How do we ensure advice-giving systems don’t cause harm? How do we design human-AI interactions that feel natural and trustworthy?

These questions fascinate me. As an engineer, I’ve always cared about edge cases, failure modes, and what happens when things go wrong. AI amplifies all of that—which is exactly why I want to dive deeper.

The Challenge

Here’s where I’m starting: I’m a strong full-stack engineer with deep experience in React, TypeScript, and building products end-to-end. But my AI product experience is thin. I’ve used AI tools. I’ve integrated APIs. I’ve read the papers. But I haven’t built substantial AI products that grapple with the real challenges.

So over the next 90 days, I’m going to change that—and I’m documenting the journey.

The Project: Paveway

I’m building Paveway—an AI-powered platform that helps engineers reach their next level of seniority.

Here’s the concept: You log your daily work (what you shipped, problems you solved, decisions you made). Paveway analyzes your GitHub activity, builds context over time, and provides personalized, actionable career advice powered by Claude.
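To make the concept concrete, here is a minimal TypeScript sketch of what a daily log entry and the prompt assembly might look like. The field names and prompt wording are illustrative assumptions, not Paveway's actual schema; the point is that keeping prompt construction a pure function makes it easy to test what context the model actually sees.

```typescript
// Hypothetical shape of a daily log entry (field names are illustrative).
interface DailyLog {
  date: string;        // ISO date, e.g. "2024-01-15"
  shipped: string[];   // things shipped that day
  problems: string[];  // problems solved
  decisions: string[]; // decisions made
}

// Assemble a career-advice prompt from recent logs before sending it to Claude.
// A pure function like this can be unit-tested without calling the API.
function buildAdvicePrompt(logs: DailyLog[], goal: string): string {
  const history = logs
    .map(
      (l) =>
        `${l.date}: shipped ${l.shipped.join("; ") || "nothing"}; ` +
        `solved ${l.problems.join("; ") || "nothing"}; ` +
        `decided ${l.decisions.join("; ") || "nothing"}`
    )
    .join("\n");
  return (
    `You are a careful career coach for software engineers.\n` +
    `The engineer's goal: ${goal}\n` +
    `Recent work log:\n${history}\n` +
    `Give one specific, actionable suggestion. If the log is too thin ` +
    `to advise responsibly, say so instead of guessing.`
  );
}
```

Separating prompt assembly from the API call also means the "what context does the model get" question becomes a testable code path rather than something buried in a request handler.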

Why This Project?

This isn’t arbitrary. Paveway forces me to solve problems that matter for AI product development:

Building reliable advice systems: How do you ensure AI gives helpful guidance, not harmful recommendations? What happens when the model is uncertain? How do you validate advice quality?

Context management: Career development requires understanding someone’s journey over weeks and months. How do you build and maintain that context effectively? What belongs in memory vs. retrieval?

Human-AI interaction design: Raw AI output isn’t enough. How do you design prompts, structure data, and build UX that makes AI genuinely useful? When should you guide users vs. let them explore freely?

Safety and ethics: Career advice shapes people’s lives. What safeguards do you build? How do you handle bias and uncertainty? When should the system stay silent instead of giving advice?
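One way to make "stay silent instead of giving advice" a concrete design decision is a gating function between the model and the user. The sketch below is a hypothetical guardrail, not a real safety system: it assumes the model self-reports a confidence score and flags sensitive topics, and the threshold and topic list are placeholder assumptions.

```typescript
// Hypothetical safety gate: the model is asked to self-report confidence
// and flag sensitive topics; this function decides whether advice is shown.
// Field names, the 0.7 threshold, and the blocked-topic list are all
// illustrative assumptions, not a spec.
interface AdviceCandidate {
  text: string;
  confidence: number;        // model's self-reported confidence, 0..1
  sensitiveTopics: string[]; // e.g. ["compensation", "legal"]
}

const CONFIDENCE_FLOOR = 0.7;
const BLOCKED_TOPICS = new Set(["legal", "medical", "mental-health"]);

function gateAdvice(candidate: AdviceCandidate): string | null {
  // Stay silent rather than guess: low confidence means no advice shown.
  if (candidate.confidence < CONFIDENCE_FLOOR) return null;
  // Out-of-scope topics are deferred to humans, not answered.
  if (candidate.sensitiveTopics.some((t) => BLOCKED_TOPICS.has(t))) return null;
  return candidate.text;
}
```

Returning `null` forces the UI layer to handle the "no advice" case explicitly, which keeps the silence behavior visible in code review rather than hidden in prompt wording.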

I’m particularly interested in exploring ideas from Constitutional AI and other work on building reliable, aligned systems. These aren’t just academic exercises—they’re practical design challenges.

The Plan

I’m committing 20 hours per week, every week, for 90 days:

Weeks 1-4: Foundation

  • Build core logging system with voice transcription
  • Integrate Claude API for basic analysis
  • Start using the app myself daily
  • Deep dive into Anthropic’s research on Constitutional AI

Weeks 5-8: Sophistication

  • Add GitHub integration and code pattern analysis
  • Implement vector search for semantic context
  • Build reliability and safety layers
  • Invite beta users and iterate
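The vector-search item above boils down to ranking stored log embeddings by similarity to a query. A minimal sketch of that ranking logic, assuming plain number arrays stand in for embeddings that would in practice come from an embedding model and live in a vector store:

```typescript
// Cosine similarity between two equal-length vectors.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Return the ids of the k stored documents most similar to the query.
function topK(
  query: number[],
  docs: { id: string; embedding: number[] }[],
  k: number
): string[] {
  return docs
    .map((d) => ({ id: d.id, score: cosineSimilarity(query, d.embedding) }))
    .sort((x, y) => y.score - x.score)
    .slice(0, k)
    .map((d) => d.id);
}
```

A real implementation would delegate this to a vector database index, but having the ranking logic as a small pure function is useful for testing retrieval quality offline.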

Weeks 9-12: Polish & Document

  • Comprehensive documentation and case studies
  • Performance optimization and monitoring
  • Technical deep-dive blog posts
  • Open source reusable components

Why I’m Sharing This Publicly

First, accountability. By writing this, I’m committing publicly. Every week, I’ll share updates: what I built, what I learned, what didn’t work, what surprised me.

Second, I believe in learning in public. The process matters as much as the outcome. If my experiments lead to insights about building AI products, those insights are valuable whether Paveway succeeds or not.

Third, I’m hoping to connect with others thinking about these problems. If you’re building AI products, grappling with safety considerations, or just interested in career development tools—let’s talk.

What Success Looks Like

In 90 days, I want to have:

  • A working product that demonstrates thoughtful AI product design
  • Deep knowledge of prompt engineering, safety considerations, and context management
  • Documented learnings through blog posts, case studies, and open source contributions
  • Real users who find Paveway genuinely helpful

But honestly? The real success is becoming the kind of engineer who thinks carefully about building AI responsibly. The kind who asks “should we?” alongside “can we?”

Day 1 Starts Now

Today, I’m setting up the project foundation. By this time next week, I’ll have a deployed app where I can log my daily work and see it in a timeline.

Small steps. Consistent progress. Public accountability.

If you’re interested in following along, I’ll be posting weekly updates here and on LinkedIn. And if you have thoughts, feedback, or want to be a beta user, reach out.

Here’s to the next 90 days.

Week 1 Goals

  • ✅ Published commitment post
  • ⬜ Next.js project setup complete
  • ⬜ Authentication working
  • ⬜ First daily log entry saved
  • ⬜ App deployed to Vercel

Next: Week 2 - Building the Foundation →


This is part of a 90-day series on building Paveway. Follow along for weekly updates on the technical decisions, challenges, and learnings from building an AI-powered career development platform.