
THE SELF-TAUGHT ENGINEER GUIDE

how i went from zero engineering knowledge to founding engineer at a yc startup in 8 months. the exact tools, stack, and 80/20 lessons. no cs degree required.

by austin kennedy · founding engineer @ origami (yc f24)

0. Context — Who This Is For

Eight months ago I knew nothing about engineering. I had a Finance degree from UIUC. I was building $200 websites for small businesses. I didn't know what an API was.

Now I'm a founding engineer at Origami (YC F24) doing everything from back-end infrastructure to design engineering to AI agent development. I built 6 AI agents that automated my entire operations job — went from 40+ hours of manual work to about 10.

I'm not claiming to be the best at any of this. But I learned the 80/20 of all these things, and most of the practical lessons are in this guide.

This guide is for you if:

  • You don't have a CS degree (neither do I)
  • You want to learn engineering fast, not "properly"
  • You're willing to build ugly things and ship them
  • You want to know what tools actually matter in 2026

This guide is NOT for you if:

  • You want a comprehensive CS education
  • You're already a senior engineer
  • You want theory before practice

1. The Stack (Pick One, Go Deep)

The single biggest mistake beginners make is dabbling. They learn a bit of Python, then switch to Go, then try Rust, then wonder why they can't build anything.

Pick one stack. Go deep. Build 10 things with it.

Here's the stack I use every day at a YC startup:

Frontend

React, Next.js, Tailwind CSS

Backend

Node.js, Express, Python

Database

PostgreSQL via Supabase

Auth

Clerk

AI / Agents

Claude API, MCP, OpenAI

Hosting

Vercel (frontend), Render (backend)

Why this stack? Next.js is the most popular React framework. Supabase gives you a Postgres database with a nice UI and JS client. Vercel deploys your frontend in one click. And everything is in TypeScript/JavaScript, which means one language for your entire application.

If I were starting from zero today, I'd learn JavaScript/TypeScript and nothing else for six months. Depth beats breadth when you're self-taught.

2. The Tools That Changed Everything

The tools available in 2026 are insane. They didn't exist when most senior engineers learned to code. Use them shamelessly.

Cursor

cursor.com

AI-first code editor

Basically ChatGPT integrated into your IDE (IDE is just a fancy word for 'where you can write code and see if it actually works'). Press Cmd+I to open Agent Mode — it can search your codebase, edit files across your project, and run terminal commands autonomously. I use Opus 4.6 as my model — it's really, really good for complex tasks.

My tip: Use Plan Mode (Shift+Tab) before building anything complex. It researches your codebase first so it doesn't hallucinate.

Claude by Anthropic

claude.ai

LLM for writing and creative work

Whenever I want to do any copywriting, script-writing, or anything that involves creativity, I use Claude. They have a feature called 'Projects' — SUPER VALUABLE for when you want to give the LLM a ton of examples of 'what good actually looks like.' I also use the Claude API directly in my AI agents.

My tip: For coding inside Cursor, use Claude Opus 4.6. For standalone creative work, use Claude in the browser with Projects.

ChatGPT by OpenAI

chatgpt.com

LLM for coding and factual work

Whenever I want deep coding help, or just want 'brute-force' type of work, I use GPT. They also have a 'Deep Research' feature that I use whenever I want to get a TON of information on any given subject.

My tip: Use Deep Research before starting any project to understand the landscape of libraries and approaches.

V0 by Vercel

v0.dev

AI frontend generator

Create a beautiful frontend in Next.js from a prompt. It previews everything for you, uses React + Tailwind + Shadcn UI, and you can deploy to Vercel in one click. Over 4 million people have used it. When I need a UI fast, I start here.

My tip: Paste a screenshot of a design you like and say 'make this.' V0 can work from images.

Chrome DevTools MCP

AI-powered browser debugging

This is a game-changer for design engineering. It gives your AI agent (in Cursor) direct access to your live Chrome browser — it can inspect the DOM, check CSS, analyze network requests, and even click through your app. I use it to iterate on design by having the AI see what the page actually looks like, not just what the code says.

My tip: Pair this with screenshots in Cursor. Take a screenshot of the bug, paste it in, and let the AI fix it with live browser context.

Supabase

supabase.com

Database + auth + storage

PostgreSQL via Supabase is working incredibly well for us. You get a full Postgres database with a nice dashboard, JavaScript client, and real-time subscriptions. It's basically Firebase but with a real database.

My tip: Use the Supabase JS client for reads and the service_role key for writes from your backend.

3. The Three Phases

Looking back, my learning happened in three distinct phases. Each one required a different approach.

Phase 1: Build Ugly Things (Months 1-2)

Your only goal is to ship something. It doesn't matter if the code is terrible. It doesn't matter if you'd be embarrassed for anyone to see it. The point is proving to yourself that you can make a computer do what you want.

  • Build a personal website with Next.js and deploy it on Vercel
  • Build a simple CRUD app (todo list, notes app, anything)
  • Connect to a database (Supabase) and read/write data
  • Use an AI API (Claude or OpenAI) for one feature
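If the "simple CRUD app" bullet feels abstract, here's the entire idea in one minimal sketch: create, read, update, delete against an in-memory store. The `Note` type and names are mine, purely for illustration; once this clicks, you swap the Map for a Supabase table.

```typescript
// A tiny in-memory CRUD store -- the same four operations you'll
// later back with a real database. All names here are illustrative.
type Note = { id: number; text: string };

class NoteStore {
  private notes = new Map<number, Note>();
  private nextId = 1;

  create(text: string): Note {
    const note = { id: this.nextId++, text };
    this.notes.set(note.id, note);
    return note;
  }

  read(id: number): Note | undefined {
    return this.notes.get(id);
  }

  update(id: number, text: string): boolean {
    const note = this.notes.get(id);
    if (!note) return false;
    note.text = text;
    return true;
  }

  delete(id: number): boolean {
    return this.notes.delete(id);
  }

  list(): Note[] {
    return [...this.notes.values()];
  }
}
```

Build this, then put an Express route in front of each method, then point the methods at a database. That's the whole progression.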

I started by building $200 websites for small businesses. The code was awful. But each one was a problem I'd never solved before, and every bug I fixed was a concept I'd never encountered. Client work forced me to ship.

Phase 2: Build Alongside Better People (Months 3-5)

This is where the real acceleration happens. You need to be in an environment where you're the worst engineer in the room. Reading production code teaches you how problems get solved in the real world.

  • Join a startup, open-source project, or find a mentor
  • Read more code than you write — study how production apps are structured
  • Learn Git properly (push more, brother)
  • Start building features end-to-end: database → API → frontend

At Origami, I was surrounded by engineers who had years of experience I didn't. I learned more in my first month than in a year of solo study. The environment matters more than the curriculum.

Phase 3: Build Things That Automate Your Job (Months 6-8)

Now you have enough context to see the opportunities. Look at where your team spends time on repetitive work. Build agents and tools that eliminate it. This is where engineering becomes magic.

  • Identify your team's most repetitive workflows
  • Build internal tools and AI agents to automate them
  • Learn deployment, monitoring, and making things reliable
  • Ship something that saves real hours every week

Here's what my project stack looked like by this phase — each one building on the last:

  • TIS (The Intelligent Search): Claygent replica with better functionality + screenshot tools. Deployed on Render with 3 sync API routes across 3 separate repos.
  • ORYO-MCP: MCP server for workflow generation. Working for real clients like Instafleet, Luthor, and more.
  • Cold Outbound Consultant: Managed 25 client accounts and their cold outbound process. Clients averaged 15-20% response rates.
  • Money-Collector: We were collecting $5k/month when I joined. After building a system to track billing and collect cash, we rose to $65k, $57k, $47k, $62k/month, all run by me.
  • The Nest: I hated all my admin + account management work, so I built an MCP server with an open-source front end to automate call prep, invoice sending, and billing tracking. Essentially automated my job.

Engineering 12-13 hours a day was the norm in this phase. The first version of everything was a CLI tool only I could use. That was fine — it proved the concept. Every week I was stacking more proof that I could actually ship. Ship ugly, iterate fast.

4. How I Built 6 AI Agents

Two months into my role at Origami, I was spending 2-3 hours every day on tasks I didn't want to do — and that a computer could handle far more reliably. So I built The Nest — a set of reactive agents that automated my entire operations job.

What Each Agent Does

Call Prep Agent

Problem: Hunting through transcripts from 8 separate meetings to remember one detail a prospect mentioned.

Solution: After every Zoom call, CircleBack sends a webhook → n8n processes the transcript → the agent extracts key points, pain points, and action items → everything lands in Supabase. Before my next call, I just ask 'prep me for my call with [client].'
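The first step of that pipeline, turning a webhook payload into a database row, is just a transform function. A sketch of the shape, noting that the payload fields below are hypothetical (the real CircleBack webhook will differ) and that the "extraction" here is a naive placeholder for what the LLM actually does:

```typescript
// Normalize an incoming call-transcript webhook into a row for the
// database. The payload shape is hypothetical -- adapt it to what
// your transcription provider actually sends.
type TranscriptWebhook = {
  meetingId: string;
  client: string;
  transcript: string;
};

type CallPrepRow = {
  meeting_id: string;
  client: string;
  action_items: string[];
};

function toCallPrepRow(payload: TranscriptWebhook): CallPrepRow {
  // Naive stand-in for extraction: treat lines starting with "TODO:"
  // as action items. In the real agent, an LLM pass does this.
  const actionItems = payload.transcript
    .split("\n")
    .map((line) => line.trim())
    .filter((line) => line.startsWith("TODO:"))
    .map((line) => line.slice("TODO:".length).trim());

  return {
    meeting_id: payload.meetingId,
    client: payload.client,
    action_items: actionItems,
  };
}
```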

Transcript Agent

Problem: Manually transferring context across the org. Copy-pasting transcripts, pinging teammates for details.

Solution: Transcripts automatically land in a PostgreSQL database via n8n. They're vectorized for semantic search. Anyone on the team can query any conversation instantly.
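"Vectorized for semantic search" sounds fancy, but the core comparison is cosine similarity between embedding vectors. A sketch of that math in TypeScript; in practice the comparison runs inside Postgres (e.g. via pgvector), not in application code:

```typescript
// Cosine similarity between two embedding vectors -- the measure
// behind "which transcript chunks are closest to this query?"
function cosineSimilarity(a: number[], b: number[]): number {
  if (a.length !== b.length) throw new Error("dimension mismatch");
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  // 1 = same direction (very similar), 0 = unrelated (orthogonal)
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}
```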

Onboarding Agent

Problem: Every closed deal meant a half-day of manual setup across Clerk, Stripe, HubSpot, Vitally, email, and Slack.

Solution: Agent picks up new deal events from HubSpot, creates the customer in Stripe, sets up the Clerk organization, configures Vitally, creates the Slack channel, and sends the welcome email. Zero manual work.

Billing Agent

Problem: Navigating to Stripe, finding the right customer, creating invoices, tracking payments.

Solution: Agent handles invoice creation, payment tracking, and sends follow-ups automatically. Grew Origami revenue from $5k/month to $65k/month with this system.
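The follow-up logic at the heart of that agent is a filter: unpaid and past due beyond a grace period. A minimal sketch, using made-up field names rather than Stripe's actual invoice schema:

```typescript
// Decide which invoices need a payment follow-up. Field names are
// illustrative, not Stripe's real invoice object.
type Invoice = { id: string; paid: boolean; dueDate: Date };

function needsFollowUp(
  invoices: Invoice[],
  now: Date,
  graceDays = 3
): string[] {
  const graceMs = graceDays * 24 * 60 * 60 * 1000;
  return invoices
    .filter(
      (inv) => !inv.paid && now.getTime() - inv.dueDate.getTime() > graceMs
    )
    .map((inv) => inv.id);
}
```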

CRM Agent

Problem: Keeping HubSpot up to date with deal stages, contact info, and activity logs.

Solution: Auto-updates HubSpot deal properties, syncs external IDs, and logs activities after every customer interaction.

Slack Bot Agent

Problem: Team members pinging me for information I'd already documented somewhere.

Solution: Listens for @Bot mentions in Slack, queries the knowledge base, and responds with context. Saved hours of back-and-forth.

The Architecture

Here's how it all fits together:

1. Triggers — Zoom calls (CircleBack webhooks), Slack mentions, HubSpot deal events
2. Ingestion (n8n) — Workflows listen for events, parse data, and write tasks to Supabase
3. Router (Node.js) — Picks up tasks, determines which agent should handle them
4. Agent Workers (Python + Agno) — Specialized agents execute tasks, update CRMs, send emails, create follow-ups
5. Notifications — Results fire back into Slack, HubSpot timelines, and email
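The router in step 3 can be as simple as a lookup table from task type to agent. A sketch; the task types and agent names here are invented for illustration, not the actual ones in The Nest:

```typescript
// Minimal task router: map a task's type to the agent that handles
// it. Types and agent names are illustrative.
type Task = { id: string; type: string; payload: unknown };

const AGENT_FOR_TYPE: Record<string, string> = {
  "transcript.created": "call-prep-agent",
  "deal.closed": "onboarding-agent",
  "invoice.due": "billing-agent",
  "slack.mention": "slack-bot-agent",
};

function routeTask(task: Task): string {
  const agent = AGENT_FOR_TYPE[task.type];
  // Fail loudly on unknown types so bad events surface immediately
  // instead of silently disappearing from the queue.
  if (!agent) throw new Error(`no agent registered for ${task.type}`);
  return agent;
}
```

Starting with a dumb lookup table beats starting with a framework: you can always swap in smarter routing once the task types stabilize.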

Tech Used

Workflows

n8n (self-hosted or cloud)

Database

Supabase (PostgreSQL)

Agent Framework

Agno (Python)

Router

Node.js + Express

LLM

Claude API + OpenAI

Integrations

Stripe, HubSpot, Clerk, Vitally, Slack

Frontend

Next.js + Clerk + Supabase

Deployment

Vercel (frontend), Render (backend)

5. Deploying Everything

Deployment used to scare me. Now it's the easiest part. Here's exactly what I use:

Frontend → Vercel

  • Link your GitHub repo to Vercel
  • Set your environment variables
  • Every push to main auto-deploys
  • That's it. Seriously.

Backend → Render

  • Go to Render, click New → Web Service
  • Connect your GitHub repo
  • Set build command: npm install
  • Set start command: node app.js
  • Every push auto-deploys, just like Vercel

Design Iteration → Chrome DevTools MCP + Screenshots

This is my secret weapon for design engineering. Add the Chrome DevTools MCP server to Cursor. Now your AI agent can:

  • See your live browser (inspect DOM, CSS, network)
  • Take screenshots and compare them to your design
  • Fix layout issues by seeing what's actually rendered
  • Click through your app like a real user

Pair this with taking screenshots and pasting them directly into Cursor with Opus 4.6. The model sees the visual bug and fixes it in context. This workflow is incredibly fast.

6. The AI Copywriter (Side Project)

My copywriting background (from freelancing and running content for Jesse Itzler with SEI) helped me spot an opportunity: personal branding agencies need to produce a lot of content that sounds like their clients, not like AI slop.

So I built an AI copywriter using Next.js + the Claude API. The key insight:

Use Claude's Projects feature to feed it dozens of examples of your client's actual writing style. Then when you prompt it, it mimics their voice — not generic AI voice.

The workflow I built:

1. Run a script to grab all their Instagram reel transcripts via Apify
2. Throw those transcripts into a Claude Project as context
3. Use Wispr Flow to think out loud on a walk — capture ideas via voice
4. Prompt Claude: "Cook me up a post about [topic], using the writing style from these transcripts"
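Step 4 is really just prompt assembly: pack real examples of the client's voice into context before the ask. A sketch of what that helper might look like; the wording and function name are my own, not the actual product code:

```typescript
// Assemble a style-mimicry prompt from example transcripts.
// Illustrative only -- the point is leading with concrete examples
// of the client's real voice, then making the ask.
function buildCopyPrompt(topic: string, transcripts: string[]): string {
  const examples = transcripts
    .map((t, i) => `Example ${i + 1}:\n${t}`)
    .join("\n\n");
  return [
    "Here are examples of my client's actual writing voice:",
    examples,
    `Write a post about ${topic} in exactly this voice: same rhythm, same vocabulary, no generic AI phrasing.`,
  ].join("\n\n");
}
```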

This went from $200 websites to applications worth thousands. A copywriting background plus engineering skills is a combination most engineers don't have.

7. Mistakes I Made

I'd be lying if I said the path was smooth. Here's what I'd do differently:

Mistake: Dabbling in multiple languages instead of going deep on one

Lesson: Pick TypeScript. Build everything with it for 6 months. You can learn other languages later.

Mistake: Watching tutorials instead of building

Lesson: Tutorials feel productive but aren't. Build something, get stuck, Google the specific problem, fix it. That's the loop.

Mistake: Not pushing to Git enough

Lesson: Push more, brother. Seriously. Small, frequent commits. It's a habit that saves you constantly.

Mistake: Perfectionism before shipping

Lesson: The first version of my AI agents was a CLI tool only I could use. That was fine. It proved the concept. Ship ugly, iterate fast.

Mistake: Not learning testing early

Lesson: Test cases and assertions — build the habit early. It's the difference between 'it works on my machine' and 'it actually works.'

Mistake: Ignoring error messages

Lesson: 80% of debugging is reading the error message carefully. Seriously. Read it before you Google it.

8. What I'm Still Learning (Honest Self-Assessment)

I want to be real about this. Being self-taught means there are massive gaps. Acknowledging them is how you close them. Here's where I know I need to get better — and what I'm actively studying.

Gaps I'm Closing Right Now

Testing Discipline

Honest take: I ask for test cases and some assertions exist, but a sustained testing habit isn't there yet. This is a big unlock.

What I'm doing: Building the muscle of writing tests before shipping. Even simple assertions save hours of debugging later.

Observability (Logs, Metrics, Traces)

Honest take: Essential for agents + MCP + servers. I use Sentry but it's not yet systematic across everything I build.

What I'm doing: Learning OpenTelemetry and building proper observability into every new project from day one.

Concurrency & Queues

Honest take: Rate limits, exponential backoff, idempotency keys — I've hit these problems but haven't mastered them. Same with BullMQ/Redis for job control, cancellation, and timeouts.

What I'm doing: This is critical for stable multi-agent runs and reliable batch jobs. Active area of study.
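For anyone studying the same gap, here's a sketch of the two patterns named above: exponential backoff (with "full jitter") and idempotency keys. The function names are my own, and in practice a queue like BullMQ handles the actual retry scheduling.

```typescript
// Exponential backoff with full jitter, plus an idempotency key --
// the two patterns that make retried jobs safe. Sketch only.

function backoffDelayMs(attempt: number, baseMs = 500, capMs = 30_000): number {
  // Deterministic exponential growth, capped: 500, 1000, 2000, ...
  return Math.min(capMs, baseMs * 2 ** attempt);
}

function withJitter(delayMs: number): number {
  // "Full jitter": pick uniformly in [0, delay] so a burst of
  // failed jobs doesn't retry in lockstep (thundering herd).
  return Math.random() * delayMs;
}

function idempotencyKey(source: string, eventId: string): string {
  // The same event always yields the same key, so a retried webhook
  // can't create a duplicate invoice or Slack channel downstream.
  return `${source}:${eventId}`;
}
```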

Git Practices

Honest take: Push more, brother. Seriously. Small, frequent commits — I know this but still catch myself going too long between pushes.

What I'm doing: Building the habit of atomic commits. It's simple but makes a massive difference.

Data Layer

Honest take: I use Postgres + Supabase heavily but haven't gone deep on schema constraints, migrations, indices, or pgvector for retrieval quality.

What I'm doing: Moving from 'it works' to 'it's properly modeled.' Less CSV thrash, more reproducible experiments.

Security

Honest take: Input validation, output schema enforcement, token management — I know the concepts but don't apply them rigorously enough yet.

What I'm doing: Every new MCP server and API route now gets input validation from day one.
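Here's what "input validation from day one" can look like even without a library (in practice a schema library like zod does this more cleanly). The `CreateInvoiceInput` shape is hypothetical, just to show the habit:

```typescript
// Hand-rolled input validation for an API route. The input shape is
// hypothetical -- the habit of validating at the boundary is the point.
type CreateInvoiceInput = { customerId: string; amountCents: number };

function validateCreateInvoice(body: unknown): CreateInvoiceInput {
  if (typeof body !== "object" || body === null) {
    throw new Error("body must be an object");
  }
  const b = body as Record<string, unknown>;
  if (typeof b.customerId !== "string" || b.customerId.length === 0) {
    throw new Error("customerId must be a non-empty string");
  }
  if (
    typeof b.amountCents !== "number" ||
    !Number.isInteger(b.amountCents) ||
    b.amountCents <= 0
  ) {
    throw new Error("amountCents must be a positive integer");
  }
  // Return only the validated fields -- never pass raw input through.
  return { customerId: b.customerId, amountCents: b.amountCents };
}
```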

The Bigger Picture — What's Next to Learn

Being honest about what you don't know is just as important as showing what you've built. Here's my study roadmap:

Cloud Infrastructure & DevOps

I deploy to Vercel/Render now. Learning real infra (Docker internals, Kubernetes, CI/CD pipelines, Terraform) unlocks running complex pipelines reliably at scale.

Core Machine Learning

I use pretrained models and orchestrate them — but I haven't dug into training, loss functions, or fine-tuning. Even a light touch here lets you move from using models to tuning them for your own workflows.

Operating Systems Fundamentals

What is a process? A thread? How does memory work (virtual vs physical addresses)? How do caches work at each level? Understanding a single system is the prerequisite for understanding distributed systems.

Distributed Systems

I'm building multi-agent systems already. Understanding distributed correctness — MapReduce, consensus protocols (Raft, Paxos), sharding, replication — will be a superpower when scaling.

Computer Networking

TCP/IP, DNS, load balancing, protocol design (HTTP/2, gRPC). Would help debug those MCP transport/network race conditions with more rigor.

Design & UX Engineering

Human-computer interaction, design systems, accessibility (WCAG, ARIA). Engineering + design chops make you dangerous — you can go from idea to usable MVP solo.

The pattern I've noticed: every time I learn the fundamentals behind something I was already using, my ability to debug and build jumps dramatically. The abstractions make more sense when you know what's underneath.

9. Every Resource I Used

Here's everything, organized by what you need at each stage.

Start Here (Day 1)

AI Tools (Use These Daily)

For Building Agents

Deployment

Content & Scraping

That's the full playbook. No gatekeeping, no fluff. If you have questions, DM me on X or LinkedIn. I respond to everything.

If you want to see more of what I'm building, check out the blog or projects page.

Now go build something.