how i went from zero engineering knowledge to founding engineer at a yc startup in 8 months. the exact tools, stack, and 80/20 lessons. no cs degree required.
by austin kennedy · founding engineer @ origami (yc f24)
Eight months ago I knew nothing about engineering. I had a Finance degree from UIUC. I was building $200 websites for small businesses. I didn't know what an API was.
Now I'm a founding engineer at Origami (YC F24) doing everything from back-end infrastructure to design engineering to AI agent development. I built 6 AI agents that automated my entire operations job — went from 40+ hours of manual work to about 10.
I'm not claiming to be the best at any of this. But I learned the 80/20 of all these things, and most of the practical lessons are in this guide.
This guide is for you if:
This guide is NOT for you if:
The single biggest mistake beginners make is dabbling. They learn a bit of Python, then switch to Go, then try Rust, then wonder why they can't build anything.
Pick one stack. Go deep. Build 10 things with it.
Here's the stack I use every day at a YC startup:
React, Next.js, Tailwind CSS
Node.js, Express, Python
PostgreSQL via Supabase
Clerk
Claude API, MCP, OpenAI
Vercel (frontend), Render (backend)
Why this stack? Next.js is the most popular React framework. Supabase gives you a Postgres database with a nice UI and JS client. Vercel deploys your frontend in one click. And everything is in TypeScript/JavaScript, which means one language for your entire application.
If I were starting from zero today, I'd learn JavaScript/TypeScript and nothing else for 6 months. Depth beats breadth when you're self-taught.
The tools available in 2026 are insane. They didn't exist when most senior engineers learned to code. Use them shamelessly.
AI-first code editor
Basically ChatGPT integrated into your IDE (IDE is just a fancy word for 'where you can write code and see if it actually works'). Press Cmd+I to open Agent Mode — it can search your codebase, edit files across your project, and run terminal commands autonomously. I use Opus 4.6 as my model — it's really, really good for complex tasks.
My tip: Use Plan Mode (Shift+Tab) before building anything complex. It researches your codebase first so it doesn't hallucinate.
LLM for writing and creative work
Whenever I want to do any copywriting, script-writing, or anything that involves creativity, I use Claude. They have a feature called 'Projects' — SUPER VALUABLE for when you want to give the LLM a ton of examples of 'what good actually looks like.' I also use the Claude API directly in my AI agents.
My tip: For coding inside Cursor, use Claude Opus 4.6. For standalone creative work, use Claude in the browser with Projects.
LLM for coding and factual work
Whenever I want deep coding help, or just want 'brute-force' type of work, I use GPT. They also have a 'Deep Research' feature that I use whenever I want to get a TON of information on any given subject.
My tip: Use Deep Research before starting any project to understand the landscape of libraries and approaches.
AI frontend generator
Create a beautiful frontend in Next.js from a prompt. It previews everything for you, uses React + Tailwind + Shadcn UI, and you can deploy to Vercel in one click. Over 4 million people have used it. When I need a UI fast, I start here.
My tip: Paste a screenshot of a design you like and say 'make this.' V0 can work from images.
AI-powered browser debugging
This is a game-changer for design engineering. It gives your AI agent (in Cursor) direct access to your live Chrome browser — it can inspect the DOM, check CSS, analyze network requests, and even click through your app. I use it to iterate on design by having the AI see what the page actually looks like, not just what the code says.
My tip: Pair this with screenshots in Cursor. Take a screenshot of the bug, paste it in, and let the AI fix it with live browser context.
Database + auth + storage
PostgreSQL via Supabase is working incredibly well for us. You get a full Postgres database with a nice dashboard, JavaScript client, and real-time subscriptions. It's basically Firebase but with a real database.
My tip: Use the Supabase JS client with the anon key for reads, and keep the service_role key server-side for backend writes — it bypasses Row Level Security, so it must never reach the browser.
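Here's a minimal sketch of that read/write split using plain `fetch` against Supabase's REST layer (the `notes` table is hypothetical; the header shape follows Supabase's PostgREST convention):

```typescript
// Sketch of the read/write split (hypothetical `notes` table).
// Supabase's REST layer expects both an `apikey` header and a
// Bearer token; which key you pass decides your privileges.
function supabaseHeaders(key: string): Record<string, string> {
  return {
    apikey: key,
    Authorization: `Bearer ${key}`,
    "Content-Type": "application/json",
  };
}

// Browser/client code: read with the public anon key (RLS applies).
async function listNotes(projectUrl: string, anonKey: string) {
  const res = await fetch(`${projectUrl}/rest/v1/notes?select=*`, {
    headers: supabaseHeaders(anonKey),
  });
  return res.json();
}

// Server-only code: write with the service_role key. This key
// bypasses Row Level Security — never ship it to the browser.
async function insertNote(projectUrl: string, serviceKey: string, body: string) {
  await fetch(`${projectUrl}/rest/v1/notes`, {
    method: "POST",
    headers: supabaseHeaders(serviceKey),
    body: JSON.stringify({ body }),
  });
}
```

In practice you'd use the official `@supabase/supabase-js` client instead of raw `fetch` — the point is only where each key is allowed to live.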
Looking back, my learning happened in three distinct phases. Each one required a different approach.
Your only goal is to ship something. It doesn't matter if the code is terrible. It doesn't matter if you'd be embarrassed for anyone to see it. The point is proving to yourself that you can make a computer do what you want.
I started by building $200 websites for small businesses. The code was awful. But each one was a problem I'd never solved before, and every bug I fixed was a concept I'd never encountered. Client work forced me to ship.
This is where the real acceleration happens. You need to be in an environment where you're the worst engineer in the room. Reading production code teaches you how problems get solved in the real world.
At Origami, I was surrounded by engineers who had years of experience I didn't. I learned more in my first month than in a year of solo study. The environment matters more than the curriculum.
Now you have enough context to see the opportunities. Look at where your team spends time on repetitive work. Build agents and tools that eliminate it. This is where engineering becomes magic.
Here's what my project stack looked like by this phase — each one building on the last:
Claygent replica with better functionality + screenshot tools. Deployed on Render with 3 sync API routes across 3 separate repos.
MCP Server for workflow generation. Working for real clients like Instafleet, Luthor, and more.
Managed 25 client accounts and their cold outbound process. Clients averaged 15-20% response rates.
We were collecting $5k/month when I joined. After I built a system to track billing and collect cash, monthly collections hit $65k, $57k, $47k, and $62k over the following months — all run by me.
I hated all my admin + account management work, so I built an MCP server with an open-source front end to automate call prep, invoice sending, and billing tracking. Essentially automated my job.
Engineering 12-13 hours a day was the norm in this phase. The first version of everything was a CLI tool only I could use. That was fine — it proved the concept. Every week I was stacking more proof that I could actually ship. Ship ugly, iterate fast.
Two months into my role at Origami, I was spending 2-3 hours every day on tasks I didn't want to do — and that a computer could handle far more reliably. So I built The Nest — a set of reactive agents that automated my entire operations job.
Problem: Hunting through transcripts from 8 separate meetings to remember one detail a prospect mentioned.
Solution: After every Zoom call, CircleBack sends a webhook → n8n processes the transcript → the agent extracts key points, pain points, and action items → everything lands in Supabase. Before my next call, I just ask 'prep me for my call with [client].'
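The extraction step in that pipeline can be sketched like this. The webhook payload shape and the system prompt are illustrative (not CircleBack's real schema), and the model name is a placeholder — but the request body follows Anthropic's Messages API:

```typescript
// Illustrative webhook payload — not CircleBack's actual schema.
interface CallWebhook {
  client: string;
  transcript: string;
}

// Build the Messages API request that turns a raw transcript into
// structured notes. Model name is a placeholder.
function buildExtractionRequest(call: CallWebhook) {
  return {
    model: "claude-sonnet-4-5", // placeholder
    max_tokens: 1024,
    system:
      "Extract key points, pain points, and action items from this sales call. Reply as JSON.",
    messages: [
      { role: "user", content: `Client: ${call.client}\n\n${call.transcript}` },
    ],
  };
}

async function extractCallNotes(call: CallWebhook, apiKey: string) {
  const res = await fetch("https://api.anthropic.com/v1/messages", {
    method: "POST",
    headers: {
      "x-api-key": apiKey,
      "anthropic-version": "2023-06-01",
      "content-type": "application/json",
    },
    body: JSON.stringify(buildExtractionRequest(call)),
  });
  const data = await res.json();
  return data.content[0].text; // next stop: a Supabase insert
}
```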
Problem: Manually transferring context across the org. Copy-pasting transcripts, pinging teammates for details.
Solution: Transcripts automatically land in a PostgreSQL database via n8n. They're vectorized for semantic search. Anyone on the team can query any conversation instantly.
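Under the hood, "vectorized for semantic search" just means ranking by vector similarity — pgvector does this in SQL with its cosine-distance operator (`<=>`), but the math is simple enough to sketch directly:

```typescript
// Cosine similarity — the math behind pgvector's `<=>` operator
// (which returns cosine *distance*, i.e. 1 - similarity).
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Rank stored transcript chunks against a query embedding.
function topMatches(
  query: number[],
  chunks: { text: string; embedding: number[] }[],
  k = 3,
) {
  return [...chunks]
    .sort(
      (x, y) =>
        cosineSimilarity(query, y.embedding) -
        cosineSimilarity(query, x.embedding),
    )
    .slice(0, k);
}
```

In production the embeddings come from a model API and the ranking happens inside Postgres; this is just what the database is computing for you.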
Problem: Every closed deal meant a half-day of manual setup across Clerk, Stripe, HubSpot, Vitally, email, and Slack.
Solution: Agent picks up new deal events from HubSpot, creates the customer in Stripe, sets up the Clerk organization, configures Vitally, creates the Slack channel, and sends the welcome email. Zero manual work.
Problem: Navigating to Stripe, finding the right customer, creating invoices, tracking payments.
Solution: Agent handles invoice creation, payment tracking, and sends follow-ups automatically. Grew Origami revenue from $5k/month to $65k/month with this system.
Problem: Keeping HubSpot up to date with deal stages, contact info, and activity logs.
Solution: Auto-updates HubSpot deal properties, syncs external IDs, and logs activities after every customer interaction.
Problem: Team members pinging me for information I'd already documented somewhere.
Solution: Listens for @Bot mentions in Slack, queries the knowledge base, and responds with context. Saved hours of back-and-forth.
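The trigger for that agent is just mention parsing. Slack delivers mentions in message text as `<@USERID>`, so pulling the actual question out of an `app_mention` event looks roughly like this:

```typescript
// Extract the question from a Slack mention. A message typed as
// "@Bot what's our refund policy?" arrives as
// "<@U0BOT> what's our refund policy?" in the event payload.
function parseMention(text: string, botUserId: string): string | null {
  const marker = `<@${botUserId}>`;
  if (!text.includes(marker)) return null; // not addressed to the bot
  return text.replace(marker, "").trim(); // the actual question
}
```

Everything after this — querying the knowledge base, posting the reply — hangs off that extracted string.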
Here's how it all fits together:
n8n (self-hosted or cloud)
Supabase (PostgreSQL)
Agno (Python)
Node.js + Express
Claude API + OpenAI
Stripe, HubSpot, Clerk, Vitally, Slack
Next.js + Clerk + Supabase
Vercel (frontend), Render (backend)
The code is open source: github.com/austin02202016/the_nest_final
Deployment used to scare me. Now it's the easiest part. Here's exactly what I use:
npm install
node app.js

This is my secret weapon for design engineering. Add the Chrome DevTools MCP server to Cursor. Now your AI agent can:
Pair this with taking screenshots and pasting them directly into Cursor with Opus 4.6. The model sees the visual bug and fixes it in context. This workflow is incredibly fast.
My copywriting background (from freelancing and running content for Jesse Itzler with SEI) helped me spot an opportunity: personal branding agencies need to produce a lot of content that sounds like their clients, not like AI slop.
So I built an AI copywriter using Next.js + the Claude API. The key insight:
Use Claude's Projects feature to feed it dozens of examples of your client's actual writing style. Then when you prompt it, it mimics their voice — not generic AI voice.
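The same idea works through the API instead of the Projects UI: pack the client's real writing samples into the system prompt so the model imitates their voice. A rough sketch (the sample texts and prompt wording are my placeholders):

```typescript
// Build a style-matching prompt by front-loading real writing
// samples. Sample texts and prompt wording are illustrative.
function buildStylePrompt(samples: string[], task: string) {
  const examples = samples
    .map((s, i) => `--- Example ${i + 1} ---\n${s}`)
    .join("\n\n");
  return {
    system:
      "You are a ghostwriter. Match the voice, rhythm, and vocabulary " +
      "of the writing samples below exactly. Never sound like generic AI.\n\n" +
      examples,
    user: task,
  };
}
```

The more (and more representative) the samples, the less the output drifts toward default AI voice — which is exactly what Projects is doing for you behind the scenes.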
The workflow I built:
This went from $200 websites to applications worth thousands. The copywriting background plus engineering skills is a combination most engineers don't have.
I'd be lying if I said the path was smooth. Here's what I'd do differently:
Mistake: Dabbling in multiple languages instead of going deep on one
Lesson: Pick TypeScript. Build everything with it for 6 months. You can learn other languages later.
Mistake: Watching tutorials instead of building
Lesson: Tutorials feel productive but aren't. Build something, get stuck, Google the specific problem, fix it. That's the loop.
Mistake: Not pushing to Git enough
Lesson: Push more, brother. Seriously. Small, frequent commits. It's a habit that saves you constantly.
Mistake: Perfectionism before shipping
Lesson: The first version of my AI agents was a CLI tool only I could use. That was fine. It proved the concept. Ship ugly, iterate fast.
Mistake: Not learning testing early
Lesson: Test cases and assertions — build the habit early. It's the difference between 'it works on my machine' and 'it actually works.'
Mistake: Ignoring error messages
Lesson: 80% of debugging is reading the error message carefully. Seriously. Read it before you Google it.
I want to be real about this. Being self-taught means there are massive gaps. Acknowledging them is how you close them. Here's where I know I need to get better — and what I'm actively studying.
Honest take: I ask for test cases and some assertions exist, but a sustained testing habit isn't there yet. This is a big unlock.
What I'm doing: Building the muscle of writing tests before shipping. Even simple assertions save hours of debugging later.
Honest take: Essential for agents + MCP + servers. I use Sentry but it's not yet systematic across everything I build.
What I'm doing: Learning OpenTelemetry and building proper observability into every new project from day one.
Honest take: Rate limits, exponential backoff, idempotency keys — I've hit these problems but haven't mastered them. Same goes for BullMQ/Redis for job control, cancellation, and timeouts.
What I'm doing: This is critical for stable multi-agent runs and reliable batch jobs. Active area of study.
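The backoff piece, at least, is small enough to sketch. This is the standard "exponential backoff with full jitter" pattern: the delay ceiling doubles per attempt, and a random fraction of it is used so parallel retries don't stampede the API in sync:

```typescript
// Exponential backoff with full jitter. The delay ceiling doubles
// per attempt (capped), then a random fraction of it is used so
// concurrent retries spread out instead of retrying in lockstep.
function backoffDelayMs(attempt: number, baseMs = 500, capMs = 30_000): number {
  const ceiling = Math.min(capMs, baseMs * 2 ** attempt);
  return Math.random() * ceiling; // "full jitter"
}

// Generic retry wrapper around any flaky async call.
async function withRetry<T>(fn: () => Promise<T>, maxAttempts = 5): Promise<T> {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      if (attempt + 1 >= maxAttempts) throw err;
      await new Promise((r) => setTimeout(r, backoffDelayMs(attempt)));
    }
  }
}
```

Idempotency keys are the other half of the story: they make the *retried* request safe, so backoff and idempotency almost always travel together.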
Honest take: Push more, brother. Seriously. Small, frequent commits — I know this but still catch myself going too long between pushes.
What I'm doing: Building the habit of atomic commits. It's simple but makes a massive difference.
Honest take: I use Postgres + Supabase heavily but haven't gone deep on schema constraints, migrations, indices, or pgvector for retrieval quality.
What I'm doing: Moving from 'it works' to 'it's properly modeled.' Less CSV thrash, more reproducible experiments.
Honest take: Input validation, output schema enforcement, token management — I know the concepts but don't apply them rigorously enough yet.
What I'm doing: Every new MCP server and API route now gets input validation from day one.
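"Day one" validation doesn't need a library — an explicit check with readable failure messages goes a long way. A minimal sketch (the `CreateInvoice` shape is illustrative):

```typescript
// Illustrative request shape for an invoice-creation route.
interface CreateInvoice {
  customerId: string;
  amountCents: number;
}

// Validate untrusted input at the boundary: reject bad requests
// with a clear message, return a properly-typed value on success.
function validateCreateInvoice(body: unknown): CreateInvoice {
  const b = body as Partial<CreateInvoice>;
  if (typeof b?.customerId !== "string" || b.customerId.length === 0) {
    throw new Error("customerId must be a non-empty string");
  }
  if (
    typeof b?.amountCents !== "number" ||
    !Number.isInteger(b.amountCents) ||
    b.amountCents <= 0
  ) {
    throw new Error("amountCents must be a positive integer");
  }
  return { customerId: b.customerId, amountCents: b.amountCents };
}
```

Once this habit sticks, graduating to a schema library (Zod is the usual choice in this stack) is a one-afternoon change.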
Being honest about what you don't know is just as important as showing what you've built. Here's my study roadmap:
I deploy to Vercel/Render now. Learning real infra (Docker internals, Kubernetes, CI/CD pipelines, Terraform) unlocks running complex pipelines reliably at scale.
I use pretrained models and orchestrate them — but I haven't dug into training, loss functions, or fine-tuning. Even a light touch here lets you move from using models to tuning them for your own workflows.
What is a process? A thread? How does memory work (virtual vs physical addresses)? How do caches work at each level? Understanding a single system is the prerequisite for understanding distributed systems.
I'm building multi-agent systems already. Understanding distributed correctness — MapReduce, consensus protocols (Raft, Paxos), sharding, replication — will be a superpower when scaling.
TCP/IP, DNS, load balancing, protocol design (HTTP/2, gRPC). Would help debug those MCP transport/network race conditions with more rigor.
Human-computer interaction, design systems, accessibility (WCAG, ARIA). Engineering + design chops make you dangerous — you can go from idea to usable MVP solo.
The pattern I've noticed: every time I learn the fundamentals behind something I was already using, my ability to debug and build jumps dramatically. The abstractions make more sense when you know what's underneath.
Here's everything, organized by what you need at each stage.
Your code editor. Download it first.
Where you'll deploy everything. Free tier is generous.
Your database. Create a free project.
The official docs are genuinely excellent.
Creative work, copywriting, Projects feature for style matching.
Brute-force coding help, Deep Research for learning new topics.
Generate beautiful UIs from a prompt. Start here for frontend.
The 'USB-C port for AI.' Understand this protocol.
Low-code workflow builder. When {x} happens, do {y}.
For building agents programmatically.
Give your AI agent access to your live browser.
Frontend. Link repo, set env vars, push to deploy.
Backend. Node.js and Python services. Auto-deploys from Git.