Welcome to the Skiplevel newsletter. Every issue I share tips, knowledge, and insights for non-technical PMs to become technical leaders in the age of AI.

Keep an eye on your inbox and forward to a fellow PM who needs it!

Oh hey there! It’s been a few months since you last heard from me, so a re-introduction is in order. Hi, I’m Irene, founder of Skiplevel! Since 2021, I’ve helped more than 800 PMs and product teams build technical confidence and become technical leaders in their roles. You’ve probably noticed that AI is reshaping the PM role. PMs and product teams need clear, easy-to-digest technical education to stay confident and effective in the age of AI, and they need it more than ever.

Technical literacy is no longer a nice-to-have; it’s table stakes.

In the meantime…

Join the “Build Your Technical Confidence” Sprint: June 15 to July 26

We’re inviting a small group of PMs and Product leaders to supercharge their technical literacy and be ready for the new AI-driven world!

4 hours a week for 6 weeks. Full Skiplevel program access, a private Slack community, and a 1:1 coaching session with me. Everything you need to transform your technical confidence so you can hold your own with engineers and make AI work for you.

Limited spots available. Join the waitlist to get the full details first and get priority access before we open applications to the public.

A lot has changed for PMs with AI. Here’s what hasn’t.

Last week, Stanford released its 2026 AI Index. MIT Technology Review's summary opened with this line: "If you're following AI news, you're probably getting whiplash."

TL;DR: AI is a gold rush. AI is a bubble. AI is taking your job. AI can't read a clock. All of it is in your feed at once, and none of it agrees.

Stanford's own data notes that people are adopting AI faster than they picked up the personal computer or the internet, and that benchmarks, policies, and the job market are all struggling to keep up.

So if you're feeling behind, just remember that even the people whose full-time job is tracking this stuff are saying the pace is incoherent.

Things have changed for PMs and product teams, that’s for sure. And what's changed can be bucketed into 3 parts:

  1. How you operate individually (your day-to-day)

  2. How you serve your customers (building AI solutions to user problems)

  3. How you collaborate with your team (AI prototyping, writing requirements)

What hasn’t changed are the foundations of how software actually works under the hood.

And anchoring to what stays constant is the fastest way to cut through the AI noise without burning out… while actually keeping up.

AI is still software.

Software is still software… is still software… is still software. Even if the shape of it is AI.

The traditional software and engineering concepts still apply when you're working with engineers to ship an AI feature or using AI tools to build a prototype yourself.

Most of the "new" AI terms and concepts you keep seeing on LinkedIn are the same technical concepts you’ll find in traditional software, with new AI terminology layered on top.

The stronger your existing technical foundation, the easier it’ll be for you to pick up the new AI terms and concepts.

And that’s one of the ways Skiplevel’s newsletter will help you cut through the noise: by helping you stay grounded in the fundamentals before layering the AI building blocks on top.

Three must-know technical concepts and their AI equivalents

To help kick you off, here are 3 must-know technical terms and concepts and their AI equivalents.

APIs → MCP

No doubt you’ve heard of an API. They’re how one piece of software talks to another. With a traditional API, a developer writes the call. A human decides when to call Stripe, what parameters to pass, and what to do with the response. It's deterministic and hand-coded.

🎯 For example, your server sends an API request to Stripe to process a payment. The request contains parameters such as the payment amount and customer information. Stripe’s API responds with the status and ID of the payment.
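If you like seeing it concretely, here’s a minimal Python sketch of that hand-coded, deterministic call. The helper name and values are hypothetical, and the Stripe response is simulated rather than a real network call:

```python
# A minimal sketch of a traditional, hand-coded API call.
# A developer decides exactly when this runs and what parameters to send.

def build_payment_request(amount_cents, currency, customer_id):
    """Build the exact request a server would send to a payment API."""
    return {
        "url": "https://api.stripe.com/v1/payment_intents",
        "method": "POST",
        "params": {
            "amount": amount_cents,   # e.g. 2000 = $20.00
            "currency": currency,
            "customer": customer_id,
        },
    }

request = build_payment_request(2000, "usd", "cus_123")

# The payment provider responds with a status and an ID for the payment.
response = {"id": "pi_abc", "status": "succeeded"}  # simulated response
```

Notice that nothing here is decided at runtime by a model: every parameter and every step was written out by a human ahead of time.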

With MCP, the AI model is the one deciding when to call something, what to ask for, and how to use the response. All in real time and based on context.

🎯 For example, when you ask Claude to check your Google Calendar and draft a meeting invite, Claude makes an MCP call to a Google Calendar MCP server, which fetches your availability and returns it to Claude, which then uses that information to complete your request.

To be clear, MCP does not replace APIs; it’s built on top of them. When an AI model uses MCP to connect to Google Calendar, there’s actually a 2-step process happening under the hood:
1. The AI model makes an MCP call to the Google Calendar MCP server
2. The MCP server then makes a regular API call to Google Calendar’s actual API, gets the response, and passes it back to the model in a format it understands
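Here’s a toy Python sketch of that two-step relay. Everything is simulated and the function names are hypothetical (a real MCP server follows the Model Context Protocol spec), but the shape of the flow is the point:

```python
# A simulated sketch of the two-step MCP flow described above.

def google_calendar_api(date):
    """Stand-in for Google Calendar's actual REST API."""
    return {"date": date, "busy": ["09:00-10:00", "14:00-15:00"]}

def mcp_server_handle(tool_call):
    """Step 2: the MCP server turns the tool call into a regular API call,
    then passes the result back to the model in a format it understands."""
    api_response = google_calendar_api(tool_call["arguments"]["date"])
    return {"tool": tool_call["tool"], "result": api_response}

# Step 1: the model decides, based on context, to call the calendar tool.
tool_call = {"tool": "get_availability", "arguments": {"date": "2025-06-15"}}
result = mcp_server_handle(tool_call)
```

The key difference from the traditional API example: a model chose to make that tool call at runtime, but the underlying request is still a plain API call.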

Both APIs and MCP are standardized protocols for how two systems exchange information. If you understand this concept with APIs, you’ll understand it with MCP.

Logs → Model traces / evals

Logs are the record of everything that happens inside a running system. When something breaks or behaves unexpectedly, a developer pulls the logs to see exactly what happened, in what order, with what inputs and outputs.

🎯 For example, a user hits an error during checkout. A developer pulls the logs and can see the exact sequence: the user clicked "confirm order," a request was sent to the payment service, the payment service returned an error code, and the system failed silently instead of showing a helpful message. The log tells you what happened. You debug from there.
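Here’s a minimal sketch of what writing those logs looks like in code, with a simulated (hypothetical) payment-service response standing in for the real failure:

```python
# A minimal sketch of logging inside a checkout flow.
import logging

logging.basicConfig(format="%(asctime)s %(levelname)s %(message)s",
                    level=logging.INFO)
log = logging.getLogger("checkout")

def checkout(order_id):
    log.info("user clicked 'confirm order' (order=%s)", order_id)
    log.info("sending request to payment service (order=%s)", order_id)
    payment_status = "card_declined"  # simulated payment-service response
    if payment_status != "succeeded":
        # This is the line a developer finds when they pull the logs.
        log.error("payment service returned %s (order=%s)",
                  payment_status, order_id)
        return False
    return True

succeeded = checkout("ord_42")
```

Reading the log output top to bottom gives you the exact sequence of events, which is precisely what a developer reconstructs when debugging.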

A model trace is the record of everything that happened inside an AI interaction: what prompt the model received, what tools it called, what it returned, and in what order. An eval is the layer on top that asks: was that output actually any good?

🎯 For example, you're a PM testing a new AI feature that summarizes customer support tickets. You run it against 50 real tickets. The traces show you exactly what the model received and returned for each one. The evals score those outputs: did the summary capture the right information, was it the right length, did it miss anything critical? Together they tell you what happened and whether it worked.
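Here’s a toy sketch of that trace-plus-eval loop. The checks and ticket data are made up, and real evals are usually much richer, but the shape is the same: score each trace, then look at the pass rate:

```python
# A toy sketch of scoring model traces with simple evals.
# A trace records what the model received and returned; the eval scores it.

def eval_summary(trace):
    """Score one trace: did the summary keep key info and stay short?"""
    summary = trace["output"]
    checks = {
        "mentions_ticket_topic": trace["topic"] in summary.lower(),
        "right_length": len(summary.split()) <= 30,
    }
    return {"trace_id": trace["id"],
            "passed": all(checks.values()),
            "checks": checks}

traces = [
    {"id": 1, "topic": "refund",
     "output": "Customer requested a refund for a duplicate charge."},
    {"id": 2, "topic": "login",
     "output": "Customer said hello."},  # misses the actual topic
]
results = [eval_summary(t) for t in traces]
pass_rate = sum(r["passed"] for r in results) / len(results)
```

The traces tell you what happened; the scores tell you whether it worked, which is exactly the split described above.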

Both logs and model traces exist to answer the same question: what did my system actually do, and why? If you've ever dug into logs to debug unexpected behavior, you already understand why traces and evals exist. The AI just made "unexpected behavior" a lot harder to define.

Caching → RAG (Retrieval-Augmented Generation)

Caching is the practice of storing the result of an expensive operation so you don't have to redo it every time someone asks for it. Instead of hitting your database on every single request, you store the result somewhere faster and cheaper to access, and serve it from there.

🎯 For example, your product displays a leaderboard that pulls from a database of millions of records. Recalculating that leaderboard on every page load would be slow and expensive. So instead, you cache the result so page load is faster for every subsequent user.
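A minimal sketch of that cache in Python, with hypothetical leaderboard data, a simple time-based expiry, and a counter to show the expensive query only runs once:

```python
# A minimal sketch of result caching with a time-based expiry.
import time

_cache = {}              # {key: (value, computed_at)}
CACHE_TTL_SECONDS = 60   # assumption: recompute at most once a minute
query_count = 0          # tracks how often we pay the expensive cost

def compute_leaderboard():
    """Stand-in for a slow query over millions of records."""
    global query_count
    query_count += 1
    return ["alice", "bob", "carol"]

def get_leaderboard():
    entry = _cache.get("leaderboard")
    if entry and time.time() - entry[1] < CACHE_TTL_SECONDS:
        return entry[0]                  # cache hit: skip the expensive query
    result = compute_leaderboard()       # cache miss: pay the full cost once
    _cache["leaderboard"] = (result, time.time())
    return result

first = get_leaderboard()    # computes the leaderboard
second = get_leaderboard()   # served from the cache
```

Two page loads, one expensive query: that’s the whole idea.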

RAG is the same instinct applied to a different problem. Instead of making a model rely purely on what it was trained on, you retrieve the relevant information first, then hand it to the model along with the user's question.

🎯 For example, you're a PM building an AI assistant for your customer support team. Instead of training a custom model on your entire knowledge base (expensive and slow to update), you set up RAG so when a support agent asks a question, the system first retrieves the most relevant help articles and past ticket resolutions, then passes those to the model along with the question. The model answers using that retrieved context and not just its training data.
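And a toy sketch of the RAG pattern. The knowledge base here is hypothetical, and real systems typically retrieve with embeddings and a vector database rather than keyword matching, but the retrieve-then-prompt shape is the same:

```python
# A toy sketch of retrieval-augmented generation (RAG).

KNOWLEDGE_BASE = [
    "To reset a password, send the user a reset link from the admin panel.",
    "Refunds are processed within 5 business days of approval.",
    "Enterprise plans include SSO and a dedicated support channel.",
]

def retrieve(question, k=1):
    """Step 1: pull the most relevant documents for the question
    (naive keyword overlap stands in for embedding search)."""
    scored = [(sum(w in doc.lower() for w in question.lower().split()), doc)
              for doc in KNOWLEDGE_BASE]
    scored.sort(reverse=True)
    return [doc for _, doc in scored[:k]]

def build_prompt(question):
    """Step 2: hand the retrieved context to the model with the question."""
    context = "\n".join(retrieve(question))
    return f"Answer using this context:\n{context}\n\nQuestion: {question}"

prompt = build_prompt("How long do refunds take?")
```

The model never needed to be trained on your refund policy; the relevant document gets pulled in at question time instead.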

RAG doesn't replace caching; your system still caches data the traditional way. RAG is a pattern specifically for giving AI models access to information they weren't trained on, such as your internal docs, your product data, and your customer history.

Both caching and RAG exist to answer the same question: how do we avoid paying the full cost every time? Caching stores computed results so your app doesn’t need to re-query a database. RAG retrieves relevant context so the model doesn't have to hallucinate an answer it was never trained on. If you understand why caching exists, you understand why RAG exists.

Here are more examples of traditional software and their AI equivalents:

  • SQL Database → Vector database

  • Server → GPU / Model hosting

  • Third-party integration → AI vendor / Model provider

  • Latency → Inference latency

  • Business logic layer → AI / LLM layer

  • Environment variables / Config → System prompt

  • Rate limiting → Token limits / Context window

  • Integration testing → Prompt regression testing

But we’ll take it slow so you don’t burn out.

Welcome back and til next time!

From the field

I recently had a Skiplevel student onboarding call with a PM at Dell who's been building AI products for the past year. I asked her what finally made her enroll:

"I was searching Google and Coursera, but nothing consolidated all the technical terms I knew together… you had to go here, here, here, and here.

Then I got pushed into AI product design, and I started learning all the AI orchestration stuff like RAG, agentic AI. But by the end of the year, I realized nobody was talking about the orchestration stuff. And I still needed the foundations because it's not going anywhere. It's just getting extended with AI."

- PM at Dell

If this sounds familiar, hit reply and share your story with me!

Connect with me on LinkedIn and X, and follow Skiplevel on LinkedIn and X.
