why i watched how i use llms by andrej karpathy
A couple of weeks ago, I came across Andrej Karpathy's video How I Use LLMs. It's a more-than-two-hour walkthrough in which Karpathy, a leading AI scientist, shares his practical approach to using large language models (LLMs) like ChatGPT, Claude Sonnet, and Grok 3. Even though the video runs longer than a standard movie, I hit play: when one of the top AI scientists offers a hands-on demo of the tech supposedly gunning for my software engineering job, it feels like required viewing. 😅
why i watched it
The main reason? Curiosity, with a side of self-preservation. AI is advancing at breakneck speed, and if there’s one thing it’s exceptionally good at, it’s writing code. LLMs are reshaping software development, and I know that if I don’t keep up and learn how to make them work for me, my role could become obsolete.
I don’t think software engineering jobs will disappear entirely, but they’re definitely evolving.
what it made me think about
My career progression in recent years has been less about writing clever algorithms and more about understanding business needs, communicating with stakeholders, and managing projects from conception to delivery. LLMs have only accelerated this shift — writing code won’t be my main value-add anymore.
Instead, product sense and judgment will matter even more. The role is creeping into territory traditionally held by product managers. Engineers who can collaborate cross-functionally, ask the right questions, and make strategic decisions will stand out. It won't just be about knowing how to code; it'll be about knowing what to build and why.
Since LLMs lower the barrier to software creation, more people will be able to build their own tools. To stay relevant, engineers will need to specialise in areas where LLMs (and newcomers) struggle.
what i’m pondering for the future
🛠️ Engineering is evolving, not disappearing
- Software engineers will move further up the abstraction ladder, focusing on product vision, architecture, and problem definition.
- Writing great code quickly won’t be a differentiator — it’ll be table stakes with LLM support.
- Engineers will need to become skilled “prompters,” adept at guiding LLMs to produce the desired results.
🤝 Soft skills become hard requirements
- Communication, collaboration, and problem decomposition will become even more crucial than they already are.
- The ability to work with other humans to understand complex problems will be a key differentiator that LLMs can't replicate (yet!).
- Project management skills—breaking complex tasks into manageable chunks—will be essential for effectively directing LLM work.
🔍 Quality control remains human-centric
- Ensuring LLM-generated code is correct, efficient, and suitable will be a critical part of the job.
- Knowing what “good” code looks like will stay essential — engineers must distinguish between functional and optimal solutions.
- Spotting hallucinations and logical flaws in LLM outputs requires experience and technical insight.
other interesting bits
- Andrej Karpathy was a founding member of OpenAI before joining Tesla as its director of AI. More recently, he started an AI education company, Eureka Labs.
- Andrej coined the term vibe coding in February 2025.