Four weeks into my first-ever people management gig, 80% of my team quit. Some followed their old manager to a new company. Others just decided the team shakeup wasn't for them. It stung. But before it stung, I was panicking.
How were we going to get all this work done with 80% of the team?
It wasn't pretty, and it definitely wasn't quite on time, but we got there. I had to beg for people from other teams and sling a bunch of code myself, but we got there. The process taught me my first management lesson: assume you might be the last one standing, and you're still responsible for delivery. Are you prepared?
As I've moved into increasing levels of responsibility, I've still held to that core tenet. It's certainly harder to accomplish now as a CTO with multiple teams. I'll keep a local copy of the code working, and I'll review MRs to stay abreast of what's going on. I'll write design docs myself and have the team keep me honest on what's not going to work. If I needed to ship a bunch of code tomorrow, I'd admittedly be in trouble. But I'd be shipping by the end of the week for sure.
When I was an individual contributor, I knew when the bosses just didn't "get it" and I doubted their judgment accordingly. It felt like bosses came in with some map that they used at their last job, or two jobs ago, and they were trying to use that same map on the completely new terrain of our reality.
There's value in experience, for sure. Sometimes the map does work. But when your map says there's a clear path from A to B, and you're in the mess staring at a giant canyon where the road should be, no amount of pointing at the old map is going to get you across that canyon. You've got to solve a new problem.
Pre-AI, a bad leader would hit the wall where their preconceived notions didn't work with the real world, and they'd be forced to either level up (by properly engaging with the problem) or shut up (and let the team work it themselves). Post-AI, there's a new, ugly paradigm... bad leaders can now constantly talk to a sycophant AI who will tell them that actually, their map was correct all along. And the canyon is still there for their team.
Head chefs have to solve practical challenges that home cooks do not. You can make a beautiful dish of top-notch ingredients that you would serve to your mom for her birthday. But is it possible to crank that out every night, dozens of times a night? Can you price it profitably? Does it take three hours to make when you've got a 90-minute table turnaround? If you can't solve those issues, none of that dish's wonderful, complex architecture will be of any use to anyone in that restaurant.
And the same is true for leaders using AI. You can construct a beautiful ideal world in your head and in your Claude Code terminal where all of your ideas work perfectly. But turn them loose on the real world and everything comes apart. Good leaders are properly skeptical of this happening, and they mistrust anything the AI says until they validate it. Poor leaders don't want to live in the real world.