LLMs will confidently tell you to strip your bike pedals
I needed to remove the pedals from my bike. This is a completely googleable, well-documented, basic bike maintenance task. Naturally, I asked ChatGPT instead.
It gave me a beautiful answer — diagrams, emojis, pro tips, a “mental shortcut.” The instruction: push the Allen key DOWN. I sent photos of my exact setup. It confirmed: “you’re 100% set, that position is textbook.” Push down firmly, it said.
So I pushed down. Hard. It wouldn’t budge. The harder I pushed, the more stuck it got — because I was tightening it.
I found a video showing the opposite direction and asked ChatGPT about it. Instead of reconsidering, it doubled down: the video only "looked opposite" because of the camera angle, it explained, and then handed me a "bulletproof method" that was still wrong.
Only when I told it I had already figured out the right direction (up, not down) did it go “My bad” and suddenly agree with me.
At that point I did what I should have done from the start: I called my mechanic to confirm before applying any more force to something I clearly don’t understand.
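For the record, here is the rule I should have just looked up, sketched as code. The function and its names are mine, not anything ChatGPT produced, and it encodes the general threading convention, not advice about any particular crank position: pedals are threaded so that pedaling doesn't unscrew them, which makes the left pedal reverse-threaded, and the direction you turn the key flips again depending on which side of the crank arm you're looking from.

```python
def loosening_direction(pedal_side: str, viewed_from: str = "outboard") -> str:
    """Direction to turn the spindle to LOOSEN a pedal.

    Right pedal: standard (right-hand) thread -> loosens counterclockwise.
    Left pedal:  reverse (left-hand) thread   -> loosens clockwise.
    Directions are as seen from the pedal (outboard) side. An Allen key
    goes in from the inboard side of the crank arm, which mirrors the
    apparent rotation.
    """
    direction = {"right": "counterclockwise", "left": "clockwise"}[pedal_side]
    if viewed_from == "inboard":  # looking at the back of the crank arm
        direction = {"counterclockwise": "clockwise",
                     "clockwise": "counterclockwise"}[direction]
    return direction

for side in ("right", "left"):
    for view in ("outboard", "inboard"):
        print(f"{side} pedal, seen from {view}: turn {loosening_direction(side, view)}")
```

Four cases, two mirror flips. Exactly the kind of thing a chatbot will narrate confidently and get backwards.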
The actual lesson
This isn’t really about LLMs being unreliable — it’s about me being lazy. The information was one Google search away. There are dozens of videos showing exactly how to do this. Instead, I wanted the shortcut of asking a chatbot, and I almost stripped my pedal threads for it.
The LLM couldn’t do spatial reasoning about my specific setup. That’s not surprising. What’s embarrassing is that I kept asking it to confirm instead of just looking it up or asking someone who actually knows bikes.
If you don’t know enough about a topic to evaluate the answer, maybe don’t ask the thing that’s famous for being confidently wrong.