Greg Troszak

When did cognitive friction become such a bad thing?

Maybe I'm wrong, but at least on the internet, it feels like we're trying to minimize cognitive friction at all costs.

I suspect that's because most major platforms incentivize attention and engagement at the expense of trust and quality.

I think that's very misguided. Cognitive friction is one of the most reliable signals I have that I'm doing something worthwhile. It doesn't usually feel great in the moment, but it's a pretty good proxy for growth and learning.

And more importantly, when cognitive friction isn't evenly distributed in a social interaction, trust can erode quickly.

An example.

Imagine someone is trying to submit a patch to an open source project.

In world one, they have an AI agent do the work, it generates slop they don't personally review, and they submit the patch.

In world two, they do the work themselves and submit the patch.

In world one, all the cognitive friction has been shifted onto the maintainer. Assuming they give a shit about quality, as soon as they realize it's slop, they reject the patch and the interaction is over. Trust with the contributor is gone. No one learned anything.

In world two, the cognitive friction is more evenly distributed. The person had to put time and energy into understanding the codebase. Even if they made a mistake, their initial investment will make the maintainer more likely to give feedback. The contributor learns something. The maintainer gains another person who understands their project. Trust is built.

This isn't an anti-AI post. I think there's a version of world one — where the contributor takes some time to understand the codebase and review their agent's work — that gets pretty close to world two. You can still incur cognitive friction and use AI. I just think it's very tempting not to.

That's a slippery slope. You're cheating yourself out of an opportunity to learn, and you run the risk of eroding trust.