A framework for processing my dissatisfaction using AI

Like many people, I'm feeling some trepidation about AI, at both the macro and micro scales. I'm not even going to address the macro scale here - that's its own circus. But at the micro scale, I use AI pretty much every day. Most of the time it helps me get things done faster, which is a good thing. But I also usually feel very unsatisfied, even when I accomplish what I set out to achieve. I haven't quite been able to pinpoint why I feel that way.

Bob Nystrom recently wrote a post that's helped me make some sense of that. Basically, he argues that for AI to be worth using, it has to provide some value. But what does it mean for something to be valuable?

He argues that things are valuable because they provide either utility or meaning. A thing has utility if it helps you accomplish something. A thing has meaning if you have some emotional connection to it. That emotional connection can come from any number of sources - maybe the thing helps you do something deeply important, or someone you care about gave it to you or made it for you. Distill those examples and a common thread emerges: meaning is derived from a human being spending time on something. And time is our most precious resource.

As far as I can tell, AI is heavily skewed towards utility. Yes, we need utilitarian things, but the things that I value the most are all imbued with meaning. I'm not going to say that AI will never be able to create something meaningful. I just think human beings will always be better at it - I feel no emotional connection to AI.

When I think back on times when I've felt the most satisfied using AI, it's been to accomplish very utilitarian tasks. I'm a software engineer, so on the surface you'd think most of my tasks skew utilitarian. And they probably do. But I also enjoy practicing my craft.[1] And part of practicing a craft is putting yourself into it.

Maybe the craft is shifting? Maybe I should derive more satisfaction from writing a good set of prompts, wielding the tool effectively? But for now, I still find the process - thinking through abstractions, writing the code, honing it - much more satisfying.

I don't think AI is going away. I'm going to keep using it. But it would be nice to have a better, more thoughtful relationship with it. At least now I have a framework to help guide me through that.

[1] I realize some people scoff at the idea of software engineering being a craft. I think anything someone deliberately practices and pursues thoughtfully is a craft, even if the output skews utilitarian.