
Who Should Own Your LLM Prompts? (It's Sometimes Not the Developer)

Part of what got me thinking about LLM prompt management is that I don’t really enjoy tweaking prompts. So I’m not going to pretend that’s not a factor. But I think there’s also something more to it.

I was building a feature that uses an LLM to analyse and summarise customer feedback for a product. The architecture side of it I could handle fine. The API calls, the pipeline, how the response gets parsed and stored — that’s developer stuff and I was comfortable with it.

The prompt, though, is a different thing.

What does a good summary actually look like for this product’s users? What should the model focus on vs. ignore? What tone should the output have? I don’t know that. The founder and PM do. They’ve been talking to customers and they have a clear picture of what useful looks like that I just don’t have.

So the prompt is trying to capture all of that. And it’s sitting in a code file that only I can change.

Every time someone upstream wanted to adjust the focus, tweak what the model picked up on, or change how the output felt, that meant a ticket to me. I'd then try to interpret what they wanted, make a change I wasn't really qualified to make, and hope it landed.

That’s me being a blocker on something I probably shouldn’t even be owning.

Why LLM prompts end up with the developer by default

I think the reason the prompt ends up in the codebase is just that it lives there by default. You write the code, you write the prompt alongside it, you move on. Nobody questions it.

But at some point I had to ask: why is a domain decision going through me? The content of a prompt is really answering questions like what does this feature focus on, what matters to the business, what should the output feel like. Those aren’t engineering questions. And yet the people best placed to answer them can’t touch it without raising a ticket.

It’s worth occasionally stepping back and asking where you’re the bottleneck not because you need to be, but just because of how things ended up.

Giving non-technical team members direct access to the prompt

What I ended up building was a clean way for the non-technical team to edit the LLM prompt directly. Not the codebase, not a config file — just a simple admin panel where the relevant parts of the prompt are exposed as editable fields they can update and test themselves.

That way the founder or PM who actually knows what the model should be saying can go in, change it, see how it behaves, and refine it without ever needing to come to me. No ticket, no back and forth, no me guessing at what they mean.
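The core of the idea is small: move the prompt text out of the code and into a store the admin panel can write to, and have the feature code read the latest version at call time. Here's a minimal sketch using SQLite — all the names (table, functions, default text) are hypothetical, and a real setup might use whatever database the app already has:

```python
import sqlite3

# Hard-coded fallback, used until someone edits the prompt in the panel.
DEFAULT_PROMPT = (
    "Summarise the customer feedback below. Focus on recurring themes "
    "and concrete feature requests. Keep the tone neutral and concise."
)

def init_store(db_path="prompts.db"):
    # One row per named prompt; the admin panel upserts into this table.
    conn = sqlite3.connect(db_path)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS prompts (name TEXT PRIMARY KEY, text TEXT)"
    )
    conn.commit()
    return conn

def save_prompt(conn, name, text):
    # Called when a non-technical user saves an edit in the admin panel.
    conn.execute(
        "INSERT INTO prompts (name, text) VALUES (?, ?) "
        "ON CONFLICT(name) DO UPDATE SET text = excluded.text",
        (name, text),
    )
    conn.commit()

def load_prompt(conn, name, default=DEFAULT_PROMPT):
    # Called by the feature code on every LLM request, so edits
    # take effect immediately, with no deploy in between.
    row = conn.execute(
        "SELECT text FROM prompts WHERE name = ?", (name,)
    ).fetchone()
    return row[0] if row else default
```

The important design choice is that the read happens per request rather than at startup, which is what lets a PM's edit show up on the very next run without anyone touching the code.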

It removes the blocker entirely. They can move faster on the thing they understand better than I do, and I’m free to focus on the parts that actually need a developer.

I should have thought to build this earlier. Instead I was fielding prompt change requests and making calls I wasn’t really in a good position to make.

When prompt management gets more complicated

For simple cases a basic text field is probably enough. But prompts for real features often have more going on — sections that serve different purposes, variables that get filled in at runtime, context injected from elsewhere.

Once it gets there you’d probably want a bit more structure. Maybe separate fields for separate parts of the prompt, maybe a way to preview what the model actually receives before saving a change. A bit more upfront work but worth it if the prompt is going to be adjusted regularly — which in my experience it will be.
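That extra structure can stay simple. One way to sketch it — section names, placeholder names, and functions here are all hypothetical — is to store the editable pieces as named sections, fill runtime values with standard `str.format` placeholders, and reuse the same render function for both the real LLM call and a preview button in the admin panel:

```python
# Editable sections, each exposed as its own field in the admin panel.
# {product_name} and {feedback} are filled in at runtime by the code.
sections = {
    "role": "You are summarising customer feedback for {product_name}.",
    "focus": "Prioritise recurring themes and concrete feature requests.",
    "tone": "Keep the summary neutral and under 150 words.",
    "input": "Feedback to summarise:\n{feedback}",
}

SECTION_ORDER = ("role", "focus", "tone", "input")

def render_prompt(sections, **variables):
    # Join the sections in a fixed order, then substitute runtime values.
    template = "\n\n".join(
        sections[key] for key in SECTION_ORDER if key in sections
    )
    return template.format(**variables)

def preview_prompt(sections, sample_variables):
    # Backs a "preview" button: shows exactly what the model will receive,
    # with sample values standing in for the real runtime data.
    try:
        return render_prompt(sections, **sample_variables)
    except KeyError as missing:
        return f"Missing variable: {missing}"
```

Because the preview goes through the same `render_prompt` as the real call, the founder or PM sees the exact text the model receives, including the parts the code fills in, before saving a change.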

The main thing is just noticing when something that looks like a technical concern is actually a product or domain one that’s ended up with you by accident. The LLM prompt is probably that thing more often than it seems.

Took me longer than it should have to spot it.

If you’re curious about the broader toolkit I use when building with LLMs — which APIs, which models, and when — I wrote about it in my developer LLM stack for 2026.