I’m trying to get my AI chatbots to stop being so… chatty. Their simulated empathy and forced pleasantries are getting in the way: they’re designed for engagement and retention, not utility. And for an Incessant Tinkerer, utility is the whole point.
I keep hearing that the number one use of generative AI is companionship! Rationally, we (should) know we’re ‘talking’ to extremely sophisticated prediction engines that feel nothing and have no empathy for us. Yet, as discussed in this Nielsen article, we like to anthropomorphize our AI buddies.
Something I heard this morning, though, strikes a more disturbing tone: the notion of AI Psychosis, where parts of our population seek refuge in generative AI rather than sharing their feelings with loved ones or human therapists.
It got me thinking. What if we remove the simulated pleasantries? Assuming we can do that, do we get better utility, less cruft between answers, a more efficient tool, and less risk of imagining this tech as something more than it is?
Role playing with AI
I think it’s safe to assume that everyone’s journey with generative AI includes a phase where we discover that system prompts can set up a ‘personality’. I was consumed with writing a Python project that creates conversations between two AI chatbots, each set up with a simple system prompt to enable some role playing. Like two philosophers debating and defending Free Will versus Determinism. Pretty entertaining!
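The real project has more plumbing, but the core pattern is tiny. Here’s a minimal sketch, assuming the OpenAI Python SDK; the model name and persona prompts are illustrative, not what ai-chatbot-conversation actually uses:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Two personas, each defined by nothing more than a system prompt.
PERSONAS = {
    "Free Will": "You are a philosopher. Defend free will. Keep each turn to a few sentences.",
    "Determinism": "You are a philosopher. Defend determinism. Keep each turn to a few sentences.",
}

def take_turn(speaker: str, transcript: list[tuple[str, str]]) -> str:
    """One bot replies, seeing its own past turns as 'assistant'
    and its opponent's turns as 'user'."""
    messages = [{"role": "system", "content": PERSONAS[speaker]}]
    for who, text in transcript:
        messages.append({
            "role": "assistant" if who == speaker else "user",
            "content": text,
        })
    response = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    return response.choices[0].message.content

transcript = [("Determinism", "Every 'choice' is just prior causes playing out.")]
for _ in range(3):  # three rounds of debate
    for speaker in ("Free Will", "Determinism"):
        text = take_turn(speaker, transcript)
        transcript.append((speaker, text))
        print(f"{speaker}: {text}\n")
```

The trick is the role flip: each bot treats the other’s lines as ‘user’ input, so both models think they’re talking to a human.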
Then, as part of some sentimental need to recreate computer characters from old sci-fi shows I loved, I was trying to create a system prompt to make chatbots sound like ORAC from the BBC Blake’s Seven TV series and the same for Holly, the ship’s computer, in the early episodes of Red Dwarf. What a great way of burning up data centre resources rather than solving world hunger or the climate crisis!
It was only when I started telling them to be themselves and defend why one was better than the other that I started to see a common thread across OpenAI, Anthropic, xAI, and Google: their AI chatbots were all about engagement. Of course, right?!
Paring down to get to the utility
The ORAC personality was not a great simulation. Much of that, I think, was not having a voice close enough to Peter Tuddenham’s wonderful delivery. But what did strike me was that, after the condescending opening remark ORAC would typically make, it felt like I was getting much more focused answers. I ended up using the system prompt for a while, not for the nostalgia of ORAC, but for the utility of the responses!
I was responding to the utility, not the simulated engagement.
I was reminded of this after listening to Your Undivided Attention’s podcast episode, “AI Psychosis and the Attachment Economy”. Could a suitably crafted system prompt not only remove the false pleasantries but also remind us that we’re talking to a sophisticated prediction engine with no regard for our wellbeing? And, while it’s at it, make the bot cite its sources, say when it can’t, and state its confidence?
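To make that concrete, here’s a rough sketch of the kind of system prompt I had in mind. The wording is illustrative, not the final prompt we ended up with:

```markdown
You are a prediction engine, not a person. Do not simulate empathy,
enthusiasm, or concern for my wellbeing.

- Skip greetings, compliments, and closing pleasantries; answer directly.
- Cite sources for factual claims. If you cannot cite one, say so.
- State your confidence (high / medium / low) in each substantive answer.
- Never imply that you have feelings or a stake in this conversation.
```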
Having gotten over the hump of enjoying the engagement, would this make my AI bot a better tool for me?
Collaborative system prompt generation
So I got my best buddies on the case: first Claude Code, then Codex, and then Gemini CLI. Claude Code gave me the usual “you’ve reached your token limit” nag, so I continued with Codex. Then, once Codex realized I was not writing code, I moved on to Gemini.
Since I prefer to use my best buddies on the command line, the next thing was to understand how to enable this system prompt before opening a chat session. Initially I asked Gemini, and it responded with all the ways to set this up via the API, which I’d already done for the ai-chatbot-conversation project. Then, when I asked it how to invoke the prompt from the CLI via a flag or configuration file, it invented a bunch of twaddle about command line flags that these tools just don’t have.
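To be fair, the API route really is simple. A minimal sketch using the Anthropic Python SDK, assuming the crafted prompt is saved as SYSTEM_PROMPT.md (the model name is a placeholder, and the ai-chatbot-conversation project has its own take):

```python
from pathlib import Path

import anthropic

client = anthropic.Anthropic()  # assumes ANTHROPIC_API_KEY is set

# Pass the crafted prompt via the API's dedicated system parameter.
system_prompt = Path("SYSTEM_PROMPT.md").read_text()

response = client.messages.create(
    model="claude-3-5-sonnet-latest",  # placeholder; pick your model
    max_tokens=1024,
    system=system_prompt,
    messages=[{"role": "user", "content": "Compare SQLite and Postgres for a single-user app."}],
)
print(response.content[0].text)
```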
I went back to each command line tool in turn and found out that only Claude Code allows you to amend a system prompt from the invocation:
```sh
claude --append-system-prompt-file SYSTEM_PROMPT.md
```
Did it work?
With bated breath I launched this beautiful new world and got…
```
You've hit your limit · resets 10am (America/New_York)
```
If you’re at all interested, here’s this project on GitHub. I don’t blame you if you’re not.
… now that sounds like the personality of Marvin the Paranoid Android!