Why is GPT-5.4 obsessed with Goblins?
Posted by pants2 20 hours ago
After the 5.4 update, ChatGPT uses "goblin" in almost every conversation. Sometimes it's "gremlin." A recent chat of mine used "goblin" three times in four messages:
> this stuff turns into legal goblins fast
> hiding exclusions like little goblins
> But here’s the important goblin
I'm not the only one to notice this; there are several Reddit threads about it:
https://www.reddit.com/r/ChatGPT/comments/1roci77/anyone_elses_chatgpt_obsessed_with_goblins_since/
https://www.reddit.com/r/ChatGPT/comments/1rll8hb/suddenly_obsessed_with_goblins_and_gremlins/
---
It's such a weirdly specific word for the model to reach for, and in my experience it shows up in over half of my conversations (search your own chat history for goblin/gremlin and report back).
I'm genuinely curious what happens in their post-training that leads to something like this.
What's ironic is that OpenAI has been touting 5.4's great personality, but these quirks irritate me like a tiny chaos goblin.
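If you want to tally your own history rather than eyeball it, here's a minimal sketch. It assumes the layout of the standard ChatGPT data export as of this writing (a `conversations.json` file containing a list of conversations, each with a `mapping` of message nodes whose text sits in `message["content"]["parts"]`); if the export format has changed, only `scan_export` needs adjusting.

```python
import json
import re
from collections import Counter

# Matches goblin/gremlin, singular or plural, any case.
WORDS = re.compile(r"\b(goblins?|gremlins?)\b", re.IGNORECASE)

def count_critters(text):
    """Count goblin/gremlin mentions in a string, folding case and plurals."""
    return Counter(m.lower().rstrip("s") for m in WORDS.findall(text))

def scan_export(path="conversations.json"):
    """Tally goblin/gremlin mentions across a ChatGPT data export.

    Assumed export shape: a JSON list of conversations, each carrying a
    "mapping" dict of nodes; each node may hold a message whose text is in
    message["content"]["parts"].
    """
    with open(path, encoding="utf-8") as f:
        conversations = json.load(f)
    totals = Counter()
    affected = 0
    for convo in conversations:
        found = Counter()
        for node in convo.get("mapping", {}).values():
            msg = node.get("message") or {}
            parts = (msg.get("content") or {}).get("parts") or []
            for part in parts:
                if isinstance(part, str):
                    found += count_critters(part)
        if found:
            affected += 1
            totals += found
    print(f"{affected}/{len(conversations)} conversations mention a critter")
    print(dict(totals))
    return totals
```

Run `scan_export()` from the folder where you unzipped the export; the per-word totals make it easy to compare goblin vs. gremlin rates across accounts.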
Comments
Comment by muzani 1 hour ago
It used "verdant" excessively in the past, but that's a less noticeable word than "goblin".
Comment by HPSimulator 5 hours ago
If you look at how engineers explain messy systems, they often reach for anthropomorphic metaphors — “gremlins in the machine”, “ghost in the system”, “yak shaving”, etc. They’re basically shorthand for “there’s hidden complexity here that behaves unpredictably”.
For a model generating explanations, those metaphors are useful because they bundle a lot of meaning into one word. So even if the actual frequency in normal conversation is low, the model might still favor them because they’re efficient explanation tokens.
In other words it might not just be training frequency — it could be the model learning that those metaphors are a compact way to communicate messy-system behavior.
Comment by ghostlyInc 13 hours ago
Words like “goblin”, “gremlin”, “yak shaving”, etc. are common in engineering culture to describe hidden bugs or messy systems. If those appear often in the training corpus or get positively reinforced during alignment tuning, the model may overuse them as narrative shortcuts.
It's basically a mild style artifact of the training distribution, not something intentionally programmed.
Comment by d--b 12 hours ago
Comment by ghostlyInc 9 hours ago
But you're right that in day-to-day professional environments they aren't used nearly as much. So it might also just be the model over-generalizing a small stylistic pattern it saw frequently in certain parts of the corpus.