Why is GPT-5.4 obsessed with Goblins?

Posted by pants2 20 hours ago


After the 5.4 update, ChatGPT uses "goblin" in almost every conversation. Sometimes it's "gremlin." A recent chat of mine used "goblin" three times in four messages:

> this stuff turns into legal goblins fast

> hiding exclusions like little goblins

> But here’s the important goblin

I am not the only one to notice this; there are many Reddit threads about it:

https://www.reddit.com/r/ChatGPT/comments/1roci77/anyone_elses_chatgpt_obsessed_with_goblins_since/

https://www.reddit.com/r/ChatGPT/comments/1rll8hb/suddenly_obsessed_with_goblins_and_gremlins/

---

This is such a weirdly specific word for the model to reach for in over half of its conversations (in my experience, at least; search your own chat history for goblin/gremlin and report back).
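If you want to check a full data export rather than eyeballing the chat UI, here's a minimal sketch for counting goblin/gremlin mentions in assistant messages. It assumes the conversations.json layout the export currently produces (a list of conversations, each with a "mapping" of message nodes); that schema isn't officially documented and may change:

```python
import json
import re
from collections import Counter

# Count goblin/gremlin mentions in assistant messages of a ChatGPT data
# export. Assumes the undocumented conversations.json layout: a list of
# conversations, each with a "mapping" of message nodes.
PATTERN = re.compile(r"\bgoblins?\b|\bgremlins?\b", re.IGNORECASE)

with open("conversations.json") as f:
    conversations = json.load(f)

counts = Counter()
flagged = 0
for conv in conversations:
    hit = False
    for node in conv.get("mapping", {}).values():
        msg = node.get("message") or {}
        if (msg.get("author") or {}).get("role") != "assistant":
            continue
        parts = (msg.get("content") or {}).get("parts") or []
        for part in parts:
            if isinstance(part, str):
                found = PATTERN.findall(part)
                counts.update(w.lower() for w in found)
                hit = hit or bool(found)
    if hit:
        flagged += 1

print(f"{flagged}/{len(conversations)} conversations mention goblin/gremlin")
for word, n in counts.most_common():
    print(f"  {word}: {n}")
```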

I'm genuinely curious what happens in their post-training that leads to something like this.

What's ironic is that OpenAI has been touting 5.4's great personality, but these quirks irritate me like a tiny chaos goblin.

Comments

Comment by muzani 1 hour ago

It could be a kind of watermark. It's possible they aimed for the word to be just 5% more noticeable than baseline and overshot. Also, humans tend to spot these things better than computers.

It used "verdant" excessively in the past, but that's a less noticeable word than "goblin".
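One way to quantify that kind of over-representation is to compare word rates in a sample of model output against a baseline corpus and rank by log-lift. A toy sketch of the idea, where both file names are hypothetical placeholders for whatever text you collect:

```python
import math
import re
from collections import Counter

def word_counts(text: str) -> Counter:
    """Lowercased word counts for a blob of text."""
    return Counter(re.findall(r"[a-z']+", text.lower()))

def log_lift(word: str, sample: Counter, baseline: Counter) -> float:
    """Log2 ratio of a word's rate in the sample vs. the baseline,
    with add-one smoothing so unseen words don't blow up."""
    p = (sample[word] + 1) / (sum(sample.values()) + len(sample) + 1)
    q = (baseline[word] + 1) / (sum(baseline.values()) + len(baseline) + 1)
    return math.log2(p / q)

# Hypothetical inputs: a dump of model replies and any human-written
# comparison corpus of similar size and register.
sample = word_counts(open("model_replies.txt").read())
baseline = word_counts(open("human_baseline.txt").read())

# Words most over-represented in the model's output float to the top;
# a watermark-style bias on "goblin" would show up as a large lift.
top = sorted(sample, key=lambda w: log_lift(w, sample, baseline), reverse=True)
for word in top[:20]:
    print(f"{word:15s} lift = {log_lift(word, sample, baseline):+.2f}")
```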

Comment by HPSimulator 5 hours ago

One thing that might also be happening is that LLMs tend to converge on metaphors that compress complex ideas quickly.

If you look at how engineers explain messy systems, they often reach for anthropomorphic metaphors — “gremlins in the machine”, “ghost in the system”, “yak shaving”, etc. They’re basically shorthand for “there’s hidden complexity here that behaves unpredictably”.

For a model generating explanations, those metaphors are useful because they bundle a lot of meaning into one word. So even if the actual frequency in normal conversation is low, the model might still favor them because they’re efficient explanation tokens.

In other words, it might not just be training frequency; it could be the model learning that those metaphors are a compact way to communicate messy-system behavior.
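The "compact token" claim is easy to sanity-check with a tokenizer. A quick sketch using tiktoken; note the encoding choice is a guess, since whatever vocab 5.4 actually uses isn't public:

```python
import tiktoken  # pip install tiktoken

# o200k_base is the tokenizer used by recent OpenAI models; which vocab
# 5.4 uses is an assumption here.
enc = tiktoken.get_encoding("o200k_base")

for phrase in [
    "goblin",
    "gremlin",
    "hidden complexity here that behaves unpredictably",
]:
    n = len(enc.encode(phrase))
    print(f"{n:2d} tokens: {phrase!r}")
```

If the metaphor comes out several times shorter than the unpacked phrase, the "efficient explanation token" framing is at least plausible.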

Comment by ghostlyInc 13 hours ago

LLMs tend to pick up recurring metaphors from training data and reinforcement tuning.

Words like “goblin”, “gremlin”, “yak shaving”, etc. are common in engineering culture to describe hidden bugs or messy systems. If those appear often in the training corpus or get positively reinforced during alignment tuning, the model may overuse them as narrative shortcuts.

It's basically a mild style artifact of the training distribution, not something intentionally programmed.

Comment by d--b 12 hours ago

They seem a lot more common in OP's conversations than in any regular engineering conversation, though. Like, I've been an engineer for 20 years and I don't remember those phrases ever coming up in my work context.

Comment by ghostlyInc 9 hours ago

That's fair. It probably depends a lot on which corners of engineering culture the training data comes from. In some communities (older Unix culture, Hacker News, ops/debugging discussions) terms like “gremlins”, “yak shaving”, etc. pop up more often as humorous shorthand for messy problems.

But you're right that in day-to-day professional environments they aren't used nearly as much. So it might also just be the model over-generalizing a small stylistic pattern it saw frequently in certain parts of the corpus.

Comment by kilianciuffolo 11 hours ago

I am getting the words "goblin" and "gremlin" once every hour.

Comment by arthurcolle 20 hours ago

why don't you ask the model?

Comment by Tarraq 19 hours ago

Not to scare away the goblins!