AIs hallucinate. Do you ever double-check the output?

Posted by jackota 1 day ago


Been building AI workflows, and they randomly hallucinate and do something stupid, so I end up manually checking everything anyway to approve the AI-generated content (messages, emails, invoices, etc.), which defeats the whole point.

Anyone else? How did you manage it?

Comments

Comment by prepend 22 hours ago

Yes, of course I review everything.

I treat it like hiring a consultant. They do a lot of work, but I still review the output before making a decision or passing it on.

Sending something with errors to my boss or peers makes me look stupid. Saying it was caused by unreviewed AI output makes me look stupider.

Comment by codingdave 1 day ago

Ever? More like always. Keeping humans in the loop is the current best practice. If you truly need to automate something that cannot afford a human checkpoint, find a deterministic solution for it, not LLMs.

Comment by varshith17 23 hours ago

Build validation layers, not trust. For structured outputs (invoices, emails), use JSON schemas + fact-checking prompts where a second AI call verifies critical fields against source data before you see it. Real pattern: AI generates → automated validation catches type/format errors → second LLM does adversarial review ("check for hallucinated numbers/dates") → you review only flagged items + random samples. Turns "check everything" into "check exceptions," cuts review time 80%.
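
A rough sketch of that pipeline in Python, assuming the jsonschema package and a hypothetical llm() wrapper for the second-model call (the schema, field names, and prompts are all made up for illustration):

```python
import json
import random
from jsonschema import validate, ValidationError  # pip install jsonschema

def llm(prompt: str) -> str:
    """Hypothetical wrapper around whichever model you use for the review call."""
    raise NotImplementedError

INVOICE_SCHEMA = {
    "type": "object",
    "required": ["invoice_number", "amount", "date"],
    "properties": {
        "invoice_number": {"type": "string"},
        "amount": {"type": "number"},
        "date": {"type": "string"},
    },
}

def review_flags(draft: dict, source: dict, sample_rate: float = 0.1) -> list[str]:
    """Return reasons a human should look at this draft; empty list = auto-pass."""
    flags = []

    # Layer 1: deterministic type/format validation.
    try:
        validate(instance=draft, schema=INVOICE_SCHEMA)
    except ValidationError as e:
        flags.append(f"schema: {e.message}")

    # Layer 2: adversarial second-model review against the source data.
    verdict = llm(
        "Check this draft for hallucinated numbers or dates.\n"
        f"Draft: {json.dumps(draft)}\nSource: {json.dumps(source)}\n"
        "Answer OK, or list the problems."
    )
    if verdict.strip() != "OK":
        flags.append(f"adversarial review: {verdict}")

    # Layer 3: random audit samples, so silent failure modes still surface.
    if not flags and random.random() < sample_rate:
        flags.append("random audit sample")

    return flags
```

Humans then see only drafts with a non-empty flag list, which is the "check exceptions" part.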

Comment by casualscience 23 hours ago

Also lets 50% of errors through

Comment by Zigurd 1 day ago

You have put your finger on why agent-assisted coding often doesn't suck, and other use cases of LLMs often do suck. Lint and the compiler get their licks in before you even smoke test the code. There aren't two layers of deterministic, algorithmic checking for your emails or invoices.

So before anyone concludes that coding agents prove that AI can be useful, find some use cases with similar characteristics.

Comment by jackfranklyn 17 hours ago

The validation layer point is key. Where things actually work is when you can define what 'correct' looks like - invoice numbers either exist or don't, amounts either reconcile against known data or they don't, email addresses either parse or fail.

The trap is when correctness is subjective. Tone, phrasing, whether something 'sounds right' - no automated check helps there, so you're back to reviewing everything.

For structured data like invoices, I've found pattern-matching against known values beats LLMs anyway. Less hallucination risk, faster, and when it fails at least it fails obviously rather than confidently wrong.
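
To make that concrete, a toy version of the reconcile-or-fail check (the PO format, numbers, and amounts are invented; in practice the known data would come from your ERP or database):

```python
import re

# Known-good data pulled from your system of record.
PO_AMOUNTS = {"PO-1042": 1299.00, "PO-1043": 87.50}

def check_invoice(invoice_number: str, amount: float) -> str:
    # Either it parses or it fails; no confident-but-wrong middle ground.
    if not re.fullmatch(r"PO-\d{4}", invoice_number):
        return "fail: malformed invoice number"
    if invoice_number not in PO_AMOUNTS:
        return "fail: unknown invoice number"
    if abs(PO_AMOUNTS[invoice_number] - amount) > 0.01:
        return f"fail: amount {amount} does not reconcile with {PO_AMOUNTS[invoice_number]}"
    return "ok"
```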

Comment by exabrial 23 hours ago

The new guys on my team do not check it. They already had problems checking their own work; AI is just amplifying the actual human problem.

Comment by 19arjun89 22 hours ago

At this point, we are not there yet in terms of letting AI make business-critical decisions based on its own outputs. It's meant to serve as a decision-support system rather than a decision maker.

To minimize hallucinations, yes, AI should be set up for deterministic behaviour where the use case calls for it (in recruiting, for example, it should produce the same evaluation for the same candidate every time). Secondly, having another AI check for hallucinations can be a good starting point; assigning scores and penalizing the first AI can also lead to more grounded responses.
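
For the deterministic-setup part, a minimal sketch using the OpenAI chat API (the model name is a placeholder, and note that even temperature=0 plus a fixed seed is documented as best-effort reproducibility, not a guarantee):

```python
from openai import OpenAI  # pip install openai

client = OpenAI()

def evaluate_candidate(resume_text: str) -> str:
    # temperature=0 and a fixed seed push toward repeatable evaluations.
    resp = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        temperature=0,
        seed=42,
        messages=[
            {"role": "system",
             "content": "Score this candidate 1-10 with a one-line reason."},
            {"role": "user", "content": resume_text},
        ],
    )
    return resp.choices[0].message.content
```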

Comment by aavci 22 hours ago

In my opinion, the way this will play out is with a significant amount of validation and human oversight to fully utilize these LLMs. As you mentioned, I recommend giving the AI room for error and improving the experience of manually checking everything. Maybe build a tool that makes reviewing the output faster?

This is a valuable read: https://www.ufried.com/blog/ironies_of_ai_1/

Comment by Gioppix 1 day ago

I also don't trust LLMs, but I still find automations useful. Even with human-in-the-loop they save a bunch of time. Clicking "Approve & Send" is much quicker than manually writing out the email, and I just rewrite the 5% that contains hallucinations.

Comment by 7777777phil 1 day ago

I have been building research automation with LangGraph for the past 2 months. We always put a human-in-the-loop checkpoint after each critical step; it might be annoying now, but I think it will save us long-term.
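
A minimal sketch of that checkpoint pattern, assuming LangGraph's interrupt_before mechanism (node names and state fields here are invented, not from the actual project):

```python
from typing import TypedDict
from langgraph.graph import StateGraph, END
from langgraph.checkpoint.memory import MemorySaver

class State(TypedDict):
    draft: str

def research(state: State) -> State:
    return {"draft": "...model output..."}  # stand-in for the real LLM step

def send(state: State) -> State:
    print("sending:", state["draft"])  # the irreversible step
    return state

builder = StateGraph(State)
builder.add_node("research", research)
builder.add_node("send", send)
builder.set_entry_point("research")
builder.add_edge("research", "send")
builder.add_edge("send", END)

# Pause before the irreversible step; a human resumes after reviewing.
graph = builder.compile(checkpointer=MemorySaver(), interrupt_before=["send"])

config = {"configurable": {"thread_id": "run-1"}}
graph.invoke({"draft": ""}, config)   # runs "research", then pauses
# ...human inspects graph.get_state(config), then resumes:
graph.invoke(None, config)            # continues with "send"
```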

Comment by AlexeyBrin 1 day ago

You can't be 100% sure the AI won't hallucinate. If you don't want to check it manually, have a different AI check it and flag anything suspect for a human to verify. Even better, have 2 different AIs check the output, and if they don't agree, flag it.
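
A toy version of that disagreement flag; ask_a and ask_b are hypothetical wrappers around two different vendors' models:

```python
def ask_a(prompt: str) -> str: ...  # hypothetical: one vendor's model
def ask_b(prompt: str) -> str: ...  # hypothetical: a different vendor's model

def passes_cross_check(output: str, source: str) -> bool:
    """True = both checkers clear it; False = route to a human."""
    question = (
        "Does this output contain claims unsupported by the source?\n"
        f"Output: {output}\nSource: {source}\nAnswer YES or NO."
    )
    a_ok = ask_a(question).strip().upper().startswith("NO")
    b_ok = ask_b(question).strip().upper().startswith("NO")
    # Any YES, or any disagreement between the two, fails the check.
    return a_ok and b_ok
```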

Comment by Xorakios 15 hours ago

FWIW, I utilize Perplexity a lot, and Gemini occasionally, for what we old geezers call spitballing.

Part of the reason I like Perplexity is the embedded references, and I always, always double-check the sources and holler at the Perp AI when it is clearly confabulating or misinterpreting. It still gives me insights and is useful, but trust-but-verify isn't just about arms control ;)

Comment by wormpilled 17 hours ago

> which defeats the whole point.

Not at all