Ask HN: How are people doing AI evals these days?

Posted by yelmahallawy 19 hours ago


With the buzz around all the new AI models being released (what feels like every other week), how are companies running internal AI evals to determine which model is best for their use case?

Comments

Comment by alexhans 16 hours ago

Very, very heterogeneous and fast-moving space.

Depending on how they're made up, different teams do vastly different things.

Some teams run no evals at all, some have integration tests with no tooling, and some wire observability tools like Langfuse into their CI/CD. Others use tools like Arize Phoenix, DeepEval, Braintrust, promptfoo, or Pydantic AI throughout their development.

It's definitely an afterthought for most teams, although we're starting to see increased interest.

My hope is that we can start treating evals as a common language for "product" across role families, so I'm doing some advocacy [1], trying to keep it very simple, including wrapping coding agents like Claude. Sandboxing and observability "for the masses" are still hard concepts, but the UX is getting better with time.

What are you doing for yourself or your teams? If not much yet, I'd recommend just starting and figuring out where the friction/value is for you.

- [1] https://ai-evals.io/ (practical examples https://github.com/Alexhans/eval-ception)
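
A minimal version of that "just start" advice can be a single throwaway script: a handful of hand-written cases, one model call each, and a pass rate. The sketch below assumes the OpenAI Python client and an API key in the environment; the model name and string checks are purely illustrative.

    # Minimal starting eval: a few hand-written cases, one model, a pass rate.
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set

    CASES = [
        {"input": "Summarize in one line: the meeting moved to Friday.", "must_contain": "friday"},
        {"input": "What currency does Japan use?", "must_contain": "yen"},
    ]

    def run_case(case, model="gpt-4o-mini"):  # model name is illustrative
        resp = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": case["input"]}],
        )
        answer = (resp.choices[0].message.content or "").lower()
        return case["must_contain"] in answer

    results = [run_case(c) for c in CASES]
    print(f"pass rate: {sum(results)}/{len(results)}")

Even something this small tends to surface where the friction is (data, flakiness, cost) before committing to heavier tooling.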

Comment by kelseyfrog 1 hour ago

Automated benchmarking.

We were lucky enough to have PMs create a set of questions; we did a round of generation and added pass/fail labels to each response.

From there we bootstrapped AI-as-a-judge and approximately replicated the results. Now we can plug in new models and change prompts and pipelines while still approximating the original feedback signal. It's not an exact match, but it's wildly better than one-off testing and the regressions it brings.

We're able to confidently make changes without accidentally breaking something else. Overall win, but it can get costly if the iteration count is high.
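
A rough sketch of that bootstrapping step, assuming the human-labeled round lives in a list of (question, response, pass/fail) records and an OpenAI-style client; the judge prompt and model name are placeholders. The agreement number is what tells you how far to trust the judge before reusing it on new models and prompts.

    # Sketch: bootstrap an LLM judge from human pass/fail labels,
    # then measure how often it agrees with the humans.
    from openai import OpenAI

    client = OpenAI()

    # Human-labeled data from the PM round (records are illustrative).
    LABELED = [
        {"question": "How do I reset my password?",
         "response": "Go to Settings > Security and click Reset.",
         "human_pass": True},
        {"question": "How do I cancel my plan?",
         "response": "I'm not able to help with that.",
         "human_pass": False},
    ]

    JUDGE_PROMPT = """You are grading a support answer.
    Question: {question}
    Answer: {answer}
    Reply with exactly PASS or FAIL."""

    def judge(question, answer, model="gpt-4o-mini"):  # placeholder model
        resp = client.chat.completions.create(
            model=model,
            messages=[{"role": "user",
                       "content": JUDGE_PROMPT.format(question=question, answer=answer)}],
        )
        return (resp.choices[0].message.content or "").strip().upper().startswith("PASS")

    agree = sum(judge(r["question"], r["response"]) == r["human_pass"] for r in LABELED)
    print(f"judge/human agreement: {agree}/{len(LABELED)}")
    # Once agreement looks acceptable, reuse judge() to score new models/prompts/pipelines.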

Comment by maxalbarello 54 minutes ago

Also wondering how to eval agentic pipelines. For instance, I generated memories from my ChatGPT conversation history; how do I know whether they're accurate or not?

I would like a single number that I could optimize the pipeline against, but I find it hard to figure out what that number should be measuring.
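
One hedged option for the memory example is a groundedness score: judge each generated memory against the conversation it was extracted from and report the fraction judged supported. Everything below (client, model name, prompt wording) is illustrative rather than a claim about the "right" metric, and note that this only measures precision; it says nothing about memories the pipeline failed to extract.

    # Sketch: collapse memory accuracy to one number, the fraction of
    # generated memories judged supported by their source conversation.
    from openai import OpenAI

    client = OpenAI()

    PROMPT = """Conversation:
    {conversation}
    Claimed memory: {memory}
    Is the memory supported by the conversation? Reply SUPPORTED or UNSUPPORTED."""

    def is_supported(memory, conversation, model="gpt-4o-mini"):  # placeholder model
        resp = client.chat.completions.create(
            model=model,
            messages=[{"role": "user",
                       "content": PROMPT.format(conversation=conversation, memory=memory)}],
        )
        return (resp.choices[0].message.content or "").strip().upper().startswith("SUPPORTED")

    def memory_accuracy(pairs):
        """pairs: list of (memory, source_conversation) tuples."""
        return sum(is_supported(m, c) for m, c in pairs) / max(len(pairs), 1)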

Comment by bisonbear 4 hours ago

Assuming you're referencing coding agents - I don't think people are. If they are, it's likely using:

- AI to evaluate itself (e.g. asking Claude to test out its own skill)
- a custom-built platform (I see interest in this space)

I've actually been thinking about this problem a lot and am working on a custom eval runner for your codebase. What would your use case be for this?

Comment by dkoy 50 minutes ago

Curious who’s used OpenAI Evals

Comment by celestialcheese 1 hour ago

A mix of promptfoo and ad-hoc Python scripts, with Langfuse for observability.

Definitely not happy with it, but everything is moving too fast for a heavier investment to feel worthwhile.
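
For the ad-hoc-script side of that mix, one common shape is a pytest-style check that can run in CI next to whatever promptfoo covers. The sketch below is not the commenter's actual setup; the model name and cases are illustrative, and it assumes the OpenAI Python client.

    # Ad-hoc pytest-style eval that can sit in CI alongside a promptfoo suite.
    import pytest
    from openai import OpenAI

    client = OpenAI()

    @pytest.mark.parametrize("question,expected", [
        ("What currency does Japan use?", "yen"),
        ("What is the capital of France?", "paris"),
    ])
    def test_model_answers(question, expected):
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # model name is illustrative
            messages=[{"role": "user", "content": question}],
        )
        assert expected in (resp.choices[0].message.content or "").lower()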
