Mistral releases Devstral 2 and Mistral Vibe CLI

Posted by pember 16 hours ago


Comments

Comment by simonw 14 hours ago

  llm install llm-mistral
  llm mistral refresh
  llm -m mistral/devstral-2512 "Generate an SVG of a pelican riding a bicycle"
https://tools.simonwillison.net/svg-render#%3Csvg%20xmlns%3D...

Pretty good for a 123B model!

(That said I'm not 100% certain I guessed the correct model ID, I asked Mistral here: https://x.com/simonw/status/1998435424847675429)

Comment by Jimmc414 12 hours ago

We are getting to the point that it's not unreasonable to think that "Generate an SVG of a pelican riding a bicycle" could be included in some training data. It would be a great way to ensure an initial thumbs up from a prominent reviewer. It's a good benchmark, but it seems like it would be a good idea to include an additional random or unannounced similar test to catch any benchmaxxing.

Comment by simonw 12 hours ago

Comment by th0ma5 11 hours ago

[flagged]

Comment by vanschelven 10 hours ago

Whatever you think Jimmc414's _concerns_ are (they merely state a possibility) Simon enumerates a number of concerns in the linked article, and then addresses those. So I'm not sure why you think this is so.

Comment by dugidugout 11 hours ago

Condescending and disrespectful to whom? Everybody wholesale? This doesn't seem reasonable? Please elaborate.

Comment by bravetraveler 10 hours ago

Not sure if I'd use the same descriptions so pointedly, but I can see what they mean.

It's perfectly fine to link for convenience, but it does feel a little disrespectful/SEO-y to not 'continue the conversation'. A summary at the very least, how exactly it pertains. Sell us.

In a sense, link-dropping [alone] is saying: "go read this and establish my rhetorical/social position, I'm done here"

Imagine meeting an author/producer/whatever you liked. You'd want to talk about their work, how they created it, the impact it had, and so on. Now imagine if they did that... or if they waved their hand vaguely at a catalog.

Comment by simonw 8 hours ago

I've genuinely been answering the question "what if the labs are training on your pelican benchmark" 3-4 times a week for several months at this point. I wrote that piece precisely so I didn't have to copy and paste the same arguments into dozens of different conversations.

Comment by bravetraveler 8 hours ago

Oh, no. Does this policing job pay well? /s Seriously: less is more, trust the process, any number of platitudes work here. Who are you defending against? Readers, right? You wrote your thing, defended it with more of the thing. It'll permeate. Or it won't. Does it matter?

You could be done, nothing is making you defend this (sorry) asinine benchmark across the internet. Not trying to (m|y)uck your yum, or whatever.

Remember, I did say linking for convenience is fine. We're belaboring the worst reading in comments. Inconsequential, unnecessary heartburn. Link the blog posts together and call it good enough.

Comment by Barbing 6 hours ago

Surprised to see snark re: what I thought was a standard practice (linking FAQs, essentially).

I hadn’t seen the post. It was relevant. I just read it. Lucky Ten Thousand can read it next time even though I won’t.

Simon has never seemed annoying, so unlike other comments that might worry me (even "Opus made this", which is cool, though I'm concerned someone astroturfed it), that comment would've never raised my eyebrows. He's also dedicated, and I love that he devotes his time to a new field like this, where it's great to have attempts at benchmarks, folks cutting through chaff, etc.

Comment by bravetraveler 6 hours ago

The specific 'question' is a promise to catch training on more publicly available data, and to expect more blog links copied 'into dozens of different conversations'... Jump for joy. Stop the presses. Oops, snarky again :)

Yes, the LLM people will train on this. They will train on absolutely everything [as they have]. The comments/links prioritize engagement over awareness. My point, I suppose, if I had one is that this blogosphere can add to the chaff. I'm glad to see Simon here often/interested.

Aside: all this concern about over-fitting just reinforces my belief these things won't take the profession any time soon. Maybe the job.

Comment by simonw 8 hours ago

You don't have to convince me the pelican riding a bicycle SVG benchmark is asinine. That's kind of the point!

Comment by bravetraveler 8 hours ago

Having read the followup post being linked, I'm even more confused. Commenting or, really, anything seems even less worthwhile. That's my point.

You brought the benchmark and anticipated their... cheesing, with a promise to catch them on it. Cool announcement of an announcement. Just do that [or don't]. In a hippy sense, this is no longer yours. It's out there. Like everything else anyone wrote.

Let the LLM people train on your test. Catch them as claimed. Publish again. Huzzah, industry without overtime in the comments. It makes sense/cents to position yourself this way :)

Obviously they're going to train on anything they can get. They did. Mouse, meet cat. Some of us in the house would love it if y'all would keep it down! This is 90s rap beef all over again

Comment by charcircuit 7 hours ago

If you want a summary you can have your ai assistant summarize the link.

Comment by bravetraveler 7 hours ago

Woooooosh, please see if an LLM can help you. I'm not getting paid for this

Comment by tomrod 9 hours ago

Hell, I would consider myself graced that simonw, yes, THAT simonw, the LLM whisperer, took time out of his busy schedule to send me to a discussion I might have expressed interest in.

Comment by bravetraveler 9 hours ago

> send me to a discussion I might have expressed interest in

No, no, remember? Points to the blog you were already reading! Working diligently to build a brand: podcast, paid newsletter, the works.

Comment by tomrod 3 hours ago

I wasn't speaking to this interaction, and my point is genuine. Simonw has done fantastic work in the LLM space

Comment by th0ma5 11 hours ago

No, when did I say that?

Comment by dugidugout 11 hours ago

It isn't clear what you said.

You asserted a pattern of conduct on the user simonw:

> I think constantly replying to everybody with some link which doesn't address their concerns

Then claimed that conduct was:

> condescending and disrespectful.

I am asking you to elaborate on whom simonw is condescending to and disrespecting. I don't see how it follows.

Comment by 11 hours ago

Comment by Workaccount2 6 hours ago

It would be easy to out models that train on the bike pelican, because they would probably suck at the kayaking bumblebee.

So far though, the models good at bike pelican are also good at kayak bumblebee, or whatever other strange combo you can come up with.

So if they are trying to benchmaxx by making SVG generation stronger, that's not really a miss, is it?

Comment by majormajor 5 hours ago

That depends on if "SVG generation" is a particularly useful LLM/coding model skill outside of benchmarking. I.e., if they make that stronger with some params that otherwise may have been used for "rust type system awareness" or somesuch, it might be a net loss outside of the benchmarks.

Comment by 0cf8612b2e1e 7 hours ago

I assume all of the models also have variations on, “how many ‘r’s in strawberry”.

Comment by thatwasunusual 7 hours ago

> We are getting to the point that its not unreasonable to think that "Generate an SVG of a pelican riding a bicycle" could be included in some training data.

I may be stupid, but _why_ is this prompt used as a benchmark? I mean, pelicans _can't_ ride a bicycle, so why is it important for "AI" to show that they can (at least visually)?

The "wine glass problem"[0] - and probably others - seems to me to be a lot more relevant...?

[0] https://medium.com/@joe.richardson.iii/the-curious-case-of-t...

Comment by simonw 7 hours ago

The fact that pelicans can't ride bicycles is pretty much the point of the benchmark! Asking an LLM to draw something that's physically impossible means it can't just "get it right" - seeing how different models (especially at different sizes) handle the problem is surprisingly interesting.

Honestly though, the benchmark was originally meant to be a stupid joke.

I only started taking it slightly more seriously about six months ago, when I noticed that the quality of the pelican drawings really did correspond quite closely to how generally good the underlying models were.

If a model draws a really good picture of a pelican riding a bicycle there's a solid chance it will be great at all sorts of other things. I wish I could explain why that was!

If you start here and scroll through and look at the progression of pelican on bicycle images it's honestly spooky how well they match the vibes of the models they represent: https://simonwillison.net/2025/Jun/6/six-months-in-llms/#ai-...

So ever since then I've continued to get models to draw pelicans. I certainly wouldn't suggest anyone take serious decisions on model usage based on my stupid benchmark, but it's a fun first-day initial impression thing and it appears to be a useful signal for which models are worth diving into in more detail.

Comment by thatwasunusual 5 hours ago

> If a model draws a really good picture of a pelican riding a bicycle there's a solid chance it will be great at all sorts of other things.

Why?

If I hired a worker that was really good at drawing pelicans riding a bike, it wouldn't tell me anything about his/her other qualities?!

Comment by suspended_state 5 minutes ago

Your comment is funny, but please note: it's not drawing a pelican riding a bike, it's describing in SVG a pelican riding a bike. Your candidate would at least display some knowledge of the SVG spec.

Comment by vikramkr 2 hours ago

The difference is that the worker you hire would be a human being and not a large matrix multiplication that had parameters optimized by a gradient descent process and embeds concepts in a higher dimensional vector space that results in all sorts of weird things like subliminal learning (https://alignment.anthropic.com/2025/subliminal-learning/).

It's not a human intelligence - it's a totally different thing, so why would the same test that you use to evaluate human abilities apply here?

Also, more directly, the "all sorts of other things" we want LLMs to be good at often involve writing code, spatial reasoning, and world understanding, which creating an SVG of a pelican riding a bicycle very directly evaluates, so it's not even that surprising?

Comment by simonw 4 hours ago

I wish I knew why. I didn't think it would be a useful indicator of model skills at all when I started doing it, but over time the pattern has held that performance on pelican riding a bicycle is a good indicator of performance on other tasks.

Comment by jtbaker 4 hours ago

a posteriori knowledge. the pelican isn't the point, it's just amusing. the point is that Simon has seen a correlation between this skill and the model's general capabilities.

Comment by wisty 7 hours ago

It's not necessarily the best benchmark, it's a popular one, probably because it's funny.

Yes it's like the wine glass thing.

Also it's kind of got depth. Does it draw the pelican and the bicycle? Can the pelican reach the pedals? How?

I can imagine a really good AI finding a funny or creative or realistic way for the pelican to reach the pedals.

A slightly worse AI will do an OK job, maybe just making the bike small or the legs too long.

An OK AI will draw a pelican on top of a bicycle and just call it a day.

It's not as binary as the wine glass example.

Comment by thatwasunusual 5 hours ago

> It's not necessarily the best benchmark, it's a popular one, probably because it's funny.

> Yes it's like the wine glass thing.

No, it's not!

That's part of my point; the wine glass scenario is a _realistic_ scenario. The pelican riding a bike is not. It's a _huge_ difference. Why should we measure intelligence (...) against something unrealistic rather than something realistic?

I just don't get it.

Comment by Fnoord 1 hour ago

> the wine glass scenario is a _realistic_ scenario

It is unrealistic because if you go to a restaurant, you don't get served a glass like that. It is frowned upon (alcohol is a drug, after all) and impractical (wine stains are annoying) to fill a wine glass that full.

A pelican riding a bike, on the other hand, is a realistic scenario because of children's TV. Here's an example from a 1950s animation/comic involving a pelican [1].

[1] https://en.wikipedia.org/wiki/The_Adventures_of_Paddy_the_Pe...

Comment by vikramkr 2 hours ago

If the thing we're measuring is the ability to write code, visually reason, and handle extrapolating to out-of-sample prompts, then why shouldn't we evaluate it by asking it to write code to generate a strange image that it wouldn't have seen in its training data?

Comment by th0ma5 11 hours ago

If this had any substance then it could be criticized, which is what they're trying to avoid.

Comment by Etheryte 9 hours ago

How? There's no way for you to verify if they put synthetic data for that into the dataset or not.

Comment by 12 hours ago

Comment by baq 13 hours ago

but can it recreate the spacejam 1996 website? https://www.spacejam.com/1996/jam.html

Comment by aschobel 12 hours ago

in case folks are missing the context

https://news.ycombinator.com/item?id=46183294

Comment by 12 hours ago

Comment by lagniappe 12 hours ago

That is not a meaningful metric given that we don't live in 1996 and neither do our web standards.

Comment by tarsinge 12 hours ago

In what year was it meaningful to have pelicans riding bicycles?

Comment by lagniappe 12 hours ago

SVG is a current standard. Do not be coy just to satisfy your urge to disagree.

Comment by tarsinge 11 hours ago

The website is live and renders correctly on my Safari mobile: https://www.spacejam.com/1996/

I may have missed something, but where are we saying the website should be recreated with 1996 tech or specs? The model is free to use any modern CSS, there are no technical limitations. So yes, I genuinely think it is a good generalization test, because it is indeed not in the training set, and yet it is an easy task for a human developer.

Comment by locallost 12 hours ago

The point stands. Whether or not the standard is current has no relevance for the ability of the "AI" to produce the requested content. Either it can or can't.

Comment by lagniappe 12 hours ago

Comment by locallost 1 hour ago

> Ergo, models for the most part will only have a cursory knowledge of a spec that your browser will never be able to parse because that isn't the spec that won.

Browsers are able to parse a webpage from 1996. I don't know what the argument in the linked comment is about, but in this one, we discuss the relevance of creating a 1996 page vs a pelican on a bicycle in SVG.

Here is Gemini when asked how to build a webpage from 1996. Seems pretty correct. In general I dislike grand statements that are difficult to back up. In your case: that models have only a cursory knowledge of something (what does that even mean in the context of LLMs?), what exactly they were trained on, etc.

The shortened Gemini answer, the detailed version you can ask for yourself:

Layout via Tables: Without modern CSS, layouts were created using complex, nested HTML tables and invisible "spacer GIFs" to control white space.

Framesets: Windows were often split into independent sections (like a static sidebar and a scrolling content window) using Frames.

Inline Styling: Formatting was not centralized; fonts and colors were hard-coded individually on every element using the <font> tag.

Low-Bandwidth Design: Visuals relied on tiny tiled background images, animated GIFs, and the limited "Web Safe" color palette.

CGI & Java: Backend processing was handled by Perl/CGI scripts, while advanced interactivity used slow-loading Java Applets.

Comment by utopiah 10 hours ago

> neither do our web standards

I'd be curious about that actually; I feel like W3C specifications (I don't mean browser support of them) rarely deprecate anything and precisely try to keep the Web running.

Comment by baq 12 hours ago

Yes, now please prepare an email template which renders fine in outlook using modern web standards. Write it up if you succeed, front page of HN guaranteed!

Comment by tomashubelbauer 12 hours ago

The parent comment is a reference to a different story that was on the HN home page yesterday where someone attempted that with Claude.

Comment by lagniappe 12 hours ago

Yes, and I had a lengthier response in that thread explaining why this isn't a useful metric.

https://news.ycombinator.com/item?id=46183673

Comment by willahmad 14 hours ago

I think this benchmark could be slightly misleading for assessing a coding model. But it's still a very good result.

Yes, SVG is code, but not in the sense of being executable with verifiable inputs and outputs.

Comment by jstummbillig 12 hours ago

I love that we are earnestly contemplating the merits of the pelican benchmark. What a timeline.

Comment by andrepd 10 hours ago

It's not even halfway up the list of inane things of the AI hype cycle.

Comment by hdjrudni 4 hours ago

But it does have a verifiable output, no more or less than HTML+CSS. Not sure what you mean by "input" -- it's not a function that takes in parameters if that's what you're getting at, but not every app does.

Comment by iberator 11 hours ago

Where did you get the llm tool from?!

Comment by fauigerzigerk 11 hours ago

Comment by techsystems 8 hours ago

Cool! I can't find it in the readme, but can it run Qwen locally?

Comment by simonw 8 hours ago

The best way to do that at the moment is using the llm-ollama plugin.
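
Roughly, the flow looks like this (a minimal sketch, assuming Ollama is already installed and running; the Qwen model tag below is just an example):

  llm install llm-ollama
  ollama pull qwen2.5-coder:7b
  llm -m qwen2.5-coder:7b "Generate an SVG of a pelican riding a bicycle"

Once a model has been pulled with Ollama it should show up in llm models like any other.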

Comment by cpursley 14 hours ago

Skipped the bicycle entirely and upgraded to a sweet motorcycle :)

Comment by aorth 14 hours ago

Looks like a Cybertruck actually!

Comment by BudaDude 13 hours ago

I was thinking a Warthog

https://www.halopedia.org/Warthog

Comment by lubujackson 10 hours ago

The Batman motorcycle!

Comment by troyvit 10 hours ago

I'm Pelicanman </raspy voice>

Comment by felixg3 13 hours ago

Is it really an svg if it’s just embedded base64 of a jpg

Comment by joombaga 12 hours ago

You were seeing the base64 image tag output at the bottom. The SVG input is at the top.

Comment by breedmesmn 12 hours ago

Impressive! I'm really excited to leverage this in my gooning sessions!

Comment by 12 hours ago

Comment by esafak 14 hours ago

Less than a year behind the SOTA, faster, and cheaper. I think Mistral is mounting a good recovery. I would not use it yet since it is not the best along any dimension that matters to me (I'm not EU-bound) but it is catching up. I think its closed source competitors are Haiku 4.5 and Gemini 3 Pro Fast (TBA) and whatever ridiculously-named light model OpenAI offers today (GPT 5.1 Codex Max Extra High Fast?)

Comment by kevin061 12 hours ago

The OpenAI thing is named Garlic.

(Surely they won't release it like that, right..?)

Comment by esafak 12 hours ago

TIL: https://garlicmodel.com/

That looks like the next flagship rather than the fast distillation, but thanks for sharing.

Comment by kevin061 11 hours ago

Lol, someone vibecoded an entire website for OpenAI's model, that's some dedication.

Comment by BoorishBears 8 hours ago

People have been doing this for literally every anticipated model release, and I presume skimming some amount of legitimate interest since their sites end up being top indexed until the actual model is released.

Google should be punishing these sites but presumably it's too narrow of a problem for them to care.

Comment by kevin061 8 hours ago

Black SEO in the age of LLMs

Comment by dmix 5 hours ago

It would need outbound links to be SEO

Or at least a profit model. I don't see either on that page but maybe I'm missing something

Comment by ewoodrich 4 hours ago

Every link in the "Legal" tree is a dead end redirecting back to the home page... strange thing to put together without any acknowledgement, unless they spam it on LLM adjacent subreddits for clout/karma?

Comment by ttul 5 hours ago

"GPT, please make me a website about OpenAI's 'Garlic' model."

Comment by YetAnotherNick 12 hours ago

No, this is comparable to DeepSeek-V3.2 even on their highlighted task, with significantly worse general ability. And it's priced at 5x that.

Comment by esafak 11 hours ago

It's open source; the price is up to the provider, and I do not see any on openrouter yet. ̶G̶i̶v̶e̶n̶ ̶t̶h̶a̶t̶ ̶d̶e̶v̶s̶t̶r̶a̶l̶ ̶i̶s̶ ̶m̶u̶c̶h̶ ̶s̶m̶a̶l̶l̶e̶r̶,̶ ̶I̶ ̶c̶a̶n̶ ̶n̶o̶t̶ ̶i̶m̶a̶g̶i̶n̶e̶ ̶i̶t̶ ̶w̶i̶l̶l̶ ̶b̶e̶ ̶m̶o̶r̶e̶ ̶e̶x̶p̶e̶n̶s̶i̶v̶e̶,̶ ̶l̶e̶t̶ ̶a̶l̶o̶n̶e̶ ̶5̶x̶.̶ ̶I̶f̶ ̶a̶n̶y̶t̶h̶i̶n̶g̶ ̶D̶e̶e̶p̶S̶e̶e̶k̶ ̶w̶i̶l̶l̶ ̶b̶e̶ ̶5̶x̶ ̶t̶h̶e̶ ̶c̶o̶s̶t̶.̶

edit: Mea culpa. I missed the active vs dense difference.

Comment by NitpickLawyer 10 hours ago

> Given that devstral is much smaller, I can not imagine it will be more expensive

Devstral 2 is 123B dense. Deepseek is 37B active. It will be slower and more expensive to run inference on this than dsv3. Especially considering that dsv3.2 has some goodies that make inference at higher context more effective than their previous gen.

Comment by syntaxing 8 hours ago

Devstral is purely non-thinking too, so it's very possible it uses fewer tokens (I don't know how DS 3.2 non-thinking compares). It's interesting because Qwen pretty much proved hybrid models work worse than fully separate models.

Comment by aimanbenbaha 7 hours ago

Deepseek v3.2 is that cheap because its attention mechanism is ridiculously efficient.

Comment by esafak 5 hours ago

Yeah, DeepSeek Sparse Attention. Section 2: https://arxiv.org/abs/2512.02556

Comment by 2 hours ago

Comment by InsideOutSanta 11 hours ago

I gave Devstral 2 in their CLI a shot and let it run over one of my smaller private projects, about 500 KB of code. I asked it to review the codebase, understand the application's functionality, identify issues, and fix them.

It spent about half an hour, correctly identified what the program did, found two small bugs, fixed them, made some minor improvements, and added two new, small but nice features.

It introduced one new bug, but then fixed it on the first try when I pointed it out.

The changes it made to the code were minimal and localized; unlike some more "creative" models, it didn't randomly rewrite stuff it didn't have to.

It's too early to form a conclusion, but so far, it's looking quite competent.

Comment by MLgulabio 11 hours ago

On what hardware did you run it?

Comment by syntaxing 8 hours ago

FWIW, it’s free through Mistral right now

Comment by seaal 4 hours ago

Comment by embedding-shape 15 hours ago

Looks interesting, eager to play around with it! Devstral was a neat model when it was released and one of the better ones to run locally for agentic coding. Nowadays I mostly use GPT-OSS-120b for this, so it's going to be interesting to see if Devstral 2 can replace it.

I'm a bit saddened by the name of the CLI tool, which to me implies the intended usage. "Vibe-coding" is a fun exercise for realizing where models go wrong, but for professional work where you need tight control over the quality, you obviously can't vibe your way to excellence; hard reviews are required, so not "vibe coding", which is all about unreviewed code and just going with whatever the LLM outputs.

But regardless of that, it seems like everyone and their mother is aiming to fuel the vibe coding frenzy. Where are the professional tools, meant for people who don't want to do vibe-coding but want to be heavily assisted by LLMs? Something that is meant to augment the human intellect, not replace it? All the agents seem to focus on offloading work to vibe-coding agents, while what I want is something even more tightly integrated with my tools so I can continue delivering high-quality code I know and control. Where are those tools? None of the existing coding agents apparently aim for this...

Comment by williamstein 15 hours ago

Their new CLI agent tool [1] is written in Python unlike similar agents from Anthropic/Google (Typescript/Bun) and OpenAI (Rust). It also appears to have first class ACP support, where ACP is the new protocol from Zed [2].

[1] https://github.com/mistralai/mistral-vibe

[2] https://zed.dev/acp

Comment by esafak 15 hours ago

I did not know A2A had a competitor :(

Comment by 4b11b4 14 hours ago

They're different use cases, ACP is for clients (UIs, interfaces)

Comment by embedding-shape 13 hours ago

> Their new CLI agent tool [1] is written in

This is exactly the CLI I'm referring to, whose name implies it's for playing around with "vibe-coding", instead of helping professional developers produce high quality code. It's the opposite of what I and many others are looking for.

Comment by chrsw 13 hours ago

I think that's just the name they picked. I don't mind it. Taking a glance at what it actually does, it just looks like another command line coding assistant/agent similar to Opencode and friends. You can use it for whatever you want not just "vibe coding", including high quality, serious, professional development. You just have to know what you're doing.

Comment by hadlock 11 hours ago

>vibe-coding

A surprising amount of programming is building cardboard services or apps that only need to last six months to a year and then thrown away when temporary business needs change. Execs are constantly clamoring for semi-persistent dashboards and ETL visualized data that lasts just long enough to rein in the problem and move on to the next fire. Agentic coding is good enough for cardboard services that collapse when they get wet. I wouldn't build an industrial data lake service with it, but you can certainly build cardboard consumers of the data lake.

Comment by bigiain 6 hours ago

You are right.

But there is nothing more permanent than a quickly hacked together prototype or personal productivity hack that works. There are so many Python (or Perl or Visual Basic) scripts or Excel spreadsheets - created by people who have never been "developers" - which solve in-the-trenches pain points and become indispensable in exactly the way _that_ xkcd shows.

Comment by pdntspa 14 hours ago

> But where are the professional tools, meant to be used for people who don't want to do vibe-coding, but be heavily assisted by LLMs? Something that is meant to augment the human intellect, not replace it?

Claude Code not good enough for ya?

Comment by embedding-shape 13 hours ago

Claude Code has absolutely zero features that help me review code or do anything other than vibe-coding and accepting changes as they come in. We need diff comparisons between different executions, a TUI tailored for that kind of work, and more. Claude Code is basically an MVP of that.

Still, I do use Claude Code and Codex daily as there is nothing better out there currently. But they still feel tailored towards vibe-coding instead of professional development.

Comment by vidarh 13 hours ago

I really do not want those things in Claude Code - I much prefer choosing my own diff tools etc. and running them in a separate terminal. If they start stuffing too much into the TUI they'd ruin it - if you want all that stuff built in, they have the VS Code integration.

Comment by Havoc 6 hours ago

Mind elaborating a bit on the diff tool / flow you’re using? Trying to follow along better with what CC is doing

Comment by jbs789 2 minutes ago

Claude code in the VS Code terminal window pops up a diff in VSCode before making changes. Not sure if that helps.

Comment by embedding-shape 12 hours ago

Me neither, hence the stated preference for something completely new and different, a stab in a different direction instead of the same boring iteration on yet another agentic TUI coder.

Comment by pdntspa 2 hours ago

IntelliJ's AI service has a PR summarizer that I have found very helpful

Comment by johnfn 12 hours ago

> Claude Code has absolutely zero features that help me review code

Err, doesn’t it have /review?

Comment by victorbjorklund 12 hours ago

What’s wrong with using GIT for reviewing the changes?

Comment by embedding-shape 10 hours ago

Are any of them integrated with git? AFAIK, you'd have to instruct them to use git for you if you don't want to do it manually.

Imagine a GUI built around git branches + agents working in those branches + tooling to manage the orchestration and small review points, rather than "here's a chat and tool calling, glhf".

Comment by jbellis 12 hours ago

> where are the professional tools, meant to be used for people who don't want to do vibe-coding, but be heavily assisted by LLMs?

This is what we're building at Brokk: https://brokk.ai/

Quick intro: https://blog.brokk.ai/introducing-lutz-mode/

Comment by johanvts 15 hours ago

Did you try Aider?

Comment by embedding-shape 13 hours ago

I did, although a long time ago, so maybe I need to try it again. But it still seems to be stuck in a chat-like interface instead of something tailored to software development. Think IDE but better.

Comment by vidarh 13 hours ago

When I think "IDE but better", a Claude Code-like interface is increasingly what I want.

If you babysit every interaction, rather than reviewing a completed unit of work of some size, you're wasting your time second-guessing whether the model will "recover" from stupid mistakes. Sometimes that's right, but more often than not it corrects itself faster than you can.

And so it's far more effective to interact with it far more async, where the UI is more for figuring out what it did if something doesn't seem right, than for working live. I have Claude writing a game engine in another window right now, while writing this, and I have no interest in reviewing every little change, because I know the finished change will look nothing like the initial draft (it did just start the demo game right now, though, and it's getting there). So I review no smaller units of change than 30m-1h, often it will be hours, sometimes days, between each time I review the output, when working on something well specified.

Comment by johanvts 12 hours ago

It has a new “watch files” mode where you can work interactively. You just code normally but can send commands to the llm via a special string. It's a great way of interacting with LLMs, if only they were much faster.

Comment by macNchz 11 hours ago

If you're interested in much faster LLM coding, GLM 4.6 on Cerebras is pretty mind blowing. It's not quite as smart as the latest Claude and Gemini, but it generates code so fast it's kind of comical if you're used to the other models. Good with Aider since you can keep it on a tighter leash than with a fully agentic tool.
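
If anyone wants to try that combo, the wiring is roughly this (a sketch only; the base URL and model id are assumptions, so check Cerebras' and aider's docs for the current names):

  export OPENAI_API_BASE=https://api.cerebras.ai/v1
  export OPENAI_API_KEY=your-cerebras-key
  aider --model openai/glm-4.6

Aider treats any OpenAI-compatible endpoint this way, so the same pattern should work for other fast inference providers.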

Comment by reachtarunhere 12 hours ago

If your goal is to edit code and not discuss it aider also supports a watch mode. You can keep adding comments about what you want it to do in a minimal format and it will make changes to the files and you can diff/revert them.

Comment by zmmmmm 10 hours ago

I think Aider is closest to what you want.

The chat interface is optimal to me because you often are asking questions and seeking guidance or proposals as you are making actual code changes. One reason I do like it is that its default mode of operation is to make a commit for each change it makes. So it is extremely clear what the AI did vs what you did vs what is a hodgepodge of both.

As others have mentioned, you can integrate with your IDE through the watch mode. It's a somewhat crude but still useful way. But I find myself more often than not just running Aider in a terminal under the code editor window and chatting with it about what's in the window.

Comment by embedding-shape 10 hours ago

> I think Aider is closest to what you want.

> The chat interface

Seems very much not, if it's still a chat interface :) Figuring out a chat UX is easy compared to something designed from the beginning around letting an LLM fill in some parts. I guess I'm searching for something with a different paradigm than just "chat + $Something".

Comment by zmmmmm 8 hours ago

the question is, how do you want to provide instructions for what the AI is to do? You might not like calling it "chat" but somehow you need to communicate that, right? With aider you can write a comment for a function and then instruct it to finish the function inline (see other comments). But unless you just want pure autocomplete based on it guessing things, you need to provide guidance to it somehow.

Comment by embedding-shape 8 hours ago

I don't know exactly, but I guess in a more declarative manner than anything else. Maybe we set goals/milestones/concrete objectives, or similar, rather than imperatively steering it; give it space to experiment, yet make it very easy to understand exactly what important tradeoffs everything is making.

It's all very fluffy and theoretical of course.

Comment by xmcqdpt2 1 hour ago

I think the problem is that models are just not that good yet. At least for my usage at work, the CLI tools are the fastest way to get something useful, but if you can't describe basically exactly what you want, you get garbage.

Comment by zmmmmm 7 hours ago

I find a good compromise on that front is not to use the chat primarily, but to create files like 'ARCHITECTURE.md', 'REQUIREMENTS.md' and put information in there describing how the application works. Then you add those to the chat as context docs. From the chat interface you are then just referring to those, not describing features willy-nilly. So the nice thing is you are building documentation for the application in a formal sense as part of instructing the LLM.

Comment by embedding-shape 6 hours ago

But that is the typical agentic LLM coder style of program I was initially referring to, the kind I'm saying we should maybe explore alternatives to. With some imagination, it's too basic and primitive.

Comment by mhast 6 hours ago

The typical "best practice" for these tools tend to be to ask it something like

"I want you to do feature X. Analyse the code for me and make suggestions how to implement this feature."

Then it will go off and work for a while and typically come back after a bit with some suggestions. Then iterate on those if needed and end with.

"Ok. Now take these decided upon ideas and create a plan for how to implement. And create new tests where appropriate."

Then it will go off and come back with a plan for what to do. And then you send it off with.

"Ok, start implementing."

So sure. You probably can work on this to make it easier to use than with a CLI chat. It would likely be less like an IDE and more like a planning tool you'd use with human colleagues though.

Comment by troyvit 9 hours ago

Aider can be a chat interface and it's great for that but you can also use it from your editor by telling it to watch your files.[1]

So you'd write a function name and then tell it to flesh it out.

  function factorial(n) // Implement this. AI!
Becomes:

  function factorial(n) {
    if (n === 0 || n === 1) {
      return 1;
    } else {
      return n * factorial(n - 1);
    }
  }
Last I looked Aider's maintainer has had to focus on other things recently, but aider-ce is a fantastic fork.

I'm really curious to try Mistral's vibe, but even though I'm a big fanboi I don't want to be tied to just one model. Aider lets you tier your models such that your big, expensive model can do all the thinking and then stuff like code reviews can run through a smaller model (see the sketch below). It's a pretty capable tool

Edit: Fix formatting

[1] https://aider.chat/docs/usage/watch.html
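
For the tiering mentioned above, a rough sketch (the model names are just examples; aider's --weak-model handles cheaper chores like commit messages and chat summarization while --model does the heavy lifting):

  aider --model gpt-4o --weak-model gpt-4o-mini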

Comment by zmmmmm 9 hours ago

> I don't want to be tied to just one model.

Very much this for me - I really don't get why, given that new models are popping out every month from different providers, people are so happy to sink themselves into provider ecosystems when there are open source alternatives that work with any model.

The main problem with Aider is it isn't agentic enough for a lot of people but to me that's a benefit.

Comment by andai 14 hours ago

I created a very unprofessional tool, which apparently does what you want!

While True:

0. Context injected automatically. (My repos are small.)

1. I describe a change.

2. LLM proposes a code edit. (Can edit multiple files simultaneously. Only one LLM call required :)

3. I accept/reject the edit.

Comment by true2octave 8 hours ago

High quality code is a thing from the past

What matters is high quality specifications including test cases

Comment by embedding-shape 8 hours ago

> High quality code is a thing from the past

Says the person who will find themselves unable to change the software even in the slightest way without having to do large refactors across everything at the same time.

High quality code matters more than ever, would be my argument. The second you let the LLM sneak in some quick hack/patch instead of correctly solving the problem, is the second you invite it to continue doing that always.

Comment by bigiain 6 hours ago

I dunno...

I have a feeling this will only supercharge the long established industry practice of new devs or engineering leadership getting recruited and immediately criticising the entire existing tech stack, and pushing for (and often succeeding at) a ground up rewrite in the language/framework du jour. This is hilariously common in web work, particularly front end web work. I suspect there are industry sectors that're well protected from this, I doubt people writing firmware for fuel injection and engine management systems suffer too much from this, the Javascript/Nodejs/NPM scourge _probably_ hasn't hit the PowerPC or 68K embedded device programming workflow. Yet...

Comment by bigiain 6 hours ago

"high quality specifications" have _always_ been a thing that matters.

In my mind, it's somewhat orthogonal to code quality.

Waterfall has always been about "high quality specifications" written by people who never see any code, much less write it. Agile makes specs and code quality somewhat related, but in at least some ways probably drives lower quality code in the pursuit of meeting sprint deadlines and producing testable artefacts at the expense of thoroughness/correctness/quality.

Comment by chrsw 13 hours ago

> run locally for agentic coding. Nowadays I mostly use GPT-OSS-120b for this

What kind of hardware do you have to be able to run a performant GPT-OSS-120b locally?

Comment by embedding-shape 12 hours ago

RTX Pro 6000, ends up taking ~66GB when running the MXFP4 native quant with llama-server/llama.cpp and max context, as an example. Guess you could do it with two 5090s with slightly less context, or different software aimed at memory usage efficiency.
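
For reference, the invocation is roughly this (a minimal sketch; -c 0 uses the model's maximum context and --n-gpu-layers 99 just forces all layers onto the GPU):

  llama-server -hf ggml-org/gpt-oss-120b-GGUF -c 0 --jinja --n-gpu-layers 99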

Comment by kristianp 9 hours ago

That has 96GB GDDR7 ECC, to save people looking it up.

Comment by fgonzag 12 hours ago

The model is 64GB (int4 native), add 20GB or so for context.

There are many platforms out there that can run it decently.

AMD strix halo, Mac platforms. Two (or three without extra ram) of the new AMD AI Pro R9700 (32GB of RAM, $1200), multi consumer gpu setups, etc.

Comment by FuckButtons 10 hours ago

MBP 128GB.

Comment by freakynit 4 hours ago

So I tested the bigger model with my typical standard test queries, which are not so tough, not so easy. They are also ones you wouldn't find extensive training data for. Finally, I have already used them to get answers from gpt-5.1, sonnet 4.5 and gemini 3.

Here is what I think about the bigger model: it sits between sonnet 4 and sonnet 4.5. Something like "sonnet 4.3". The response speed was pretty good.

Overall, I can see myself shifting to this for regular day-to-day coding if they can offer it at competitive pricing.

I'll still use sonnet 4.5 or gemini 3 for complex queries, but, for everything else code related, this seems to be pretty good.

Congrats Mistral. You most probably have caught up to the big guys. Not there yet exactly, but not far now.

Comment by pluralmonad 15 hours ago

I'm sure I'm not the only one that thinks "Vibe CLI" sounds like an unserious tool. I use Claude Code a lot and little of it is what I would consider Vibe Coding.

Comment by tormeh 14 hours ago

They're looking for free publicity. "This French company launched a tool that lets you 'vibe' an application into being. Programmers outraged!"

Comment by klysm 15 hours ago

Using LLM's to write code is inherently best for unserious work.

Comment by dwaltrip 14 hours ago

These are the cutting insights I come to HN for.

Comment by neevans 13 hours ago

these are just old senior devs not wanting to accept new changes in the industry.

Comment by reyqn 9 hours ago

These are the cutting insights I come to HN for.

Comment by freakynit 4 hours ago

"Not reviewing generated code" is the problem. Not the LLM generated code.

Comment by 14 hours ago

Comment by kilpikaarna 2 hours ago

Agree, but that's just the term for any LLM-assisted development now.

Even the Gemini 3 announcement page had some bit like "best model for vibe coding".

Comment by jimmydoe 15 hours ago

Maybe they are just trying to be funny.

Comment by isodev 14 hours ago

If you’re letting Claude write code you’re vibe coding

Comment by andai 14 hours ago

So people have different definitions of the word, but originally Vibe Coding meant "don't even look at the code".

If you're actually making sure it's legit, it's not vibe coding anymore. It's just... Backseat Coding? ;)

There's a level below that I call Power Coding (like power armor) where you're using a very fast model interactively to make many very small edits. So you're still doing the conceptual work of programming, but outsourcing the plumbing (LLM handles details of syntax and stdlib).

Comment by HarHarVeryFunny 11 hours ago

Peer coding?

Maybe common usage is shifting, but Karpathy's "vibe coding" was definitely meant to be a never look at the code, just feel the AI vibes thing.

Comment by isodev 10 hours ago

I know tech bros like to come up with fancy words to make trivial things sound fancy, but as long as it's a slop-out process, it's vibe coding. If you're fixing what a bot spits out, it should be a different word … something painful that could've been avoided?

Also, we’re both “people in tech”, we know LLMs can’t conceptualise beyond finding the closest collection of tokens rhyming with your prompt/code. Doesn’t mean it’s good or even correct. So that’s why it’s vibe coding.

Comment by brazukadev 13 hours ago

> If you're actually making sure it's legit, it's not vibe coding anymore.

sorry to disappoint you but that has also been considered vibecoding. It is just not pejorative.

Comment by theLiminator 12 hours ago

Pretty sure Karpathy coined the term here: https://x.com/karpathy/status/1886192184808149383

Imo, if you read the code, it's no longer vibecoding.

Comment by NitpickLawyer 14 hours ago

The original definition was very different. The main thing with vibe coding is that you don't care about the code. You don't even look at the code. You prompt, test that you got what you wanted, and move on. You can absolutely use cc to vibe code. But you can also use it to ... code based on prompts. Or specs. Or docs. Or whatever else. The difference is if you want / care to look at the code or not.

Comment by sunaookami 1 hour ago

No, that's not the definition of "vibe coding". Vibe coding is letting the model do whatever without reviewing it and not understanding the architecture. This was the original definition and still is.

Comment by tomashubelbauer 13 hours ago

It sure doesn't feel like it, given how closely I have to babysit Claude Code lest I not recognize the code after it has been left to its own devices for a minute.

Comment by giancarlostoro 9 hours ago

It gets pretty close for me, but I usually tell it how I want it done from the get go.

Comment by 14 hours ago

Comment by princehonest 13 hours ago

Let's say you had a hardware budget of $5,000. What machine would you buy or build to run Devstral Small 2? The HuggingFace page claims it can run on a Mac with 32 GB of memory or an RTX 4090. What kind of tokens per second would you get on each? What about DGX Spark? What about RTX 5090 or Pro series? What about external GPUs on Oculink with a mini PC?

Comment by clusterhacks 11 hours ago

All those choices seem to have very different trade-offs? I hate $5,000 as a budget - not enough to launch you into higher-VRAM RTX Pro cards, too much (for me personally) to just spend on a "learning/experimental" system.

I've personally decided to just rent systems with GPUs from a cloud provider and setup SSH tunnels to my local system. I mean, if I was doing some more HPC/numerical programming (say, similarity search on GPUs :-) ), I could see just taking the hit and spending $15,000 on a workstation with an RTX Pro 6000.

For grins:

Max t/s for this and smaller models? RTX 5090 system. Barely squeezing in for $5,000 today and given ram prices, maybe not actually possible tomorrow.

Max CUDA compatibility, slower t/s? DGX Spark.

Ok with slower t/s, don't care so much about CUDA, and want to run larger models? Strix Halo system with 128gb unified memory, order a framework desktop.

Prefer Macs, might run larger models? M3 Ultra with memory maxed out. Better memory bandwidth speed, mac users seem to be quite happy running locally for just messing around.

You'll probably find better answers heading off to https://www.reddit.com/r/LocalLLaMA/ for actual benchmarks.

Comment by kpw94 9 hours ago

> I've personally decided to just rent systems with GPUs from a cloud provider and setup SSH tunnels to my local system.

That's a good idea!

Curious about this, if you don't mind sharing:

- what's the stack ? (Do you run like llama.cpp on that rented machine?)

- what model(s) do you run there?

- what's your rough monthly cost? (Does it come up much cheaper than if you called the equivalent paid APIs)

Comment by clusterhacks 8 hours ago

I ran ollama first because it was easy, but now download source and build llama.cpp on the machine. I don't bother saving a file system between runs on the rented machine, I build llama.cpp every time I start up.

I am usually just running gpt-oss-120b or one of the qwen models. Sometimes gemma? These are mostly "medium" sized in terms of memory requirements - I'm usually trying unquantized models that will easily run on an single 80-ish gb gpu because those are cheap.

I tend to spend $10-$20 a week. But I am almost always prototyping or testing an idea for a specific project that doesn't require me to run 8 hrs/day. I don't use the paid APIs for several reasons but cost-effectiveness is not one of those reasons.

Comment by Juminuvi 4 hours ago

I know you say you don't use the paid apis, but renting a gpu is something I've been thinking about and I'd be really interested in knowing how this compares with paying by the token. I think gpt-oss-120b is 0.10/input 0.60/output per million tokens in azure. In my head this could go a long way but I haven't used gpt oss agentically long enough to really understand usage. Just wondering if you know/be willing to share your typical usage/token spend on that dedicated hardware?

Comment by bigiain 6 hours ago

I don't suppose you have (or would be interested in writing) a blog post about how you set that up? Or maybe a list of links/resources/prompts you used to learn how to get there?

Comment by clusterhacks 5 hours ago

No, I don't blog. But I just followed the docs for starting an instance on lambda.ai and the llama.cpp build instructions. Both are pretty good resources. I had already setup an SSH key with lambda and the lambda OS images are linux pre-loaded with CUDA libraries on startup.

Here are my lazy notes + a snippet of the history file from the remote instance for a recent setup where I used the web chat interface built into llama.cpp.

I created an instance gpu_1x_gh200 (96 GB on ARM) at lambda.ai.

connected from terminal on my box at home and setup the ssh tunnel.

ssh -L 22434:127.0.0.1:11434 ubuntu@<ip address of rented machine - can see it on lambda.ai console or dashboard>

  Started building llama.cpp from source, history:    
     21  git clone   https://github.com/ggml-org/llama.cpp
     22  cd llama.cpp
     23  which cmake
     24  sudo apt list | grep libcurl
     25  sudo apt-get install libcurl4-openssl-dev
     26  cmake -B build -DGGML_CUDA=ON
     27  cmake --build build --config Release 
MISTAKE on 27, SINGLE-THREADED and slow to build see -j 16 below for faster build

     28  cmake --build build --config Release -j 16
     29  ls
     30  ls build
     31  find . -name "llama.server"
     32  find . -name "llama"
     33  ls build/bin/
     34  cd build/bin/
     35  ls
     36  ./llama-server -hf ggml-org/gpt-oss-120b-GGUF -c 0 --jinja
MISTAKE, didn't specify the port number for the llama-server

     37  clear;history
     38  ./llama-server -hf Qwen/Qwen3-VL-30B-A3B-Thinking -c 0 --jinja --port 11434
     39  ./llama-server -hf Qwen/Qwen3-VL-30B-A3B-Thinking.gguf -c 0 --jinja --port 11434
     40  ./llama-server -hf Qwen/Qwen3-VL-30B-A3B-Thinking-GGUF -c 0 --jinja --port 11434
     41  clear;history
I switched to qwen3 vl because I need a multimodal model for that day's experiment. Lines 38 and 39 show me not using the right name for the model. I like how llama.cpp can download and run models directly off of huggingface.

Then pointed my browser at http://localhost:22434 on my local box and had the normal browser window where I could upload files and use the chat interface with the model. That also gives you an OpenAI API-compatible endpoint. It was all I needed for what I was doing that day. I spent a grand total of $4 that day doing the setup and running some NLP-oriented prompts for a few hours.

Comment by bigiain 10 minutes ago

Thanks, much appreciated.

Comment by tgtweak 8 hours ago

dual 3090's (24GB each) on 8x+8x pcie has been a really reliable setup for me (with nvlink bridge... even though it's relatively low bandwidth compared to tesla nvlink, it's better than going over pcie!)

48GB of vram and lots of cuda cores, hard to beat this value atm.

If you want to go even further, you can get an 8x V100 32GB server complete with 512GB ram and nvlink switching for $7000 USD from unixsurplus (ebay.com/itm/146589457908) which can run even bigger models and with healthy throughput. You would need 240V power to run that in a home lab environment though.

Comment by lostmsu 8 hours ago

V100 is outdated (no bf16, dropped in CUDA 13) and power hungry (8 cards over 3 years of continuous use is about $12k of electricity).

Comment by monster_truck 12 hours ago

I'd throw a 7900xtx in an AM4 rig with 128gb of ddr4 (which is what I've been using for the past two years)

Fuck nvidia

Comment by clusterhacks 11 hours ago

You know, I haven't even been thinking about those AMD gpus for local llms and it is clearly a blind spot for me.

How is it? I'd guess a bunch of the MoE models actually run well?

Comment by stusmall 8 hours ago

I've been running local models on an AMD 7800 XT with ollama-rocm. I've had zero technical issues. It's really just that the usefulness of a model with only 16GB VRAM + 64GB of main RAM is questionable, but that isn't an AMD-specific issue. It was a similar experience running locally with an nvidia card.

Comment by androiddrew 11 hours ago

Get a Radeon AI Pro r9700! 32GB of RAM

Comment by eavan0 11 hours ago

I'm glad it's not another LLM CLI that uses React. Vibe-cli seems to be built with https://github.com/textualize/textual/

Comment by kristianp 9 hours ago

I'm not excited that it's done in python. I've had experience with Aider struggling to display text as fast as the llm is spitting it out, though that was probably 6 months ago now.

Comment by NSPG911 38 seconds ago

That's an issue with aider. Using a proper framework in the alternate terminal buffer would have greatly benefited them.

Comment by willm 9 hours ago

Python is more than capable of doing that. It’s not an issue of raw execution speed.

https://willmcgugan.github.io/streaming-markdown/

Comment by zimbatm 13 hours ago

Just added it to our inventory. For those of you using Nix:

    nix run github:numtide/llm-agents.nix#mistral-vibe
The repo is updated daily.

Comment by jquaint 12 hours ago

This is such a cool project. Thanks for sharing.

Comment by pzmarzly 15 hours ago

10x cheaper price per token than Claude, am I reading it right?

As long as it doesn't mean 10x worse performance, that's a good selling point.

Comment by Macha 14 hours ago

Something like GPT 5-mini is a lot cheaper than even Haiku, but when I tried it, in my experience it was so bad it was a waste of time. But it's probably still more than 1/10 the performance of Haiku?

In work, where my employer pays for it, Haiku tends to be the workhorse with Sonnet or Opus when I see it flailing. On my own budget I’m a lot more cost conscious, so Haiku actually ends up being “the fancy model” and minimax m2 the “dumb model”.

Comment by phildougherty 14 hours ago

Even if it is 10x cheaper and 2x worse it's going to eat up even more tokens spinning its wheels trying to implement things or squash bugs and you may end up spending more because of that. Or at least spending way more of your time.

Comment by amarcheschi 14 hours ago

The SWE-bench results place it at a comparable score to other open models and just a few points below the top-notch models, though.

Comment by fastball 14 hours ago

Is it? The actual SOTA are not amazing at coding, so at least for me there is absolutely no reason to optimize on price at the moment. If I am going to use an LLM for coding it makes little sense to settle for a worse coder.

Comment by gunalx 11 hours ago

I dunno. Even pretty weak models can be decently performant, and 9/10 the performance for 1/10 the price means 10x the output, and for a lot of stuff that quality difference doesn't really matter. Considering even SOTA models are trash, slightly worse doesn't really make that much difference.

Comment by fastball 10 hours ago

> SOTA models are "trash"

> this model is worse (but cheaper)

> use it to output 10x the amount of trashier trash

You've lost me.

Comment by gunalx 10 hours ago

Fair. Mostly the argument is: if all you need is to iterate on output to refine it, you get 10x the iterations, at lesser quality; it's still an aspect to consider. But yes, why bother vibe coding when they do make so many mistakes.

Comment by rubin55 10 hours ago

This is great! I just made an AUR package for it: https://aur.archlinux.org/packages/mistral-vibe

Comment by alexmorley 15 hours ago

Does anyone know where their SWE-bench Verified results are from? I can't find matching results on the leaderboards for their models or the Claude models and they don't provide any links.

Comment by rsolva 11 hours ago

Ah, finally! I was checking just a few days ago if they had a Claude Code-like tool as I would much rather give money to a European effort. I'll stop my Pro subscription at Anthropic and switch over and test it out.

Comment by SyneRyder 13 hours ago

I was briefly excited when Mistral Vibe launched and mentioned "0 MCP Servers" in its startup screen... but I can't find how to configure any MCP servers. It doesn't respond to the /mcp command, and asking Devstral 2 for help, it thinks MCP is "Model Context Preservation". I'd really like to be able to run my local MCP tools that I wrote in Golang.

I'm team Anthropic with Claude Max & Claude Code, but I'm still excited to see Mistral trying this. Mistral has occasionally saved the day for me when Claude refused an innocuous request, and it's good to have alternatives... even if Mistral / Devstral seems to be far behind the quality of Claude.

Comment by tomashubelbauer 13 hours ago

Comment by SyneRyder 12 hours ago

Thank you! Finally got it working, had to comment out the mcp_servers line near the top of the config.toml file in ~/.vibe/, before adding my [[mcp_servers]] sections at the end of the file.

That was very helpful, thanks!

Comment by joostdevries 14 hours ago

Very nice that there's a coding cli finally. I have a Mistral Pro account. I hope that it will be included. It's the main reason to have a Pro account tbh.

Comment by mentalgear 9 hours ago

Just tried it out via their free API and the Roo Code VSCode extension, and it's impressive. It walked through a data analytics and transformation problem (150.000 dataset entries) I have been debugging for the past 2 hours.

Comment by pshirshov 10 hours ago

> Mistral Code is available with enterprise deployments.
> Contact our team to get started.

The competition is much smoother. Where are the subscriptions that would give users the coding agent and the chat for a flat fee, working out of the box?

Comment by weitendorf 12 hours ago

Open sourcing the TUI is pretty big news actually. Unless I missed something, I had to dig a bit to find it, but I think this is it: https://github.com/mistralai/mistral-vibe

Going to start hacking on this ASAP

Comment by syntaxing 11 hours ago

Extremely happy with this release; the previous Devstral was great, but training it for open hands crippled its usefulness. Having their own CLI dev tool will hopefully be better.

Comment by kristianp 9 hours ago

Can you explain "training it for open hands"? I can't parse the meaning.

Comment by syntaxing 6 hours ago

The original Devstral was a collaboration between All Hands AI (OpenHands) and Mistral [1]. You could use it with other agents, but you had to carry over the prompt. Even then, the agents still didn't work that well. I tried it in RooCline and it worked extremely poorly with the tool calls.

[1] https://openhands.dev/blog/devstral-a-new-state-of-the-art-o...

Comment by tucnak 15 hours ago

I'm so glad Mistral never sold out. We're really lucky to have them in the EU at the time when we're so focused on mil-tech etc.

Comment by ismailmaj 15 hours ago

I don’t think it was ever an option, since it had ties with the French government early on (Cédric O) and Macron’s party is quite pro-EU.

Comment by maelito 9 hours ago

They let so many important French companies down. So, yes, it could happen despite this beginning.

Comment by poszlem 15 hours ago

They’ll switch to military tech the second it becomes necessary, don’t kid yourself. I’m just glad we have a European alternative for the day the US decides to turn its back on us.

This tech is simply too critical to pretend the military won’t use it. That’s clearer now than ever, especially after the (so far flop-ish) launch of the U.S. military’s own genAI platform.

Comment by embedding-shape 13 hours ago

> I’m just glad we have a European alternative for the day the US decides to turn its back on us

Not sure you've kept up to date; the US has turned its back on most allies so far, including Europe and the EU, and now welcomes previous enemies with open arms.

Comment by breedmesmn 12 hours ago

Wow! BLUMPF has really done it this time! Excited to be part of the resistance!

Comment by hobofan 14 hours ago

It's not like there aren't already military AI startups in the EU. e.g. Helsing.

Comment by maelito 9 hours ago

> I’m just glad we have a European alternative for the day the US decides to turn its back on us.

They did.

Comment by simonw 11 hours ago

Comment by giancarlostoro 9 hours ago

Based on your experience with Claude Code, how does Mistral Vibe compare?

Comment by simonw 8 hours ago

I've not spent enough time with Mistral Vibe yet for a credible comparison, but given what I know about the underlying models (the likely-1T-plus Opus 4.5 compared to the 123B Devstral 2) I'd be shocked if Vibe could outperform Claude Code for the kinds of things I'm using it for.

Here's an example of the kinds of things I do with Claude Code now: https://gistpreview.github.io/?b64d5ee40439877eee7c224539452... - that one involved several from-scratch rewrites of the history of an entire Git repo just because I felt like it.

Comment by therealmarv 14 hours ago

Off topic, but it hurts my eyes: I dislike their font choice and the "cool look" of their graphics.

The only surprising and good part: everything, including the graphics, gets fixed when I click the "speedreader" button in Brave. So they're doing that "cool look" with CSS.

Comment by netghost 10 hours ago

Yeah, it's a bit gimmicky. You can hit `esc` and it will revert to the normal page design.

There's a scan-lines effect they apply to everything that's "cool", but it gets old after a minute.

Comment by rwky 12 hours ago

I gave it the job of modifying a fairly simple regex replacement and it took a while, over 5 minutes. Claude failed on the same prompt (which surprised me); Codex did a similar job but faster. So all in all, not bad!

Comment by maelito 11 hours ago

Finally, we can use a European model to replace Claude Code.

Comment by badsectoracula 15 hours ago

> Devstral 2 ships under a modified MIT license, while Devstral Small 2 uses Apache 2.0. Both are open-source and permissively licensed to accelerate distributed intelligence.

Uh, the "Modified MIT license" here[0] for Devstral 2 doesn't look particularly permissively licensed (or open-source):

> 2. You are not authorized to exercise any rights under this license if the global consolidated monthly revenue of your company (or that of your employer) exceeds $20 million (or its equivalent in another currency) for the preceding month. This restriction in (b) applies to the Model and any derivatives, modifications, or combined works based on it, whether provided by Mistral AI or by a third party. You may contact Mistral AI (sales@mistral.ai) to request a commercial license, which Mistral AI may grant you at its sole discretion, or choose to use the Model on Mistral AI's hosted services available at https://mistral.ai/.

[0] https://huggingface.co/mistralai/Devstral-2-123B-Instruct-25...

Comment by Arcuru 14 hours ago

Personally I really like the normalization of these "Permissively" licensed models that only restrict companies with massive revenues from using them for free.

If you want to use something, and your company makes $240,000,000 in annual revenue, you should probably pay for it.

Comment by badsectoracula 13 hours ago

These are not permissively licensed though; the term "permissive license" has connotations that pretty much everyone who is into FLOSS understands (same with "open source").

I do not mind having a license like that, my gripe is with using the terms "permissive" and "open source" like that because such use dilutes them. I cannot think of any reason to do that aside from trying to dilute the term (especially when some laws, like the EU AI Act, are less restrictive when it comes to open source AIs specifically).

Comment by kouteiheika 12 hours ago

> I do not mind having a license like that, my gripe is with using the terms "permissive" and "open source" like that because such use dilutes them. I cannot think of any reason to do that aside from trying to dilute the term (especially when some laws, like the EU AI Act, are less restrictive when it comes to open source AIs specifically).

Good. In this case, let it be diluted! These extra "restrictions" don't affect normal people at all, and won't even affect any small/medium businesses. I couldn't care less that the term is "diluted" and that makes it harder for those poor, poor megacorporations. They swim in money already, they can deal with it.

We can discuss the exact threshold, but as long as these "restrictions" are so extreme that they only affect huge megacorporations, this is still "permissive" in my book. I will gladly die on this hill.

Comment by dragonwriter 6 hours ago

> Good. In this case, let it be diluted! These extra "restrictions" don't affect normal people at all,

Yes, they do, and the only reason to use the term “open source” for things whose licensing terms flagrantly defy the Open Source Definition is to falsely sell the idea that using the code carries the benefits tied to the full combination of features in that definition, benefits that are lost when only a subset of those features is present. The freedom to use the software in commercial services is particularly important to end users who are not interested in running their own services: it guarantees against lock-in and provides whatever longevity they are able to pay for, even if the original creator later has interests that conflict with offering the software as a commercial service.

If this deception wasn't important, there would be no incentive not to use the more honest “source available for limited uses” description.

Comment by JoshTriplett 6 hours ago

> I couldn't care less that the term is "diluted" and that makes it harder

It also makes life harder for individuals and small companies, because this is not Open Source. It's incompatible with Open Source, it can't be reused in other Open Source projects.

Terms have meanings. This is not Open Source, and it will never be Open Source.

Comment by kouteiheika 1 hour ago

> It also makes life harder for individuals and small companies, because this is not Open Source. It's incompatible with Open Source, it can't be reused in other Open Source projects.

I'm amazed at the social engineering that the megacorps have done with the whole Open Source (TM) thing. They engineered a whole generation of engineers to advocate not in their own self-interest, nor for the interest of the little people, but instead for the interest of the megacorps.

As soon as there is even the tiniest of restrictions, one which doesn't affect anyone besides a handful of the richest corporations in the world, a bunch of people immediately come out of the woodwork, shout "but it's not open source!" and start bullying everyone else to change their language. Because if you so much as inconvenience a megacorporation even a little bit, it's not Open Source (TM) anymore.

If we're talking about ideals then this is something I find unsettling and dystopian.

I hard disagree with your "It also makes life harder for individuals and small companies" statement. It's the opposite. It gives them a competitive advantage vs megacorps, however small it may be.

Comment by whimsicalism 14 hours ago

That's fine, but I don't think you should call it open source or call it MIT or even 'modified MIT'. Call it the Mistral license or something along those lines.

Comment by joseda-hg 14 hours ago

That's probably better, but "Modified MIT" is pretty descriptive. I read it as "mostly MIT, but with caveats for extreme cases", which is about right if you already know what the MIT license entails.

Whatever name they come up with for a new license will be less useful, because I'll have to figure out that this is what it means.

Comment by jrm4 14 hours ago

You're presently illustrating exactly why Stallman et al were such sticklers about "Free Software."

"Open Source" is nebulous. It reasonably works here, for better or worse.

Comment by stonemetal12 10 hours ago

>"Open Source" is nebulous

No, it isn't; it's well defined. The only people who find it "nebulous" are people who want the benefits without upholding the obligations.

https://opensource.org/definition-annotated

Comment by whimsicalism 14 hours ago

Free software to me means GPL and its associates, so if that is what Stallman was trying to be a stickler for, it worked.

Open source has a well-understood meaning, including licenses like MIT and Apache - but not "MIT, but only if you make less than $500 million", "MIT, unless you were born on a Wednesday", etc.

Comment by whimblepop 11 hours ago

MIT and Apache are free software licenses in Stallman's sense, and the FSF has always been clear about it.

Comment by fastball 14 hours ago

imo this is a hill people need to stop dying on. Open source means "I can see the source" to most of the world. Wishing it meant "very permissively licensed" to everyone is a lost cause.

And honestly it wasn't a good hill to begin with: if what you are talking about is the license, call it "open license". The source code is out in the open, so it is "open source". This is why the purists have lost ground to practical usage.

Comment by embedding-shape 13 hours ago

> imo this is a hill people need to stop dying on.

As someone who was born and raised on FOSS, and still mostly employed to work on FOSS, I disagree.

Open source is what it is today because it's built by people with a spine who stand tall for their ideals even if it means less money, less industry recognition, lots of unglorious work and lots of other negatives.

It's not purist to believe that what built open source so far should remain open source, and to not want to dilute that ecosystem with things that aren't open source yet call themselves open source.

Comment by kouteiheika 12 hours ago

> Open source is what it is today because it's built by people with a spine who stand tall for their ideals even if it means less money, less industry recognition, lots of unglorious work and lots of other negatives.

With all due respect, don't you see the irony in saying "people with a spine who stand tall for their ideals", and then arguing that attaching "restrictions" which only affect the richest megacorporations in the world somehow makes the license not permissive anymore?

What ideals are those exactly? So that megacorporations have the right to use the software without restrictions? And why should we care about that?

Comment by embedding-shape 10 hours ago

> What ideals are those exactly?

Anyone can use the code for whatever purpose they want, in any way they want. I've never been a "rich megacorporation", but I have gone from having zero money to having enough money, and I still think the very same thing about the code I release as I did from the beginning: it should be free to be used by anyone, for any purpose.

Comment by fastball 12 hours ago

You should stand up for your ideals, but dying on the hill of what you call your ideals is actually getting in the way of that.

Because instead of making the point "this license isn't as permissive as it could/should be" (easy to understand), the point being made is "this isn't real open source", which comes across to most people as some weird gate-keeping / No True Scotsman kind of thing.

Comment by JoshTriplett 6 hours ago

"No True Scotsman" is about specifically about changing the rules to exclude a new example you don't want to permit. The rules haven't changed, and the attempts to violate the requirements aren't new. Proprietary licenses continue to be proprietary. Open Source continues to not allow restrictions on commercial use.

Comment by whimsicalism 12 hours ago

no, “No True Scotsman” is just about people, not categories like open source

Comment by fastball 12 hours ago

Good job missing the point.

Though given the stance you are taking in this conversation, I'm not surprised you want to quibble over that.

¯\_(ツ)_/¯

Comment by whimsicalism 12 hours ago

ultimately you have to imbue words with meaning, otherwise it is impossible to have a discussion. what i said about no true scotsman was false, i was just trying to prove a point.

Comment by fastball 11 hours ago

What point were you proving?

Comment by JoshTriplett 6 hours ago

And back in the day, people incorrectly called it "public domain". That was wrong too.

> if what you are talking about is the license, call it "open license".

If you want to build something proprietary, call it something else. "Open Source" is taken.

Comment by whimsicalism 14 hours ago

> Open source means "I can see the source" to most of the world

well we don't really want to open that can of worms though, do we?

I don't agree with ceding technical terms to the rest of the world. I'm increasingly told we need to stop calling cancer detection AI "AI" or "ML" because it is not the 'bad AI' and confuses people.

I guess I'm okay with being intransigent.

Comment by fastball 12 hours ago

If you are happy that time is being spent quibbling over definitions instead of actually focusing on the ideal, I'm not sure you care about the ideals as much as you say you do.

Who gives a shit what we call "cancer AI", what matters is the result.

Comment by jsnell 11 hours ago

I don't think you get access to source in this case. The release is a binary blob.

Comment by mkmk3 14 hours ago

Earnestly, what's the concern here? People complain about open source being mostly beneficial to megacorps; if that's the main change (idk, I haven't looked too closely), then that's pretty good, no?

Comment by JimDabell 14 hours ago

They are claiming something is open-source when it isn’t. Regardless of whether you think the deviation from open-source is a good thing or not, you should still be in favour of honesty.

Comment by fastball 14 hours ago

*according to your definition of open-source

Comment by JimDabell 13 hours ago

No, according to the commonly accepted definition of open-source.

Whenever anybody tries to claim that a non-commercial license is open-source, it always gets complaints that it is not open-source. This particular word hasn’t been watered down by misuse like so many others.

There is no commonly-accepted definition of open-source that allows commercial restrictions. You do not get to make up your own meaning for words that differs from how other people use it. Open-source does not have commercial restrictions by definition.

Comment by fastball 12 hours ago

Where are you getting this compendium of commonly-accepted definitions?

Looking up open-source in the dictionary does turn up definitions that would allow for commercial restrictions, depending on how you define "free" (a matter that is most certainly up for debate).

Comment by whimblepop 11 hours ago

"Open-source" isn't a term that emerged organically from conversations between people. It is a term that was very deliberately coined for a specific purpose, defined into existence by an authority. It's a term of art, and its exact definition is available here: https://opensource.org/osd

The term "open-source" exists for the purposes of a particular movement. If you are "for" the misuse and abuse of the term, you not only aren't part of that movement, but you are ignorant about it and fail to understand it, which means you frankly have no place speaking about the meanings of its terminology.

Comment by fastball 10 hours ago

yeahhhhhhh, that's not how this works.

Unless this authority has some ownership over the term and can prevent its misuse (e.g. with lawsuits or similar), it is not actually the authority of the term, and people will continue to use it how they see fit.

Indeed, I am not part of a movement (nor would I want to be) which focuses more on what words are used than on what actions are taken.

Comment by JoshTriplett 6 hours ago

> people will continue to use it how they see fit.

People can also say 2+2=5, and they're wrong. And people will continue to call them out on it. And we will keep doing so, because stopping lets people move the Overton window and try to get away with even more.

Comment by fastball 6 hours ago

2+2 is a mathematical concept. Definitions do not need to be agreed upon beyond fundamental axioms.

The same is not true for "open source", which is a purely linguistic construct.

Comment by JimDabell 3 hours ago

> people will continue to use it how they see fit.

And whenever they do so, this pointless argument will happen. Again, and again, and again. Because that’s not what the word means and your desired redefinition has been consistently and continuously rejected over and over again for decades.

What do you gain from misusing this term? The only thing it does is make you look dishonest and start arguments.

Comment by JoshTriplett 6 hours ago

*according to the industry standard definition of Open Source

This kind of thing is how people try to shift the Overton window. No.

Comment by udev4096 13 hours ago

"I don't know anything about open source licenses hence I must spread my ignorance everywhere"

Comment by fastball 12 hours ago

Is there some Open Source™ council I am unaware of that bestows the open source moniker on certain licenses?

Comment by pxc 10 hours ago

Comment by fastball 8 hours ago

So if I invent a new license and call it "open source", they will sue me, or...?

Comment by badsectoracula 14 hours ago

Mainly about the dilution of the term. Though TBH I do not think that open source is beneficial mostly to megacorps either.

Comment by simonw 14 hours ago

Mistral have used janky licenses like that a few times in the past. I was hoping the competition from China might have snapped them out of it.

Comment by jrm4 14 hours ago

All "Open Source" licenses are, to an extent, janky. Obligatory "Stallman was right" -- if it's not GPL/Free Software, YMMV.

Comment by squigz 14 hours ago

Is such a term even enforceable? How would it be? How could Mistral know how much a company makes if that information isn't public?

Comment by lillecarl 12 hours ago

They don't have to enforce it; evil megacorps won't risk the legal consequences of using it without talking to Mistral first. In reality they just won't use it.

Comment by tigranbs 11 hours ago

Somehow it writes bad React code and fails to follow linting prompts half the time. But surprisingly, the Python coding was great!

Comment by whimsicalism 14 hours ago

> Model Size (B tokens)

How is that a measure of model size? It should either be parameter size, activated parameters, or cost per output token.

Looks like a typo because the models line up with reported param sizes.

Comment by Poudlardo 13 hours ago

will definitely try mistral vibe with gpt-oss-20b

Comment by qwertox 13 hours ago

Let's see which company becomes the first to sell "coding appliances": hardware with a model good enough for normal coding.

If Mistral is so permissive, they could be the first, provided hardware is by then fast/cheap/efficient enough to create a small box that can be placed in an office.

Maybe in 5 years.

Comment by giancarlostoro 9 hours ago

My MacBook Pro with an M4 Pro chip can handle a number of these models (I think it has 16GB of VRAM) with reasonable performance; my constant bottleneck is the token caps. I assume someone with a much more powerful Mac Studio could run way more than I can, considering they get access to about 96GB of VRAM out of the system RAM, IIRC.

Comment by bakies 12 hours ago

I bought a framework desktop hoping to do this.

Comment by sosodev 10 hours ago

And it can do it, right? I think the AMD AI Max line is the first realistic offering for this type of thing.

The Apple offerings are interesting, but the lack of x86, Linux, and general compatibility makes them a hard sell imo.

Comment by brazukadev 13 hours ago

my bet is a deepseek box

Comment by baq 13 hours ago

llm in a box connected via usb is the dream.

...so it won't ever happen, it'll require wifi and will only be accessible via the cloud, and you'll have to pay a subscription fee to access the hardware you bought. obviously.

Comment by tgtweak 8 hours ago

PSA: 10X savings when you have to prompt it 10 times to get the correct solution is not actually faster.

Comment by kevin061 15 hours ago

I am very disappointed that they don't have a coding subscription equivalent to the 200 EUR ChatGPT or Claude ones, and that it is only available for enterprise deployments.

The only thing I found is a pay-as-you-go API, but I wonder if it is any good (and cost-effective) vs Claude et al.

Comment by pzo 13 hours ago

> Devstral 2 is currently offered free via our API. After the free period, the API pricing will be $0.40/$2.00 per million tokens (input/output) for Devstral 2

With pricing so low I don't see any reason why someone would buy a sub for 200 EUR. These days those subs are much more limited in Claude Code or Cursor than they used to be (or used to be unlimited). Better to pay as you go, especially when there are days when you probably use AI less or not at all (weekends/holidays, etc.), as long as those credits don't expire.
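
Rough back-of-the-envelope math (the daily token counts here are assumptions, not measurements): even a heavy coding day at those rates stays far below what a 200 EUR/month subscription works out to.

  # ~5M input + ~1M output tokens in a heavy day, at $0.40/$2.00 per million
  echo "5*0.40 + 1*2.00" | bc -l   # about $4/day, roughly $80 over 20 working days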

Comment by kevin061 13 hours ago

True, I just wish I could pay once for code AND the chat, but the chat subscription does not include Code sadly.

Comment by esafak 14 hours ago

At these rates you can afford to pay by the token.

Comment by cyp0633 15 hours ago

In a figure: Model size (B tokens)?

Comment by abuson 12 hours ago

Did anyone test how up to date its knowledge is?

After querying the model about .NET, it seems that its knowledge comes from around June 2024.

Comment by huqedato 5 hours ago

I confirm that. It had no idea how to use Deno v2+.

Comment by moffkalast 12 hours ago

Looks like another DeepSeek distill, like the new Ministrals. For every other use case that would be an insult, but for coding it's a great approach, given how much of a lead Qwen and DeepSeek have over Mistral's internal datasets in coding performance. The Small 24B seems to have a decent edge over 30B-A3B, though it'll be comparatively much slower to run.

Comment by da_grift_shift 14 hours ago

Can Vibe CLI help me vibe code PRs for when I vibe on the https://github.com/buttplugio/buttplug repo?

Comment by andai 14 hours ago

You can do anything if you believe.

Comment by jedisct1 14 hours ago

Yet another CLI.

Why does every AI provider need to have its own tool, instead of contributing to existing tools like Roo Code or Opencode?

Comment by Lapel2742 14 hours ago

My 2ct: because providers want to make their model run optimally, and maybe some of them are trying to build a moat.

Comment by jedisct1 13 hours ago

> providers want to make their model run optimally

Because they couldn't do it by contributing to existing open source tools?

Comment by villgax 12 hours ago

Modified MIT?????

Just call it Mistral License & flush it down