Thoughts on Go vs. Rust vs. Zig
Posted by yurivish 5 days ago
Comments
Comment by kibwen 5 days ago
Well, no, creating a mutable global variable is trivial in Rust, it just requires either `unsafe` or using a smart pointer that provides synchronization. That's because Rust programs are re-entrant by default, because Rust provides compile-time thread-safety. If you don't care about statically-enforced thread-safety, then it's as easy in Rust as it is in Zig or C. The difference is that, unlike Zig or C, Rust gives you the tools to enforce more guarantees about your code's possible runtime behavior.
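For illustration, a minimal sketch of the synchronized route (the names here are made up, nothing clever going on):

    use std::sync::Mutex;
    use std::sync::atomic::{AtomicU64, Ordering};

    // A global behind a Mutex: any thread can read or write it safely.
    static COUNTER: Mutex<u64> = Mutex::new(0);

    // For a plain integer, an atomic avoids the lock entirely.
    static HITS: AtomicU64 = AtomicU64::new(0);

    fn main() {
        *COUNTER.lock().unwrap() += 1;
        HITS.fetch_add(1, Ordering::Relaxed);
        println!("{} {}", *COUNTER.lock().unwrap(), HITS.load(Ordering::Relaxed));
    }

That's the whole price of admission for a thread-safe global.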
Comment by michaelscott 5 days ago
Having moved back to a language that does this kind of thing all the time, it now seems like insanity to me with regard to safety of execution.
Comment by hu3 5 days ago
Novices start slapping global variables everywhere because it makes things easy and it works, until it doesn't and some behaviour breaks because... I don't even know what broke it.
On a smaller scale, mutable date handling libraries also provide some memorable WTF debugging moments until one learns (hopefully) that adding 10 days to a date should probably return a new date instance in most cases.
Comment by port11 4 days ago
Comment by iLemming 4 days ago
Comment by port11 4 days ago
Comment by ClayShentrup 4 days ago
Comment by ajross 5 days ago
This is a tombstone-quality statement. It's the same framing people tossed around about C++ and Perl and Haskell (also Prolog back in the day). And it's true, insofar as it goes. But languages where "trivial" things "just require" rapidly become "not so trivial" in the aggregate. And Rust has jumped that particular shark. It will never be trivial, period.
Comment by kibwen 5 days ago
Sure. And in C and Zig, it's "trivial" to make a global mutable variable, it "just requires" you to flawlessly uphold memory access invariants manually across all possible concurrent states of your program.
Stop beating around the bush. Rust is just easier than nearly any other language for writing concurrent programs, and it's not even close (though obligatory shout out to Erlang).
Comment by sharifhsn 5 days ago
Rust makes it easy to write correct software quickly, but it’s slower for writing incorrect software that still works for an MVP. You can get away with writing incorrect concurrent programs in other languages… for a while. And sometimes that’s what business requires.
I actually wish “rewrite in Rust” was a more significant target in the Rust space. Acknowledging that while Rust is not great for prototyping, the correctness/performance advantages it provides justify a rewrite for the long-term maintenance of software—provided that the tools exist to ease that migration.
Comment by josephg 5 days ago
I've taken to using typescript for prototyping - since it's fast (enough), and it's trivial to run both on the server (via bun) or in a browser. The type system is similar enough to rust that swapping back and forth is pretty easy. And there's a great package ecosystem.
I'll get something working, iterate on the design, maybe go through a few rewrites and when I'm happy enough with the network protocol / UI / data layout, pull out rust, port everything across and optimize.
It's easier than you think to port code like this. Our intuition is all messed up when it comes to moving code between languages because we look at a big project and think of how long it took to write that in the first place. But rewriting code from imperative language A to B is a relatively mechanical process. It's much faster than you think. I'm surprised it doesn't happen more often.
Comment by theshrike79 5 days ago
With Python I can easily iterate on solutions, observe them as they change, use the REPL to debug things and in general just write bad code just to get it working. I do try to add type annotations etc and not go full "yolo Javascript everything is an object" -style :)
But in the end running Python code on someone else's computer is a pain in the ass, so when I'm done I usually use an LLM to rewrite the whole thing in Go, which in most cases gives me a nice speedup and more importantly I get a single executable I can just copy around and run.
In the few cases where the solution requires a Python library that doesn't have a Go equivalent, I just stick with the Python one and shove it in a container or something for distribution.
Comment by millerm 4 days ago
You mentioned running someone else's python is painful, and it most certainly is. No other language have I dealt with more of the "Well, it works on my machine" excuse, after being passed down the world's worst code from a "data scientist". Then the "well, use virtual environments"... Oh, you didn't provide that. What version are you using? What libraries did you manually copy into your project? I abhor the language/runtime. Since most of us don't work in isolation, I find the intermediate prototype in another language before the Go version a waste of time and resources.
Now... I do support an argument for "we prototype in X because we do not run X in production". That means that prototype code will not be part of our releases. Let someone iterate quickly in a sandbox, but they can't copy/paste that stuff into the main product.
Just a stupid rant. Sorry. I'm unemployed. Career is dead. So, I shouldn't even hit "reply"... but I will.
Comment by vjdingdong 4 days ago
With Rust, it was amazing - it was a pain to get it compiled and get past the restrictions (coming from a Python coder) - but once it compiled, the code just ran without a hitch, and it was fast; I never even tried to optimize it.
As a Python 'old-timer', I also am not impressed with all the gratuitous fake typing, and especially Pydantic. Pydantic feels so un-pythonic; they're trying to make it like Go or Rust, but it's falling flat, at least for me.
Comment by ixsploit 5 days ago
The typing system makes it somewhat slow for me, and I am faster prototyping in Go than in Python, despite writing more Python code. And yes, I use type annotations everywhere, ideally even using pydantic.
I tend to use it a lot for data analytics and exploration, but I do this now in nushell, which holds up very well for this kind of task.
Comment by theshrike79 5 days ago
When I'm receiving some random JSON from an API, it's so much easier to drop into a Python REPL and just wander around the structure and figure out what's where. I don't need to have a defined struct with annotations for the data to parse it like in Go.
In the first phase I don't bother with any linters or type annotations, I just need the skeleton of something that works end to end. A proof of concept if you will.
Then it's just iterating with Python, figuring out what comes in and what goes out and finalising the format.
Comment by ixsploit 5 days ago
For me it's pretty hard to work without type annotations, it just slows me down.
Don't get me wrong, I really like python for what it is, I'm simply missing out on the fast prototype stuff that everyone else is capable of.
Comment by timschmidt 5 days ago
I don't find that to be the case. It may be slower for a month or two while you learn how to work with the borrow checker, but after the adjustment period, the ideas flow just as quickly as any other language.
Additionally, being able to tell at a glance what sort of data functions require and return saves a ton of reading and thinking about libraries and even code I wrote myself last week. And the benefits of Cargo in quickly building complex projects cannot be overstated.
All that considered, I find Rust to be quite a bit faster to write software in than C++, which is probably its closest competitor in terms of capabilities. This can be seen at a macro scale in how quickly the Rust library ecosystem has grown.
Comment by nu11ptr 5 days ago
I do agree that OFTEN you can get good velocity, but there IS a cost to any large scale program written in Rust. I think it is worth it (at least for me, on my personal time), but I can see where a business might find differently for many types of programs.
Comment by timschmidt 5 days ago
As is C++ which I compared it to, where there is even more boilerplate for similar tasks. I spent so much time working with C++ just integrating disparate build systems in languages like Make and CMake which just evaporates to nothing in Rust. And that's before I even get to writing my code.
> I do agree that OFTEN you can get good velocity, but there IS a cost to any large scale program written in Rust.
I'm not saying there's no cost. I'm saying that in my experience (about 4 years into writing decently sized Rust projects now, 20+ years with C/C++) the cost is lower than C++. C++ is one of the worst offenders in this regard, as just about any other language is easier and faster to write software in, but also less capable for odd situations like embedded, so that's not a very high bar. The magical part is that Rust seems just as capable as C++ with a somewhat lower cost than C++. I find that cost with Rust often approaches languages like Python when I can just import a library and go. But Python doesn't let me dip down to the lower level when I need to, whereas C++ and Rust do. Of the languages which let me do that, Rust is faster for me to work in, no contest.
So it seems like we agree. Rust often approaches the productivity of other languages (and I'd say surpasses some), but doesn't hide the complexity from you when you need to deal with it.
Comment by nu11ptr 5 days ago
I was responding to "as any other language". Compared to C++, yes, I can see how iteration would be faster. Compared to C#/Go/Python/etc., no, Rust is a bit slower to iterate for some things due to the need to provide low-level details sometimes.
Comment by timschmidt 5 days ago
Sometimes specific tasks in Rust require a little extra effort - like interacting with the file picker from WASM required me to write an async function. In embedded sometimes I need to specify an allocator or executor. Sometimes I need to wrap state that's used throughout the app in an Arc<Mutex<...>> or the like. But I find that there are things like that in all languages around the edges. Sometimes when I'm working in Python I have to dip into C/C++ to address an issue in a library linked by the runtime. Rust has never forced me to use a different language to get a task done.
I don't find the need to specify types to be a particular burden. If anything it speeds up my development by making it clearer throughout the code what I'm operating on. The only unsafe I've ever had to write was for interacting with a GL shader, and for binding to a C library, just the sort of thing it's meant for, and not really possible in those other languages without turning to C/C++. I've always managed to use existing datastructures or composites thereof, so that helps. But that's all you get in languages like C#/Go/Python/etc. as well.
The big change for me was just learning how to think about and structure my code around data lifetimes, and then I got the wonderful experience other folks talk about where as soon as the code compiles I'm about 95% certain it works in the way I expect it to. And the compiler helps me to get there.
Comment by zozbot234 5 days ago
Comment by pjmlp 5 days ago
Unfortunately too many people accept that using computers requires using broken products, something that most people would return on the same day with other kinds of goods.
Comment by littlestymaar 5 days ago
YMMV on that, but IMHO the bigger part of that is the ecosystem, especially for back-end. And by that metric, you should never use anything else than JS for prototyping.
Go will also be faster than Rust to prototype backend stuff with because most of what you need is in the standard library. But not by a large margin and you'll lose that benefit by the time you get to production.
I think most people vastly overestimate the friction added by the borrow checker once you get up to speed.
Comment by smallstepforman 5 days ago
Comment by Zambyte 5 days ago
No it doesn't. Zig doesn't require you to think about concurrency at all. You can just not do concurrency.
> Stop beating around the bush. Rust is just easier than nearly any other language for writing concurrent programs
This is entirely unrelated to the problem of defining shared global state.
var x: u64 = 10;
There. I defined shared global state without caring about writing concurrent programs.

Rust (and you) makes an assertion that all code should be able to run in a concurrent context. Code that passes that assertion may be more portable than code that does not.
What is important for you to understand is: code can be correct under a different set of assertions. If you assert that some code will not run in a concurrent environment, it can be perfectly correct to create a mutable global variable. And this assertion can be done implicitly (ie: I wrote the program knowing I'm not spawning any threads, so I know this variable will not have shared mutable access).
Comment by ii41 5 days ago
Comment by littlestymaar 5 days ago
No, it's not. The only thing that makes having a shared global state unsafe in Rust is the fact that this “global” state is shared across threads.
If you know you want the exact same guarantees as in Zig (that is, code that will work as long as you don't use multiple threads but will be UB if you do) then it's just:

    static mut x: u64 = 0;
The only difference between Zig and Rust being that you'll need to wrap access to the shared variable in an unsafe block (ideally with a comment explaining that it's safe as long as you do it from only one thread).
See https://doc.rust-lang.org/nightly/reference/items/static-ite...
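Roughly like this (a sketch; the comment is doing all the safety work, and the name is just for illustration):

    static mut REQUEST_COUNT: u64 = 0;

    fn bump() {
        // SAFETY: this program never spawns threads, so nothing else can be
        // reading or writing REQUEST_COUNT while we do. If that assumption
        // ever changes, this becomes undefined behaviour.
        unsafe {
            REQUEST_COUNT += 1;
        }
    }

    fn main() {
        bump();
        // SAFETY: same single-threaded assumption as above.
        println!("{}", unsafe { REQUEST_COUNT });
    }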
Comment by gpm 5 days ago
It really doesn't. Rust's standard library does to an extent, because rust's standard library gives you ways to run code in concurrent contexts. Even then it supports non-concurrent primitives like thread locals and state that can't be transferred or shared between threads and takes advantage of that fact. Rust the language would be perfectly happy for you to define a standard library that just only supports the single threaded primitives.
You know what's not (generally) safe in a single threaded context? Mutable global variables. I mean it's fine for an int so long as you don't have safe ways to get pointer types to it that guarantee unique access (oops, rust does. And it's really nice for local reasoning about code even in single threaded contexts - I wouldn't want to give them up). But as soon as you have anything interesting, like a vector, you get invalidation issues where you can get references to memory it points to that you can then free while you're still holding the reference and now you've got a use after free and are corrupting random memory.
Rust has a bunch of abstractions around the safe patterns though. Like you can have a `Cell<u64>` instead of a `u64` and stick that in a thread local and access it basically like a u64 (both reading and writing), except you can't get those pointers that guarantee nothing is aliasing them to it. And a `Cell<Vec<u64>>` won't let you get references to the elements of the vector inside of it at all. Or a `RefCell<_>` which is like a RwLock except it can't be shared between threads, is faster, and just crashes instead of blocking because blocking would always result in a deadlock.
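To make that concrete, a small sketch using only std (names are illustrative):

    use std::cell::{Cell, RefCell};

    thread_local! {
        // A per-thread "global": read/write like a u64, but no aliasing pointers.
        static COUNT: Cell<u64> = Cell::new(0);
        // RefCell checks the borrow rules at runtime and panics on a conflict.
        static LOG: RefCell<Vec<String>> = RefCell::new(Vec::new());
    }

    fn main() {
        COUNT.with(|c| c.set(c.get() + 1));
        LOG.with(|l| l.borrow_mut().push("hello".to_string()));
        COUNT.with(|c| println!("count = {}", c.get()));
    }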
Comment by DSingularity 5 days ago
Comment by wolvesechoes 5 days ago
But no, clearly there is no cult built around Rust, and everyone that suggests otherwise is dishonest.
Comment by Joker_vD 5 days ago
Which, for certain kinds of programs, is trivially simple for e.g. "set value once during early initialization, then only read it". No, it's not thread-local. And even for "okay, maybe atomically update it once in a blue moon from one specific place in code" scenario is pretty easy to do locklessly.
Comment by kibwen 4 days ago
Comment by rowanG077 5 days ago
Comment by innocentoldguy 5 days ago
Comment by jpfromlondon 5 days ago
The difference is that it doesn't prevent you, so it doesn't "just require".
Comment by kibwen 4 days ago
Seriously, I'm begging people to try writing a program that uses ordinary threads in Rust via `std::thread::scope`, it's eye-opening how lovely thread-based concurrency is when you have modern tools at your disposal.
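A tiny sketch of what that looks like, borrowing local data from plain threads with no Arc in sight:

    use std::thread;

    fn main() {
        let mut words = vec!["scoped", "threads", "are", "nice"];

        thread::scope(|s| {
            // Plain borrows of `words`, no Arc needed: the scope guarantees
            // both threads are joined before `words` can be touched again.
            s.spawn(|| println!("first word: {}", words[0]));
            s.spawn(|| println!("{} words", words.len()));
        });

        // Once the scope ends we can mutate again.
        words.push("indeed");
        println!("{}", words.len());
    }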
Comment by DSingularity 5 days ago
Comment by afiori 5 days ago
Go is by default not thread safe. Here the author shows that by looping
    for {
        globalVar = &Ptr{val: &myval}
        globalVar = &Int{val: 42}
    }

You can create a pointer with value 42, as the type and value are two different words and are not updated atomically.

So I guess Go is easier to write, but not with the same level of safety.
Comment by fpoling 5 days ago
Rust channels, implemented as a library, are more powerful, covering more cases, and explicit low-level synchronization is memory-safe.
My only reservation is the way async was implemented in Rust with the need to poll futures. As a user of async libraries it is very ok, but when one needs to implement a custom future it complicates things.
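For anyone who hasn't hit this, the shape of a hand-rolled future looks roughly like the toy sketch below (it yields once before completing). An executor, or an `.await` higher up, is what actually drives poll(); that indirection is exactly the complication I mean.

    use std::future::Future;
    use std::pin::Pin;
    use std::task::{Context, Poll};

    // A toy future: returns Pending once, then completes.
    struct YieldOnce {
        yielded: bool,
    }

    impl Future for YieldOnce {
        type Output = ();

        fn poll(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<()> {
            if self.yielded {
                Poll::Ready(())
            } else {
                self.yielded = true;
                // We are responsible for arranging to be polled again.
                cx.waker().wake_by_ref();
                Poll::Pending
            }
        }
    }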
Comment by gjsjchd6 5 days ago
Comment by ajross 4 days ago
And the situations where you really need a "systems programming" environment have been really at best a wash with Rust. It's mostly replacing boring middleware (c.f. the linked article). Where are the rustacean routing engines and database backends and codecs and kernels? Not in deployment anywhere, not yet. C still rules that world, even for new features.
[1] Well, everything big enough to need a typesafe high performance platform. The real "everything", to first approximation, should be in python.
Comment by estebank 4 days ago
That might be true. I personally still prefer to use a language with sum-types and exhaustive pattern matching for encoding business logic.
> and much cheaper to maintain.
[citation needed]
> Where are the rustacean routing engines and database backends and codecs and kernels? Not in deployment anywhere, not yet.
It is used at Amazon on Firecracker, S3, EC2, CloudFront, Route 53, and that's just what was publicly talked about in 2020[0].
It is used in Android, including in the Kernel[1].
It is used at Microsoft, including in the Kernel[2].
It is used extensively in Firefox, and less extensively in Chrome. JPEG XL might be reincorporated into them because there's a Rust codec in the works.
For databases, the earliest I remember is TiKV[3], which hit 1.0 back in 2018. There are others since.
> C still rules that world, even for new features.
Sure. So?
[0]: https://aws.amazon.com/blogs/opensource/why-aws-loves-rust-a...
[1]: https://security.googleblog.com/2025/11/rust-in-android-move...
[2]: https://www.thurrott.com/windows/282471/microsoft-is-rewriti...
Comment by afiori 5 days ago
Comment by ViewTrick1002 5 days ago
Keep in mind that one requirement is being able to create things like Embassy.
Comment by afiori 5 days ago
In a different universe rust still does not have async and in 5 years it might get an ocaml-style effect system.
Comment by ViewTrick1002 5 days ago
Comment by afiori 5 days ago
Comment by ViewTrick1002 4 days ago
From 2023:
> In that regard, async/await has been phenomenally successful. Many of the most prominent sponsors of the Rust Foundation, especially those who pay developers, depend on async/await to write high performance network services in Rust as one of their primary use cases that justify their funding.
Comment by 0xedd 5 days ago
Comment by JuniperMesos 5 days ago
And it's a good concept, because it makes people feel a bit uncomfortable to type the word "unsafe", and they question whether a globally mutable variable is in fact what they want. Which is great! Because this is saving every future user of that software from concurrency bugs related to that globally mutable variable, including ones that aren't even present in the software now but that might get introduced by a later developer who isn't thinking about the implications of that global unsafe!
Comment by nixpulvis 5 days ago
If you treat shared state like owned state, you're in for a bad time.
Comment by apitman 5 days ago
Comment by rrgok 5 days ago
Comment by vaylian 5 days ago
Comment by metaltyphoon 5 days ago
Comment by Dylan16807 5 days ago
Maybe, but the language being hard in aggregate is very different from the quoted claim that this specific thing is hard.
Comment by vovavili 5 days ago
Comment by bnolsen 1 day ago
Comment by hu3 5 days ago
Comment by adastra22 5 days ago
Comment by nixpulvis 5 days ago
Comment by adastra22 5 days ago
Comment by globalnode 5 days ago
Comment by andsoitis 5 days ago
My understanding is that Rust prevents data races, but not all race conditions. You can still get a logical race where operations interleave in unexpected ways. Rust can’t detect that, because it’s not a memory-safety issue.
So you can still get deadlocks, starvation, lost wakeups, ordering bugs, etc., but Rust gives you:
- No data races
- No unsynchronized aliasing of mutable data
- Thread safety enforced through type system (Send/Sync)
Comment by throwawaymaths 5 days ago
Comment by oconnor663 3 days ago
Comment by tczMUFlmoNk 5 days ago
This fits quite naturally in Rust. You can let your mutex own the pair: locking a `Mutex<(u32, u32)>` gives you a guard that lets you access both elements of the pair. Very often this will be a named `Mutex<MyStruct>` instead, but a tuple works just as well.
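Something like this (a sketch, the fields are invented):

    use std::sync::Mutex;

    // One lock owns both values, so they can never be observed out of sync.
    static PAIR: Mutex<(u32, u32)> = Mutex::new((0, 0));

    fn bump_both() {
        let mut guard = PAIR.lock().unwrap();
        guard.0 += 1;
        guard.1 += 1;
    } // lock released here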
Comment by PartiallyTyped 5 days ago
Because rust guarantees you won't have multiple exclusive (and thus mutable) refs, you won't have a specific class of race conditions.
Sometimes however, these programs are very strict, and you need to relax these guarantees. To handle those cases, there are structures that can give you the same shared/exclusive references and borrowing rules (ie single exclusive, many shared refs) but at runtime. Meaning that you have an object, which you can reference (borrow) in multiple locations, however, if you have an active shared reference, you can't get an exclusive reference as the program will (by design) panic, and if you have an active exclusive reference, you can't get any more references.
This however isn't sufficient for multithreaded applications. That is sufficient when you have lots of pieces of memory referencing the same object in a single thread. For multi-threaded programs, we have RwLocks.
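Roughly, and only as an illustrative sketch, the single-threaded and multi-threaded versions of that idea:

    use std::cell::RefCell;
    use std::sync::{Arc, RwLock};
    use std::thread;

    fn main() {
        // Single-threaded: borrow rules enforced at runtime, panics on conflict.
        let cell = RefCell::new(vec![1, 2, 3]);
        cell.borrow_mut().push(4);           // exclusive borrow, dropped at end of statement
        println!("{}", cell.borrow().len()); // shared borrow is fine now

        // Multi-threaded: same idea, but readers/writers actually block.
        let shared = Arc::new(RwLock::new(0u64));
        let writer = Arc::clone(&shared);
        let handle = thread::spawn(move || {
            *writer.write().unwrap() += 1;
        });
        handle.join().unwrap();
        println!("{}", *shared.read().unwrap());
    }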
Comment by treyd 5 days ago
Comment by sesm 5 days ago
Rust's approach to shared memory is in-place mutation guarded by locks. This approach is old and well-known, and has known problems: deadlocks, lock contention, etc. Rust specifically encourages coarse-grained locks by design, so the lock contention problem is very pressing.
There are other approaches to shared memory, like ML-style mutable pointers to immutable data (perfected in Clojure) and actors. Rust has nothing to do with them, and as far as I understand the core choices made by the language make implementing them very problematic.
Comment by aw1621107 5 days ago
Would you mind elaborating on this? At least off the top of my head a mut Arc<T> seems like it should suffice for a mutable pointer to immutable data, and it's not obvious to me what about actors makes implementing them in Rust very problematic.
Comment by ClayShentrup 4 days ago
Comment by ViewTrick1002 5 days ago
Logical race conditions and deadlocks can still happen.
Comment by kibwen 5 days ago
Comment by globalnode 5 days ago
Comment by timschmidt 5 days ago
Comment by mh2266 5 days ago
Of course the borrow checker, and knowing when to use lifetimes, can be complex to learn, especially if you're coming from GC-land; it's just that the language syntax isn't really that weird.
Comment by timschmidt 5 days ago
Comment by peterfirefly 4 days ago
Comment by TylerE 5 days ago
Comment by pornel 5 days ago
Rust data types can be "Send" (can be moved to another thread) and "Sync" (multiple threads can access them at the same time). Everything else is derived from these properties (structs are Send if their fields are Send. Wrapping non-Sync data in a Mutex makes it Sync, thread::spawn() requires Send args, etc.)
Rust doesn't even reason about thread-safety of functions themselves, only the data they access, and that is sufficient if globals are required to be "Sync".
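A rough illustration of how that inference plays out (sketch; the struct is made up):

    use std::rc::Rc;
    use std::sync::Mutex;
    use std::thread;

    // Send/Sync are inferred: Counter is Sync because Mutex<u64> is Sync,
    // so a shared &Counter can safely cross thread boundaries.
    struct Counter {
        value: Mutex<u64>,
    }

    fn main() {
        let c = Counter { value: Mutex::new(0) };
        thread::scope(|s| {
            s.spawn(|| *c.value.lock().unwrap() += 1);
        });
        println!("{}", *c.value.lock().unwrap());

        // By contrast, Rc is neither Send nor Sync, so the commented line
        // below would not compile:
        let local_only = Rc::new(0);
        // thread::spawn(move || println!("{}", local_only));
        println!("{}", local_only);
    }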
Comment by semiinfinitely 4 days ago
Comment by forrestthewoods 5 days ago
Comment by themafia 5 days ago
They are to be used with caution. If your execution environment is simple enough they can be quite useful and effective. Engineering shouldn't be a religion.
> I can not count how many times I have been pulled in to debug some gnarly crash and the result was, inevitably, a mutable global variable.
I've never once had that happen. What types of code are you working on that this occurs so frequently?
Comment by forrestthewoods 5 days ago
Said by many an engineer whose code was running in systems that were in fact not that simple!
What is irksome is that globals are actually just kinda straight worse. Like the code that doesn't use a singleton and simply passes a god damn pointer turns out to be the simpler and easier thing to do.
> What types of code are you working on that this occurs so frequently?
Assorted C++ projects.
It is particularly irksome when libraries have globals. No. Just no never. Libraries should always have functions for "CreateContext" and "DestroyContext". And the public API should take a context handle.
Design your library right from the start. Because you don't know what execution environments will run in. And it's a hell of a lot easier to do it right from the start than to try and undo your evilness down the road.
All I want in life is a pure C API. It is simple and elegant and delightful and you can wrap it to run in any programming environment in existence.
Comment by kibwen 4 days ago
Sure thing boss, here's that header file populated exclusively by preprocessor macros that you asked for.
Comment by forrestthewoods 4 days ago
Comment by smallstepforman 5 days ago
Comment by forrestthewoods 5 days ago
The way Blizzard implemented this is super super clever. They created an entirely duplicate "replay world". When you die the server very quickly "backfills" data in the "replay world". (The server doesn't send all data initially, to help prevent cheating.) The camera then flips to render the "replay world" while the "gameplay world" continues to receive updates. After a few seconds the camera flips back to the "gameplay world", which is still up-to-date and ready to rock.
Implementing this feature required getting rid of all their evil dirty global variables. Because pretty much every time someone asserted "oh we'll only ever have one of these!" that turned out to be wrong. This is a big part of the talk. Mutable globals are bad!
> Extra large codebases have controllers/managers that must be accessible by many modules.
I would say in almost every single case the code is better and cleaner when it does not use mutable globals. I might make a begrudging exception for logging. But very begrudgingly. Go/Zig/Rust/C/C++ don't have a good logging solution. Jai has an implicit context pointer, which is clever and interesting.
Rust uses the unsafe keyword as an "escape hatch". If I wrote a programming language I probably would, begrudgingly, allow mutable globals. But I would hide their declaration and usage behind the keyword `unsafe_and_evil`. Such that every single time a programmer either declared or accessed a mutable global they would have to type out `unsafe_and_evil` and acknowledge their misdeeds.
Comment by kibwen 5 days ago
1. Read-only (`const`s in Rust). These are fine, no objections.
2. Automatic-lazily-initialized write-once, read-only thereafter (`LazyLock` in Rust). These are also basically fine.
3. Manually-initialized write-once, read-only thereafter (`OnceLock` in Rust). These are also basically fine, but slightly more annoying because you need to be sure to manually cover all possible initialization pathways.
4. Write-only. This is where loggers are, and these are also basically fine.
5. Arbitrary read/write. This is the root of all evil, and what we classically mean when we say "global mutable state".
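For reference, categories 1-3 in today's std look roughly like this (a sketch; the values are made up):

    use std::collections::HashMap;
    use std::sync::{LazyLock, OnceLock};

    // 1. Read-only.
    const MAX_RETRIES: u32 = 3;

    // 2. Lazily initialized on first use, read-only thereafter.
    static DEFAULTS: LazyLock<HashMap<&str, u32>> =
        LazyLock::new(|| HashMap::from([("retries", 3), ("timeout_ms", 500)]));

    // 3. Manually initialized exactly once (e.g. early in main), read-only thereafter.
    static CONFIG_PATH: OnceLock<String> = OnceLock::new();

    fn main() {
        CONFIG_PATH.set("/etc/app.toml".to_string()).unwrap();
        println!("{} {:?} {}", MAX_RETRIES, DEFAULTS.get("retries"), CONFIG_PATH.get().unwrap());
    }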
Comment by forrestthewoods 4 days ago
2 and 3 are basically fine. Just so long as you don't rely on initialization order. And don't have meaningful cleanup. The C++ static initialization order fiasco is a great pain. Crash-on-shutdown bugs are soooo common with globals.
4 I have to think about.
And yes 5 is the evilness.
Comment by rascul 5 days ago
Comment by forrestthewoods 4 days ago
Comment by taneq 5 days ago
Comment by forrestthewoods 5 days ago
Comment by gpm 5 days ago
Anyways, I think there are probably better solutions to the problem than globals, we just haven't seen a language quite solve it yet.
Comment by Panzerschrek 5 days ago
Comment by gpm 5 days ago
There are lots of interesting things you could do with a rust-like (in terms of correctness properties) high-level language, and getting rid of global variables might be one of them (though I can see arguments in both directions). Hopefully someone makes a good one some day.
Comment by tcfhgj 5 days ago
doesn't imply you have to expose it as a global mutable variable
Comment by Horusiath 5 days ago
Comment by etse 5 days ago
Comment by kibwen 5 days ago
Comment by dxxvi 5 days ago
Comment by adastra22 5 days ago
Global state is allowed. It just has to be thread safe.
Comment by therein 5 days ago
Comment by spacechild1 4 days ago
Comment by whytevuhuni 3 days ago
As another comment said, global state is allowed. It just has to be proven thread-safe via Rust's Send and Sync traits, and 'static lifetime. I've used things like LazyLock and ArcSwap to achieve this in the past.
Comment by dxxvi 4 days ago
Comment by stouset 5 days ago
If you use unsafe to opt out of guarantees that the compiler provides against data races, it’s no different than doing the exact same thing in a language that doesn’t protect against data races.
Comment by lowbloodsugar 5 days ago
Second, unsafe means the author is responsible for making it safe. The same rules that safe Rust enforces still apply inside unsafe code; unsafe does not mean that you don't have to follow the rules. If one instead uses it to violate the rules, then the code will certainly cause crashes.
I can see that some programmers would just use unsafe to "get around a problem" caused by safe rust enforcing those rules, and doing so is almost guaranteed to cause crashes. If the compiler won't let you do something, and you use unsafe to do it anyway, there's going to be a crash.
If instead we use unsafe to follow the rules, then it won't crash. There are tools like Miri that allow us to test that we haven't broken the rules. The fact that Miri did find two issues in my crate shows that unsafe is difficult to get right. My crate does clever bit-tricks and has object graphs, so it has to use unsafe to do things like having back pointers. These are all internal, and you can use the crate in safe rust. If we use unsafe to implement things like doubly-linked lists, then things are fine. If we use unsafe to allow multiple threads to mutate the same pointers (Against The Rules), then things are going to crash.
The thing is, when you are programming in C or C++, it's the same as writing unsafe rust all the time. In C/C++, the "pocket of unsafe code" is the entire codebase. So sure, you can write safe C, like I can write safe "unsafe rust". But 99% of the code I write is safe rust. And there's no equivalent in C or C++.
Comment by disappoint 5 days ago
Comment by bigstrat2003 5 days ago
I mean, it does. I'm not sure what you consider the default approach, but to me it would be to wrap the data in a Mutex struct so that any thread can access it safely. That works great for most cases.
> Perhaps mutable global variables are not a common use case.
I'm not sure how common they are in practice, though I would certainly argue that they shouldn't be common. Global mutable variables have been well known to be a common source of bugs for decades.
> Unsafe might make it easier, but it’s not obvious and probably undesired.
All rust is doing is forcing you to acknowledge the trade-offs involved. If you want safety, you need to use a synchronization mechanism to guard the data (and the language provides several). If you are ok with the risk, then use unsafe. Unsafe isn't some kind of poison that makes your program crash, and all rust programs use unsafe to some extent (because the stdlib is full of it, by necessity). The only difference between rust and C is that rust tells you right up front "hey this might bite you in the ass" and makes you acknowledge that. It doesn't make that global variable any more risky than it would've been in any other language.
Comment by nu11ptr 5 days ago
I'm a Rust fan, and I would generally agree with this. It isn't difficult, but trivial isn't quite right either. And no, global vars aren't terribly common in Rust, and when used, are typically done via LazyLock to prevent data races on initialization.
> I don’t know Rust, but I’ve heard pockets of unsafe code in a code base can make it hard to trust in Rust’s guarantees. The compromise feels like the language didn’t actually solve anything.
Not true at all. First, if you aren't writing device drivers/kernels or something very low level there is a high probability your program will have zero unsafe usages in it. Even if you do, you now have an effective comment that tells you where to look if you ever get suspicious behavior. The typical Rust paradigm is to let low level crates (libraries) do the unsafe stuff for you, test it thoroughly (Miri, fuzzing, etc.), and then the community builds on these crates with their safe programs. In contrast, C/C++ programs have every statement in an "unsafe block". In Rust, you know where UB can or cannot happen.
Comment by irishcoffee 5 days ago
By the time suspicious behavior happens, isn’t it kind of a critical inflection point?
For example, the news about react and next that came out. Once the code is deployed, re-deploying (especially with a systems language that quite possibly lives on an air-gapped system with a lot of rigor about updates) means you might as well have used C, the dollar cost is the same.
Comment by stouset 5 days ago
One, the dollar cost is not the same. The baseline floor of quality will be higher for a Rust program vs. a C program given equal development effort.
Second, the total possible footprint of entire classes of bugs is zero thanks to design features of Rust (the borrowck, sum types, data race prevention), except in a specifically delineated areas which often total zero in the vast majority of Rust programs.
Comment by irishcoffee 5 days ago
Hmm, according to whom, exactly?
> Second, the total possible footprint of entire classes of bugs is zero thanks to design features of Rust (the borrowck, sum types, data race prevention), except in a specifically delineated areas which often total zero in the vast majority of Rust programs.
And yet somehow the internet went down because of a program written in rust that didn’t validate input.
Comment by bigstrat2003 5 days ago
Well, Google for one. https://security.googleblog.com/2025/11/rust-in-android-move...
> And yet somehow the internet went down because of a program written in rust that didn’t validate input.
You're ignoring other factors (it wasn't just Cloudflare's rust code that led to the issue), but even setting that aside your framing is not accurate. The rust program went down because the programmer made a choice that, given invalid input, it should crash. This could happen in every language ever made. It has nothing to do with rust.
Comment by disappoint 5 days ago
Comment by disappoint 5 days ago
Except it does. This also has to do with culture. In Rust, I get the impression that one can roughly divide the users into two communities.
The first does not consider safety, security and correctness to be the responsibility of the language, instead they consider it their own responsibility. They merely appreciate it when the language helps with all that, and take precautions when the language hinders that. They try to be honest with themselves.
The second community is careless, might make various unfounded claims and actions that sometimes border on cultish and gang mob behavior and beliefs, and can for instance spew unwrap() all over codebases even when not appropriate for that kind of project, or claim that a Rust project is memory safe even when unsafe Rust is used all over the place with lots of basic bugs and UB-inducing bugs in it.
The second community is surprisingly large, and is severely detrimental to security, safety and correctness.
Comment by LinXitoW 5 days ago
Tell me about how these supposed magical groups have anything at all to do with language features. What language can magically conjure triple the memory from thin air because the upstream query returned 200+ entries instead of the 60-ish you're required to support?
Comment by aw1621107 5 days ago
> This could happen in every language ever made. It has nothing to do with rust.
Comment by disappoint 5 days ago
Comment by kibwen 5 days ago
What? The Cloudflare bug was from a broken system configuration that eventually cascaded into (among other things) a Rust program with hardcoded limits that crashed loudly. In no way did that Rust program bring down the internet; it was the canary, not the gas leak. Anybody trying to blame Rust for that event has no idea what they're talking about.
Comment by nu11ptr 5 days ago
Tell me which magic language creates programs free of errors? It would have been better had it crashed and compromised memory integrity instead of an orderly panic due to an invariant the coder didn't anticipate? Type systems and memory safety are nice and highly valuable, but we all know as computer scientists we have yet to solve for logic errors.
Comment by SkiFire13 5 days ago
No, it _did validate_ the input, and since that was invalid it resulted in an error.
People can yap about that unwrap all they want, but if the code just returned an error to the caller with `?` it would have resulted in a HTTP 500 error anyway.
Comment by nu11ptr 5 days ago
When your unsafe area is small, you put a LOT of thought/testing into those small blocks. You write SAFETY comments explaining WHY it is safe (as you start with the assumption there will be dragons there). You get lots of eyeballs on them, you use automated tools like miri to test them. So no, not even in the same stratosphere as "might as well have used C". Your probability of success is vastly higher. A good Rust programmer uses unsafe judiciously, whereas a C programmer barely blinks as they need to ensure every single snippet of their code is safe, which in a large program, is an impossible task.
As an aside, having written a lot of C, the ecosystem and modern constructs available in Rust make writing large scale programs much easier, and that isn't even considering the memory safety aspect I discuss above.
Comment by disappoint 5 days ago
https://github.com/rust-lang/rust/commit/71f5cfb21f3fd2f1740...
https://materialize.com/blog/rust-concurrency-bug-unbounded-...
Comment by aw1621107 5 days ago
Comment by disappoint 5 days ago
Comment by mh2266 5 days ago
> First, if you aren't writing device drivers/kernels or something very low level there is a high probability your program will have zero unsafe usages in it.
from the original comment. Meanwhile all C code is implicitly “unsafe”. Rust at least makes it explicit!
But even if you ignore memory safety issues bypassed by unsafe, Rust forces you to handle errors, it doesn’t let you blow up on null pointers with no compiler protection, it allows you to represent your data exhaustively with sum types, etc etc etc
Comment by irishcoffee 5 days ago
Don’t device drivers live in the Linux kernel tree?
So, unsafe code is generally approved in device driver code?
Why not just use C at that point?
Comment by speed_spread 4 days ago
Rust's rigid type system, compiler checks and insistence on explicitness forces a _culture change_ in the organization. In time, this means that normal developers will regain a chance to contribute to the kernel with much less chance of breaking stuff. Rust not only makes compiled binary more robust but also makes the codebase more accessible.
Comment by stouset 5 days ago
Comment by disappoint 5 days ago
https://chadaustin.me/2024/10/intrusive-linked-list-in-rust/
Comment by elsjaako 5 days ago
The third link is absolutely nuts. Why would you want to initialize a struct like that in Rust? It's like saying a functional programming language is hard because you can't do goto. The author sets themselves a challenge to do something that absolutely goes against how rust works, and then complains how hard it is.
If you want to do it to interface with non-rust code, writing a C-style string to some memory is easier.
Comment by Dylan16807 5 days ago
Comment by disappoint 5 days ago
And even your argument taken at face value is poor: if unsafe Rust is much harder, and it is used for some of the most critical and already-hard code, like some complex algorithm, it could by itself be worse overall. And Rust specifically has developers use unsafe for some algorithm implementations, for flexibility and performance.
Comment by aw1621107 5 days ago
(Emphasis added)
But is it worse overall?
It's easy to speculate that some hypothetical scenario could be true. Of course, such speculation on its own provides no reason for anyone to believe it is true. Are you able to provide evidence to back up your speculation?
Comment by steveklabnik 4 days ago
Comment by stouset 5 days ago
You have zero sense of perspective. Even if we accept the premise that unsafe Rust is harder than C (which frankly is ludicrous on the face of it) we’re talking about a tiny fraction of the overall code of Rust programs in the wild. You have to pay careful attention to C’s issues virtually every single line of code.
With all due respect this may be the singular dumbest argument I’ve ever had the displeasure of participating in on Hacker News.
Comment by aw1621107 5 days ago
I think there's a very strong dependence on exactly what kind of unsafe code you're dealing with. On one hand, you have relatively straightforward stuff like get_unchecked or calling into simpler FFI functions. On the other hand, you have stuff like exposing safe, ergonomic, and sound APIs for self-referential structures, which is definitely an area of active experimentation.
Of course, in this context all that is basically a nitpick; nothing about your comment hinges on the parenthetical.
Comment by disappoint 5 days ago
Comment by disappoint 5 days ago
Comment by aw1621107 5 days ago
Well, you're the one asking for a comparison with C, and this subthread is generally comparing against C, so you tell us.
> Modern C++ provides a lot of features that makes this topic easier, also when programs scale up in size, similar to Rust. Yet without requirements like no universal aliasing. And that despite all the issues of C++.
Well yes, the latter is the tradeoff for the former. Nothing surprising there.
Unfortunately even modern C++ doesn't have good solutions for the hardest problems Rust tackles (yet?), but some improvement is certainly more welcome than no improvement.
> Which is wrong
Is it? Would you be able to show evidence to prove such a claim?
Comment by deathanatos 5 days ago
But you only need about 5% of the concepts in that comment to be productive in Rust. I don't think I've ever needed to know about #[fundamental] in about 12 years or so of Rust…
> In both Go and Rust, allocating an object on the heap is as easy as returning a pointer to a struct from a function. The allocation is implicit. In Zig, you allocate every byte yourself, explicitly. […] you have to call alloc() on a specific kind of allocator,
> In Go and Rust and so many other languages, you tend to allocate little bits of memory at a time for each object in your object graph. Your program has thousands of little hidden malloc()s and free()s, and therefore thousands of different lifetimes.
Rust can also do arena allocations, and there is an allocator concept in Rust, too. There's just a default allocator, too.
And usually a heap allocation is explicit, such as with Box::new, but that of course might be wrapped behind some other type or function. (E.g., String, Vec both alloc, too.)
> In Rust, creating a mutable global variable is so hard that there are long forum discussions on how to do it.
The linked thread is specifically about creating a specific kind of mutable global, and has extra, special requirements unique to the thread. The stock "I need a global" for what I'd call a "default situation" can be as "simple" as,
static FOO: Mutex<T> = Mutex::new(…);
Since mutable globals are inherently memory unsafe, you need the mutex. (Obviously, there's usually an XY problem in such questions, too, when someone wants a global…)
To the safety stuff, I'd add that Rust not only champions memory safety, but the type system is such that I can use it to add safety guarantees to the code I write. E.g., String can guarantee that it always represents a Unicode string, and it doesn't really need special support from the language to do that.
Comment by attractivechaos 5 days ago
The similar argument against C++ is applicable here: another programmer may be using 10% (or a different 5%) of the concepts. You will have to learn that fraction when working with him/her. This may also happen when you read the source code of some random projects. C programmers seldom have this problem. Complexity matters.
Comment by scuff3d 5 days ago
Comment by hu3 4 days ago
Before LLMs, only the author had a firm grasp of how their convoluted solution works.
Now sometimes not even the author knows wtf is going on among thousands of added lines of code.
Comment by diarrhea 4 days ago
But, for loops get tedious. So people will make helper functions. Generic ones today, non-generic in the past. The result is that you have a zoo of iteration-related helper functions all throughout. You'll need to learn those when onboarding to a new code base as well. Go's readability makes this easier, but by definition everything's entirely non-standard.
Comment by jcgl 4 days ago
This is overblown, imo. Now that generics exist, you just define Map(), Filter(), and Reduce() in your internal util package. So yes, a new dev needs to find the util package. But they need to do that anyway.
What’s more, these particular functions don’t spread into the type signatures of other functions. That means a new dev only has to go looking for them when they themselves want to use those functions.
Sure, it’s not entirely ideal maybe. But the tone and content of your comment makes it sound a zillion times worse than it is.
Comment by cestith 4 days ago
Comment by kibwen 5 days ago
Comment by zozbot234 5 days ago
Comment by Capricorn2481 5 days ago
> Rust can also do arena allocations, and there is an allocator concept in Rust, too. There's just a default allocator, too.
Thank you. I've seen this repeated so many times. Casey Muratori did a video on batch allocations that was extremely informative, but also stupidly gatekeepy [1]. I think a lot of people who want to see themselves as super devs have latched onto this point without even understanding it. They talk like RAII makes it impossible to batch anything.
Last year the Zig Software Foundation wrote about Asahi Lina's comments around Rust and basically implied she was unknowingly introducing these hidden allocations, citing this exact Casey Muratori video. And it was weird. A bunch of people pointed out the inaccuracies in the post, including Lina [2]. That combined with Andrew saying Go is for people without taste (not that I like Go myself), I'm not digging Zig's vibe of dunking on other companies and languages to sell their own.
[1] https://www.youtube.com/watch?v=xt1KNDmOYqA [2] https://lobste.rs/s/hxerht/raii_rust_linux_drama
Comment by zozbot234 5 days ago
Comment by kibwen 5 days ago
Though I'd still reach for something like Bumpalo ( https://crates.io/crates/bumpalo ) unless I had good reason to avoid it.
Comment by Mawr 5 days ago
Is there a language that can't?
The author isn't saying it's literally impossible to batch allocate, just that the default happy path of programming in Rust & Go tends to produce a lot of allocations. It's a take more nuanced than the binary possible vs impossible.
Comment by seanmcdirmid 5 days ago
Comment by jeberle 5 days ago
https://docs.oracle.com/en/java/javase/25/docs/api/java.base...
Comment by Dylan16807 5 days ago
And a lot of people writing Java can't update to that.
Comment by jeberle 4 days ago
If you just want an arena interface, ByteBuffer has been there since Java 1.4 (2002). It also does off-heap w/ ByteBuffer.allocateDirect().
https://docs.oracle.com/en/java/javase/25/docs/api/java.base...
Comment by seanmcdirmid 4 days ago
Comment by dnautics 5 days ago
aren't allocators types in rust?
suppose you had an m:n system (like say an evented http request server split over several threads so that a thread might handle several inbound requests), would you be able to give each request its own arena?
Comment by codys 5 days ago
And so in your example, every request can have the same Allocator type, but distinct instances of that type. For example, you could say "I want an Arena" and pick the Arena type that impls Allocator, and then create a new instance of Arena for each `Vec::new_in(alloc)` call.
Alternately, if you want every request to have a distinct Allocator type as well as instance, one can use `Box<dyn Allocator>` as the allocators type (or use any other dispatch pattern), and provide whatever instance of the allocator is appropriate.
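On stable Rust today that mostly means reaching for a crate rather than the still-unstable std Allocator trait. A rough sketch with the bumpalo crate (assuming its optional `collections` feature; names are illustrative):

    use bumpalo::Bump;
    use bumpalo::collections::Vec as BumpVec;

    fn handle_request(arena: &Bump, n: u64) -> u64 {
        // Everything allocated here lives until the arena is reset or dropped.
        let mut values = BumpVec::new_in(arena);
        for i in 0..n {
            values.push(i);
        }
        values.iter().sum()
    }

    fn main() {
        let mut arena = Bump::new();
        for request in 0..3u64 {
            println!("{}", handle_request(&arena, request + 10));
            arena.reset(); // reclaim all of this "request"'s allocations at once
        }
    }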
Comment by weavie 5 days ago
Comment by tapirl 5 days ago
Just a pure question: is the Rust allocator global? (Will all heap allocations use the same allocator?)
Comment by steveklabnik 4 days ago
The standard library provides a global allocator. The collections in the standard library currently use that allocator.
It also provides an unstable interface for allocators in general. That's of course useful someday, but also doesn't prevent people from using whatever allocators they want in the meantime. It just means that libraries that want to be generic over one cannot currently agree. The standard library collections also will use that once it becomes stable.
Comment by yccs27 5 days ago
Comment by 10000truths 5 days ago
This first-class representation of memory as a resource is a must for creating robust software in embedded environments, where it's vital to frontload all fallibility by allocating everything needed at start-up, and allow the application freedom to use whatever mechanism appropriate (backpressure, load shedding, etc) to handle excessive resource usage.
Comment by kibwen 5 days ago
But for operating systems with overcommit, including Linux, you won't ever see the act of allocation fail, which is the whole point. All the language-level ceremony in the world won't save you.
Comment by pornel 5 days ago
You can impose limits per process/cgroup. In server environments it doesn't make sense to run off swap (the perf hit can be so large that everything times out and it's indistinguishable from being offline), so you can set limits proportional to physical RAM, and see processes OOM before the whole system needs to resort to OOMKiller. Processes that don't fork and don't do clever things with virtual mem don't overcommit much, and large-enough allocations can fail for real, at page mapping time, not when faulting.
Additionally, soft limits like https://lib.rs/cap make it possible to reliably observe OOM in Rust on every OS. This is very useful for limiting memory usage of a process before it becomes a system-wide problem, and a good extra defense in case some unreasonably large allocation sneaks past application-specific limits.
These "impossible" things happen regularly in the services I worked on. The hardest part about handling them has been Rust's libstd sabotaging it and giving up before even trying. Handling of OOM works well enough to be useful where Rust's libstd doesn't get in the way.
Rust is the problem here.
Comment by tucnak 5 days ago
Comment by pornel 5 days ago
What I had in mind was servers scaled to run near maximum capacity of the hardware. When the load exceeds what the server can handle in RAM and starts shoving requests' working memory into swap, you typically won't get higher throughput to catch up with the overload. Swap, even if "fast enough", will slow down your overall throughput when you need it to go faster. This will make requests pile up even more, making more of them go into swap. Even if it doesn't cause a death spiral, it's not an economical way to run servers.
What you really need to do is shed the load before it overwhelms the server, so that each box runs at its maximum throughput, and extra traffic is load-balanced elsewhere, or rejected, or at least queued in some more deliberate and efficient fashion, rather than frantically moving the server's working memory back and forth from disk.
You can do this scaling without OOM handling if you have other ways of ensuring limited memory usage or leaving enough headroom for spikes, but OOM handling lets you fly closer to the sun, especially when the RAM cost of requests can be very uneven.
Comment by zozbot234 5 days ago
Comment by BitFlogger 4 days ago
Don't look at swap as more memory on slow disks/HDDs. Look at it as a place the kernel can use if it needs a place to put something temporarily.
This can happen on large-memory systems fairly easily when memory gets fragmented and something asks for a chunk of memory that can't be allocated because there isn't a large enough contiguous block, so the allocation fails.
I always do at least a couple of GBs now for swap... I won't really miss the storage, and that at least gives the kernel a place to re-org/compact memory and keep chugging along.
Comment by 10000truths 5 days ago
Simplest example is to allocate and pin all your resources on startup. If it crashes, it does so immediately and with a clear error message, so the solution is as straightforward as "pass bigger number to --memory flag" or "spec out larger machine".
Comment by kibwen 5 days ago
Overcommit means that the act of memory allocation will not report failure, even when the system is out of memory.
Instead, failure will come at an arbitrary point later, when the program actually attempts to use the aforementioned memory that the system falsely claimed had been allocated.
Allocating all at once on startup doesn't help, because the program can still fail later when it tries to actually access that memory.
Comment by 10000truths 5 days ago
Comment by kibwen 5 days ago
Comment by interroboink 5 days ago
(I understand that mlock prevents paging-out, but in my mind that's a separate concern from pre-faulting?)
Comment by 10000truths 5 days ago
Comment by the_duke 5 days ago
Or, even simpler, just turn off over-commit.
But if swap comes into the mix, or just if the OS decides it needs the memory later for something critical, you can still get killed.
Comment by bluGill 5 days ago
Comment by knorker 5 days ago
Comment by wavemode 5 days ago
Comment by dlisboa 5 days ago
Comment by xyzzy_plugh 5 days ago
It's not a stretch to imagine that a different namespace might want different semantics e.g. to allow a container to opt out of overcommit.
It is hard to justify the effort required to enable this unless it'll be useful for more than a tiny handful of users who can otherwise afford to run off an in-house fork.
Comment by kibwen 5 days ago
Except this won't happen, because "cope with allocation failure" is not something that 99.9% of programs could even hope to do.
Let's say that you're writing a program that allocates. You allocate, and check the result. It's a failure. What do you do? Well, if you have unneeded memory lying around, like a cache, you could attempt to flush it. But I don't know about you; I don't write programs that randomly cache things in memory manually, and almost nobody else does either. The only things I have in memory are things that are strictly needed for my program's operation. I have nothing unnecessary to evict, so I can't do anything but give up.
The reason that people don't check for allocation failure isn't because they're lazy, it's because they're pragmatic and understand that there's nothing they could reasonably do other than crash in that scenario.
Comment by AlotOfReading 5 days ago
For example, you could finish writing data into files before exiting gracefully with an error. You could (carefully) output to stderr. You could close remote connections. You could terminate the current transaction and return an error code. Etc.
Most programs are still going to terminate eventually, but they can do that a lot more usefully than a segfault from some instruction at a randomized address.
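For illustration, a rough sketch of what observing the failure can look like in Rust, using Vec::try_reserve_exact (the helper name is made up, and as noted elsewhere in the thread, overcommit can still make a "successful" reservation fail later when the memory is actually touched):

    // Hypothetical helper: try to allocate a buffer for an incoming request,
    // and surface allocation failure as an ordinary error instead of aborting.
    fn load_payload(len: usize) -> Result<Vec<u8>, String> {
        let mut buf = Vec::new();
        buf.try_reserve_exact(len)
            .map_err(|e| format!("refusing request: cannot allocate {len} bytes ({e})"))?;
        buf.resize(len, 0); // capacity is already reserved, so this won't reallocate
        Ok(buf)
    }

    fn main() {
        match load_payload(1 << 20) {
            Ok(buf) => println!("allocated {} bytes", buf.len()),
            Err(msg) => eprintln!("{msg}"), // graceful path: log, close files, return an error code
        }
    }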
Comment by Dylan16807 5 days ago
Comment by bluGill 5 days ago
Comment by Dylan16807 5 days ago
Comment by bluGill 5 days ago
Comment by xyzzy_plugh 4 days ago
Comment by scottlamb 4 days ago
What would "cope" mean? Something like returning an error message like "can't load this image right now"? Such errors are arguably better than crashing the program entirely but still worth avoiding.
I think overcommit exists largely because of fork(). In theory a single fork() call doubles the program's memory requirement (and the parent calling it n times in a row (n+1)s the memory requirement). In practice, the OS uses copy-on-write to avoid both this requirement and the expense of copying. Most likely the child won't really touch much of its memory before exit or exec(). Overallocation allows taking advantage of this observation to avoid introducing routine allocation failures after large programs fork().
So if you want to get rid of overallocation, I'd say far more pressing than introducing alloc failure handling paths is ensuring nothing large calls fork(). Fortunately fork() isn't really necessary anymore IMHO. The fork pool concurrency model is largely dead in favor of threading. For spawning child processes with other executables, there's posix_spawn (implemented by glibc with vfork()). So this is achievable.
I imagine there are other programs around that take advantage of overcommit by making huge writable anonymous memory mappings they use sparsely, but I can't name any in particular off the top of my head. Likely they could be changed to use another approach if there were a strong reason for it.
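On the posix_spawn point, a small sketch: Rust's standard library Command API launches a child executable directly, and to my understanding it goes through posix_spawn-style machinery on common platforms rather than a plain fork() of a possibly huge parent:

    use std::process::Command;

    fn main() -> std::io::Result<()> {
        // Spawn a child process without ever calling fork() ourselves.
        let status = Command::new("echo").arg("hello").status()?;
        println!("child exited with: {status}");
        Ok(())
    }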
Comment by wavemode 5 days ago
Comment by wolvesechoes 5 days ago
Yet another similarity with Rust.
Comment by thrwyexecbrain 5 days ago
To me, the whole point of Zig's explicit allocator dependency injection design is to make it easy to not use the system allocator, but something more effective.
For example imagine a web server where each request handler gets 1MB, and all allocations a request handler does are just simple "bump allocations" in that 1MB space.
This design has multiple benefits:
- Allocations don't have to synchronize with the global allocator.
- Avoids heap fragmentation.
- No need to deallocate anything, we can just reuse that space for the next request.
- No need to care about ownership -- every object created in the request handler lives only until the handler returns.
- Makes it easy to define an upper bound on memory use and very easy to detect and return an error when it is reached.
In a system like this, you will definitely see allocations fail.
And if overcommit bothers someone, they can allocate all the space they need at startup and call mlock() on it to keep it in memory.
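For a sense of the shape of this, here's a minimal sketch of the per-request bump idea in Rust (hypothetical names, and not Zig's actual std.heap API; a real bump allocator hands out many live allocations, which this toy version doesn't attempt):

    struct BumpArena {
        buf: Vec<u8>,
        used: usize,
    }

    impl BumpArena {
        fn with_capacity(cap: usize) -> Self {
            Self { buf: vec![0; cap], used: 0 }
        }

        // Reserve `n` bytes out of the fixed budget; running out of space is
        // just an ordinary, checkable condition.
        fn alloc(&mut self, n: usize) -> Option<&mut [u8]> {
            if self.used + n > self.buf.len() {
                return None;
            }
            let start = self.used;
            self.used += n;
            Some(&mut self.buf[start..start + n])
        }

        // Reuse the same buffer for the next request.
        fn reset(&mut self) {
            self.used = 0;
        }
    }

    fn main() {
        let mut arena = BumpArena::with_capacity(1 << 20); // e.g. 1 MiB per request
        match arena.alloc(64) {
            Some(buf) => buf.fill(0xAB),
            None => eprintln!("request exceeded its memory budget"),
        }
        arena.reset();
    }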
Comment by zozbot234 5 days ago
Comment by incompatible 5 days ago
Comment by bnolsen 1 day ago
Comment by throwawaymaths 5 days ago
Comment by pjmlp 5 days ago
Notice how none of them stayed involved with WG14; they just did their own thing with C in Plan 9, and with Inferno, C was only used for the kernel, with everything else done in Limbo, finishing up with minor contributions to Go's first design.
People that worship UNIX and C should spend some time learning that the authors moved on, trying to improve the flaws they considered their original work suffered from.
Comment by bluecalm 5 days ago
Comment by munificent 5 days ago
How does that work in the presence of recursion or calls through function pointers?
Comment by 10000truths 5 days ago
Function pointers: Zig has a proposal for restricted function types [1], which can be used to enforce compile-time constraints on the functions that can be assigned to a function pointer.
[0]: https://github.com/ziglang/zig/issues/1006 [1]: https://github.com/ziglang/zig/issues/23367
Comment by Guvante 5 days ago
Certainly I agree that allocations in your dependencies (including std) are more annoying in Rust since it uses panics for OOM.
The no-std set of crates is all set up to support embedded development.
Comment by pjmlp 5 days ago
Comment by smallstepforman 5 days ago
Comment by veltas 5 days ago
Also malloc can fail even with overcommit, if you accidentally enter an obviously incorrect size like -1.
Comment by data-ottawa 4 days ago
Comment by vlovich123 5 days ago
> The idea seems to be that you can run your program enough times in the checked release modes to have reasonable confidence that there will be no illegal behavior in the unchecked build of your program. That seems like a highly pragmatic design to me.
This is only pragmatic if you ignore the real-world experience of sanitizers, which attempt to do the same thing and fail to prevent memory-safety and UB issues in deployed C/C++ codebases (e.g. Android definitely has sanitizers running on every commit, and yet it wasn't until they switched to Rust that exploits started disappearing).
Comment by qouteall 5 days ago
Comment by vlovich123 4 days ago
https://security.googleblog.com/2025/11/rust-in-android-move...
Comment by wrs 5 days ago
Well, not exactly. This is actually a great example of the Go philosophy of being "simple" while not being "easy".
A Vec<T> has identity; the memory underlying a Go slice does not. When you call append(), a new slice is returned that may or may not share memory with the old slice. There's also no way to shrink the memory underlying a slice. So slices actually very much do not work like Vec<T>. It's a common newbie mistake to think they do work like that, and write "append(s, ...)" instead of "s = append(s, ...)". It might even randomly work a lot of the time.
Go programmer attitude is "do what I said, and trust that I read the library docs before I said it". Rust programmer attitude is "check that I did what I said I would do, and that what I said aligns with how that library said it should be used".
So (generalizing) Go won't implement a feature that makes mistakes harder, if it makes the language more complicated; Rust will make the language more complicated to eliminate more mistakes.
Comment by RVuRnvbM2e 5 days ago
Sorry, that is incorrect: https://pkg.go.dev/slices#Clip
> It's a common newbie mistake to think they do work like that, and write "append(s, ...)" instead of "s = append(s, ...)". It might even randomly work a lot of the time.
"append(s, ...)" without the assignment doesn't even compile. So your entire post seems like a strawman?
https://go.dev/play/p/icdOMl8A9ja
> So (generalizing) Go won't implement a feature that makes mistakes harder, if it makes the language more complicated
No, I think it is more that the compromise of complicating the language that is always made when adding features is carefully weighed in Go. Less so in other languages.
Comment by knorker 5 days ago
Clipping doesn't seem to automatically move the data, so while it does mean appending will reallocate, it doesn't actually shrink the underlying array, right?
Comment by wrs 4 days ago
Comment by auxiliarymoose 5 days ago
Comment by masklinn 5 days ago
b := append(a, …)
Comment by weakfish 4 days ago
Comment by masklinn 4 days ago
a := make([]int, 0, 5)
a = append(a, 0, 0)
b := append(a, 1)
a = append(a, 0)
fmt.Println(b)
prints [0 0 0]
because the following happens: a := make([]int, 0, 5)
// a = [() _ _ _ _ _]
// a has length 0 but the backing buffer has capacity 5, between the parens is the section of the buffer that's currently part of a, between brackets is the total buffer
a = append(a, 0, 0)
// a = [(0 0) _ _ _]
// a now has length 2, with the first two locations of the backing buffer zeroed
b := append(a, 1)
// b = [(0 0 1) _ _]
// b has length 3, because while it's a different slice it shares a backing buffer with a, thus while a does not see the 1 it is part of its backing buffer:
// a = [(0 0) 1 _ _]
a = append(a, 0)
// append works off of the length, so now it expands `a` and writes at the new location in the backing buffer
// a = [(0 0 0) _ _]
// since b still shares a backing buffer...
// b = [(0 0 0) _ _]
Comment by weakfish 1 day ago
Comment by skybrian 5 days ago
Comment by dlisboa 5 days ago
I agree and think Go gets unjustly blamed for some things: most of the foot guns people say Go has are clearly laid out in the spec/documentation. Are these surprising behaviors or did you just not read?
Getting a compiler and just typing away is not a great way of going about learning things if that compiler is not as strict.
Comment by int_19h 5 days ago
Comment by dlisboa 5 days ago
As an example all three of the languages in the article have different error handling techniques, none of which are actually the most popular choice.
Built-in data structures in particular, each language does them slightly differently, so there's no escaping learning their peculiarities.
Comment by throwawaymaths 5 days ago
Comment by FridgeSeal 5 days ago
Comment by wrs 4 days ago
Comment by publicdebates 5 days ago
For Go, I wouldn't say that the choice to avoid generics was either intentional or minimalist by nature. From what I recall, they were just struggling for a long time with a difficult decision, which trade-offs to make. And I think they were just hoping that, given enough time, the community could perhaps come up with a new, innovative solution that resolves them gracefully. And I think after a decade they just kind of settled on a solution, as the clock was ticking. I could be wrong.
For Rust, I would strongly disagree on two points. First, lifetimes are in fact what tripped me up the most, and many others, famously including Brian Kernighan, who literally wrote the book on C. Second, Rust isn't novel in combining many other ideas into the language. Lots of languages do that, like C#. But I do recall thinking that Rust had some odd name choices for some features it adopted. And, not being a C++ person myself, it has solutions to many problems I never wrestled with, known by name to C++ devs but foreign to me.
For Zig's manual memory management, you say:
> this is a design choice very much related to the choice to exclude OOP features.
Maybe, but I think it's more based on Andrew's need for Data-Oriented Design when designing high performance applications. He did a very interesting talk on DOD last year[1]. I think his idea is that, if you're going to write the highest performance code possible, while still having an ergonomic language, you need to prioritize a whole different set of features.
Comment by gwd 5 days ago
Indeed, in 2009 Russ Cox laid out clearly the problem they had [1], summed up thus:
> The generic dilemma is this: do you want slow programmers, slow compilers and bloated binaries, or slow execution times?
My understanding is that they were eventually able to come up with something clever under the hood to mitigate that dilemma to their satisfaction.
Comment by mirashii 5 days ago
Comment by gwd 5 days ago
> Go generics combines concepts from "monomorphisation" (stenciling) and "boxing" (dynamic dispatch) and is implemented using GCshape stenciling and dictionaries. This allows Go to have fast compile times and smaller binaries while having generics.
Comment by nasretdinov 4 days ago
Comment by zozbot234 5 days ago
Comment by gwd 5 days ago
Comment by samdoesnothing 5 days ago
Comment by hu3 4 days ago
I only found a blog-like post with bold claims and no statistical significance.
Comment by librasteve 5 days ago
I am sad that it does not mention Raku (https://raku.org) ... because in my mind there is a kind of continuum: C - Zig - C++ - Rust - Go ... OK for low level, but what about the scriptier end - Julia - R - Python - Lua - JavaScript - PHP - Raku - WL?
Comment by librasteve 5 days ago
Raku
Raku stands out as a fast way to working code, with a permissive compiler that allows wide expression.
It's an expressive, general-purpose language with a wide set of built-in tools. Features like multi-dispatch, roles, gradual typing, lazy evaluation, and a strong regex and grammar system are part of its core design. The language aims to give you direct ways to reflect the structure of a problem instead of building abstractions from scratch.
The grammar system is the clearest example. Many languages treat parsing as a specialized task requiring external libraries. Raku instead provides a declarative syntax for defining rules and grammars, so working with text formats, logs, or DSLs often requires less code and fewer workarounds. This capability blends naturally with the rest of the language rather than feeling like a separate domain.
Raku programs run on a sizeable VM and lean on runtime dispatch, which means they typically don’t have the startup speed or predictable performance profile of lower-level or more static languages. But the model is consistent: you get flexibility, clear semantics, and room to adjust your approach as a problem evolves. Incremental development tends to feel natural, whether you’re sketching an idea or tightening up a script that’s grown into something larger.
The language's long development history stems from an attempt to rethink Perl, not simply modernize it. That history produced a language that tries to be coherent and pleasant to write, even if it's not small. Choose Raku if you want a language that lets you code the way you want, helps you wrestle with the problem and not with the compiler.
Comment by librasteve 5 days ago
Some comments below on “I want a Go, but with more powerful OO” - well Raku adheres to the Smalltalk philosophy… everything is an object, and it has all the OO richness (rope) of C++ with multiple inheritance, role composition, parametric roles, MOP, mixins… all within an easy to use, easy to read style.
my $forty-two = 42 but 'forty two';
Look away now if you hate sigils.
Comment by forgotpwd16 4 days ago
That said, agree Raku is cool. A big disadvantage though it has (or had?), more than the sigils-everywhere syntax & small ecosystem, is performance. It's slower than pre-JIT Python. Go also natively-compiles to self-contained binaries, which some people appreciate. (And there're those that prefer Go's simplicity and don't want very high expressiveness other than specific features.)
Comment by librasteve 4 days ago
Comment by Rikudou 5 days ago
Comment by auxiliarymoose 5 days ago
Between the lack of "colored functions" and the simplicity of communicating with channels, I keep surprising myself with how (relatively) quick and easy it is to develop concurrent systems with correct behavior in Go.
Comment by PaulKeeble 5 days ago
Comment by silisili 5 days ago
Perhaps less guaranteed in patterns that feed a fixed limited number of long running goroutines.
Comment by theshrike79 5 days ago
Comment by kibwen 5 days ago
Comment by auxiliarymoose 5 days ago
Consider a server handling transactional requests, which submit jobs and get results from various background workers, which broadcast change events to remote observers.
This is straightforward to set up with channels in Go. But I haven't seen an example of this type of workload using structured concurrency.
Comment by pornel 5 days ago
Channels communicating between persistent workers are fine when you need decoupled asynchronous operation like that. However, channels and detached coroutines are less appropriate in a bunch of other situations, like fork-join, data parallelism, cancellation of task trees, etc. You can still do it, but you're responsible for adding that structure, and ensuring you don't forget to wait for something, don't forget to cancel something.
Comment by auxiliarymoose 4 days ago
So at least those are a subset of Go's concurrency model.
Comment by pornel 4 days ago
That's why the article about structured concurrency compared it to goto. Everything is a subset of goto. It can do everything that structured programming can do, and more! With goto you can implement your own conditions, switches, loops, and everything else.
The problem is not the lack of power, but lack of enforced structure. You can implement fork-join, but an idiomatic golang implementation won't stop you from forking and forgetting to join.
Another aspect of it is not really technical, but the conventions that fell out of what the language offers. It's just way more common to DIY something custom from a couple of channels, even if it could be done with some pre-defined standard pattern. To me, this makes understanding the behavior of golang programs harder, because instead of seeing something I already know, like list.par_iter().map().collect(), I need to recognize such behavior across a larger block of code, and think twice about whether each channel-goroutine dance properly handles cancellations, thread pool limits, recursive dependencies, whether everything is correctly read-only/atomic/locked, and so on.
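For reference, the kind of pre-packaged pattern being contrasted here looks like this with the rayon crate: fork-join data parallelism where the join is implicit in collect(), so there's nothing to forget to wait for:

    use rayon::prelude::*;

    fn main() {
        let squares: Vec<u64> = (1..1_001u64)
            .into_par_iter()  // fan the work out across a worker pool
            .map(|x| x * x)   // runs in parallel
            .collect();       // joins: everything is finished before this returns
        println!("{}", squares.iter().sum::<u64>());
    }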
Comment by yxhuvud 5 days ago
Comment by kristianp 5 days ago
Comment by auxiliarymoose 5 days ago
Comment by macintux 5 days ago
Erlang programmers might disagree with you there.
Comment by kibwen 5 days ago
But distributed systems are hard. If your system isn't inherently distributed, then don't rush towards a model of concurrency that emulates a distributed system. For anything on a single machine, prefer structured concurrency.
Comment by throwawaymaths 5 days ago
the biggest bugbear for concurrent systems is mutable shared data. by inherently being distributable you basically "give up on that" so for concurrent erlang systems you ~mostly don't even try.
if for no other reason than that erlang is saner than go for concurrency
like goroutines aren't inherently cancellable, so you see go programmers build out the kludgey context to handle those situations and debugging can get very tricky
Comment by Zambyte 5 days ago
Comment by maherbeg 4 days ago
Really the only thing I found difficult is finding the concrete implementation of an interface when the interface is defined close to where it is used, and when interfaces are duplicated everywhere.
Comment by tialaramex 5 days ago
I can't figure out what the author is envisioning here for Rust.
Maybe they actually think that if they make a pointer to some local variable and then return the pointer, that's somehow allocating heap? It isn't: that local variable was on the stack, so when you return it's gone, invalidating your pointer. But Rust is OK with the existence of invalid pointers; after all, safe Rust can't dereference any pointers, and unsafe Rust declares that the programmer has taken care to ensure any pointers being dereferenced are valid (which this pointer to a long-dead variable is not).
[If you run a new enough Rust I believe Clippy now warns that this is a bad idea, because it's not illegal to do this, but it's almost certainly not what you actually meant]
Or maybe in their mind, Box<Goose> is "a pointer to a struct" and so somehow a function call Box::new(some_goose) is "implicit" allocation, whereas the function they called in Zig to allocate memory for a Goose was explicit ?
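For the first guess (returning a pointer to a local), a small sketch of what that looks like, assuming that's what the author meant:

    // Returning a raw pointer to a stack local. Creating the dangling pointer
    // compiles fine; it's only dereferencing it (which requires `unsafe`) that
    // would be undefined behavior. Per the above, Clippy now warns about this.
    fn dangling() -> *const i32 {
        let x = 42;
        &x as *const i32
    }

    fn main() {
        let p = dangling();
        println!("{:p}", p); // printing the address is fine; using *p would not be
    }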
Comment by saghm 5 days ago
Comment by dmoy 5 days ago
Comment by vips7L 5 days ago
The options I've seen so far are: OCaml, D, Swift, Nim, Crystal, but none of them seem to have been able to capture a significant market.
Comment by metaltyphoon 5 days ago
Comment by gf000 5 days ago
Comment by metaltyphoon 5 days ago
Comment by vips7L 4 days ago
Java’s type system isn’t as strong as it could be either. It is still lacking proper compile time support for null and there’s been no investment in making error handling better. I’ve written it every day for 10 years and the type system definitely doesn’t help you write correct programs.
Comment by metaltyphoon 2 days ago
Idk how up to date you are in .NET but so you can have an idea how trivial it is in C#:
    echo 'Console.WriteLine("hello world");' >> app.cs
    dotnet publish app.cs
That's it. By default C# compiles natively.
Comment by gf000 4 days ago
> the type system definitely doesn’t help you write correct programs.
It surely helps significantly. You are just looking for even more from the type system, but that's another (fair) statement to make.
Comment by vips7L 4 days ago
However, I don’t think that shields Java from its inability to make the language better. We still don’t have checked nulls and at this rate, even though there’s a draft JEP, I am not sure we will get them within this decade. The community still blindly throws unchecked exceptions because checked exceptions have received no investment to make them easy to work with.
The point of this thread is that people do want that. They want a natively compiled language (by default), that has checked nulls, errors represented in the type system, and has a GC.
Comment by gf000 4 days ago
As for unchecked exceptions, that may be a bit of an "unreasonable ask". The only languages that properly solve the problem are languages with effect types, which are an active research area. Every other language has either FP-like error values, or just unchecked exceptions (and there are terrible "solutions" like errno and whatever Go does), or most likely both. E.g. Haskell will also throw exceptions; not everything is encoded as a value.
In my opinion both is the most reasonable approach, when you expect an error case, encode it as the return type (e.g. parsing an Integer is expected to fail). But system failures should not go there, exceptions solve it better (stuff like the million kind of connection/file system issues).
Comment by vips7L 4 days ago
Comment by gf000 4 days ago
I believe there was a proposal to incorporate it into the switch expression? That may make it slightly too complex though, with null handling and pattern matching.
Comment by vips7L 3 days ago
    A a;
    try {
        a = someThrowingFn();
    } catch (AException ex) {
        throw new IllegalStateException(ex); // not possible
    }

becomes

    var a = try! someThrowingFn();

or with Brian's proposal:

    var a = switch (someThrowingFn()) {
        case A anA -> anA;
        case throws AException ex -> throw new IllegalStateException(ex);
    }

...still a bit verbose and funky
You should check out Kotlin's proposal for error unions, I think it's pretty good and prevents a lot of boiler plate associated with results/exceptions: https://github.com/Kotlin/KEEP/blob/main/proposals/KEEP-0441.... They propose a similar construct to try! with !! like they have for nullable types.
Comment by vips7L 4 days ago
Comment by mrsmrtss 4 days ago
Comment by vips7L 4 days ago
Comment by mrsmrtss 4 days ago
Comment by vips7L 4 days ago
Comment by Yoric 5 days ago
Comment by throwaway894345 5 days ago
Also, at least at the time, the community was really hostile, but that was true of the C++, Ada, and Java communities as well. But I think those guys have chilled out, so maybe OCaml has too?
Comment by Yoric 5 days ago
So far, I like what I've seen.
Comment by myaccountonhn 5 days ago
Its a really nice language
Comment by yawaramin 5 days ago
$ dune init project my-project
$ dune build
That's it, now you have a compiling project and can start hacking.
Comment by dmoy 5 days ago
Comment by Yoric 5 days ago
Comment by dmoy 5 days ago
Though as a side note I see no open gitlab positions mentioning ocaml. Lot of golang and ruby. Whereas jane street kinda always has open ocaml positions advertised. They even hire PL people for ocaml
Comment by zozbot234 5 days ago
Comment by Yoric 5 days ago
I haven't looked at benchmarks, though, so take this with a pinch of salt.
Comment by tayo42 5 days ago
Comment by Philpax 5 days ago
Comment by throwaway894345 5 days ago
* thiserror: I spend ridiculous and unpredictable amounts of time debugging macro expansions
* manually implementing `Error`, `From`, etc traits: I spend ridiculous though predictable amounts of time implementing traits (maybe LLMs fix this?)
* anyhow: this gets things done, but I'm told not to expose these errors in my public API
Beyond these concerns, I also don't love enums for errors because it means adding any new error type will be a breaking change. I don't love the idea of committing to that, but maybe I'm overthinking?
And when I ask these questions to various Rust people, I often get conflicting answers and no one seems to be able to speak with the authority of canon on the subject. Maybe some of these questions have been answered in the Rust Book since I last read it?
By contrast, I just wrap Go errors with `fmt.Errorf("opening file `%s`: %w", filePath, err)` and handle any special error cases with `errors.As()` and similar and move on with life. It maybe doesn't feel _elegant_, but it lets me get stuff done.
Comment by Yoric 5 days ago
What `thiserror` or manually implementing `Error` buys you is the ability to actually do something about higher-level errors. In Rust design, not doing so in a public facing API is indeed considered bad practice. In Go, nobody seems to care about that, which of course makes code easier to write, but catching errors quickly becomes stringly typed. Yes, it's possible to do it correctly in Go, but it's ridiculously complicated, and I don't think I've ever seen any third-party library do it correctly.
That being said, I agree that manually implementing `Error` in Rust is way too time-consuming. There's also the added complexity of having to use a third-party crate to do what feels like basic functionality of error-handling. I haven't encountered problems with `thiserror` yet.
> Beyond these concerns, I also don't love enums for errors because it means adding any new error type will be a breaking change. I don't love the idea of committing to that, but maybe I'm overthinking?
If you wish to make sure it's not a breaking change, mark your enum as `#[non_exhaustive]`. Not terribly elegant, but that's exactly what this is for.
Hope it helped a bit :)
Comment by dmoy 5 days ago
Yea this is exactly what I'm talking about. It's doable in golang, but it's a little bit of an obfuscated pain, few people do it, and it's easy to mess up.
And yes on the flip side it's annoying to exhaustively check all types of errors, but a lot of the times that matters. Or at least you need an explicit categorization that translates errors from some dep into retryable vs not, SLO burning vs not, surfaced to the user vs not, etc. In golang the tendency is to just slap a "if err != nil { return nil, fmt.Errorf" forward in there. Maybe someone thinks to check for certain cases of upstream error, but it's reaaaallly easy to forget one or two.
Comment by Yoric 4 days ago
In Go, `if err != nil { return nil, fmt.Errorf(...) }` is considered handling an error.
In Rust, the equivalent `.context(...)?` is considered passing an error. Handling it is about finding out what happened and doing something about it.
Comment by throwaway894345 5 days ago
In Go we just use errors.Is() or errors.As() to check for specific error values or types (respectively). It’s not stringly typed.
> If you wish to make sure it's not a breaking change, mark your enum as `#[non_exhaustive]`. Not terribly elegant, but that's exactly what this is for.
That makes sense. I think the main grievance with Rust’s error handling is that, while I’m sure there is the possibility to use anyhow, thiserror, non_exhaustive, etc in various combinations to build an overall elegant error handling system, that system isn’t (last I checked) canon, and different people give different, sometimes contradictory advice.
Comment by Yoric 4 days ago
errors.Is() works only if the error is a singleton.
errors.As() works only if the developer has defined their own error implementing both `Error() string` (which is part of the `error` interface) and either `Unwrap() error` or `Unwrap() []error` (neither of which is part of the `error` interface). Implementing `Unwrap()` is annoying and not automatable, to the point that I've never seen any third-party library doing it correctly.
So, in my experience, very quickly, to catch a specific error, you end up calling `Error()` and comparing strings. In fact, if my memory serves, that's exactly what `assert` does.
> I think the main grievance with Rust’s error handling is that, while I’m sure there is the possibility to use anyhow, thiserror, non_exhaustive, etc in various combinations to build an overall elegant error handling system, that system isn’t (last I checked) canon, and different people give different, sometimes contradictory advice.
Yeah, this is absolutely a problem in Rust. I _think_ it's moving slowly in the right direction, but I'm not holding my breath.
Comment by snuxoll 5 days ago
Is it a new error condition that downstream consumers want to know about so they can have different logic? Add the enum variant. The entire point of this pattern is to do what typed exceptions in Java were supposed to do, give consuming code the ability to reason about what errors to expect, and handle them appropriately if possible.
If your consumer can't be reasonably expected to recover? Use a generic failure variant, bonus points if you stuff the inner error in and implement std::Error so consumers can get the underlying error by calling .source() for debugging at least.
> By contrast, I just wrap Go errors with `fmt.Errorf("opening file `%s`: %w", filePath, err)` and handle any special error cases with `errors.As()` and similar and move on with life. It maybe doesn't feel _elegant_, but it lets me get stuff done.
Nothing stopping you from doing the same in Rust, just add a match arm with a wildcard pattern (_) to handle everything but your special cases.
In fact, if you suspect you are likely to add additional error variants, the `#[non_exhaustive]` attribute exists explicitly to handle this. It will force consumers to provide a match arm with a wildcard pattern to prevent additions to the enum from causing API incompatibility. This does come with some other limitations, so RTFM on those, but it does allow you to add new variants to an Error enum without requiring a major semver bump.
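To make that concrete, here's a rough sketch of the shape being described, using the thiserror crate mentioned upthread (the error type and its variants are made up for illustration):

    use thiserror::Error;

    #[derive(Debug, Error)]
    #[non_exhaustive] // new variants can be added later without a breaking change
    pub enum FetchError {
        #[error("resource not found: {0}")]
        NotFound(String),
        // Generic fallback variant; consumers can still reach the underlying
        // error via source() for debugging.
        #[error("request failed")]
        Other(#[from] std::io::Error),
    }

    fn handle(err: FetchError) {
        match &err {
            FetchError::NotFound(name) => eprintln!("missing: {name}"),
            // #[non_exhaustive] forces downstream crates to keep a wildcard arm
            _ => eprintln!("unexpected error: {err}"),
        }
    }

    fn main() {
        handle(FetchError::NotFound("users/42".into()));
    }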
Comment by iterance 5 days ago
However, I wouldn't recommend it. Breakage over errors is not necessarily a bad thing. If you need to change the API for your errors, and downstreams are required to have generic cases, they will be forced to silently accept new error types without at least checking what those new error types are for. This is disadvantageous in a number of significant cases.
Comment by grufkork 5 days ago
On that topic, I've looked some at building games in Rust but I'm thinking it mostly looks like you're creating problems for yourself? Using it for implementing performant backend algorithms and containerised logic could be nice though.
Comment by saghm 5 days ago
Comment by thijsr 5 days ago
You can annotate your error enum with #[non_exhaustive], then it will not be a breaking change if you add a new variant. Effectively, you enforce that anybody doing a match on the enum must implement the "default" case, i.e. that nothing matches.
Comment by written-beyond 5 days ago
Comment by simgt 5 days ago
Comment by evanmoran 5 days ago
My hope is they will see these repeated pain points and find something that fits the error/result/enum issues people have. (Generics will be harder, I think)
Comment by pa7ch 5 days ago
I see the desire to avoid mucking with control flow so much but something about check/handle just seemed so elegant to me in semi-complex error flows. I might be the only one who would have preferred that over accepting generics.
I can't remember at this point because there were so many similar proposals but I think there was a further iteration of check/handle that I liked better possibly but i'm obviously not invested anymore.
Comment by Rikudou 5 days ago
I kinda got used to it eventually, but I'll never ever consider not having enums a good thing.
Comment by mixedCase 5 days ago
Comment by scythmic_waves 5 days ago
Though I think it's more of a hobby language. The last commit was > 1 year ago.
Comment by sylens 5 days ago
Comment by mrsmrtss 4 days ago
Comment by qudat 5 days ago
Comment by gf000 5 days ago
Comment by hgs3 5 days ago
Comment by gf000 5 days ago
Meanwhile Go's is just multiple value-returns with no checks whatsoever and you can return both a valid value and an error.
Comment by beautron 5 days ago
I appreciate that Go tends to avoid making limiting assumptions about what I might want to do with it (such as assuming I don't want to return a value whenever I return a non-nil error). I like that Go has simple, flexible primitives that I can assemble how I want.
Comment by gf000 5 days ago
Also, just let the use site pass in (out variable, pointer, mutable object, whatever your language has) something to store partial results.
Comment by Orygin 5 days ago
It's not a convention in Go, so it's not breaking any expectations
Comment by lock1 5 days ago
Comment by oncallthrow 5 days ago
Comment by kachapopopow 5 days ago
the most odd one probably being 'const expected = [_]u32{ 123, 67, 89, 99 };'
and the 2nd most being the word 'try' instead of just ?
the 3rd one would be the imports
and `try std.fs.File.stdout().writeAll("hello world!\n");` is not really convincing either for a basic print.
Comment by usrnm 5 days ago
Comment by dannersy 5 days ago
For example, Python's syntax is quite nice for the most part, but I hate indentation being syntax. I like braces for scoping, I just do. Rust exists in both camps for me; I love matching with Result and Option, but lifetime syntax confuses me sometimes. Not everyone will agree, they are opinions.
Comment by kachapopopow 5 days ago
Comment by the__alchemist 5 days ago
Comment by bigstrat2003 5 days ago
Comment by kibwen 5 days ago
No, this is a wild claim that shows you've either never written async Rust or never written heavily templated C++. Feel free to give code examples if you want to suggest otherwise.
Comment by rustystump 5 days ago
But for real the ratings for me stem from how much arcane symbology i must newly memorize. I found rust to be up there but digestible. The thought of c++ makes me want to puke but not over the syntax.
Comment by kachapopopow 5 days ago
    template<auto V>
    concept non_zero = (V != 0);

    template<typename T>
    concept arithmetic = std::is_arithmetic_v<T>;

    template<arithmetic T>
    requires non_zero<T{42}>
    struct complicated {
        template<auto... Values>
        using nested_alias = std::tuple<
            std::integral_constant<decltype(Values), Values>...,
            std::conditional_t<(Values > 0 && ...), T, std::nullptr_t>
        >;

        template<typename... Ts>
        static constexpr auto process() {
            return []<std::size_t... Is>(std::index_sequence<Is...>) {
                return nested_alias<(sizeof(Ts) + Is)...>{};
            }(std::make_index_sequence<sizeof...(Ts)>{});
        }
    };

I most definitely agree.
Comment by usrnm 5 days ago
Comment by bnolsen 1 day ago
Comment by von_lohengramm 5 days ago
All control flow in Zig is done via keyword
Comment by dwb 5 days ago
Comment by kachapopopow 5 days ago
Comment by int_19h 5 days ago
If you mean C-style declarations, the fact that tools such as https://linux.die.net/man/1/cdecl even exist to begin with shows what's wrong with it.
Comment by kachapopopow 5 days ago
<fn> <generic> <name>(<type/argument>[:] <type/argument> [(->/:) type]
[import/use/using] (<package>[/|:|::|.]<type> | "file") (ok header files are a relic of the past I have to admit that)
I tried writing zig and as someone who has pretty much written in every commonly used language it just felt different enough where I kept having to look up the syntax.
Comment by dwb 5 days ago
Comment by gf000 5 days ago
Comment by throwawaymaths 5 days ago
constant array with u32, and let the compiler figure out how many of em there are (i reserve the right to change it in the future)
Comment by bnolsen 1 day ago
Comment by throwaway894345 5 days ago
Go isn't like C here: with Go, you can actually fit the entire language in your head. Most of us who think we have fit C in our heads will still stumble on endless cases where we didn't realize X was actually UB or whatever. I wonder how much C's reputation for simplicity is an artifact of its long proximity to C++?
Comment by kanbankaren 5 days ago
Give an example of UB code that you have committed in real life, not from blogs. I am genuinely curious.
Comment by gf000 5 days ago
Comment by ojosilva 5 days ago
Mutable globals are easy in Zig (presented as freedom, not as "you can now write data races.")
Runtime checks you disable in release builds are "highly pragmatic," with no mention of what happens when illegal behavior only manifests in production.
The standard library having "almost zero documentation" is mentioned but not weighted as a cost the way Go's boilerplate or Rust's learning curve are.
The RAII critique is interesting but also somewhat unfair because Rust has arena allocators too, and nothing forces fine-grained allocation. The difference is that Rust makes the safe path easy and the unsafe path explicit whereas Zig trusts you to know what you're doing. That's a legitimate design, hacking-a!
The article frames Rust's guardrails as bureaucratic overhead while framing Zig's lack of them as liberation, which is grading on a curve. If we're cataloging trade-offs honestly
> you control the universe and nobody can tell you what to do
...that cuts both ways...
Comment by ekropotin 5 days ago
At first glance you can just use a static variable of a type supporting interior mutability - RefCell, Mutex, etc…
Comment by umanwizard 5 days ago
They're not.
    fn main() {
        unsafe {
            COUNTER += 1;
            println!("COUNTER = {}", COUNTER);
        }
        unsafe {
            COUNTER += 10;
            println!("COUNTER = {}", COUNTER);
        }
    }

Global mutable variables are as easy in Rust as in any other language. Unlike other languages, Rust also provides better things that you can use instead.
Comment by raggi 5 days ago
    use std::sync::Mutex;

    static LIST: Mutex<Vec<String>> = Mutex::new(Vec::new());

    fn main() -> Result<(), Box<dyn std::error::Error>> {
        LIST.lock()?.push("hello world".to_string());
        println!("{}", LIST.lock()?[0]);
        Ok(())
    }

Comment by hu3 4 days ago
It doesn't increment anything for starters. The example would be more convoluted if it did the same thing.
And strings in Rust always deliver the WTFs I need on a Friday:

    "hello world".to_string()

Comment by raggi 4 days ago
    use std::sync::Mutex;

    fn main() -> Result<(), Box<dyn std::error::Error>> {
        static PEDANTRY: Mutex<u64> = Mutex::new(0);
        *PEDANTRY.lock()? += 1;
        println!("{}", PEDANTRY.lock()?);
        Ok(())
    }

Comment by hu3 3 days ago
And declaring a static variable inside a function, even if in main, smells.
Comment by raggi 3 days ago
Comment by umanwizard 2 days ago
    static mut COUNTER: u32 = 0;

(at top-level)

If on 2024 edition, you will additionally need

    #![allow(static_mut_refs)]

Comment by raggi 2 days ago
Comment by masklinn 5 days ago
And that’s where a number of people blow a gasket.
Comment by masklinn 5 days ago
Since 1.80 the vast majority of uses are a LazyLock away.
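For instance, a minimal sketch of that (std only, no unsafe):

    use std::sync::LazyLock;

    // Initialized lazily, on first access, exactly once.
    static GREETINGS: LazyLock<Vec<String>> = LazyLock::new(|| {
        vec!["hello world".to_string()]
    });

    fn main() {
        println!("{}", GREETINGS.first().unwrap()); // deref coercion gives us Vec's methods
        println!("{} item(s)", GREETINGS.len());
    }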
Comment by written-beyond 5 days ago
Comment by gaanbal 5 days ago
Comment by vibe_assassin 5 days ago
Comment by vegabook 5 days ago
Comment by xpe 5 days ago
In a sense, it is a powerful kind of freedom to choose a language that protects us from the statistically likely blunders. I prefer a higher-level kind of freedom -- one that provides peace of mind from various safety properties.
This comment is philosophical -- interpret and apply it as you see fit -- it is not intended to be interpreted as saying my personal failure modes are the same as yours. (e.g. Maybe you don't mind null pointer exceptions in the grand scheme of things.)
Random anecdote: I still have a fond memory of a glorious realization in Haskell after a colleague told me "if you design your data types right, the program just falls into place".
Comment by aw1621107 5 days ago
There's a similar quote from The Mythical Man Month [0, page 102]:
> Show me your flowchart and conceal your tables, and I shall continue to be mystified. Show me your tables, and I won't usually need your flowcharts; they’ll be obvious.
And a somewhat related one from Linus [1]:
> I will, in fact, claim that the difference between a bad programmer and a good one is whether he considers his code or his data structures more important. Bad programmers worry about the code. Good programmers worry about data structures and their relationships.
[0]: https://www.cs.cmu.edu/afs/cs/academic/class/15712-s19/www/p...
Comment by discreteevent 5 days ago
A programming language forces a culture on everybody in the project - it's not just a personal decision like your example.
Comment by xpe 5 days ago
When looking at various programming languages, we see a combination of constraints, tradeoffs, surrounding cultures, and nudges.
For example in Rust, the unsafe capabilities are culturally discouraged unless needed. Syntax-wise it requires extra ceremony.
Comment by saghm 5 days ago
Comment by Klonoar 5 days ago
This is perhaps somewhat natural; people like and want to be good at things. Where you fall on the trade off is up to you.
Comment by bnolsen 1 day ago
Comment by nixpulvis 5 days ago
Comment by hu3 4 days ago
And the compiler had nothing to say about it. "Carry on, this is perfectly fine rust code that might crash your app with a panic if left unchecked, no biggie. LGTM" - rust compiler
Comment by nixpulvis 4 days ago
This lies completely on the developers.
Comment by hu3 4 days ago
At the very least, something that brings the application down when the dev assumption fails should be called a much more dangerous word than "unwrap".
So yes, the language has failed there.
"You're holding it wrong" doesn't hold up when one of the language's touted characteristics is having a bitchy compiler designed to save devs from their own stupidity.
Comment by aw1621107 4 days ago
The thing is that Rust's promises are more tightly scoped to very specific types of (mis)behavior. I don't believe it has ever claimed to prevent any and all types of stupidity, let alone ones that have non-stupid uses.
Comment by dismalaf 5 days ago
Odin vs Rust vs Zig would be more apt, or Go vs Java vs OCaml or something...
Comment by forgotpwd16 5 days ago
Comment by gf000 5 days ago
Comment by pm90 5 days ago
Comment by int_19h 5 days ago
Comment by BenGosub 5 days ago
Comment by oconnor663 3 days ago
I love this. I'm gonna steal this :)
> I’m not the first person to pick on this particular Github comment, but it perfectly illustrates the conceptual density of Rust:
The overall point here is fair, but I think it's important to clarify that (iiuc) this comment is talking about a "soundness hole". Soundness holes are cases where there's a bug in the compiler or in a library that allows someone to commit UB without writing `unsafe`. Given the goals of Rust, it doesn't matter how ungodly complicated and contrived the example is. If it produces UB without `unsafe`, then it's a bug that needs to be fixed. In practice, that means a lot of issue threads about soundness involve mind-numbing code samples that mash different features together in unintuitive ways.
But that's a good thing! No one's saying you'll ever need to look at code like this in the wild. They're saying that no matter how hard you (or your coworkers or your dependencies) try, Rust should never fail to protect memory safety in safe code.
Comment by rishabhaiover 5 days ago
This is exactly why I find Go to be an excellent language. Most of the time, Go is the right tool.
Rust doesn't feel like a tool. Ceremonial yet safe and performant.
Comment by auxym 5 days ago
Sure, you can fit all of C in your head, including all the obscure footguns that can lead to UB: https://gist.github.com/Earnestly/7c903f481ff9d29a3dd1
And other fun things like aliasing rules and type punning.
Comment by legobmw99 4 days ago
Comment by Ycros 5 days ago
Comment by eliasdorneles 3 days ago
Comment by Aperocky 5 days ago
Out of all the languages I've been developing in over the past few months (Go, Rust, Python, TypeScript), Rust is the one where the LLM has the least churn/problems in terms of producing correct and functional code, given a problem of similar complexity.
I think this outside factor will eventually win more usage for Rust.
Comment by damslunk 5 days ago
Like rust seems particularly well suited for an agent based workflow, in that in theory an agent with a task could keep `cargo check`-ing its solutions, maybe pulling from docs.rs or source for imported modules, and get to a solution that works with some confidence (assuming the requirements were well defined/possible etc etc).
I've had a mixed bag of an experience trying this with various rust one off projects. It's definitely gotten me some prototype things working, but the evolving development of rust and crates in the ecosystem means there's always some patchwork to get things to actually compile. Anecdotally I've found that once I learned more about the problem/library/project I'll end up scrapping or rewriting a lot of the LLM code. It seems pretty hard to tailor/sandbox the context and workflow of an agent to the extent that's needed.
I think the Bun acquisition by Anthropic could shift things too. Wouldn't be surprised if the majority of code generated/requested by users of LLM's is JS/TS, and Anthropic potentially being able to push for agentic integration with the Bun runtime itself could be a huge boon for Bun, and maybe Zig (which Bun is written in) as a result? Like it'd be one thing for an agent to run cargo check, it'd be another for the agent to monitor garbage collection/memory use while code is running to diagnose potential problems/improvements devs might not even notice until later. I feel like I know a lot of devs who would never touch any of the langs in this article (thinking about memory? too scary!) and would love to continue writing JS code until they die lol
Comment by skywhopper 5 days ago
Comment by thefox 5 days ago
Comment by archargelod 5 days ago
Comment by forgotpwd16 5 days ago
Thus not a general article. For some criteria Python will be a good Rust alternative.
>Can I have a #programming language/compiler similar to #Rust, but with less syntactic complexity?
That's a good question. But considering Zig is manually memory managed and Crystal/Go are garbage collected, you sidestep Rust's strongest selling point.
Comment by scotty79 5 days ago
- Can't `for (-1..1) {`. Must use `while` instead.
- if you allocated something inside of a block and you want it to keep existing outside of a block `defer` won't help you to deallocate it. I didn't find a way to defer something till the end of the function.
- adding a variable containing -1 to a usize variable is cumbersome. You are better off running everything with isize and converting to usize as the last operation wherever you need it.
- language evolved a bunch and LLMs are of very little help.
Comment by scotty79 5 days ago
Deallocating the wrong thing, or the right thing too soon, has bitten me in the ass so much already that I feel a craving for destructors.
Comment by riku_iki 1 day ago
Comment by gt0 5 days ago
Comment by frobisher 5 days ago
Which can be a worthwhile cost if the benefits of speed and security are needed. But I think it's certainly a cognitive cost.
Comment by notnullorvoid 5 days ago
Comment by Havoc 5 days ago
1) Complementary tools. I picked python and rust for obvious reasons given their differences
2) Longevity. Rust in the kernel was important to me because it signaled this isn't going anywhere. Same for Rust invading the tool stacks of various other languages, and the "rewrite everything in Rust" trend. I know it irritates people but for me it's a positive signal on it being worth investing time into.
Comment by gzell 5 days ago
Comment by truth_seeker 5 days ago
Comment by bird0861 4 days ago
Comment by kennykartman 5 days ago
It was fun to read, but I don't see anything new here, and I don't agree too much.
Comment by jasfi 5 days ago
Comment by teleforce 5 days ago
https://github.com/rust-lang/rust/issues/68015#issuecomment-...
Wow, Rust does take programming complexity to another level.
Everything, including programming languages, needs to be as simple as possible but no simpler. I'm of the opinion that most of the computing and memory resource complexity should be handled and abstracted by the OS, for example via address space isolation [1].
The author should try the D language, which is the Goldilocks of complexity and metaprogramming compared to Go, Rust and Zig [2].
[1] Linux address space isolation revived after lowering performance hit (59 comments):
https://news.ycombinator.com/item?id=44899488
[2] Ask HN: Why do you use Rust, when D is available? (255 comments):
Comment by gf000 5 days ago
Comment by srameshc 5 days ago
Comment by ux266478 5 days ago
> What is the dreaded UB? I think the best way to understand it is to remember that, for any running program, there are FATES WORSE THAN DEATH. If something goes wrong in your program, immediate termination is great actually!
This has nothing to do with UB. UB is what it says on the tin, it's something for which no definition is given in the execution semantics of the language, whether intentionally or unintentionally. It's basically saying, "if this happens, who knows". Here's an example in C:
    int x = 555;
    short *l = (short *)&x;  /* access an int object through an unrelated type */
    x = 123;
    printf("%d\n", *l);
This is a violation of the strict aliasing rule, which is undefined behavior. Unless it's compiled with no optimizations, or with -fno-strict-aliasing, which effectively disables this rule, the compiler is "free to do whatever it wants". Effectively though, it'll just print out 555 instead of 123. All undefined behavior is just stuff like this: the compiler's output deviates from what you expected, and only maybe. You can imagine this kind of thing gets rather tricky with more aggressive optimizations, but this potential deviation is all that occurs.

Race conditions, silent bugs, etc. can occur as the result of the compiler mangling your code thanks to UB, but so can crashes and a myriad of other things. It's also possible UB is completely harmless, or even beneficial. It's really hard to reason about that kind of thing though. Optimizing compilers can be really hard to predict across a huge codebase, especially if you aren't a compiler dev yourself. That unpredictability is why we say it's bad. If you're compiling code with something like TCC instead of clang, it's a completely different story.
That's it. That's all there is to UB.
Comment by iainmerrick 5 days ago
You don’t think that’s pretty bad?
Comment by ux266478 5 days ago
Comment by publicdebates 5 days ago
Interestingly enough, and only semi related, I had to use volatile for the first time ever in my latest project. Mainly because I was writing assembly that accessed memory directly, and I wanted to make sure the compiler didn't optimize away the variable. I think that's maybe the last C keyword on my bucket list.
Comment by mirashii 5 days ago
People are taught it's very bad because otherwise they do exactly this, which is the problem. What your compiler does here may change from invocation to invocation, due to seemingly unrelated flags, small perturbations in unrelated code, or many other things. This approach encourages accepting UB in your program. Code that invokes UB is incorrect, full stop.
Comment by publicdebates 5 days ago
Comment by mirashii 5 days ago
But do they? Where?
More likely, you mean that a particular compiler may say "while the standard says this is UB, it is not UB in this compiler". That's something wholly different, because you're no longer invoking UB.
Comment by ux266478 5 days ago
Comment by ux266478 5 days ago
That's not true at all, who taught you that? Think of it like this: signed integer over/underflow is UB. All addition operations over ints are potentially invoking UB.

    int add (int a, int b) { return a + b; }

So this is incorrect code by that metric, which is clearly absurd.
Compilers explicitly provide you the means to disable optimizations in a granular way over undefined behavior precisely because a lot of useful behavior is undefined, but compilation units are sometimes too complex to reason about how the compiler will mangle it. -fno-strict-aliasing doesn't suddenly make pointer aliasing defined behavior.
We have compiler behavior for incorrect code, and it's refusing to compile the code in the first place. Do you think it's just a quirky oversight that UB triggers a warning at most? The entire point of compilers having free rein over UB was so they could implement platform-specific optimizations in its place. UB isn't arbitrary.
Comment by mirashii 5 days ago
No, it just protects you from a valid but unexpected optimization to your incorrect code. It's even spelled out clearly in the docs: https://www.gnu.org/software/c-intro-and-ref/manual/html_nod...
"Code that misbehaves when optimized following these rules is, by definition, incorrect C code."
> We have compiler behavior for incorrect code, and it's refusing to compile the code in the first place
This isn't and will never be true in C because whether code is correct can be a runtime property. That add function defined above isn't incorrect on its own, but when combined with code that at runtime calls it with values that overflow, it is incorrect.
Comment by ux266478 4 days ago
Comment by mirashii 5 days ago
Potentially invoking and invoking are not the same.
Comment by kibwen 5 days ago
Careful. It's not just "consult your compiler", because the behavior of a given compiler on code containing UB is also allowed to vary based on specific compiler version, and OS, and hardware, and the phase of the moon.
Comment by oncallthrow 5 days ago
> It seems the Go development team has a high bar for adding features to the language. The end result is a language that forces you to write a lot of boilerplate code to implement logic that could be more succinctly expressed in another language.
Being able to implement logic more succinctly is not always a good thing. Take error handling syntactic sugar for example. Consider these two snippets:
    let mut file = File::create("foo.txt")?;

and:

    f, err := os.Create("filename.txt")
    if err != nil {
        return fmt.Errorf("failed to create file: %w", err)
    }

The first code is more succinct, but worse: there is no context added to the error (good luck debugging!). Sometimes, being forced to write code in a verbose manner makes your code better.
Comment by howenterprisey 5 days ago
Comment by snuxoll 5 days ago
If you want to add 'proper' error types, wrapping them is just as difficult in Go and Rust (needing to implement `Error` in Go or `std::Error` in Rust). And, while we can argue about macro magic all day, the `thiserror` crate makes said boilerplate a non-issue and allows you to properly propagate strongly-typed errors with context when needed (and if you're not writing library code to be consumed by others, `anyhow` helps a lot too).
Comment by never_inline 4 days ago
Comment by snuxoll 4 days ago
In practice, this ends up with several issues (and I'm just as guilty of doing a bunch of them when I'm writing code not intended for public consumption, to be completely fair).
fmt.Errorf is stupid easy to use. There's a lot of Go code out there that just doesn't use anything else, and we really want to make sure we wrap errors to provide 'context' since there's no backtraces in errors (and nobody wants to force consuming code to pay that runtime cost for every error, given there's no standard way to indicate you want it).
errors.New can be used to create very basic errors, but since it gives you a single instance of a struct implementing `error` there's not a lot you can do with it.
The signature of a function only indicates that it returns `error`; we have to rely on the docs to tell users what specific errors they should expect. Now, to be fair, this is an issue for languages that use exceptions too - checked exceptions in Java notwithstanding.
Adding a new error type that should be handled means that consumers need to pay attention to the API docs and/or changelog. The compiler, linters, etc don't do anything to help you.
All of this culminates to an infuriating, inconsistent experience with error handling.
Comment by oncallthrow 5 days ago
The proof is in the pudding, though. In my experience, working across Go codebases in open source and in multiple closed-source organizations, errors are nearly universally wrapped and handled appropriately. The same is not true of Rust, where in my experience ? (and indeed even unwrap) reign supreme.
Comment by kstrauser 5 days ago
I have to say that's the first time I've heard someone say Rust doesn't have enough return types. Idiomatically, possible error conditions would be wrapped in a Result. `foo()?` is fantastic for the cases where you can't do anything about it, like you're trying to deserialize the user's passed-in config file and it's not valid JSON. What are you going to do there that's better than panicking? Or if you're starting up and can't connect to the configured database URL, there's probably not anything you can do beyond bombing out with a traceback... like `?` or `.unwrap()` does.
For everything else, there're the standard `if foo.is_ok()` or matching on `Ok(value)` idioms, when you want to catch the error and retry, or alert the user, or whatever.
But ? and .unwrap() are wonderful when you know that the thing could possibly fail, and it's out of your hands, so why wrap it in a bunch of boilerplate error handling code that doesn't tell the user much more than a traceback would?
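For instance, a rough sketch of the "catch it and retry" case (untested, hypothetical helper):
use std::{fs::File, thread::sleep, time::Duration};

// Retry a fallible call a few times before giving up; the error only
// propagates once we've exhausted our attempts.
fn create_with_retry(path: &str) -> std::io::Result<File> {
    let mut attempts = 0;
    loop {
        match File::create(path) {
            Ok(file) => return Ok(file),
            Err(e) if attempts < 3 => {
                eprintln!("create failed ({e}), retrying...");
                attempts += 1;
                sleep(Duration::from_millis(100));
            }
            Err(e) => return Err(e),
        }
    }
}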
Comment by filleduchaos 5 days ago
`?` (i.e. the try operator) and `.unwrap()` do not do the same thing.
Comment by codys 5 days ago
As for the example you gave:
File::create("foo.txt")?;
If one added context, it would be File::create("foo.txt").context("failed to create file")?;
This is using eyre or anyhow (common choices for adding free-form context).If rolling your own error type, then
File::create("foo.txt").map_err(|e| format!("failed to create file: {e}"))?;
would match the Go code behavior. This would not be preferred though, as using eyre or anyhow or other error context libraries build convenient error context backtraces without needing to format things oneself. Here's what the example I gave above prints if the file is a directory: Error:
0: failed to create file
1: Is a directory (os error 21)
Location:
src/main.rs:7
Comment by bargainbin 5 days ago
It’s odd that the .unwrap() hack caused a huge outage at Cloudflare, and my first reaction was “that couldn’t happen in Go haha” but… it definitely could, because you can just ignore returned values.
But for some reason most people don’t. It’s like the syntax conveys its intent clearly: Handle your damn errors.
Comment by mh2266 5 days ago
And maybe not quite as standard, but thiserror if you don’t want a stringly-typed error?
Comment by fragmede 5 days ago
Comment by loeg 5 days ago
let mut file = File::create("foo.txt").context("failed to create file")?;
Of all the things I find hard to understand in Rust, this isn't one of them.
Comment by snuxoll 5 days ago
Comment by fragmede 5 days ago
Comment by loeg 5 days ago
If you reject the concept of a 'return on error-variant else unwrap' operator, that's fine, I guess. But I don't think most people get especially hung up on that.
Comment by filleduchaos 5 days ago
I don't understand this line of thought at all. "You have to learn the language's syntax to understand it!"...and so what? All programming language syntax needs to be learned to be understood. I for one was certainly not born with C-style syntax rattling around in my brain.
To me, a lot of the discussion about learning/using Rust has always sounded like the consternation of some monolingual English speakers when trying to learn other languages, right down to the "what is this hideous sorcery mark that I have to use to express myself correctly" complaints about things like diacritics.
Comment by snuxoll 5 days ago
If a function returns Result<T, E> in Rust, I have to provide an exhaustive match of all the cases, unless I use `.unwrap()` to get the success value (panicking on error), or use the `?` operator to return the error value (possibly converting it with an implementation of `std::convert::From`).
No more verbose than Go, from the consumer side. Though, a big difference is that match/if/etc are expressions and I can assign results from them, so it would look more like
let a = match do_thing(&foo) {
    Ok(res) => res,
    Err(e) => return Err(e),
};
instead of:
a, err := do_thing(foo)
if err != nil {
return err // (or wrap it with fmt.Errorf and continue the madness
// of stringly-typed errors, unless you want to write custom
// Error types which now is more verbose and less safe than Rust).
}
I use Go on a regular basis, error handling works, but quite frankly it's one of the weakest parts of the language. Would I say I appreciate the more explicit handling from both it and Rust? Sure, unchecked exceptions and constant stack unwinding to report recoverable errors wasn't a good idea. But you're not going to have me singing Go's praise when others have done it better.
Do not get me started on actually handling errors in Go, either. errors.As() is a terrible API to work around the lack of pattern matching in Go, and the extra local variables you need to declare to use it just add line noise.
Comment by edflsafoiewq 5 days ago
f = open('foo.txt', 'w')
is even more succinct, and the exception thrown on failure will not only contain the reason, but the filename and the whole backtrace to the line where the error occurred.
Comment by 9rx 5 days ago
try:
f = open('foo.txt', 'w')
except Exception as e:
raise NecessaryContext("important information") from e
Else your callers are in for a nightmare of a time trying to figure out why an exception was thrown and what to do with it. Worse, you risk leaking implementation details that the caller comes to depend on which will also make your own life miserable in the future.
Comment by tayo42 5 days ago
The exceptions from something like open are always pretty clear. Like, the file's not found, and here is the exact line of code and the entire call stack. What else do you want to know to debug?
Comment by 9rx 5 days ago
Look, if you're just writing a script that doesn't care about failure — where when something goes wrong you can exit and let the end user deal with whatever the fault was, you don't have to worry about this. But Go is quite explicitly intended to be a systems language, not a scripting language. That shit doesn't fly in systems.
While you can, of course, write systems in Python, it is intended to be a scripting language, so I understand where you are coming from thinking in terms of scripts, but it doesn't exactly fit the rest of the discussion that is about systems.
Comment by tayo42 5 days ago
Comment by 9rx 5 days ago
That doesn't make sense. Go errors provide exactly whatever information is relevant to the error. The error type is an interface for good reason. The only limiting bound on the information that can be provided is by what the computer can hold at the hardware level.
> They might as well be lists of strings.
If a string is all your error is, you're doing something horribly wrong.
Or, at very least, are trying to shoehorn Go into scripting tasks, of which it is not ideally suited for. That's what Python is for! Python was decidedly intended for scripting. Different tools for different jobs.
Go was never designed to be a scripting language. But should you, for some odd reason, find yourself using it in that capacity, you should at least be using its exception handlers (panic/recover) to find some semblance of scripting sensibility. The features are there to use.
Which does seem to be the source of your confusion. You still seem hung up on thinking that we're talking about scripting. But clearly that's not true. Like before, if we were, we'd be looking at using Go's exception handlers like a scripting language, not the patterns it uses for systems. These are very different types of software with very different needs. You cannot reasonably conflate them.
Comment by tayo42 5 days ago
The error type in go is literally just a string
type error interface { Error() string }
That's the whole thing.
So I don't know what you're talking about then.
The wrapped error is a list of error types, which all include a string for display. Displaying an error is how you get that information to the user.
If you implement your own error and check it with some runtime type assertion, you have the same problem you described in Python. It's a runtime check: the API you're relying on in whatever library can change the error returned and your code won't work anymore. The same fragile situation you say exists in Python. Now you have even less information; there's no caller info.
Comment by 9rx 5 days ago
No, like I said before, it's literally an interface. Hell, your next line even proves it. If it were a string, it would be defined as:
type error string
But as you've pointed out yourself, that's not its definition at all.
> So I don't know what you're talking about then.
I guess that's what happens when you don't even have a basic understanding of programming. Errors are intended to be complex types; to capture all the relevant information that pertains to the error. https://go.dev/play/p/MhQY_6eT1Ir At the very least, a sentinel value. If your error is just a string, you're doing something horribly wrong — or, charitably, trying to shoehorn Go into scripting tasks. But in that case you'd use Go's exception handlers, which bundle the stack trace and all alongside the string, so... However, if your workload is script-like in nature, why not just use Python? That's what it was designed for. Different tools for different jobs.
Comment by hombre_fatal 5 days ago
The cherry on top is that you always have a place to add context, but it's not the main point.
In the Python example, anything can fail anywhere. Exceptions can be thrown from deep inside libraries inside libraries and there's no good way to write code that exhaustively handles errors ahead of time. Instead you get whack-a-mole at runtime.
In Go, at least you know where things will fail. It's the poor man's impl of error enumeration, but you at least have it. The error that lib.foo() returned might be the dumbest error in the world (it's the string "oops") but you know lib.foo() would error, and that's more information you have ahead of time than in Python.
In Rust or, idk, Elm, you can do something even better and unify all downstream errors into an exhaustive ADT like RequestError = NetworkError(A | B | C) | StreamError(D | E) | ParseError(F | G) | FooError, where ABCDEFG are themselves downstream error types from underlying libraries/fns that the request function calls.
Now the callsite of `let result = request("example.com")` can have perfect foresight into all failures.
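In Rust that looks roughly like this (a sketch with made-up variants and stand-in downstream error types):
use std::fmt;

// Stand-ins: each variant wraps an error type from a lower layer.
#[derive(Debug)]
enum RequestError {
    Network(std::io::Error),
    Parse(std::num::ParseIntError),
    Timeout,
}

impl fmt::Display for RequestError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            RequestError::Network(e) => write!(f, "network error: {e}"),
            RequestError::Parse(e) => write!(f, "parse error: {e}"),
            RequestError::Timeout => write!(f, "request timed out"),
        }
    }
}

// A From impl lets `?` convert the downstream error automatically.
impl From<std::io::Error> for RequestError {
    fn from(e: std::io::Error) -> Self {
        RequestError::Network(e)
    }
}

fn request(url: &str) -> Result<String, RequestError> {
    // Every fallible call inside either maps into a variant explicitly
    // or is converted by a From impl via `?`.
    let _ = url;
    Err(RequestError::Timeout)
}

fn main() {
    // The call site can match exhaustively on every failure mode.
    match request("example.com") {
        Ok(body) => println!("{body}"),
        Err(RequestError::Network(e)) => eprintln!("retry later: {e}"),
        Err(RequestError::Parse(e)) => eprintln!("bad response: {e}"),
        Err(RequestError::Timeout) => eprintln!("timed out"),
    }
}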
Comment by tayo42 5 days ago
Exceptions vs. returned errors is, I think, a different discussion than what I'm getting at here.
Comment by Orygin 5 days ago
Stack traces are reserved for crashes where you didn't handle the issue properly, so you get technical info of what broke and where, but no info on what happened and why it failed the way it did.
Comment by tayo42 4 days ago
Comment by 9rx 4 days ago
You can get away with not doing that when cowboy coding scripts. Python was designed to be a scripting language, so it is understandable that in Python you don't often need to worry about it. But Go isn't a scripting language. It was quite explicitly created to be a systems language. Scripts and systems are very different types of software with very different needs and requirements. If you are stuck thinking in terms of what is appropriate for scripting, you're straight up not participating in the same thread.
Comment by 9rx 5 days ago
The Go team actually did a study on exactly that: including stack traces with errors. Like you, they initially thought it would be useful (hence the study), but in the end, when the data was in, they discovered nobody ever actually used them. Meaningful errors proved to be far more useful.
Science demands replication, so if your study disagrees, let's see it. But in the absence of that, the Go study is the best we've got and it completely contradicts what you are telling us. Making random claims up on the spot based on arbitrary feelings isn't indicative of anything.
That said, I think we can all agree there is a limited place for that type of thing (although in that place you shouldn't use Go at all — there are many other languages much better suited to that type of problem space), but in that place if you had to use Go for some strange reason you'd use panic and recover which already includes the stack trace for you. The functionality is already there exactly as you desire when you do need to bend Go beyond what it is intended for.
Comment by arccy 5 days ago
Comment by verdverm 5 days ago
That simple example in Python is missing all the other stuff you have to put around it. Go would have another error check, but I get to decide, at that point in the execution, how I want to handle it in this context
Comment by pansa2 5 days ago
Comment by ivanyu 5 days ago
Comment by arccy 5 days ago
Comment by zephen 5 days ago
Comment by pansa2 5 days ago
Comment by oncallthrow 5 days ago
... with no other context whatsoever, so you can't glean any information about the call stack that led to the exception.
Exceptions are really a whole different kettle of fish (and in my opinion are just strictly worse than even the worst errors-as-values implementations).
Comment by reissbaker 5 days ago
Comment by Mawr 5 days ago
Comment by tptacek 5 days ago
Comment by nu11ptr 5 days ago
Comment by tptacek 5 days ago
Again: I think Rust as a language gets this right, better than Go does, but if I had to rank, it'd be (1) Rust explicit enum/match style, (2) Go's explicit noisy returns, (3) Rust terse error propagation style.
Basically, I think Rust idiom has been somewhat victimized by a culture of error golfing (and its attendant error handling crates).
Comment by nu11ptr 5 days ago
I think the problem is Rust does a great job at providing the basic mechanics of errors, but then stops a bit short.
First, I didn't realize until relatively recently that any `String` can be coerced easily into a `Box<dyn Error + Send + Sync>` (which should have a type alias in stdlib lol) using `?`, so if all you need is strings for your users, it is pretty simple to adorn or replace any error with a string before returning.
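For example (a small sketch of that string-to-boxed-error coercion):
use std::error::Error;

// String (and &str) convert into Box<dyn Error + Send + Sync>, so a
// map_err + format! plus `?` is enough when a message is all you need.
fn read_port(s: &str) -> Result<u16, Box<dyn Error + Send + Sync>> {
    let port: u16 = s
        .parse()
        .map_err(|e| format!("'{s}' is not a valid port: {e}"))?;
    Ok(port)
}

fn main() {
    match read_port("not-a-port") {
        Ok(p) => println!("port {p}"),
        Err(e) => eprintln!("{e}"),
    }
}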
Second, Rust's incomplete error handling is why I made my crate, `uni_error`, so you can essentially take any Result/Error/Option and just add string context and be done with it. I believe `anyhow` can mostly do the same.
I do sorta like Go's error wrapping, but I think with either anyhow or my crate you are quickly back in a better situation as you gain compile time parameter checking in your error messages.
I agree Rust has over-complicated error handling and I don't think `thiserror` and `anyhow` with their libraries vs applications distinction makes a lot of sense. I find my programs (typically API servers) need the equivalent of `anyhow` + `thiserror` (hence why I wrote `uni_error` - still new and experimental, and evolving).
An example of error handling with `uni_error`:
use uni_error::*;
fn do_something() -> SimpleResult<Vec<u8>> {
std::fs::read("/tmp/nonexist")
.context("Oops... I wanted this to work!")
}
fn main() {
println!("{}", do_something().unwrap_err());
}
Ref: https://crates.io/crates/uni_error
Comment by tptacek 5 days ago
Which is why it's weird to me that the error handling culture of Rust seems to steer so directly towards where Go tries to get to!
Comment by nu11ptr 5 days ago
I have a love/hate relationship with Go. I like that it lets me code ideas very fast, but my resulting product just feels brittle. In Rust I feel like my code is rock solid (with the exception of logic, which needs as much testing as any other lang) often without even testing, just by the comfort I get from lack of nil, pattern matching, etc.
Comment by tptacek 5 days ago
The joke I like to snark about in these kinds of comparisons is that I actually like computer science, and I like to be able to lay out a tree structure when it makes sense to do so, without consulting a very large book premised on how hard it is to write a doubly-linked list in Rust. The fun thing is landing that snark and seeing people respond "well, you shouldn't be freelancing your own mutable tree structures, it should be hard to work with trees", from people who apparently have no conception of a tree walk other than as a keyed lookup table implementation.
But, like, there are compensating niceties to writing things like compilers in Rust! Enums and match are really nice there too. Not so nice that I'd give up automated memory management to get them. But nice!
I'm an ex-C++/C programmer (I dropped out of C++ around the time Alexandrescu style was coming into vogue), if my background helps any.
Comment by nu11ptr 5 days ago
It doesn't? In Go, I allocate (new/make or implicit), never free. In Rust, I allocate (Box/Arc/Rc/String), never free. I'm not sure I see the difference (other than allocation is always more explicit in Rust, but I don't see that as a downside). Or are you just talking about how Go is 100% implicit on stack vs heap allocation?
> Sometimes being able to make those decisions is useful, but usually it is not.
Rust makes you think about ownership. I generally like the "feeling" this gives me, but I will agree it is often not necessary and "just works" in GC langs.
> I actually like computer science, and I like to be able to lay out a tree structure when it makes sense to do so, without consulting a very large book premised on how hard it is to write a doubly-linked list in Rust. The fun thing is landing that snark and seeing people respond "well, you shouldn't be freelancing your own mutable tree structures, it should be hard to work with trees", from people who apparently have no conception of a tree walk other than as a keyed lookup table implementation.
I LOVE computer science. I do trees quite often, and they aren't difficult to do in Rust, even doubly linked, but you just have to use indirection. I don't get why everyone thinks they need to do them with pointers, you don't.
enum Node {
Leaf,
Branch {child: Rc<Node>, parent: Option<Rc<Node>> },
}
Compared to something like Java/C# or anything with a bump allocator this would actually be slower, as Rust uses malloc/free, but Go suffers from the same Achilles heel here (see any tree benchmark). In Rust, I might reach for Bumpalo to build the tree in a single allocation (an arena crate), but only if I needed that last ounce of speed. If you need to edit your tree, you would also want the nodes wrapped in a `RefCell`.
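A sketch of that editable version (with the parent link as a Weak so the parent/child Rc references don't form a cycle and leak):
use std::cell::RefCell;
use std::rc::{Rc, Weak};

// An editable tree node: RefCell gives interior mutability, and the
// parent pointer is Weak so parent<->child links don't keep each
// other alive forever.
struct Node {
    value: i32,
    parent: RefCell<Weak<Node>>,
    children: RefCell<Vec<Rc<Node>>>,
}

fn main() {
    let root = Rc::new(Node {
        value: 1,
        parent: RefCell::new(Weak::new()),
        children: RefCell::new(vec![]),
    });
    let child = Rc::new(Node {
        value: 2,
        parent: RefCell::new(Rc::downgrade(&root)),
        children: RefCell::new(vec![]),
    });
    root.children.borrow_mut().push(Rc::clone(&child));

    // Walk back up from the child through the weak parent link.
    if let Some(p) = child.parent.borrow().upgrade() {
        println!("child {} has parent {}", child.value, p.value);
    }
}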
Comment by jadsfwasdfsd 5 days ago
Comment by sethops1 5 days ago
Comment by phibz 5 days ago
But in Go you can just assign the error to `_` and never touch it.
Also, while not part of std::Result, you can use things like anyhow or error_context to add context before returning if there's an error.
Comment by wging 5 days ago
You can do that in Rust too. This code doesn't warn:
let _ = File::create("foo.txt");
(though if you want code that uses the File struct returned from the happy path of File::create, you can't do that without writing code that deals somehow with the possibility of the create() call failing, whether it is a panic, propagating the error upwards, or actual error handling code. Still, if you're just calling create() for side effects, ignoring the error is this easy.)
Comment by oncallthrow 5 days ago
Comment by afavour 5 days ago
Comment by dlisboa 5 days ago
Comment by Mawr 5 days ago
- https://github.com/kubernetes/kubernetes/pull/132799/files
- https://github.com/kubernetes/kubernetes/pull/80700/files
- https://github.com/kubernetes/kubernetes/pull/27793/files
- https://github.com/kubernetes/kubernetes/pull/110879/files
- https://github.com/moby/moby/pull/10321/files
- https://github.com/cockroachdb/cockroach/pull/74743/files
Do we have linters that catch these?
Comment by NooneAtAll3 5 days ago
Rust didn't use to have the ? operator, and a LOT of the complaints back then were "we don't care, just let us pass errors up quickly".
"Good luck debugging" just as easily happens with the "if err != nil { return nil, err }" boilerplate that's everywhere in Golang - but now it's annoying and takes up screen space.
Comment by oncallthrow 5 days ago
This isn't true in my experience. Most Go codebases I've worked in wrap their errors.
If you don't believe me, go and take a look at some open-source Go projects.
Comment by frizlab 5 days ago
do {
let file = try FileManager.create(…)
} catch {
logger.error("Failed creating file", metadata: ["error": "\(error)"])
}
Note that the try does not involve actual CPU exceptions; it's mostly syntax sugar. You can opt out of the error handling, but it's frowned upon, and explicit:
let file = try? FileManager.create(…)
or:
let file = try! FileManager.create(…)
The former returns an optional file (nil if there is an error), and the latter crashes in case of an error.
Comment by hnaccount19293 5 days ago
Comment by nu11ptr 5 days ago
In Rust I could have done (assuming `anyhow::Error` or `Box<dyn Error + Send + Sync>` return types, which are very typical):
let mut file = File::create("foo.txt")
.map_err(|e| format!("failed to create file: {e}"))?;
Rust has the subtle benefit here of guaranteeing at compile time that the parameter to the string is not omitted. In Go I could have done (and it is just as typical to do):
f, err := os.Create("filename.txt")
if err != nil {
return err
}
So Go no more forces you to do that than Rust does, and both can do the same thing.
Comment by Mawr 5 days ago
?
is too strong. The UX is terrible — the path of least resistance is that of laziness. You should be forced to provide an error message, i.e.
?("failed to create file: {e}")
should be the only valid form. In Go, for one reason or another, it's standard to provide error context; it's not typical at all to just return a bare `err` — it's frowned upon and unidiomatic.
Comment by nu11ptr 5 days ago
You could have done that in Go but you wouldn't, because the allure of just typing two words
return err
is too strong. Quite literally the same thing, and the only difference is bias and habit.
Comment by dystopiandevel 5 days ago
Comment by YmiYugy 5 days ago
Comment by edflsafoiewq 5 days ago
Comment by Mawr 5 days ago
Comment by YmiYugy 5 days ago
Comment by tayo42 5 days ago
Go's wrapping of errors is just a crappy exception stack trace with less information.
Comment by the_gipsy 5 days ago
You are also not forced to add context. Hell, you can easily leave errors unhandled, without compiler errors nor warnings, which even linters won't pick up, due to the asinine variable syntax rules.
Comment by Mawr 5 days ago
It's quite ridiculous that you're claiming errors can be easily left unhandled while referring to what, a single unfortunate pattern of code that will only realistically happen due to copy-pasting and gets you code that looks obviously wrong? Sigh.
Comment by the_gipsy 5 days ago
"Easily" doesn't mean "it happens all the time" in this context (e.g. PHP, at least in the olden days).
"Easily" here means that WHEN it happens, it is not usually obvious. That is my experience as a daily go user. It's not the result of copy-pasting, it's just the result of editing code. Real-life code is not a beautiful succession of `op1, op2, op3...`. You have conditions in between, you have for loops that you don't want to exit in some cases (but aggregate errors), you have times where handling an error means not returning it but doing something else, you have retries...
I don't use Rust at work, but enough in hobby/OSS work to say that when an error is not handled, it sticks out much more. To get back on topic of succinctness: you can obviously swallow errors in Rust, but then you need to be juggling error vars, so this immediately catches the eye. In Go, you are juggling error vars all the time, so you need to sift through the whole thing every goddamn time.
Comment by oncallthrow 5 days ago
This really isn't an issue in practice. The only case where an error wouldn't uniquely identify its call stack is if you were to use the exact same context string within the same function (and also your callees did the same). I've never encountered such a case.
> You are also not forced to add context
Yes, but in my experience Go devs do. Probably because they're having to go to the effort of typing `if err != nil` anyway, and frankly Go code with bare:
if err != nil {
return err
}
sticks out like a sore thumb to any experienced Go dev.
> which even linters won't pick up, due to asinine variable syntax rules.
I have never encountered a case where errcheck failed to detect an unhandled error, but I'd be curious to hear an example.
Comment by the_gipsy 5 days ago
err1 := foo()
err2 := bar()
if err1 != nil || err2 != nil {
return err1 // if only err2 failed, returns nil!
}
func process() error {
err := foo()
if err != nil {
return err
}
if something {
result, err := bar() // new err shadows outer err
if err != nil {
return err
}
use(result)
}
if somethingElse {
err := baz() // another shadow
log.Println(err)
}
return err // returns foo's err (nil), baz's error lost
}
Comment by Mawr 5 days ago
if somethingElse {
err := baz()
log.Println(err)
}
Good luck! As for your first example,
// if only err2 failed, returns nil!
Yes, that's an accurate description of what the code you wrote does. Like, what? Whatever point you're trying to make still hinges on somebody writing code like that, and nobody who writes Go would. Now, can this result in bugs in real life? Sure, and it has. Is it a big deal to get a bug once in a blue moon due to this? No, not really.
Comment by intalentive 4 days ago
You can write your own allocator in C. You don't have to use malloc.
Comment by bnolsen 1 day ago
Comment by dana321 5 days ago
I actually love how rust gatekeeps the idiots from programming it, probably why Linus Torvalds allowed rust into the kernel, but not C++.
Comment by shevy-java 5 days ago
Comment by sheepscreek 5 days ago
Eh, that's not typical Rust project code though. It is Rust code inside the std lib. std libs of most languages including Python are a masterclass in dark arts. Rust is no exception.
Comment by dmix 5 days ago
The human brain demands "vs" articles
Comment by drnick1 5 days ago
Comment by jpfromlondon 5 days ago
Comment by msie 5 days ago
Comment by rohankhameshra 5 days ago
Comment by lucyjojo 4 days ago
Use the stuff you want, people.
Comment by ezst 5 days ago
Comment by forgotpwd16 5 days ago
Comment by ezst 4 days ago
- has the ergonomics, abstractions, expressiveness and conveniences of a high-level language with pointer-level semantics if/when needed (essentially covering the abstraction vs. cost spectrum of all those languages)
- can be used with or without GC (same)
- has the libraries and tooling to tackle massively concurrent/parallel workloads with ease (the niche that Go carved for itself)
- offers the same memory safety guarantees as Rust, and possibly more in the future with capture-checking (a more general concept than borrow-checking to guarantee resources scoping at compile-time)
Comment by echelon 5 days ago
I feel like Zig is for the C / C++ developers that really dislike Rust.
There have been other efforts like Carbon, but this is the first that really modernizes the language and scratches new itches.
> I’m not the first person to pick on this particular Github comment, but it perfectly illustrates the conceptual density of Rust: [crazy example elided]
That is totally unfair. 99% of your time with Rust won't be anything like that.
> This makes Rust hard, because you can’t just do the thing! You have to find out Rust’s name for the thing—find the trait or whatever you need—then implement it as Rust expects you to.
What?
Rust is not hard. Rust has a standard library that looks an awful lot like Python or Ruby, with similarly named methods.
If you're trying to shoehorn some novel type of yours into a particular trait interface so you can pass trait objects around, sure. Maybe you are going to have to memorize a lot more. But I'd ask why you write code like that unless you're writing a library.
This desire of wanting to write OO-style code makes me think that people who want OO-style code are the ones having a lot of struggle or frustration with Rust's ergonomics.
Rust gives you everything OO you'd want, but it's definitely more favorable if you're using it in a functional manner.
> makes consuming libraries easy in Rust and explains why Rust projects have almost as many dependencies as projects in the JavaScript ecosystem.
This is one of Rust's superpowers!
Comment by Quothling 5 days ago
I would read this in regard to Go and not so much in regard to Zig. Go is insanely productive, and while you're not going to match something like Django in terms of delivery speed with anything in Go, you almost can... and you can do it without using a single external dependency. Go loses a little of this in the embedded space, where it's not quite as simple, but the opinionated approach is still very productive even here.
I can't think of any language where I can produce something as quickly as I can in Go with the use of nothing but the standard library. Even when you do reach for a framework like SQLC, you can run the external parts in total isolation if that's your thing.
I will say that working with Zig's interoperability with the C in our Python binaries has been very easy, which it wasn't for Rust. This doesn't mean it's actually easier for other people, but it sure was for me.
> This is one of Rust's superpowers!
In some industries it's really not.
Comment by unshavedyak 5 days ago
I find Rust quite easy most of the time. I enjoy the hell out of it and generally write Rust not too differently than I'd have written my Go programs (I use fewer channels in Rust though). But I do think my comment about rope is true. Some people just can't seem to help themselves.
Comment by nicoburns 5 days ago
Comment by unshavedyak 5 days ago
Though, I think my statement is missing something. I moved from Go to Rust because I found that Rust gave me better tooling to encapsulate and reuse logic. E.g. iterators are more complex under the hood, but my observed complexity was lower in Rust compared to Go by way of better, more generalized code reuse. So in this example I actually found Go to be more complex.
So maybe a more elaborate phrasing would be something like "Rust gives you more visible rope to hang yourself with"... but that doesn't sound as nice. I still like my original phrase, heh.
Comment by 1313ed01 5 days ago
Not saying that should replace Rust. Both could exist side by side like C and C++.
Comment by wtetzner 5 days ago
Comment by 1313ed01 4 days ago
Better question is what to add to something like C. The bare minimum to make it perfectly safe. Then stop there.
Comment by mh2266 5 days ago
Rust OTOH is obsessively precise about enforcing these sort of things.
Of course Rust has a lot of features and compiles slower.
Comment by Mawr 5 days ago
Theoretically optional, maybe.
> the stdlib does things like assume filepaths are valid strings
A Go string is just an array of bytes.
The rest is true enough, but Rust doesn't offer just the bare minimum features to cover those weaknesses, it offers 10x the complexity. Is that worth it?
Comment by ErroneousBosh 5 days ago
Comment by aw1621107 5 days ago
...Is that not what mut is for? I'm a bit confused what you're talking about here.
Comment by ErroneousBosh 5 days ago
Comment by aw1621107 5 days ago
That being said, off the top of my head I think immutability is typically seen to have two primary benefits:
- No "spooky action at a distance" is probably the biggest draw. Immutability means no surprises due to something else you didn't expect mutating something out from under you. This is particularly relevant in larger codebases/teams and when sharing stuff in concurrent/parallel code.
- Potential performance benefits. Immutable objects can be shared freely. Safe subviews are cheap to make. You can skip making defensive copies. There are some interesting data structures which rely on their elements being immutable (e.g., persistent data structures). Lazy evaluation is more feasible. So on and so forth.
Rust is far from the first language to encourage immutability to the extent it does - making immutable objects has been a recommendation in Java for over two decades at this point, for example, to say nothing of its use of immutable strings from the start, and functional programming languages have been working with it even longer. Rust also has one nice thing which helps address this concern:
> or why you'd want to make copies of things so now you've got an updated variable and an out-of-date variable
The best way to avoid this in Rust (and other languages with similarly capable type systems) is to take advantage of how Rust's move semantics work to make the old value inaccessible after it's consumed. This completely eliminates the possibility that the old values are accidentally used. Lints that catch unused values provide additional guardrails.
Obviously this isn't a universally applicable technique, but it's a nice tool in the toolbox.
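A toy illustration of that consuming pattern (made-up type, just to show the move):
// Builder-style API where each step takes `self` by value, so the
// stale, pre-update value is moved away and can't be touched again.
#[derive(Debug)]
struct Config {
    retries: u32,
}

impl Config {
    fn new() -> Self {
        Config { retries: 0 }
    }

    fn with_retries(mut self, retries: u32) -> Self {
        self.retries = retries;
        self // the old value was consumed; callers can't reuse it
    }
}

fn main() {
    let base = Config::new();
    let updated = base.with_retries(3);
    // println!("{base:?}");  // compile error: `base` was moved
    println!("{updated:?}");
}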
In the end, though, it's a tradeoff, as I said. It's still possible to accidentally use old values, but the Rust devs (and the community in general, I think) seem to have concluded that the benefits outweigh the drawbacks, especially since immutability is just a default rather than a hard rule.
Comment by the__alchemist 5 days ago
Comment by bnolsen 1 day ago
Comment by meepmorp 23 hours ago
Comment by kace91 5 days ago
Perhaps it is because DDD books and the like usually have strong object oriented biases, but whenever I read about functional programming patterns I’m never clear on how to go from exercise stuff to something that can work in a real world monolith for example.
And to be clear I’m not saying functional programming is worse at that, simply that I have not been able to find information on the subject as easily.
Comment by myaccountonhn 5 days ago
Here is one about how to structure a project (roughly)
https://youtube.com/watch?v=XpDsk374LDE
I also think looking at the source code for elm and its website, as well as the elm real world example help a lot.
Comment by simonmic 5 days ago
Comment by Yoric 5 days ago
Also my feeling. Writing this as a former C++ developer who really likes Rust :)
Comment by ModernMech 4 days ago
I believe this is actually a significant source of consternation. I teach Rust to students, and I find those without a C/C++ background take to it more naturally than those who have a lot of experience with those languages. People with a C/C++ background approach Rust like C/C++ and fight it the whole way, whereas people without that background approach Rust as Rust, and they have a better time with it.
Comment by awesome_dude 5 days ago
> If you're trying to shoehorn some novel type of yours into a particular trait interface so you can pass trait objects around, sure. Maybe you are going to have to memorize a lot more. But I'd ask why you write code like that unless you're writing a library.
I think that you are missing the point - they're not saying (at least in my head) "Rust is hard because of all the abstractions" but, more, "Rust is hard because you are having to explain to the COMPILER [more explicitly] what you mean (via all these abstractions)".
And I think that that's a valid assessment (hell, most Rustaceans will point to this as a feature, not a bug)
Comment by tiltowait 5 days ago
Can you elaborate? While they obviously have overlap, Rust's stdlib is deliberately minimal (you don't even get RNG without hitting crates.io), whereas Python's is gigantic. And in actual use, they tend to feel extremely different.
Comment by 999900000999 5 days ago
If you know Java, you can read C#, JavaScript, Dart, and Haxe and know what's going on. You can probably figure out Go.
Rust is like learning how to program again.
Back when I was young and tried C++, I was like this is hard and I can't do this.
Then I found JavaScript and everything was great.
What I really want is JS that compiles into small binaries and runs faster than C. Maybe clean up the npm dependency tree. Have a professional committee vet every package.
I don't think that's possible, but I can dream
Comment by dannersy 5 days ago
I'm a bit of a Rust fanboy because of writing so much Go and Javascript in the past. I think I just got tired of all the footguns and oddities people constantly run into but conveniently brush off as intentional by the design of the language. Even after years of writing both, I would still get snagged on Go's sharp edges. I have seen so many bugs with Go, written by seniors, because doing the thing seemed easy in code only for it to have unexpected behavior. This is where even after years of enjoying Go, I have a bit of a bone to pick with it. Go was designed to be this way (where Javascript/Typescript is attempting to make up for old mistakes). I started to think to myself: Well, maybe this shouldn't be "easy" because what I am trying to do is actually complicated behind the scenes.
I am not going to sit here and argue with people around language design or computer science. What I will say is that since I've been forced to be somewhat competent in Rust, I am a far better programmer because I have been forced to grasp concepts on a lower level than before. Some say this might not be necessary or I should have known these things before learning Rust, and I would agree, but it does change the way you write and design your programs. Rust is just as ugly and has snags that are frustrating like any other language, yes, but it was the first that forced me to really think about what it is I am trying to do when writing something that the compiler claims is a no-no. This is why I like Zig as well and the syntax alone makes me feel like there is space for both.
Comment by reeeli 5 days ago
Comment by throwaway2037 5 days ago
> OOP has been out of favor for a while now
I love these lines. Who writes this stuff? I'll tell you: The same people on HN who write "In Europe, X is true." (... when Europe is 50 countries!).
> Zig is a language for data-oriented design.
But not OOP, right? Or, OOP couldn't do the same thing? One thing that I have found over umpteen years of reading posts online: Americans just love superlatives. They love the grand, sweeping gesture. Read their newspapers; you see it every day. A smidge more minimalism would make their writing so much more convincing.
I will take some downvotes for this ad hominem attack: Why does this guy have 387 connections on LinkedIn? That is clicking the "accept" button 387 times. Think about that.
Comment by yxhuvud 5 days ago
Comment by throwaway2037 5 days ago
Comment by bnolsen 23 hours ago
Comment by riku_iki 20 hours ago
Arena allocators are a necessity for high-performance software, because individual heap allocations are an order of magnitude slower.
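The idea in miniature (a toy index-based arena, nowhere near a production allocator):
// One Vec owns every node and "pointers" are just indices, so building
// N nodes costs amortized Vec growth instead of N separate heap
// allocations, and freeing everything is a single drop of the Vec.
struct Arena<T> {
    items: Vec<T>,
}

impl<T> Arena<T> {
    fn new() -> Self {
        Arena { items: Vec::new() }
    }

    fn alloc(&mut self, value: T) -> usize {
        self.items.push(value);
        self.items.len() - 1
    }

    fn get(&self, id: usize) -> &T {
        &self.items[id]
    }
}

fn main() {
    let mut arena = Arena::new();
    let a = arena.alloc("first");
    let b = arena.alloc("second");
    println!("{} then {}", arena.get(a), arena.get(b));
}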
Comment by grougnax 5 days ago
Comment by qaq 5 days ago
Rust for WASM
Zig is what I'd use if I started a greenfield DBMS project
Comment by raggi 5 days ago
There are bad cases of RAII APIs for sure, but it's not all bad. Andrew himself posted a while back about feeling bad for Go devs who never get to debug by seeing 0xaa memory segments, and sure, I get it, but you can't make over-extended claims about non-initialization when you're implicitly initializing with a magic value; that's a bit of a false equivalence. And sure, maybe you don't always want a zero scrub instead. I'm not sold on Go's mantra of making zero values always be useful; I've seen really bad code come as a result of people doing backflips to try to make that true. A constructor API is a better pattern as soon as there's a challenge; the "rule" only fits when it's easy, so don't force it.
Back to RAII though, or what people think of when they hear RAII. Scope-based or automatic cleanup is good. I hate working with Go's mutexes in complex programs after spending life in the better world. People make mistakes and people get clever and the outcome is almost always bad in the long run - bugs that "should never get written/shipped" do come up, and it's awful. I think Zig's errdefer is a cool extension of the defer pattern, but defer patterns are strictly worse than scope-based automation for key tasks. I do buy an argument that sometimes you want to deviate from scope-based controls, and primitives offering both is reasonable, but the default case for a ton of code should be optimized for avoiding human effort and human error.
In the end I feel similarly about allocation. I appreciate Zig trying to push for a different world, and that's an extremely valuable experiment to be doing. I've fought allocation in Go programs (and Java, etc), and had fights with C++ that was "accidentally" churning too much (classic hashmap string spam, hi ninja, hi GN), but I don't feel like the right trade-off anywhere is "always do all the legwork" vs. "never do all the legwork". I wish Rust was closer to the optimal path, and it's decently ergonomic a lot of the time, but when you really want control I sometimes want something more like Zig. When I spend too much time in Zig I get a bit bored of the ceremony too.
I feel like the next innovation we need is some sanity around the real useful value that is global and thread state. Far too much toxic hot air is spilled over these, and there are bad outcomes from mis/overuse, but innovation could spend far more time on _sanely implicit context_ that reduces programmer effort without being excessively hidden, and allowing for local specialization that is easy and obvious. I imagine it looks somewhere between the rust and zig solutions, but I don't know exactly where it should land. It's a horrible set of layer violations that the purists don't like, because we base a lot of ABI decisions on history, but I'd still like to see more work here.
So RAII isn't the big evil monster, and we need to stop talking about RAII, globals, etc, in these ways. We need to evaluate what's good, what's bad, and try out new arrangements that maximize the good and minimize the bad.
Comment by bsder 5 days ago
I disagree; I place RAII as the dividing line on programming language complexity, and it is THE "Big Evil Monster(tm)".
Once your compiled language gains RAII, a cascading and interlocking set of language features now needs to accrete around it to make it ... not excruciatingly painful. This practically defines the difference between a "large" language (Rust or C++) and a "small" language (C, Zig, C3, etc.).
For me, the next programming language innovation is getting the garbage collected/managed memory languages to finally quit ceding so much of the performance programming language space to the compiled languages. A managed runtime doesn't have to be so stupidly slow. It doesn't have to be so stupidly non-deterministic. It doesn't have to have a pathetic FFI that is super complex. I see the "strong typing everywhere" as the first step along this path. Fil-C might become an interesting existence proof in this space.
I view having to pull out any of C, Zig, C++, Rust, etc. as a higher-level programming language failure. There will always be a need for something like them at the bottom, but I really want their scope to be super small. I don't want to operate at their level if I can avoid them. And I say all this as someone who has slung more than 100KLoC of Zig code lately.
For a concrete example, let's look at Ghostty which was written in Zig. There is no strong performance reason to be in Zig (except that implementations in every other programming language other than Rust seem to be so much slower). There is no strong memory reason to be in Zig (except that implementations in every other programming language other than Rust chewed up vast amounts of it). And, yet, a relatively new, unstable, low-level programming language was chosen to greenfield Ghostty. And all the other useful terminal emulators seem to be using Rust.
Every adherent of managed memory languages should take it as a personal insult that people are choosing to write modern terminal emulators in Rust and Zig.
Comment by zozbot234 5 days ago
How so? Garbage collection has inherent performance overhead wrt. manual memory management, and Rust now addresses this by providing the desired guarantees of managed memory without the overhead of GC.
A modern terminal emulator is not going to involve complex reference graphs where objects may cyclically reference one another with no clearly-defined "owner", which is the one key scenario where GC is an actual necessity even in a low-level systems language. What do they even need GC for? Rather, they should tweak the high-level design of their program to ensure that object lifetimes are properly accounted for without that costly runtime support.
Comment by raggi 5 days ago
I somewhat disagree, specifically on the implicit claim that all GC has overhead and alternatives do not. Rust does a decent job of giving you some ergonomics to get started, but it is still quite unergonomic to fix once you have multiple different allocation problems to deal with. Zig flips that a bit on its head: it's more painful to get started, but the pain level stays more consistent throughout deeper problems. Ideally though, I want a better blend of both. To give a still-not-super-concrete version of what I mean: I want something that can be set up by the systems-oriented developer, say, near the top of a request path, and that becomes a mostly implicit dependency for downstream code, with low ceremony, allowing for progressive understanding by contributors way down the call chain who in most cases don't need to care - meanwhile enabling an easy escape hatch when it matters.
I think people make far too much of a distinction between a GC and an allocator, but the reality is that all allocators in common use in high level OS environments are a form of GC. That's of course not what they're talking about, but it's also a critical distinction.
The main difference between what people _call a GC_ and those allocators is that a typical "GC" pauses the program "badly" at malloc time, and a typical allocator pauses a program "badly" at free time (more often than not). It's a bit of a common oddity really, both "GC's" and "allocators" could do things "the other way around" as a common code path. Both models otherwise pool memory and in higher performance tunings have to over-allocate. There are lots of commonly used "faster" allocators in use today that also bypass their own duties at smarter allocation by simply using mmap pools, but those scale poorly: mmap stalls can be pretty unpredictable and have cross-thread side effects that are often undesirable too.
The second difference which I think is more commonly internalized is that typically "the GC" is wired into the runtime in various ways, such as into the scheduler (Go, most dynlangs, etc), and has significant implications at the FFI boundary.
It would be possible to be more explicit about a general purpose allocator that has more GC-like semantics, but also provides the system level malloc/free style API as well as a language assisted more automated API with clever semantics or additional integrations. I guess fil-C has one such system (I've not studied their implementation). I'm not aware of implicit constraints which dictate that there are only two kinds of APIs, fully implicit and intertwined logarithmic GCs, or general purpose allocators which do most of their smart work in free.
My point is I don't really like the GC vs. not-GC arguments very much - I think it's one of the many over-generalizations we have as an industry that people rally hard around and it has been implicitly limiting how far we try to reach for new designs at this boundary. I do stand by a lot of reasoning for systems work that the fully implicitly integrated GC's (Java, Go, various dynlangs) generally are far too opaque for scalable (either very big or very small) systems work and they're unpleasant to deal with once you're forced to. At the same time, for that same scalable work you still don't get to ignore the GC you are actually using in the allocator you're using. You don't get to ignore issues like the fact that restarting a program with a 200+GB heap has huge page allocation costs, no matter what middleware set that up. Similarly you don't want a logarithmic allocation strategy on most embedded or otherwise resource constrained systems; those designs are only ok for servers, they're bad for batteries and other parts of total system financial cost in many deployments.
I'd like to see more work explicitly blending these lines, logarithmically allocating GC's scale poorly in many similar ways to more naive mmap based allocators. There are practical issues you run into with overallocation and the solution is to do something more complex than the classical literature. I'd like to see more of this work implemented as standalone modules rather than almost always being implicitly baked into the language/runtime. It's an area that we implicitly couple stuff too much, and again good on Zig for pushing the boundary on a few of these in the standard language and library model it has (and seemingly now also taking the same approach for IO scheduling - that's great).
Comment by zozbot234 5 days ago
Not a claim I made. Obviously there are memory management styles (such as stack allocation, pure static memory or pluggable "arenas"/local allocators) that are even lower overhead than a generic heap allocator, and the Rust project does its best to try and support these styles wherever they might be relevant, especially in deep embedded code.
In principle it ought to be also possible to make GC's themselves a "pluggable" feature (the design space is so huge and complex that picking a one-size-fits-all implementation and making it part of the language itself is just not very sensible) to be used only when absolutely required - a bit like allocators in Zig - but this does require some careful design work because the complete systems-level interface to a full tracing GC (including requirements wrt. any invariants that might be involved in correct tracing, read-write barriers, pauses, concurrency etc. etc.) is vastly more complex than one to a simple allocator.
Comment by raggi 3 days ago
Comment by riku_iki 19 hours ago
All GCs have overhead; arena allocators, with their specific allocation pattern, have as little overhead as possible, and are orders of magnitude faster because of this. Feel free to tell me why this statement is wrong.
Comment by simonask 5 days ago
You will be very rich.
Comment by raggi 5 days ago
Comment by iainmerrick 5 days ago
Comment by raggi 5 days ago
Comment by frizlab 5 days ago
Comment by sestep 5 days ago
Comment by neonsunset 5 days ago
Comment by badmonster 5 days ago
Comment by black_13 5 days ago
Comment by 0x457 5 days ago