COM Like a Bomb: Rust Outlook Add-in
Posted by piker 1 day ago
Comments
Comment by bri3d 1 day ago
I found the interface and a C++ sample in about two minutes of GitHub searching:
https://github.com/microsoft/SampleNativeCOMAddin/blob/5512e...
https://github.com/microsoft/SampleNativeCOMAddin/blob/5512e...
but I don't actually think this would have helped the Rust implementation; the authors already knew they wanted a BSTR and a BSTR*, they just didn't understand the COM conventions for BSTR ownership.
Comment by vintagedave 1 day ago
Comment by throwup238 1 day ago
Comment by antonvs 1 day ago
Life is definitely easier if you can restrict everything to being in the same language.
Comment by vintagedave 19 hours ago
Then in the late 2010s, C++Builder (its sister product) dropped ATL in favor of DAX -- Delphi ActiveX, aka COM -- and using COM from C++ goes through the same built-in support, including keyword suggestions and RTL types. It's not quite as clean, since it uses language bridging to do so, but it's still a lot nicer than plain C++ and COM.
Seeing someone do COM from first principles in 2025 is jarring.
Comment by pjmlp 3 hours ago
.NET COM support was never as nice, with the RCW/CCW layer; they have now redone it for modern .NET Core, but you still need some knowledge of how to use it from C++ to fully master it.
Then there is CsWinRT, which is supposed to be the runtime portion of what .NET Native provided, and which to this day has enough bugs and is not as easy to use as .NET Native was.
Finally, on the C++ side it has been a wasteland of frameworks. Since MFC there have been multiple attempts, and when they finally had something close to C++Builder with C++/CX, an internal team managed to sell their managers on the idea of killing C++/CX and replacing it with C++/WinRT.
Nowadays C++/WinRT is sold as the way to do COM and WinRT, but it is actually in maintenance mode, stuck in C++17; those folks moved on to the windows-rs project mentioned in the article, and the usability story sucks.
Editing IDL files without any kind of code completion or syntax highlighting, tooling that has been essentially non-existent since COM was introduced, and manually merging the generated C++ code into the ongoing project.
To complement your last sentence, seeing Microsoft employees push COM from first principles in 2025 is jarring.
Comment by antonvs 17 hours ago
Comment by pjmlp 3 hours ago
Comment by bigstrat2003 1 day ago
Sure, but I think that this perfectly illustrates why LLMs are not good at programming (and may well never get good): they don't actually understand anything. An LLM is fundamentally incapable of going "this is COM so let me make sure that the function signature matches the calling conventions", it just generates something based on the code it has seen before.
I don't blame the authors for reaching for an LLM given that Microsoft has removed the C++ example code (seriously, what's up with that nonsense?). But it does very nicely highlight why LLMs are such a bad tool.
Comment by omneity 22 hours ago
In the case of LLMs with reasoning, they might pull this off, because reasoning is in effect a search for extra considerations that improve performance on the task. This is measured by the verifier during reasoning training, which the LLM learns to emulate during inference, hence the improved performance.
As for RL coding training, the distinction can be slightly blurry since reasoning is also trained with RL, but coding models specifically also discover additional considerations, or even recipes, through self-play against a code execution environment. If that environment includes COM and the training data has COM-related tasks, then the process has a chance to discover the behavior you described and reinforce it during training, increasing its likelihood during actual coding.
LLMs are not really just autocomplete engines. Perhaps the first few layers, or base models, can be seen as such, but as you introduce instruction and reinforcement tuning, LLMs build progressively higher levels of conceptual abstraction, from words to sentences to tasks, much like CNNs learn basic geometric features and then compose those into face parts, and so on.
Comment by piker 1 day ago
The LLM gave us an initial boost of productivity and (false) confidence that enabled us to get at the problem with Rust. While the LLM's output was flawed, using it actually caused us to learn a lot about COM by allowing us to even get started. That somewhat flies in the face of a lot of the "tech debt" criticisms levied at LLMs (including by me). Yes, we accumulated a bit of debt while working on the project, but in this case we were able to pay it off before shipping, and it gave us the leverage we needed to approach this problem in pure Rust.
Comment by jlarocco 1 day ago
In Python, Ruby, and the Microsoft languages, COM objects integrate seamlessly into the language as instances of the built-in class types.
Also, there's a fairly straightforward conversion from C# to C++ signatures, which becomes apparent after you see a few of them. It might be explicitly spelled out in the docs somewhere.
Comment by asveikau 1 day ago
I remember a few years back hearing hate about COM, and I didn't feel like the people complaining understood what it was.
I think the legit criticisms include:
* It relies heavily on function pointers (virtual calls) so this has performance costs. Also constantly checking those HRESULTs for errors, I guess, gives you a lot more branching than exceptions.
* The idea of registration, polluting the Windows registry. These days this part is pretty optional.
Comment by snuxoll 1 day ago
Virtual dispatch absolutely has an overhead, but absolutely nobody in their right mind should be using COM interfaces in a critical section of code. When we're talking things like UI elements, HTTP clients, whatever, the overhead of an indirect call is negligible compared to the time spent inside a function.
The one place I'm personally trying to see whether there's any room for improvement in a clean-slate design is error handling / HRESULT values. Exceptions get abused for flow control and stack unwinding is expensive, so even if there were a sane way to implement cross-language exception handling it would be a non-starter. But HRESULT leads to IErrorInfo, ISupportErrorInfo, and the thread-local SetErrorInfo/GetErrorInfo state, which is a whole extra bunch of fun to deal with.
There's the option of going the GObject and AppKit route, using an out parameter for an error type - but then you have to worry about freeing/releasing it in your language bindings or risk leaking memory.
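For what it's worth, the windows crate already wraps part of that dance. A minimal sketch, assuming the windows crate (the check function and the logging are just illustrative):

    use windows::core::HRESULT;

    // hr.ok() turns a failing HRESULT into a windows::core::Error; as far as I
    // know it also picks up the thread-local IErrorInfo (the SetErrorInfo /
    // GetErrorInfo state mentioned above) when building that Error.
    fn check(hr: HRESULT) -> windows::core::Result<()> {
        hr.ok().map_err(|e| {
            eprintln!("call failed: {} ({:?})", e.message(), e.code());
            e
        })
    }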
Comment by pjmlp 3 hours ago
All modern Windows APIs introduced since Vista have been COM, classical Win32 C APIs are seldom introduced nowadays.
Certainly current Windows 11 performance problems have nothing to do with using COM all over the place, and rather with web widgets instead of native code, and with hiring people that apparently never did Windows programming and apparently do AI-driven coding.
Also, the macOS and iDevices driver model is likewise based on a COM-like design, and one would expect drivers to be somewhere that performance matters.
Then there is XPC, Android IPC, and one could count D-Bus as well, if it were more widely adopted across the GNU/Linux world.
Comment by dleary 1 day ago
I could definitely be wrong, but I think C++ style "virtual dispatch" (ie, following two pointers instead of one to get to your function) doesn't really cost anything anymore, except for the extra pointers taking up cache space.
Don't all of the Windows DirectX gaming interfaces use COM? And isn't AAA gaming performance critical?
Comment by snuxoll 1 day ago
Yes, on both counts. You will also, on average, be making fewer calls to ID3D12CommandQueue methods than one would think - you'd submit an entire vertex buffer for a model (or specific components of it that need the same pipeline state, at least) at once, allocate larger pools of memory on the GPU and directly write textures to it, etc.
This is the entire design behind D3D12, Vulkan, and Metal - more direct interaction with the GPU, batching submission, and caching command buffers for reuse.
When I'm talking about "critical sections" of code, I mean anything with a tight loop where you can reasonably expect to pin a CPU core with work. For a game, this would be things like creating vertex buffers, which is why all three major APIs take these as bare pointers to data structures in memory instead of requiring discrete calls to create and populate them.
Comment by WorldMaker 1 day ago
Comment by pjmlp 1 day ago
WinRT tooling on Win32 side is a bad joke.
I almost lost count of how many COM frameworks have come and gone since OLE 1.0 days.
Comment by bri3d 1 day ago
Even in "core" COM there's also marshaling, the whole client/server IPC model, and apartments.
And, I think most people encounter COM with one of its friends attached (like in this case, OLE/Automation in the form of IDispatch), which adds an additional layer of complexity on top.
Honestly I think that COM is really nice, though. If they'd come up with some kind of user-friendly naming scheme instead of UUIDs, I don't even think it would get that much hate. It feels to me that 90% of the dislike for COM is the mental overhead of seeing and dealing with UUIDs when getting started.
Once you get past that part, it's really fast to do pretty complex stuff in; compared to the other things people have come up with like dbus or local gRPC and so on, it works really well for coordinating extensibility and lots of independent processes that need to work together.
Comment by pjc50 22 hours ago
Comment by bri3d 12 hours ago
Comment by recursive 1 day ago
The task was some automated jobs doing MS Word automation. This all happened about 20 years ago. I never did figure out how to get it to stop leaking memory after a couple of days of searching. I think I just had the process restart periodically.
Compared to what I was accustomed to, COM seemed weird and just unnecessarily difficult to work with. I was a lot less experienced then, but I haven't touched COM since. I still don't know what the intent of COM is or where it's documented, nor have I tried to figure it out. But it's colored my impression of COM ever since.
I think there may be a lot of people like me. They had to do some COM thing because it was the only way to accomplish a task, and just didn't understand. They randomly poked it until it kind of worked, and swore never to touch it again.
Comment by asveikau 1 day ago
A shorter version than the other reply:
COM allows you to have a reference counted object with callable virtual methods. You can also ask for different virtual methods at runtime (QueryInterface).
Some of the use cases include: maybe those methods are implemented in a completely different programming language than the one you are using; for example, I think one of the historical ones is JavaScript or VBScript interacting with C++. COM standardizes the virtual calls in such a way that you can throw in such an abstraction. And since reference counting happens via a virtual call, memory allocation is also up to the callee. Another historical use case is to have the calls handled in a different process.
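In windows-rs terms, that runtime ask looks like this (a small sketch, assuming the windows crate; as_dispatch is just an illustrative name):

    use windows::core::{IUnknown, Interface, Result};
    use windows::Win32::System::Com::IDispatch;

    // Ask an object we already hold whether it also implements IDispatch.
    // cast() is QueryInterface under the hood: it passes IDispatch's IID and
    // AddRefs the result; dropping the returned interface calls Release.
    fn as_dispatch(obj: &IUnknown) -> Result<IDispatch> {
        obj.cast::<IDispatch>()
    }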
Comment by duped 1 day ago
COM is an ABI (application binary interface). You have two programs, compiled in different languages with different memory management strategies, potentially years apart. You want them to communicate. You either
1. use a Foreign Function Interface (FFI) provided to those languages
2. serialize/deserialize data and send it over some channel like a socket
(2) is how the internet works so we've taken to doing it that way for many different systems, even if they don't need it. (1) is how operating systems work and how the kernel and other subsystems are exposed to user space.
The problem with FFI is that it's pretty barebones. You can move bytes and call functions, but there's no standard way of composing those bytes and function calls into higher level constructs like you use in OOP languages.
COM is a standard for defining that FFI layer using OOP patterns. Programs export objects which have well defined interfaces. There's a root interface all objects implement called "Unknown", and you can find out if an object supports another interface by calling `queryInterface()` with the id of a desired interface (all interfaces have a globally unique ID). You can make sure the object doesn't lose its data out of nowhere by calling `addRef()` to bump its reference count, and `release()` to decrement it (thus removing any ambiguity over memory management, for the most part - see TFA for an example where that fails).
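Roughly, that ABI contract looks like this when sketched in Rust (illustrative only, not the windows-rs definitions):

    // Every COM object is a pointer to a vtable whose first three slots are the
    // IUnknown methods, so any language that can follow pointers and use the
    // system calling convention can participate.
    #[repr(C)]
    pub struct Guid {
        pub data1: u32,
        pub data2: u16,
        pub data3: u16,
        pub data4: [u8; 8],
    }

    pub type Hresult = i32;

    #[repr(C)]
    pub struct IUnknownVtbl {
        // Slot 0: ask for another interface by its GUID (queryInterface).
        pub query_interface: unsafe extern "system" fn(
            this: *mut ComObject,
            iid: *const Guid,
            out: *mut *mut core::ffi::c_void,
        ) -> Hresult,
        // Slots 1 and 2: reference counting (addRef/release), which keeps
        // allocation and lifetime under the callee's control.
        pub add_ref: unsafe extern "system" fn(this: *mut ComObject) -> u32,
        pub release: unsafe extern "system" fn(this: *mut ComObject) -> u32,
    }

    #[repr(C)]
    pub struct ComObject {
        pub vtbl: *const IUnknownVtbl,
    }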
> where it's documented
https://learn.microsoft.com/en-us/windows/win32/com/the-comp...
Comment by asveikau 1 day ago
Sometimes they are even the same language. Windows has a few problems that I haven't seen in the Unix world, such as each DLL potentially having an incompatible implementation of malloc, where allocating with malloc(3) in one DLL and then freeing it with free(3) in another is a crash.
Comment by pjmlp 1 day ago
Outside UNIX, the C standard library is a responsibility of the C compiler vendor, not the OS.
Nowadays Windows might seem the odd one out; however, 30 years ago the operating system landscape was more diverse.
You will also find similar issues on dynamic libraries in mainframes/micros from IBM and Unisys, still being sold.
Comment by asveikau 20 hours ago
Comment by jstimpfle 1 day ago
I'm still not sure that it brings a lot to the table for ordinary application development.
Comment by asveikau 1 day ago
Comment by pjmlp 1 day ago
For whatever reason all attempts to make COM easier to use in Visual C++, keep being sabotaged by internal teams.
It is as if the Windows team feels it is a manhood test to use such low-level tooling.
Comment by ok123456 1 day ago
Comment by CrimsonCape 1 day ago
You should be able to compile a relatively small, trimmed, standalone, AOT-compiled library that uses native interop. (Correct me if I'm wrong, dotnet users.) Then there would be no dependency on the framework.
Comment by sedatk 1 day ago
Comment by Kwpolska 1 day ago
Comment by pjc50 22 hours ago
Yes-ish. We do AOT at work on a fairly large app and keep tripping over corners. Admittedly we don't use COM. I believe that if you know the objects you are using up front, code generation will take care of this for you. The other options are:
- self-contained: this just means "compiler puts a copy of the runtime alongside your executable". Works fine, at the cost of tens of megabytes
- self-contained single file: the above, but the runtime is zipped into the executable. May unpack into a temporary directory behind the scenes. Slightly easier to handle, minor startup time cost.
Comment by pjmlp 1 day ago
Comment by merb 1 day ago
I mean, yes, you can build it with native interop and AOT. But then you would lose the .NET benefits as well.
Comment by rconti 1 day ago
Comment by garaetjjte 20 hours ago
Probably because the COM "intended" way is to generate them from the type library. The type library for these interfaces is embedded in Office's MSO.DLL. You can use oleview.exe from the Windows SDK to convert it to IDL syntax. This yields the following signature:
    HRESULT GetCustomUI(
        [in] BSTR RibbonID,
        [out, retval] BSTR* RibbonXml);
And then you can use the MIDL tool to generate C headers:

    DECLSPEC_XFGVIRT(IRibbonExtensibility, GetCustomUI)
    /* [helpcontext][id] */ HRESULT ( STDMETHODCALLTYPE *GetCustomUI )(
        IRibbonExtensibility * This,
        /* [in] */ BSTR RibbonID,
        /* [retval][out] */ BSTR *RibbonXml);
https://learn.microsoft.com/en-us/windows/win32/com/how-deve...
Comment by piker 15 hours ago
Comment by meibo 1 day ago
Still better than whatever JS rat's nest they came up with for the new Outlook.
Comment by snuxoll 1 day ago
WinRT, which is ultimately just an evolution of COM, has HSTRING which can own the data inside it (as well as contain a reference to an existing chunk of memory with fast-pass strings).
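A tiny sketch of the owning case, assuming the windows crate's HSTRING wrapper:

    use windows::core::HSTRING;

    fn main() {
        // HSTRING::from allocates a buffer that the wrapper owns; dropping the
        // HSTRING releases it.
        let greeting = HSTRING::from("hello");
        assert_eq!(greeting.to_string(), "hello");
    }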
Comment by JanneVee 1 day ago
From the CComBSTR documentation from microsoft: "The CComBSTR class is a wrapper for BSTRs, which are length-prefixed strings. The length is stored as an integer at the memory location preceding the data in the string. A BSTR is null-terminated after the last counted character but may also contain null characters embedded within the string. The string length is determined by the character count, not the first null character." https://learn.microsoft.com/en-us/cpp/atl/reference/ccombstr...
From the book ATL Internals, which I read about 24 years ago (from the section "Minor Rant on BSTRs, Embedded NUL Characters in Strings, and Life in General"):
"The compiler considers the types BSTR and OLECHAR* to be synonymous. In fact, the BSTR symbol is simply a typedef for OLECHAR*. For example, from wtypes.h: typedef /* [wire_marshal] */ OLECHAR __RPC_FAR *BSTR;
This is more than somewhat brain-damaged. An arbitrary BSTR is not an OLECHAR*, and an arbitrary OLECHAR* is not a BSTR. One is often misled in this regard because frequently a BSTR works just fine as an OLECHAR*.
    STDMETHODIMP SomeClass::put_Name (LPCOLESTR pName);
    BSTR bstrInput = ...
    pObj->put_Name (bstrInput); // This works just fine... usually
    SysFreeString (bstrInput);
In the previous example, because the bstrInput argument is defined to be a BSTR, it can contain embedded NUL characters within the string. The put_Name method, which expects a LPCOLESTR (a NUL-character-terminated string), will probably save only the characters preceding the first embedded NUL character. In other words, it will cut the string short."
I won't link to the pirated edition, which is newer than the one I read.
So if there is code in Outlook that relies on the preceding bytes being the string length, that could be the cause of the memory corruption. It would require a session in the debugger to figure it out.
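The length-prefix behaviour is easy to see with the windows crate's owned BSTR wrapper (a small sketch):

    use windows::core::BSTR;

    fn main() {
        // The length prefix, not the first NUL, decides where the string ends.
        let b = BSTR::from("a\0b");
        assert_eq!(b.len(), 3); // the embedded NUL is counted, not a terminator
    }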
Comment by hyperrail 1 day ago
Comment by LegionMammal978 1 day ago
Comment by ptx 1 day ago
Comment by Kwpolska 1 day ago
Comment by wunderwuzzi23 1 day ago
So, I built an MCP server that can host any COM server. :)
Now, AI can launch and work on Excel, Outlook and even resurrect Internet Explorer.
https://embracethered.com/blog/posts/2025/mcp-com-server-aut...
Comment by comex 1 day ago
unsafe fn GetCustomUI(&self, _ribbon_id: *const BSTR, out: *mut BSTR) -> HRESULT {}
But as linked in bri3d's post, the original C++ signature is: STDMETHOD(GetCustomUI)(BSTR RibbonID, BSTR* RibbonXml);
It really is true that the second parameter is a pointer to BSTR and the first is not. This difference is because the second parameter is an out parameter.
Ultimately, I think windows-rs is at fault here for confusing API design. The BSTR type that it defines is fundamentally different from the BSTR type in C++. The Rust BSTR has a destructor and is always owned, whereas the C++ BSTR is just a typedef for a raw pointer which may be considered owned or borrowed depending on the context. It's not like C++ doesn't support destructors; this particular type just doesn't use them.
It makes sense for Rust bindings to define a safe wrapper type with a destructor. But if I were designing the bindings, I would have given the wrapper a different name from the original type to make the difference in semantics more obvious.
The Rust BSTR type is still ABI-compatible with the C++ one (because it's repr(transparent)), so it can be valid to use it in FFI definitions, but only if that BSTR happens to be owned (like with the second parameter).
A more thorough wrapper for BSTR would provide a safe borrowed type in addition to the owned type, like what &str is to String. But it appears that windows-rs doesn't provide such a type. However, windows-rs does provide an unsafe type which can be used for the purpose. Confusingly, this type is also named BSTR, but it's defined in the windows-sys crate instead of windows-strings. This BSTR is like the C++ BSTR, just an alias for a raw pointer:
https://docs.rs/windows-sys/latest/windows_sys/core/type.BST...
You should probably use that type for the _ribbon_id parameter. Or you could just manually write out `*const u16`. But not `*const BSTR`, which is a pointer to a pointer. `*const BSTR` happens to be the same size as `BSTR` so it doesn't cause problems for an unused parameter, but it would break if you tried to use it.
Which probably doesn't matter to your application. But since you published a "correct signature for future LLMs", you should probably fix it.
See also this issue report I found (not exactly on point but related):
Comment by garaetjjte 20 hours ago
Comment by comex 16 hours ago
I am slightly suspicious that Raymond Chen might have been confused. The link in that post has the text "forget to set the output pointer to NULL", but in the linked post (the original link is broken but it's at [1]), the implementation actually set the output pointer to a garbage value rather than leaving it untouched. I wonder what the marshalling implementation actually looks like…
But at any rate, treating the out pointer as uninitialized is definitely the safe option. I'm not 100% sure whether it can legitimately point to non-null, but if it does point to non-null, then that value is definitely garbage rather than something that needs freeing.
[1] https://devblogs.microsoft.com/oldnewthing/20091007-00/?p=16...
Comment by garaetjjte 13 hours ago
I didn't disagree with you, I just wanted to point another issue.
Actually *mut BSTR (owned) is also acceptable, iff you remember to use std::ptr::write instead of normal assignment.
> I'm not 100% sure whether it can legitimately point to non-null
Note that in none of the examples on this and other posts (like https://devblogs.microsoft.com/oldnewthing/20040326-00/?p=40...) output value is initialized, so it will be whatever is lying on the stack.
Comment by piker 12 hours ago
I believe this approach can work while retaining the most apparently-idiomatic mapping. What do you guys think?
impl IRibbonExtensibility_Impl for Addin_Impl {
    unsafe fn GetCustomUI(&self, _ribbon_id: BSTR, out: *mut BSTR) -> HRESULT {
        log("GetCustomUI called()");
        // The caller owns _ribbon_id; forget it so our owned BSTR wrapper
        // doesn't free the caller's allocation when it is dropped.
        std::mem::forget(_ribbon_id);
        if out.is_null() {
            return windows::Win32::Foundation::E_POINTER;
        }
        unsafe {
            // The out slot may hold garbage, so write without dropping
            // whatever happens to be there.
            std::ptr::write(out, BSTR::from(RIBBON_XML));
        }
        S_OK
    }
}
Comment by piker 12 hours ago