Towards trust in Emacs
Posted by eshelyaron 2 days ago
Comments
Comment by accelbred 5 hours ago
Comment by eshelyaron 3 hours ago
Comment by pkal 2 hours ago
(add-hook 'lisp-interaction-mode-hook (lambda () (setq-local trusted-content :all)))
Comment by eshelyaron 1 hour ago
Only the scratch buffer is to be exempted, not every buffer that gets this mode.
Comment by pkal 50 minutes ago
(when (equal (buffer-name) "*scratch*") ...)
Comment by rpdillon 1 hour ago
(with-current-buffer "*scratch*"
  (setq-local trusted-content :all))
Comment by pkal 45 minutes ago
If we are already experimenting with different ideas, this should also work (and gives a hint of how you want to fix the issue upstream):
(define-advice get-scratch-buffer-create (:filter-return (buf) trusted)
  (with-current-buffer buf
    (setq-local trusted-content :all))
  buf)
Comment by jFriedensreich 36 minutes ago
Comment by TheChaplain 4 hours ago
I wish this was understood clearly by more security engineers, but, alas...
Comment by quotemstr 5 hours ago
Macro expansion is data transformation. Form in, form out. Most macros are pure functions of their inputs. Even the ones that aren't seldom have effects that would allow exploitation. That's because a well-written macro does not have side-effects during expansion time, but instead generates code that when itself evaluated, has the desired effect.
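To illustrate the point (an example, not from the thread): expanding a benign built-in macro like `when` is a pure source-to-source rewrite, with nothing executed at expansion time.

```elisp
;; `when' rewrites into a plain `if'/`progn' form; no side effects occur
;; during expansion, only when the resulting form is later evaluated.
(macroexpand '(when t (message "hi")))
;; ⇒ (if t (progn (message "hi")))
```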
Yes, in general, for arbitrary values of "macro" and "form", using a macro to expand a form leads to arbitrary code execution. This much is true. But the risk only manifests when both the macro and its input form are untrusted.
The vast majority of macros are dumb pure functions and do not perform dangerous actions on untrusted input. It is safe to use these macros to expand untrusted forms. Doing so would make flymake, find-function, and other features work correctly in most cases. To blanket-prohibit expansion even by macros doing obviously safe transformations is to misunderstand the issue.
At a minimum, it must be possible to define a macro and mark it safe for expanding untrusted code. Yes, it's prudent to have a whitelist and not a blacklist. Right now, we don't even have a whitelist. All macros on any untrusted form are deemed unsafe. That's too conservative.
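A minimal sketch of what such a whitelist could look like. Note that the `safe-macro` symbol property and the wrapper function below are hypothetical names for illustration, not an existing Emacs API:

```elisp
;; Hypothetical: mark a specific macro as safe to expand on untrusted input.
(put 'my-with-gensyms 'safe-macro t)

;; A whitelist-aware expander: expand one step only when the macro at the
;; head of the form has been explicitly marked safe; otherwise leave the
;; form untouched.
(defun my/expand-if-whitelisted (form)
  (if (and (consp form)
           (symbolp (car form))
           (get (car form) 'safe-macro))
      (macroexpand-1 form)
    form))
```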
Beyond that, it would be safe to run the macro-expander itself in an environment without access to mutating global operations. Since almost all macros are intrinsically safe to expand, we'd have far fewer situations in which people had subpar development experiences from overly conservative security mitigations.
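One way to approximate such a restricted expansion environment today is to rebind a few dangerous primitives for the duration of the expansion. This is a sketch, not a vetted sandbox: `cl-letf` rebinding can be bypassed (e.g. by calling C-level subrs indirectly), and only a few illustrative primitives are blocked here:

```elisp
(require 'cl-lib)

;; Temporarily replace some side-effecting primitives so that a macro
;; attempting them at expansion time signals an error instead.
(defun my/guarded-macroexpand-all (form)
  (cl-letf (((symbol-function 'delete-file)
             (lambda (&rest _) (error "delete-file blocked during expansion")))
            ((symbol-function 'call-process)
             (lambda (&rest _) (error "call-process blocked during expansion")))
            ((symbol-function 'make-process)
             (lambda (&rest _) (error "make-process blocked during expansion"))))
    (macroexpand-all form)))
```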
In addition, after I've run eval-buffer on a buffer, Emacs should perform macro expansions in that buffer, at least until I revert it from disk. If I have evaluated a malicious buffer, I have already accepted its malice into my Emacs, and expanding macros for find-function can do no more harm.
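That policy could be sketched with advice on `eval-buffer`, assuming the buffer-local `trusted-content` variable discussed in the post. The advice itself is illustrative, not upstream behavior:

```elisp
;; After the user evaluates a buffer, treat its contents as trusted
;; until the buffer is reverted from disk.
(define-advice eval-buffer (:after (&optional buffer-or-name &rest _) trust-after-eval)
  (with-current-buffer (or buffer-or-name (current-buffer))
    (setq-local trusted-content :all)))

;; Reverting re-reads the file from disk, so drop the exemption again.
(add-hook 'after-revert-hook
          (lambda () (kill-local-variable 'trusted-content)))
```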
Comment by shevy-java 3 hours ago
Age verification aaaaaand Trusted Computing now! \o/
(Just kidding - but I have to point at the question of what trust is, exactly. I cannot accept the "trusted files" claim; I don't think anyone can ever trust anything unless there is some really objective criterion that is unchangeable. But if something is unchangeable, can it be useful for anything? Yes, you can ensure that a calculator correctly turns a given input into the correct output, or a function to do so, but in real calculation this is not the only factor to be guaranteed, not even in quantum computing. What if you manage to influence the calculation process via light/laser information or any other means?

I can't accept the term "trusted" here, because it implies one could and should trust something. That is a similar problem to the term AI - I never could accept that "AI" has anything to do with real intelligence on the given hardware; it is just a simulation of intelligence. Pattern matching and recognition only make it more likely to produce useful results, but that does not imply intelligence at all. It lacks true understanding - that is why it has to sniff for data, to improve the mapping of generated output. One can see this in many AI-centric videos on YouTube: the AI is often hallucinating and creating videos that are not possible, e.g. a leg suddenly appearing in motion that is twisted in the opposite direction. That shows that the AI does not understand what it is doing. Any human could realise that this is physically just not possible. I see this even more in cheaper AI videos, e.g. Chuck Norris videos where Chuck kicks everyone yet the motions are totally wrong and detached from the "real" scene.)
Comment by like_any_other 7 hours ago
Comment by nextos 6 hours ago
In Linux, sandboxing with Firejail or bwrap is quite easy to configure and allows fine-grained permissions.
Also, the new Landlock LSM and LSM-eBPF are quite promising.
Comment by boxedemp 6 hours ago
Comment by spectrumx 2 hours ago
Comment by phplovesong 4 hours ago
I guess the most valuable thing you lose is the "what" and "how". You can't learn these things just from reading code, because the mental model just isn't there.
Also, I dislike code reviews; they feel "like a waste of time" because sure, I can spot some things, but I can never give the real feedback, because the mental model is not there (I did not write this code).
Having said that, I still use AI to review my own code; AI can spot some off-by-one errors, typos, or even a few edge cases I missed.
But I still write my own code. I want to own it, I want to have an intimate knowledge of it and really understand the whys and whats.
Comment by andsoitis 4 hours ago