Show HN: Smol machines – subsecond coldstart, portable virtual machines

Posted by binsquare 17 hours ago


Comments

Comment by binsquare 17 hours ago

Hello, I'm building a replacement for Docker containers: a virtual machine with the ergonomics of containers + subsecond start times.

I previously worked at AWS in the container space + with firecracker. I realized the container is an unnecessary layer that slowed things down + firecracker was a technology designed around AWS's org structure + use case.

So I ended up building a hybrid, combining the best of containers with the best of firecracker.

Let me know your thoughts, thanks!

Comment by PufPufPuf 15 hours ago

Hey this is super cool. I've been researching tech like this for my AI sandboxing solution, ended up with Lima+Incus: https://github.com/JanPokorny/locki

My problem with microVMs was that they usually won't run docker / kubernetes. I work on apps that consist of whole kubernetes clusters and want the sandbox to contain all that.

Does your solution support running k3s for example?

Comment by mkagenius 2 hours ago

With instavm (https://instavm.io), you can provide an OCI image built from a dockerfile.

Comment by fqiao 14 hours ago

We will evaluate. I created this issue to track it: https://github.com/smol-machines/smolvm/issues/150

Really appreciate the feedback!

Comment by topspin 14 hours ago

What is the status of supporting live migration?

That's the one feature of similar systems that always gets left out. I understand why: it's not a priority for "cloud native" workloads. The world, however, has workloads that are not cloud native, because that comes at a high cost, and it always will. So if you'd like a real value-add differentiator for your micro-VM platform (beyond what I believe you already have), there you go.

Otherwise this looks pretty compelling.

Comment by genxy 14 hours ago

It helps if you offer a concrete use case, as in how large the heap is, what kind of blackout period you can handle, whether the app can handle all of its open connections being destroyed, etc. The more an app can handle resetting some of its own state, the easier LM is going to be to implement. If your workload jives with CRIU (https://github.com/checkpoint-restore/criu), you could do this already.
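
For instance, a minimal CRIU checkpoint/restore of a simple single-process, shell-launched workload looks roughly like this (illustrative only; real workloads typically need extra flags for TCP connections, namespaces, etc.):

    # checkpoint process <pid> into a directory (the process is killed unless you pass --leave-running)
    sudo criu dump -t <pid> -D /tmp/ckpt --shell-job
    # later, possibly on another host after copying /tmp/ckpt across:
    sudo criu restore -D /tmp/ckpt --shell-job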

By what I assume is your definition, there are plenty of "non cloud native" workloads running on clouds that need live migration. Azure and GCP use LM behind the scenes to give the illusion of long uptime hosts. Guest VMs are moved around for host maintenance.

Comment by topspin 14 hours ago

"Azure and GCP use LM behind the scenes"

As does OCI, and (relatively recently) AWS. That's a lot of votes.

Use case: some legacy database VM needs to move because the host needs maintenance, the database storage (as opposed to the database software) is on an iSCSI/NFS/NVMe-oF array somewhere, and clients are just smart enough to transparently handle a brief disconnect/reconnect (which is built into essentially every such database connection pool stack today.)

Use case: a web app platform (node/spring/django/rails/whatever) with a bunch of cached client state needs to move because the host needs maintenance. The developers haven't done all the legwork to make the state survive restart, and they'll likely never get the time needed to do that. That's essentially the same use case as the previous. It's also rampant.

Use case: a long running batch process (training, etc.) needs to move because reasons, and ops can't wait for it to stop, and they can't kill it because time==money. It doesn't matter that it takes an hour to move because big heap, as long as the previous 100 hours isn't lost.

"as in how large the heap is"

That's an undecidable moving target, so let the user worry about it. Trust them to figure out what is feasible given the capabilities of their hardware and talent. They'll do fine if you provide the mechanism. I've been shuffling live VMs between hosts for 10+ years successfully, and Qemu/KVM has been capable of it for nearly 20, never mind VMware.

"CRIU"

Dormant, and still containers. Also, it's re-solving solved problems once you're running in a VM, but with more steps.

Comment by fqiao 14 hours ago

Really appreciate the suggestion! By "live migration", do you mean keeping the existing files and migrating them elsewhere with the VM?

Thanks

Comment by topspin 14 hours ago

I mean making any given VM stop on host A and appear on host B; e.g. standard Qemu/KVM:

    virsh migrate --live GuestName DestinationURL
This is feasible when network storage is available and useful when a host needs to be drained for maintenance.

Comment by benswerd 9 hours ago

Live migration and the tech powering it was the hardest thing I ever built. It's something that I think will come naturally to projects like smolVM as more of the hypervisors build it in, but it's a deeply challenging task to do in userspace.

My team spent 4 months on our implementation of VM memory that let us do it, and it's still our biggest time suck. We were also able to make assumptions, like RDMA, that aren't available to most.

All that to say — as someone not working on smolVMs — I am confident smolVMs and most other OSS sandbox implementations will get live migration via hypervisor upgrades in the next 12 months.

Until then there are enterprise-y providers that have it, and great OSS options that already solve this, like Cloud Hypervisor.

Comment by fqiao 14 hours ago

I see. So right now smolvm can be stopped, then "packed" (think of it as compressed), and restarted on a different host. Files on the disks are preserved, but memory snapshotting is still hard tbh

Comment by sureglymop 12 hours ago

It's also feasible without network storage; --copy-storage-all will migrate all disks too.
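
For example, something along these lines (the qemu+ssh destination URI is just a placeholder for whatever transport you use):

    virsh migrate --live --copy-storage-all GuestName qemu+ssh://dest-host/system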

Comment by lacoolj 13 hours ago

What percentage of this code was written by LLM/AI?

Comment by binsquare 12 hours ago

For myself, I'd estimate ~50%

It wasn't useful for things it hadn't been trained on before. But now that the core functionality is in place, it's been a great help.

Comment by RALaBarge 12 hours ago

Hey mathematician, how much of this formula did you calculate with an abacus instead of a calculator?

Comment by anthk 11 hours ago

Hey 'software engineer', how much of the output of an LLM is actually reproducible, vs. the output from a calculator or any programming language given the same input in different sessions?

Comment by onion2k 3 hours ago

Not really related to this 'discussion' but this is an interesting problem in the AI space. It's essentially a well-understood problem in unreliable distributed systems: if you have a series of steps that might not respond with the same answer every time (usually because one might fail), how do you get to a useful and reliable outcome? I've been experimenting with running a prompt multiple times and having an agent diff the output to find parts that some runs missed, or having it vote on which run resulted in the best response, with a modicum of success. If you're concerned about having another layer of AI in there, then getting the agents to return some structured output that you can just run through a deterministic function is an alternative.

Non-determinism is a problem that you can mitigate to some extent with a bit of effort, and is important if your AI is running without a human-in-the-loop step. If you're there prompting it though then it doesn't actually matter. If you don't get a good result just try again.

Comment by weird-eye-issue 10 hours ago

Why are you so concerned about the LLM producing the exact same code across different sessions? Seems like a really weird thing to focus on. Why aren't you focused on things like security, maintainability, UI/UX, performance?

Comment by harshdoesdev 16 hours ago

+1. i built something similar called shuru.run because i wanted an easy way to set up microVM sandboxes to run some of my AI apps, and firecracker wasn't available for macOS (and, as you said, it is just too heavy for normal user-level workloads).

Comment by sahil-shubham 15 hours ago

Nice work on Shuru — I remember looking at it when I was researching this space. You went with a Rust wrapper on Apple’s Virtualization framework right?

I have been working on something similar but on top of firecracker, called it bhatti (https://github.com/sahil-shubham/bhatti).

I believe anyone with a spare linux box should be able to carve it into isolated programmable machines, without having to worry about provisioning them or their lifecycle.

The documentation's still early but I have been using it for orchestrating parallel work (with deploy previews), offloading browser automation for my agents, etc. An auction-bought Hetzner server is serving me quite well :)

Comment by harshdoesdev 15 hours ago

bhatti's cli looks very ergonomic! great job!

also, yes, shuru was (still) a wrapper over the Virtualization.framework, but it now supports Linux too (wrapper over KVM lol)

Comment by fqiao 16 hours ago

Yes, having a lightweight solution for local devices as well is one primary goal of the design. Another is to make it easy to host, whether self-hosted or managed.

Comment by JuniperMesos 12 hours ago

What were the biggest challenges in terms of designing the VM to have subsecond start times? And what are the current bottlenecks for decreasing the start time even further?

Comment by binsquare 11 hours ago

No special programming tricks were used.

Linux was built in the 90s. Hardware has improved more than 1000x. Linux virtual machine startup times stayed relatively the same.

Turns out we kept adding junk to the linux kernel + bootup operations.

So all I did was cut and remove unnecessary parts, as long as it still worked.

This ended up also getting boot-up times to under 1s. The kernel changes are the 10 commits I made; you can verify here: https://github.com/smol-machines/libkrunfw

There's probably more fat to cut to be honest.
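
To illustrate the general approach (this is not the actual libkrunfw config; the real changes are the commits linked above), trimming a guest kernel mostly means switching off whole subsystems a microVM guest never touches, using the kernel tree's own config script:

    # run inside a kernel source tree; scripts/config edits .config in place
    scripts/config --disable SOUND \
                   --disable USB_SUPPORT \
                   --disable WLAN \
                   --disable DRM \
                   --disable SUSPEND \
                   --disable HIBERNATION \
                   --disable MODULES      # no loadable modules: everything needed is built in
    # keep only the virtio devices the VMM actually provides
    scripts/config --enable VIRTIO_MMIO --enable VIRTIO_BLK --enable VIRTIO_NET --enable VIRTIO_CONSOLE
    make olddefconfig                     # resolve dependencies after toggling options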

Comment by thepoet 9 hours ago

The rehydration of the images, or rather portable artifacts, on any platform, plus the packaging, is neat. I have been working on https://instavm.io for some time around VM-based sandboxes and related infra for agents, and this is refreshing to see.

Comment by thm 16 hours ago

You could add OrbStack to the comp. table

Comment by fqiao 16 hours ago

Will do. Thanks for the suggestion!

Comment by BobbyTables2 7 hours ago

How is this different from Kata Containers?

Comment by binsquare 7 hours ago

kata containers is a container runtime that focuses on running containers inside a vm.

smolvm is a vm with some of the properties & ergonomics of containers - it's meant as a replacement for containers.

Comment by sdrinf 16 hours ago

hi, great project! Windows support is sorely lacking, though. As someone working a lot with sandboxed LLMs right now, the options-space on windows for sandboxing is _extremely lacking_. Any plans to support it?

Comment by fqiao 15 hours ago

Hey, thanks so much! Yah, we will definitely add Windows support later. We are exploring how to get this working with WSL and will release it asap. Stay tuned and thanks!

Comment by binsquare 15 hours ago

Yeah, it's on my mind.

WSL2 runs a linux virtual machine. Need to take some time and care to wire that up, but definitely feasible.

Comment by traceroute66 12 minutes ago

Sounds very similar to the various unikernel implementations floating around? Such as Unikraft [1]

[1] https://unikraft.org

Comment by binsquare 8 minutes ago

unikraft's internals are not open source so I can't say.

But smol machines are not a unikernel implementation - it's basically just the Linux kernel, but slimmed down. So, more compatible with most software.

Comment by gavinray 15 hours ago

The feature that lets you create self-contained binaries seems like a potentially simpler way to package JVM apps than GraalVM Native.

Probably a lot of other neat usecases for this, too

  smolvm pack create --image python:3.12-alpine -o ./python312
  ./python312 run -- python3 --version
  # Python 3.12.x — isolated, no pyenv/venv/conda needed

Comment by binsquare 15 hours ago

yeah, it's analogous to Electron.

Electron ships your web app bundled with a browser.

Smol machines ship your software packaged with a Linux VM. No dependency management or compatibility issues, because it's all baked in.
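
A hedged sketch of that flow, reusing the pack command shown upthread (the image tag, hosts, and paths here are placeholders):

    # build a self-contained binary from an OCI image
    smolvm pack create --image node:22-alpine -o ./myapp
    # copy it anywhere and run it; the kernel, rootfs, and deps travel with the file
    scp ./myapp user@other-host:
    ssh user@other-host './myapp run -- node --version'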

I think this is how Codex or Claude Code should be shipped by default, to avoid any isolation issues tbh

Comment by tkocmathla 4 hours ago

How "fat" are the packed machines? In other words, how much bloat is inevitable, or is that entirely controlled by the base image + the user's smolvm machine spec? How does smolvm's pack compare to something like dockerc [0] in terms of speed and size? Disclaimer: I just learned about dockerc!

I can't actually create and test a pack right now because of [1], but I love the idea of using this to distribute applications you might otherwise use a Docker image for.

[0] https://github.com/NilsIrl/dockerc

[1] https://github.com/smol-machines/smolvm/issues/159

Comment by fqiao 12 hours ago

Yah, I guess everybody shares the experience of "I messed up my dev env", right? We want this "machine" to be shippable, meaning that once it is configured correctly, it can be shared with anyone and used right away.

Comment by mrbluecoat 14 hours ago

Can .smolmachine be digitally signed and self authenticate when run? Similar to https://docs.sylabs.io/guides/main/user-guide/signNverify.ht...

Comment by Palmik 3 hours ago

Could it be made even faster using some of the ideas from https://github.com/zerobootdev/zeroboot ?

Comment by cr125rider 16 hours ago

Great job with the comparison table. Immediately I was like “neat sounds like firecracker” then saw your table to see where it was similar and different. Easy!

Nice job! This looks really cool

Comment by fqiao 16 hours ago

Thanks so much

Comment by chwzr 12 hours ago

I see the alpine and python:3.12-alpine images in your CLI docs. Where do these come from? Is it from a Docker-like registry, or are they built in? Can I create my own images? Or is this purely done with the smolfile? Is there an Ubuntu image available?

Looks really nice btw. Hot resize mem/cpu would be nice. This could become a nice tech for a one-backend-per-customer infra orchestrator then.

Comment by lambdanodecore 15 hours ago

Basically any open source project nowadays runs its software stack in containers, often requiring docker compose. Unfortunately, Smol machines do not support Docker inside the microVMs, and they also do not support nested VMs for things that use Vagrant. I think this is a big drawback.

Comment by binsquare 15 hours ago

I can support docker - will ship a compatible kernel with the necessary flags in the next release.

Comment by lambdanodecore 15 hours ago

I tried something like this already, also including nested KVM. I think this will increase the boot time quite a bit.

Also libkrun is not secure by default. From their README.md:

> The libkrun security model is primarily defined by the consideration that both the guest and the VMM pertain to the same security context. For many operations, the VMM acts as a proxy for the guest within the host. Host resources that are accessible to the VMM can potentially be accessed by the guest through it.

> While defining the security implementation of your environment, you should think about the guest and the VMM as a single entity. To prevent the guest from accessing host's resources, you need to use the host's OS security features to run the VMM inside an isolated context. On Linux, the primary mechanism to be used for this purpose is namespaces. Single-user systems may have a more relaxed security policy and just ensure the VMM runs with a particular UID/GID.

> While most virtio devices allow the guest to access resources from the host, two of them require special consideration when used: virtio-fs and virtio-vsock+TSI.

> When exposing a directory in a filesystem from the host to the guest through virtio-fs devices configured with krun_set_root and/or krun_add_virtiofs, libkrun does not provide any protection against the guest attempting to access other directories in the same filesystem, or even other filesystems in the host.

Comment by fqiao 13 hours ago

Thanks so much for the feedback. Yes, these are valid concerns around libkrun security. We are actually planning and developing features around them, and hopefully that can alleviate the concerns.

For virtio-fs, yes, the risk of exposing the host fs structure exists, and we plan to (rough sketch below):

1. create a staging directory for each VM and bind-mount the host dir onto it

2. have private mount namespaces for VMs

They are both tracked in our GitHub issues:

https://github.com/smol-machines/smolvm/issues/152 https://github.com/smol-machines/smolvm/issues/151

Point 2 may need much more effort than we imagine, but we will make sure to call this out in our docs.
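
A rough sketch of the idea, using nothing beyond standard Linux tooling (this is not the actual smolvm code; paths and the VMM invocation are placeholders):

    # 1. per-VM staging directory: only the shared dir is reachable through the bind mount
    mkdir -p /run/smolvm/staging/vm-1234
    mount --bind /home/user/project /run/smolvm/staging/vm-1234
    # 2. a private mount namespace keeps that mount (and anything mounted later)
    #    invisible to the rest of the host
    unshare --mount --propagation private -- <vmm command exposing the staging dir via virtio-fs>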

For the concern around TSI, we are developing virtio-net in parallel; it is also tracked on our GitHub and will be released soon: https://github.com/smol-machines/smolvm/issues/91

We would like to collect more suggestions on how to make this safer. Thanks!

Comment by binsquare 13 hours ago

Security is a broad topic.

Here's my perspective:

smolvm operates on the same shared responsibility model as other virtual machines.

VM provides VM-level isolation.

If the user mounts a directory that allows symlinks, or exposes a host OS path to guest software that is designed to escape, that is the responsibility of the user rather than the VM.

Security is not guaranteed by using a specific piece of software, it's a process that requires different pieces for different situations. smolvm can be a part of that process.

Comment by genxy 14 hours ago

So Vagrant is launching the VM locally, is that why it needs nesting?

Would you be ok with a trampoline that launched the VM as a sibling to the Vagrant VM?

Comment by estetlinus 2 hours ago

Why would I prefer smol machines over docker sandbox? Do you have an elevator pitch?

Comment by binsquare 3 minutes ago

uhh sort of different things.

smol machines is a virtual machine that has properties and ergonomics of containers. It's not an ai project, it's designed to run any software inside.

docker sandbox sounds like it's running ai stuff inside of a microvm.

So if you need to use a virtual machine - use smol machines.

If you need to run coding agents, still use smol machines, because agents are just software.

Comment by geniium 1 hour ago

Congrats that looks really amazing!

Comment by isterin 14 hours ago

We’re using smolmachines to create environments for our agents to execute code. It’s been great so far and the team is super responsive. The dev ergonomics are also great.

Comment by fqiao 14 hours ago

Really appreciate it! Would love to work together to make this easier to use.

Comment by simonreiff 13 hours ago

Hey this is pretty neat! I definitely would try using this for benchmarks and other places where I need strong isolation as Docker is just too bloated and slow, but sadly I don't think I can run this natively on my Windows laptop. I hope you extend to WSL! Good luck and congrats on launch.

Comment by fqiao 12 hours ago

Hey, thanks so much for the feedback. Yah, try it and let us know. We have a Discord if you want to join; on either GitHub or Discord, feel free to report any issues you find.

Cheers!

Comment by sureglymop 12 hours ago

What I really like about containers is quickly being able to spin one up without having to specify resources (e.g. RAM limit). I hope this would let me do that also.

Comment by binsquare 10 hours ago

This does that.

I'm trying to do away with the fixed cpu and memory model tbh.

Virtio-balloon dynamically resizes based on memory consumed.

CPU is oversubscribed by default.

Comment by 2001zhaozhao 5 hours ago

Wow, this seems very useful for coding agent sandbox environments that have full browser installations and the like.

Comment by akoenig 14 hours ago

smolvm is awesome. The team is highly responsive and very experienced. They clearly know what they’re doing.

I’m currently evaluating smolvm for my project, https://withcave.ai, where I’m using Incus for isolation. The initial integration results look very promising!

Comment by indigodaddy 11 hours ago

This looks super awesome. Very excited for you potentially open sourcing it, as I'd like to customize/extend it a bit for certain use cases. Re: smolvm vs Incus in use, I think even if smolvm works great for it, why not keep Incus as an option for people who want to use cave on VMs that don't have access to /dev/kvm (e.g. the user can pick either Incus or smolvm for their cave deployment)?

Comment by fqiao 14 hours ago

Cannot thank you enough for this! Let's work together to see how we can make this easier for cave!

Comment by fqiao 16 hours ago

Give it a try, folks. Would really love to hear all the feedback!

Cheers!

Comment by leetrout 16 hours ago

why did you seemingly create two HN accounts?

Edit: I see this appears to be a contributor to the project as well. It was not obvious to me.

Comment by fqiao 16 hours ago

this is me: https://github.com/phooq

@binsquare is this one: https://github.com/BinSquare

Comment by fqiao 12 hours ago

No worries at all! Thanks!

Comment by irickt 13 hours ago

Is there a relation to the similarly-purposed and similarly-named https://github.com/CelestoAI/SmolVM?

Comment by binsquare 12 hours ago

no relation, they build a sandboxing service using firecracker.

I build a virtual machine that is an alternative to firecracker and containers.

Comment by rkagerer 13 hours ago

I see you support Linux and MacOS hosts. Any Windows support planned?

Comment by binsquare 11 hours ago

Yeah it's feasible, I don't have windows to test. Can you help? :D

Comment by rawoke083600 2 hours ago

I like the name ! :)

Comment by Ey7NFZ3P0nzAe 2 hours ago

Me too but I loove the icon

Comment by binsquare 1 hour ago

thanks, it's my hand traced over and then made pretty.

Comment by 0cf8612b2e1e 15 hours ago

This looks very cool. Does the VM machinery still work if I run it in a bubblewrap? Can it talk to a GPU?

Can you pipe into one? It would be cute if I could wget in machine 1 and send that result to offline machine 2 for processing.

Comment by binsquare 15 hours ago

Haven't tried with bubblewrap - but it should.

Yes! GPU passthrough is being actively worked on and will land in next major release: https://github.com/smol-machines/smolvm/pull/96

Yea just tried piping, it works:

    smolvm machine exec --name m1 -- wget -qO- https://example.com/data.csv \
      | smolvm machine exec --name m2 -i -- python3 process.py

Comment by ukuina 15 hours ago

Doesn't Docker's sbx do this?

https://docs.docker.com/reference/cli/sbx/

Comment by binsquare 14 hours ago

sandboxing is one of the features of virtual machines.

I'm building a different virtual machine.

Comment by ccrone 10 hours ago

Neat! I work with the team on sbx. We built our own cross-platform VMM after running into limitations with the existing options. Happy to chat more about what you’ve built and what we’re doing: christopher<dot>crone@docker.com

Comment by bch 15 hours ago

See too [0][1] for projects in a similar* vein, incl. a historical account.

*yes, FreeBSD is specifically developed against Firecracker which is specifically avoided w "Smol machines", but interesting nonetheless

[0] https://github.com/NetBSDfr/smolBSD

[1] https://www.usenix.org/publications/loginonline/freebsd-fire...

Comment by binsquare 15 hours ago

that was one of my inspirations but I don't think they went far enough in innovation.

microvm space is still underserved.

Comment by bch 15 hours ago

> that was one of my inspirations

Colin's FreeBSD work or Emile's NetBSD work?

Comment by binsquare 12 hours ago

NetBSD; I love that focus on minimal, simple, reproducible binaries.

You'll see that philosophy in this project as well (I hope).

FreeBSD focuses on features, which is great too.

Comment by timsuchanek 13 hours ago

This is very exciting. It enables a cross-platform, language-agnostic plugin system, especially for agents, while being safe in a VM.

Comment by brianjlogan 10 hours ago

Any integration with existing orchestrators? Plans to support any or building your own?

Comment by binsquare 9 hours ago

Will build a free, open source, self-serve orchestration layer to enable subsecond VM vending.

But it should be easy for anyone to build their own integration with existing orchestrators as well, like Nomad.

Comment by akdev1l 9 hours ago

How does it compare to podman with crun-vm ?

Comment by parasitid 14 hours ago

Hi! Congrats on your work, that's really nice.

Question: why do you report that QEMU is 15s<x<30s? For instance, with Kata Containers you can run fast microVMs, and even faster with unikernels. What was your setup?

thanks a lot

Comment by nonameiguess 14 hours ago

What are you actually doing on top of libkrun? Providing really small machine images that boot quickly? If I run the smolvm run --image alpine example, what is "alpine?" Where is that image coming from? Does this have some built-in default registry of machine images it pulls from? Does it need an Internet connection that allows outbound access to wherever this registry runs? Is it one of a default set of pre-built images that comes with the software itself and is stored on my own filesystem? Where are the builds for these images? Where do these machine images end up? ~/.local/share/smolvm/?

Comment by binsquare 12 hours ago

I run a custom fork of libkrun, libkrunfw (the Linux kernel), etc.: https://github.com/orgs/smol-machines/repositories

Got a lot of questions on how I spin up Linux VMs so quickly.

The explanation is pretty straightforward.

Linux was built in the 90s. Hardware has improved more than 1000x. Linux virtual machine startup times stayed relatively the same.

Turns out we kept adding junk to the linux kernel + bootup operations.

So all I did was cut and remove unnecessary parts, as long as it still worked. This ended up also getting boot-up times to under 1s.

A big part of it was systemd btw.

Comment by binsquare 12 hours ago

Those images are pulled from the public Docker registry.

Comment by chrisweekly 14 hours ago

This looks awesome. Thanks for sharing!

Comment by fqiao 12 hours ago

Thanks so much! Feel free to try it out if you have a chance, and let us know your thoughts. Thanks!

Comment by messh 15 hours ago

https://shellbox.dev is a hosted version of something very similar

Comment by tomComb 12 hours ago

This sounds great, except for one thing: you can scale your compute (CPU & RAM) as needed but your storage appears to scale with it.

So, if I use a "16 vCPUs, 32GB RAM, 400GB SSD" machine for a period of intense compute, and then want to scale that down to "2 vCPUs, 4GB RAM", most of my storage disappears?

That rather ruins the potential of the advertised scalability.

Comment by harshdoesdev 16 hours ago

It's a really innovative idea! Very interested in the subsecond coldstart claim, how does it achieve that?

Comment by fqiao 16 hours ago

@binsquare basically brute-force trimmed down unnecessary Linux kernel modules and tried to get the VM started with just the bare minimum. There is more room for improvement for sure. We will keep trying!

Comment by deivid 15 hours ago

With this approach I managed to get to sub-10ms start (to pid 1); if you can accept a few constraints, there's plenty of room!

Though my version was only tested on Linux hosts

Comment by binsquare 15 hours ago

would be interested to see how you do it, how can I connect with you - emotionally?

Comment by threecheese 13 hours ago

Start with booze; always works :)

Comment by harshdoesdev 16 hours ago

nice! for most local workloads, it is actually sufficient. so, do you ship a complete disk snapshot of the machines?

Comment by fqiao 16 hours ago

Yes. Files on the disks are kept across stop and restart. We also have a pack command to compress the machine into a single file so that it can be shipped and rehydrated elsewhere.

Comment by dimitry12 11 hours ago

https://github.com/earendil-works/gondolin is another project addressing a similar use-case.

Comment by cperciva 14 hours ago

See also SmolBSD -- similar idea, similar name, using NetBSD.

Comment by fqiao 14 hours ago

I came across SmolBSD before too. Cool project!
