Show HN: My Tizen multiplayer drawing game flopped, but then hit 100M drawings
Posted by lombarovic 19 hours ago
Hi HN,
I built the first version of Drawize back in late 2016 specifically for a Samsung Tizen OS app contest. I crunched and built the whole thing (including the real-time multiplayer engine) in under 4 weeks.
It didn’t win anything in the contest.
Since it was built with web tech anyway, I published it on the open web in early 2017 just to see what would happen. It took on a life of its own, and today, 8 years later, the database processed its 100,000,000th drawing.
On the busiest days it has seen 30k+ active users, and the 100M stored drawings currently take up ~3.16 TB.
The milestone moment: I was watching the live logs today, terrified that the 100Mth drawing would be NSFW. Luckily, the RNG gods smiled and it turned out to be a Red Balloon (you can see the 100Mth drawing here: https://www.drawize.com/blog/100-million-drawings-milestone).
Tech stack (boring but fast):
Backend: .NET + WebSockets for real-time sync (rough client-side sketch just below this list)
Frontend: hand-coded HTML/JS + jQuery (no React, no bundlers)
Data: PostgreSQL & MongoDB
Storage: Wasabi Cloud (moved there to save on S3 costs)
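To give a feel for the real-time sync part from the browser side, here is a rough sketch (the endpoint and message format are simplified placeholders, not the real protocol). The client batches pen points and pushes them over the socket while the player draws:

    // Rough sketch only; real message format, URL and renderer differ.
    var socket = new WebSocket('wss://game.example.com/ws');  // placeholder endpoint
    var pending = [];

    // Canvas mousemove/touchmove handler calls this while the player draws.
    function onDraw(x, y) {
      pending.push({ x: x, y: y });
    }

    // Flush a small batch ~20x per second instead of one message per pixel.
    setInterval(function () {
      if (pending.length === 0 || socket.readyState !== WebSocket.OPEN) return;
      socket.send(JSON.stringify({ type: 'stroke', points: pending }));
      pending = [];
    }, 50);

Everyone else in the lobby gets the same batch over their own socket and replays it onto their canvas.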
The hardest parts of scaling as a solo dev: real-time lobbies, reconnection edge cases, and moderation/content filtering. I use content classification models (trained in 2021) to filter out bad content, and the real-time multiplayer side is mostly highly optimized .NET code.
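To give an idea of the reconnection side, this is the general shape of that kind of loop (the names and endpoint are placeholders, not my actual code):

    // Sketch of the reconnect loop; the real version also re-syncs game state.
    // playerId / lobbyId stand in for whatever identifiers were kept from the original join.
    var retryDelay = 1000;  // start at 1s, back off up to 15s

    function connect(playerId, lobbyId) {
      var socket = new WebSocket('wss://game.example.com/ws');  // placeholder endpoint

      socket.onopen = function () {
        retryDelay = 1000;  // reset the backoff once we have a good connection
        // Ask the server to drop us back into the lobby we were in.
        socket.send(JSON.stringify({ type: 'rejoin', playerId: playerId, lobbyId: lobbyId }));
      };

      socket.onclose = function () {
        // Wi-Fi blips, locked phone screens and flaky mobile networks all end up here.
        setTimeout(function () { connect(playerId, lobbyId); }, retryDelay);
        retryDelay = Math.min(retryDelay * 2, 15000);
      };
    }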
Happy to answer questions about the “failed” Tizen origin, real-time multiplayer on the web, moderation, or how .NET handles the load.
Comments
Comment by sacredSatan 1 hour ago
Comment by lombarovic 27 minutes ago
Fixing it now, thanks for letting me know!
Comment by wildest-boar 4 hours ago
Comment by lombarovic 2 hours ago
You might be surprised — the game is actually deployed in just one region (US) on only two dedicated servers (Contabo).
Here is the breakdown of why it feels fast:
1. The Metal: I use one server for the Web App + Gameplay Backend (.NET), and a second server strictly for PostgreSQL and MongoDB. No virtualization overhead.
2. The Network: I use Cloudflare for static content, which handles the initial global load speed.
3. Aggressive Prefetching: I rely heavily on Service Workers. When you land on the home page, the 'Play' page and game assets are already being prefetched in the background. When you click play, it loads instantly from the local cache (rough sketch at the end of this comment).
4. Single WebSocket: Once connected, there is zero HTTP overhead. Every interaction — gameplay, chat, UI updates — travels through a single persistent WebSocket connection.
Keeping the architecture simple (monolith-ish) rather than distributed helps me keep the latency predictable and maintenance low.
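If it helps, here are rough sketches of points 3 and 4 (paths, cache names and message types are illustrative, not my actual code):

    // sw.js -- precache the play page and game assets at install time.
    var CACHE = 'static-v1';
    var PRECACHE = ['/play', '/js/game.js', '/css/game.css', '/img/sprites.png'];

    self.addEventListener('install', function (event) {
      event.waitUntil(
        caches.open(CACHE).then(function (cache) { return cache.addAll(PRECACHE); })
      );
    });

    // Cache first, network as a fallback, so the play page opens from local cache.
    self.addEventListener('fetch', function (event) {
      event.respondWith(
        caches.match(event.request).then(function (cached) {
          return cached || fetch(event.request);
        })
      );
    });

And the dispatcher on the client end of that single socket:

    // One socket, one dispatcher: gameplay, chat and UI updates all share it.
    socket.onmessage = function (event) {
      var msg = JSON.parse(event.data);
      switch (msg.type) {
        case 'stroke': renderStroke(msg); break;  // gameplay
        case 'chat':   appendChat(msg);   break;  // chat
        case 'round':  updateRound(msg);  break;  // round / UI state
      }
    };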
Comment by wildest-boar 1 hour ago
Comment by lombarovic 1 hour ago
With efficient .NET code, a single machine can handle this kind of load without breaking a sweat. I actually sleep better knowing there are fewer moving parts to fail!
Comment by barbegal 15 hours ago
Does it generate enough revenue to be self-sustaining?
Comment by lombarovic 15 hours ago
Yes, it is fully self-sustaining. In fact, for the last 5 years, it has been my main full-time source of income, running entirely as a bootstrapped project from Croatia.
The revenue comes primarily from ads, plus a smaller portion from Premium ad-free subscriptions. Since I focus heavily on keeping infrastructure costs low (optimized .NET code + moving storage from S3 to Wasabi), the margins are healthy enough to make it a viable full-time business.