Tiny Core Linux: a 23 MB Linux distro with graphical desktop
Posted by LorenDB 5 days ago
Comments
Comment by hiAndrewQuinn 5 days ago
Phenomenal for those low powered servers you just want to leave on and running some tiny batch of cronjobs [1] or something for months or years at a time without worrying too much about wear on the SD card itself rendering the whole installation moot.
This is actually how I have powered the backend data collection and processing for [2], as I wrote about in [3]. The end result is a static site built in Hugo but I was careful to pick parts I could safely leave to wheedle on their own for a long time.
[1]: https://til.andrew-quinn.me/posts/consider-the-cronslave/
[2]: https://hiandrewquinn.github.io/selkouutiset-archive/
[3]: https://til.andrew-quinn.me/posts/lessons-learned-from-2-yea...
Comment by 1vuio0pswjnm7 5 days ago
Before RPI existed, I always made filesystem images for USB sticks in NetBSD so that writes never touched "disk" ("diskless"). This allows me to remove the USB stick after boot, freeing up the slot for something else
BSD "install images" work this way
I have been using the RPi with a diskless NetBSD image since around 2012; there are no SD card writes, the userland is extracted into RAM
I can pull out the SD card after boot and use the slot for something else
If I want data storage, I connect an external drive
It's been wild to read endless online complaints from so-called "technical" RPi users for the last 13 years about SD card wear and tear
To me, it's another example of how it's possible to have a solution that is as old as the hills and have it be completely ignored in favor of a "modern" approach that is fatally-flawed
Comment by victorbuilds 5 days ago
A lot of the SD-card wear issues come from people running “normal PC workflows” on a storage medium that was never designed for that pattern.
Something I’ve seen help many newcomers is simply enabling an overlay filesystem or tmpfs-based writes. It’s basically the middle ground between a full RAM-boot distro (piCore, Alpine diskless, NetBSD) and a standard SD-based Raspberry Pi OS.
You still get the normal ecosystem and docs, but almost no writes hit the card unless you explicitly commit them.
For anyone stuck between “I want something simple” and “I don’t want my SD to die,” overlays are the easiest win.
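On Raspberry Pi OS, enabling that overlay is a couple of commands. A sketch, assuming a recent raspi-config (the non-interactive function names vary between versions, so verify against your image):

```shell
# Enable the overlay filesystem so writes go to a RAM overlay instead of
# the SD card (run as root; 0 = enable in raspi-config's nonint convention).
raspi-config nonint do_overlayfs 0
reboot
# To commit changes later: disable the overlay, reboot, make your changes,
# then re-enable it:
#   raspi-config nonint do_overlayfs 1
```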
Comment by SoftTalker 5 days ago
Comment by embedding-shape 5 days ago
NetBSD and Tiny Core Linux, even with all their benefits, are a harder experience to get into if you haven't already dipped your toes into Linux, and don't have the same wide community and boundless online resources.
Comment by 1vuio0pswjnm7 4 days ago
The point I'm making is that putting the rootfs on a memory filesystem, e.g., tmpfs, mfs, etc. avoids the problem with SD cards^1
This can be done with a variety of operating systems. IMO, the advantage of the RPi hardware is that it is supported by so many different operating systems
When I want to run additional, larger programs that are not in the rootfs I have embedded into the kernel, I either (a) run them from external storage or (b) copy them to the mfs/tmpfs
It depends on how much RAM I have available
1. There are probably other ways to avoid the problem, too
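A minimal sketch of option (b) on a running system, with example device names and paths:

```shell
# Create a RAM-backed area and copy an extra program into it from an
# external drive, then free the drive again (all names are examples).
mount -t tmpfs -o size=256m tmpfs /ram
mount /dev/sda1 /mnt             # external drive holding extra binaries
cp /mnt/usr/local/bin/myprog /ram/
umount /mnt                      # the port is free again
/ram/myprog
```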
Comment by Nextgrid 4 days ago
Comment by Firehawke 4 days ago
Comment by hiAndrewQuinn 5 days ago
But, NetBSD ISOs are much heavier than TCL ISOs, and so while I'm sure there's a way to get just what I want working in diskless mode, I'm not confident I will have any RAM to run what I actually want to run on top of it.
Comment by cess11 4 days ago
https://www.digitalreviews.net/reviews/pc/norhtec-xcore-geck...
I've noticed Puppy is still around but I have no idea whether it can still be comparable to Tiny Core.
Comment by jaypatelani 4 days ago
Comment by hiAndrewQuinn 1 day ago
Comment by marttt 5 days ago
As compared to TC, the "out of the box" NetBSD images contain many things I wouldn't need, so customizing it has been a recurring thought, but oh well. The documentation and careful modularity is, obviously, a huge bonus of NetBSD in that regard (even an end-user like me could do some interesting modifications of the kernel solely by reading the manual). TC seems much more ad-hoc, but I assume this, too, is intentional, by design.
Comment by 1vuio0pswjnm7 4 days ago
Around that time the NetBSD kernels with embedded rootfs filesystem I was making were around 17MB
Today, TCL is 23MB
The NetBSD kernels with embedded rootfs I'm using today are around 33MB
That size can be reduced of course
I don't monitor the boot process on RPi with serial console, I only connect after tinysshd is running, so I don't pay close attention to boot speed. It's fast enough
TCL appears to be aimed at users who prefer a binary distribution; it also provides a GUI by default
I prefer to compile from source and I only use text mode, hence NetBSD is more suitable for me than TCL
For someone who does not want to compile anything from source, it is possible to "customise" (replace) the rootfs of a NetBSD install image with another rootfs. It is not documented anywhere that I'm aware of but I have done it many times
I use a very minimal userland. I guarantee few if any HN readers would be satisfied with it. If I need additional programs I either (a) mount an external drive and run the programs from external storage, e.g., via chroot, or (b) copy them from an external drive into mfs or tmpfs
It depends on how much RAM I have
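Option (a), running programs from external storage via chroot, is roughly (device and program names are placeholders):

```shell
# Mount the external drive and run a program from it inside a chroot,
# so its libraries resolve against the drive's own userland.
mount /dev/sda1 /mnt
chroot /mnt /usr/local/bin/myprog
umount /mnt
```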
Comment by jaypatelani 4 days ago
Comment by marttt 4 days ago
Comment by ycombinatrix 5 days ago
Though I don't explicitly load the entire userspace into RAM, since this is a laptop and I don't foresee a need to remove the SSD after boot.
Comment by jimvdv 5 days ago
Comment by squarefoot 5 days ago
Comment by Lyngbakr 5 days ago
Comment by lproven 3 days ago
So, running it on a Pi 5 CM in an IO board, there's no way to tell the Pi what device to boot from.
Comment by squarefoot 4 days ago
Comment by jwrallie 5 days ago
Comment by blueflow 4 days ago
Comment by lukan 5 days ago
Yes, this is exactly what I want, except I need some simple node servers running, which is not so ultra-light. Would you happen to know if this all still works in RAM out of the box, or does it require extra work?
Comment by fsagx 5 days ago
You can run nodejs fine on a pi with "Raspberry Pi OS Lite". In the configs, look for "Overlay File System" and enable it on the boot partition and main partition. The pi will boot from the sd card and run entirely in ram.
Be sure to run something to clear your logs occasionally or reboot once in a while or you'll run out of RAM. Still, get a quality sd card and power supply. You can get years out of a setup like this.
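One way to do the log housekeeping, assuming systemd/journald is in use (Raspberry Pi OS Lite uses it by default):

```shell
# One-off: shrink the journal down to a fixed cap.
journalctl --vacuum-size=16M

# Persistent: keep the journal in RAM and capped, via /etc/systemd/journald.conf:
#   [Journal]
#   Storage=volatile
#   RuntimeMaxUse=16M
# then: systemctl restart systemd-journald
```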
Comment by hiAndrewQuinn 5 days ago
Comment by oso2k 4 days ago
Comment by ifh-hn 5 days ago
I also like SliTaz: http://slitaz.org/en, and Slax too: https://www.slax.org/
Oh and puppy Linux, which I could never get into but was good for live CDs: https://puppylinux-woof-ce.github.io/
And there's also Alpine too.
Comment by LorenDB 5 days ago
Comment by forinti 5 days ago
The most responsive one, unexpectedly, was Raspberry Pi OS.
Comment by lproven 3 days ago
I carefully put a fairly minimal Xfce setup on it instead of LXDE and RAM usage doubled. It's impressively hand crafted and pruned.
Sadly, though, it hasn't been updated since Debian 11.
Comment by t_mahmood 5 days ago
It will increase the size of the VM but the template would be smaller than a full blown OS
Aside from dev containers, what are the other options? Running IntelliJ directly on my laptop is not an option.
I use Nvim, SSHing into my computer to work, which is fine. But I really miss the full capability of IntelliJ.
Comment by Aurornis 5 days ago
In my experience, by the time you’re compiling and running code and installing dev dependencies on the remote machine, the size of the base OS isn’t a concern. I gained nothing from using smaller distros but lost a lot of time dealing with little issues and incompatibilities.
This won’t win me any hacker points, but now if I need a remote graphical Linux VM I go straight for the latest Ubuntu and call it day. Then I can get to work on my code and not chasing my tail with all of the little quirks that appear from using less popular distros.
The small distros have their place for specific use cases, especially automation, testing, or other things that need to scale. For one-offs where you’re already going to be installing a lot of other things and doing resource intensive work, it’s a safer bet to go with a popular full-size distro so you can focus on what matters.
Comment by dotancohen 5 days ago
I'm all for suggestions for a better base OS in small docker containers, mostly to run nginx, php, postgres, mysql, redis, and python.
Comment by throwaway2037 5 days ago
> Alpine uses musl instead of glibc for the C standard library. This has caused me all types of trouble in unexpected places.
I have no experience with alternative C libs. Can you share some example issues?
Comment by dotancohen 5 days ago
https://purplecarrot.co.uk/post/2021-09-04-does_alpine-resol...
Comment by LeFantome 4 days ago
Comment by dotancohen 4 days ago
Comment by lproven 3 days ago
Comment by lproven 3 days ago
No precompiled Linux stuff runs. No Chrome, no 3rd party Electron apps work unless specifically ported. For me, no Slack, no Panwriter, no Ferdium.
Flatpak works, sort of, with restrictions. Snap doesn't.
Comment by autotune 5 days ago
Comment by dotancohen 5 days ago
Comment by t_mahmood 5 days ago
Question: I use VirtualBox, but I feel it's kinda laggy sometimes. What do you use? Any suggestions on performance improvements?
Comment by dotancohen 4 days ago
Comment by t_mahmood 3 days ago
Comment by ornornor 5 days ago
Never really got what it’s for.
Comment by silasb 5 days ago
It'd be best with hardwired network though.
Comment by hdb2 5 days ago
thank you for this reminder! I had completely forgotten about SliTaz, looks like I need to check it out again!
Comment by samtheprogram 5 days ago
Comment by projektfu 5 days ago
Comment by sundarurfriend 5 days ago
In what way? Do you mean you didn't get the chance to use it much, or something about it you couldn't abide?
Comment by ifh-hn 5 days ago
Comment by dayeye2006 5 days ago
Comment by marttt 5 days ago
I used both the FLTK desktop (including my all-time favorite web browser, Dillo, which was fine for most sites up to about 2018 or so) and the text-only mode. TC repos are not bad at all, but building your own TC/squashfs packages will probably become second nature over time.
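For reference, building your own TC extension (.tcz) is essentially making a squashfs image of a staged directory tree; a sketch with placeholder names:

```shell
# Stage the files under the paths they should occupy at runtime.
mkdir -p /tmp/pkg/usr/local/bin
cp myprog /tmp/pkg/usr/local/bin/

# A .tcz is a squashfs image; TC conventionally uses 4K blocks.
mksquashfs /tmp/pkg myprog.tcz -b 4k -noappend

# The repo convention pairs each extension with an md5 file, plus an
# optional .dep file listing required extensions one per line.
md5sum myprog.tcz > myprog.tcz.md5.txt
```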
I can also confirm that a handful of lengthy, long-form radio programs (a somewhat "landmark" show) for my Tiny Country's public broadcasting are produced -- and, in some cases, even recorded -- on either a Dell Mini 9 or a Thinkpad T42 and Tiny Core Linux, using the (now obsolete?) Non DAW or Reaper via Wine. It was always fun to think about this: here I am, producing/recording audio for Public Broadcasting on a 13+ year old T42 or a 10 year old Dell Mini netbook bought for 20€ and 5€ (!) respectively, whereas other folks accomplish the exact same thing with a 2000€ MacBook Pro.
It's a nice distro for weirdos and fringe "because I can" people, I guess. Well thought out. Not very far from "a Linux that fits inside a single person's head". Full respect to the devs for their quiet consistency - no "revolutionary" updates or paradigm shifts, just keeping the system working, year after year. (FLTK in 2025? Why not? It does have its charm!) This looks to be quite similar to the maintenance philosophy of the BSDs. And, next to TC, even NetBSD feels "bloated" :) -- even though it would obviously be nice to have BSD Handbook level documentation for TC; then again, the scope/goal of the two projects is maybe too different, so no big deal. The Corebook [1] is still a good overview of the system -- no idea how up-to-date it is, though.
All in all, an interesting distro that may "grow on you".
Comment by nopakos 5 days ago
Comment by hamdingers 5 days ago
Booting a dedicated, tiny OS with no distractions helped me focus. Plus since the home directory was a FAT32 partition, I could access all my files on any machine without having to boot. A feature I used a lot when printing assignments at the library.
Comment by jwrallie 5 days ago
Comment by ifh-hn 5 days ago
Before encryption by default, getting files from Windows for family when they messed up their computers. Or changing the passwords.
Before browser profiles and containers, I used them in VMs for different things like banking, shopping, etc.
Down to your imagination really.
Not to mention just playing around with them too.
Comment by jbstack 5 days ago
Comment by jacquesm 5 days ago
Comment by ja27 5 days ago
Comment by jacquesm 5 days ago
Comment by trollbridge 5 days ago
Or 128K of RAM and a 400 KB disk, for that matter.
Comment by maccard 5 days ago
Comment by snek_case 5 days ago
Comment by perching_aix 5 days ago
The "high color" (16 bit) mode used 5:6:5 bits for the R, G, and B channels, so 16 bits per pixel.
> So 153,600 bytes for the frame buffer.
And so you're looking at 614.4 KB (600 KiB) instead.
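The arithmetic can be sketched as follows; the 153,600 figure quoted above would correspond to, e.g., 320x240 at 16 bpp:

```python
# Frame buffer size = width * height * bits-per-pixel / 8.
def framebuffer_bytes(width, height, bits_per_pixel):
    """Return the frame buffer size in bytes for a packed-pixel mode."""
    return width * height * bits_per_pixel // 8

print(framebuffer_bytes(640, 480, 16))  # 614400 bytes = 614.4 KB = 600 KiB
print(framebuffer_bytes(320, 240, 16))  # 153600 bytes
```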
Comment by snek_case 4 days ago
Comment by perching_aix 4 days ago
To be frank, I wasn't aware such a mode was a thing, but it makes sense.
Comment by mananaysiempre 4 days ago
Comment by Dwedit 5 days ago
Comment by beagle3 5 days ago
In 1985, and with 512K of RAM. It was very usable for work.
Comment by mrits 5 days ago
Comment by krige 5 days ago
Games used either 320- or 640-wide resolutions, 4 bit or the fake 6 bit mode known as Extra Half-Brite, which was basically 5 bit with the other 32 colors being the same at half brightness. The fabled 12-bit HAM mode was also used, even in some games, even for interactive content, but not too often.
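The half-brite trick is simple enough to sketch: the extra bitplane selects, for each base palette entry, a twin at half brightness (Amiga RGB components are 4-bit, 0-15):

```python
# Extra Half-Brite: double the palette by appending half-brightness twins.
def ehb_palette(base_colors):
    """Given (r, g, b) palette entries, append their half-bright twins."""
    half = [(r // 2, g // 2, b // 2) for (r, g, b) in base_colors]
    return base_colors + half

# White (15, 15, 15) gains the twin (7, 7, 7); red (15, 0, 0) gains (7, 0, 0).
print(ehb_palette([(15, 15, 15), (15, 0, 0)]))
```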
Comment by teamonkey 5 days ago
Comment by globalnode 5 days ago
Comment by bananaboy 5 days ago
Comment by oso2k 4 days ago
Comment by bananaboy 3 days ago
Btw I was a teenager when those Denthor trainers came out and I read them all, I loved them! They taught me a lot!
Comment by bobmcnamara 5 days ago
Comment by em3rgent0rdr 4 days ago
Comment by perching_aix 5 days ago
Comment by BobbyTables2 5 days ago
Comment by SoftTalker 5 days ago
Comment by echoangle 5 days ago
Comment by jerrythegerbil 5 days ago
For example, NVIDIA GPU drivers are typically around 800M-1.5G.
That math actually goes wildly in the opposite direction for an optimization argument.
Comment by jsheard 5 days ago
Comment by throwaway173738 5 days ago
Comment by Rohansi 5 days ago
They also pack in a lot of game-specific optimizations for whatever reason. Could likely be a lot smaller without those.
Comment by hinkley 5 days ago
Comment by maccard 5 days ago
Comment by znpy 5 days ago
Comment by ErroneousBosh 5 days ago
Comment by trollbridge 5 days ago
The EGA (1984) and VGA (1987) could conceivably be considered GPUs, although not Turing complete. The EGA had 64, 128, 192, or 256K and the VGA 256K.
The 8514/A (1987) was Turing complete although it had 512kB. The Image Adapter/A (1989) was far more powerful, pretty much the first modern GPU as we know them and came with 1MB expandable to 3MB.
Comment by ErroneousBosh 4 days ago
The PGC was kind of a GPU if you squint a bit. It didn't work the way a modern GPU does where you've got masses of individual compute cores working on the same problem, but it did have a processor roughly as fast as the host processor that you could offload simple drawing tasks to. It couldn't do 3D stuff like what we'd call a GPU today does, but it could do things like solid fills and lines.
In today's money the PGC cost about the same as an RTX PRO 6000, so no-one really had them.
Comment by Yeask 4 days ago
Comment by lproven 3 days ago
WTF? Tell me more!
I have one, but I have no matching screen so I never tried it... Maybe it's worth finding a converter.
Comment by sigwinch 5 days ago
Comment by AshamedCaptain 4 days ago
Comment by ohhellnawman 5 days ago
Comment by forinti 5 days ago
That said, OSs came with a lot less stuff then.
Comment by xyzzy3000 5 days ago
Comment by lproven 3 days ago
True. And it's still around. It's FOSS now, runs natively on a Raspberry Pi 1-400 and Zero, and has Wifi, IPv6, and a Webkit browser.
Comment by psychoslave 5 days ago
Comment by pastage 5 days ago
Sure we could go back... Maybe we should. But there's lots of stuff we take for granted today that was not available back then.
Comment by xyzzy3000 5 days ago
It's hinted at in this tutorial, but you'd have to go through the Programmer's Reference Manual for the full details: https://www.stevefryatt.org.uk/risc-os/wimp-prog/window-theo...
RISC OS 3.5 (1994) was still 2MB in size, supplied on ROM.
Comment by masfuerte 5 days ago
P.S. I should probably mention that there wasn't room in the ROM for the vector fonts; these needed to be loaded from some other medium.
Comment by bigiain 5 days ago
Comment by Suppafly 5 days ago
Comment by Scoundreller 5 days ago
No ssl, probably so you can access that site on the browser
Comment by taylodl 5 days ago
Comment by Perz1val 5 days ago
Comment by monocasa 5 days ago
Comment by IAmLiterallyAB 5 days ago
Comment by BobbyTables2 5 days ago
Windows 3.1 was only something like 16MB of storage.
Imagine the Cray supercomputer in those days being used to run a toaster or doorbell…
Comment by 1vuio0pswjnm7 5 days ago
I prefer to use additional RAM and disk for data not code
Comment by oso2k 5 days ago
Comment by beng-nl 5 days ago
Comment by bobmcnamara 5 days ago
Probably not due to DMA buffers. Maybe a headless machine.
But would be funny to see.
Comment by veqq 5 days ago
Comment by croes 5 days ago
Comment by trollbridge 5 days ago
If you were someone special, you got 1024x768.
Comment by Yeask 4 days ago
Comment by nilamo 5 days ago
Comment by embedding-shape 5 days ago
Or 32K of RAM and 64KB disk for that matter.
What's your point? That the industry and what's commonly available gets bigger?
Comment by shiftpgdn 5 days ago
Comment by jollyjerry 5 days ago
Comment by lproven 3 days ago
They did.
https://www.theregister.com/2024/02/14/damn_small_linux_retu...
Comment by tombert 5 days ago
It's 20 years later and I've been running Linux for most of that time, so I probably would have even more fun revisiting DSL and Tiny Core Linux.
Comment by gardnr 5 days ago
Comment by Someone 5 days ago
I don’t think that had the X Windows system. https://web.archive.org/web/19991128112050/http://www.qnx.co... and https://marc.info/?l=freebsd-chat&m=103030933111004 confirm that. It ran the Photon microGUI Windowing System (https://www.qnx.com/developers/docs/6.5.0SP1.update/com.qnx....)
Comment by Beijinger 5 days ago
Comment by ddalex 5 days ago
Comment by Joel_Mckay 5 days ago
Some businesses stick with markets they know, as non-retail customer revenue is less volatile. If you enter the consumer markets, there are always 30k irrational competitors (likely with 1000X the capital) that will go bankrupt trying to undercut the market.
It is a decision all CEO must make eventually. Best of luck =3
"The Rules for Rulers: How All Leaders Stay in Power"
Comment by api 5 days ago
Stuff that is better designed and implemented usually costs money and comes with more restrictive licenses. It’s written by serious professionals later in their careers working full time on the project, and these are people who need to earn a living. Their employers also have to win them in a competitive market for talent. So the result is not and cannot be free (as in beer).
But free stuff spreads faster. It’s low friction. People adopt it because of license concerns, cost, avoiding lock in, etc., and so it wins long term.
Yes I’m kinda dissing the whole free Unix thing here. Unix is actually a minimal lowest common denominator OS with a lot of serious warts that we barely even see anymore because it’s so ubiquitous. We’ve stopped even imagining anything else. There were whole directions in systems research that were abandoned, though aspects live on usually in languages and runtimes like Java, Go, WASM, and the CLR.
Also note that the inverse is not true. I'm not saying that paid is always better. What I'm saying is: the worse was free, the better was usually paid, but some crap was also paid. Very little of the better stuff was free.
Comment by rzerowan 5 days ago
Conversely, I remember Maya or Autodesk used to have a bounty program for whoever would turn in people using unlicensed/cracked versions of their product. Meanwhile Blender (from a commercial past) kept its free nature and has consistently grown in popularity and quality without any such overtures.
Of course nowadays with SaaS everything gets segmented into weird verticals, and revenue upsells are across the board, with the first hit usually also being free.
Comment by Joel_Mckay 5 days ago
They turned into legal-service-firms along the way, and stopped real software development/risk at some point in 2004.
These firms have been selling the same product for decades. Yet once they get their hooks into a business, few survive the incurred variable costs of the 3000lb mosquito. =3
Comment by Joel_Mckay 5 days ago
In *nix, most users had a rational self-interest to improve the platform. "All software is terrible, but some of it is useful." =3
Comment by RachelF 4 days ago
They were expensive too. You had to pay for each device driver you used.
Comment by knowitnone3 5 days ago
Comment by anyfoo 5 days ago
Comment by taylodl 5 days ago
Comment by jacquesm 5 days ago
Comment by M95D 4 days ago
Comment by knowitnone3 5 days ago
Comment by lproven 3 days ago
« QNX DEMO disk
Extending possibilities and adding undocumented features »
Comment by veganjay 5 days ago
Comment by bdbdbdb 5 days ago
Comment by Narishma 5 days ago
Comment by jacquesm 5 days ago
Comment by stOneskull 4 days ago
Comment by noufalibrahim 5 days ago
I don't know if there are any options for older machines other than stripped-down Linux distros.
Comment by dpflug 5 days ago
Comment by anthk 4 days ago
Comment by UncleSlacky 4 days ago
Comment by Romario77 5 days ago
Comment by slim 5 days ago
Its documentation is a free book: http://www.tinycorelinux.net/book.html
[1] https://wiki.tinycorelinux.net/doku.php?id=dcore:welcome
Comment by hypeatei 5 days ago
Comment by Y_Y 5 days ago
Comment by lysace 5 days ago
Download from at least one more location (like some AWS/GCP instance) and checksum.
Download from the Internet Archive and checksum:
https://web.archive.org/web/20250000000000*/http://www.tinyc...
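A minimal sketch of the checksum step (the expected hash below is a placeholder; in practice you'd paste the value obtained from the second location):

```python
import hashlib

def file_md5(path, chunk_size=1 << 20):
    """Compute a file's MD5 without loading it all into memory."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# expected = "<hash copied from an independent mirror>"  # placeholder
# if file_md5("TinyCore-current.iso") != expected:
#     raise SystemExit("hash mismatch -- do not trust this download")
```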
Comment by firesteelrain 5 days ago
Comment by hypeatei 5 days ago
EDIT: nevermind, I see that it has the md5 in a text file here: http://www.tinycorelinux.net/16.x/x86/release/
Comment by maccard 5 days ago
Comment by hypeatei 5 days ago
Comment by firesteelrain 5 days ago
https://distro.ibiblio.org/tinycorelinux/downloads.html
And all the files are here
https://distro.ibiblio.org/tinycorelinux/16.x/x86/release/
Over an HTTPS connection. I am not at a terminal to check the cert with OpenSSL.
I don’t see any way to check the hash OOB
Also this same thing came up a few years ago
https://www.linuxquestions.org/questions/linux-newbie-8/reli...
Comment by maccard 5 days ago
> this same thing came up a few years ago
Honestly, that makes this inexcusable. There are numerous SSL providers available for free, and if that’s antithetical to them, they can use a self signed certificate and provide an alternative method of verification (e.g. via mailing list). The fact they don’t take this seriously means there is 0 chance I would install it!
Honestly, this is a great use for a blockchain…
Comment by firesteelrain 5 days ago
Are any distros using block chain for this ?
I am used to using code signing with HSMs
Comment by maccard 5 days ago
> Are any distros using blockchain
I don’t think so, but it’s always struck me as a good idea - it’s actual decentralised verification of a value that can be confirmed by multiple people independently without trusting anyone other than the signing key is secure.
> I am used to code signing with HSMs
Me too, but that requires distributing the public key securely which… is exactly where we started this!
Comment by embedding-shape 5 days ago
Comment by uecker 5 days ago
Comment by maccard 5 days ago
Comment by uecker 5 days ago
Comment by maccard 5 days ago
Comment by firesteelrain 5 days ago
Comment by maccard 5 days ago
> for extra high security,
No, sending the hash on a mailing list and delivering downloads over https is the _bare minimum_ of security in this day and age.
Comment by firesteelrain 5 days ago
And all the files are here https://distro.ibiblio.org/tinycorelinux/16.x/x86/release/
I posted that above in this thread.
I will add that most places, forums, and sites don't deliver the hash OOB. Unless you mean like GPG, but that would have come from the same site. For example, if you download a Packer plugin from GitHub, the files and hash all come from the same site.
Comment by maccard 4 days ago
This thread started by talking about the site serving the download (and hash) over http. Github serves their content over https, so you're not going to be MITM'ed. There are other attack vectors, but if the delivery of the content you're downloading is compromised/MITM'ed, you've lost.
Comment by firesteelrain 4 days ago
Comment by throwaway984393 5 days ago
Comment by Grom_PE 5 days ago
Comment by ajot 3 days ago
Comment by zer0tonin 4 days ago
Tiny Core ran surprisingly well and I could actually use it to browse the web and use IRC.
Comment by devsda 5 days ago
Was a little tricky to install on disk and even on disk it behaved mostly like a live cd and file changes had to be committed to disk IIRC.
Hope they improved the experience now.
Comment by nine_k 5 days ago
Comment by snvzz 5 days ago
Comment by hexagonwin 4 days ago
Comment by snvzz 4 days ago
In weeks before, when the topic came up elsewhere, I had to use one of my tailscale exit nodes elsewhere.
It wouldn't work from Japan. Not from home, not from office, not from phone network either.
Comment by supportengineer 5 days ago
Comment by accrual 5 days ago
Comment by oso2k 4 days ago
https://en.wikipedia.org/wiki/Tiny_Core_Linux#System_require...
Comment by accrual 3 days ago
Comment by Simplita 5 days ago
Comment by girvo 5 days ago
Comment by haunter 5 days ago
Showcase video https://www.youtube.com/watch?v=8or3ehc5YDo
iso https://web.archive.org/web/20240901115514/https://pupngo.dk...
2.1 MB, 2.2.26 kernel
>The forth version of xwoaf-rebuild is containing a lot of applications contained in only two binaries: busybox and mcb_xawplus. You get xcalc, xcalendar, xfilemanager, xminesweep, chimera, xed, xsetroot, xcmd, xinit, menu, jwm, desklaunch, rxvt, xtet42, torsmo, djpeg, xban2, text2pdf, Xvesa, xsnap, xmessage, xvl, xtmix, pupslock, xautolock and minimp3 via mcb_xawplus. And you get ash, basename, bunzip2, busybox, bzcat, cat, chgrp, chmod, chown, chroot, clear, cp, cut, date, dd, df, dirname, dmesg, du, echo, env, extlinux, false, fdisk, fgrep, find, free, getty, grep, gunzip, gzip, halt, head, hostname, id, ifconfig, init, insmod, kill, killall, klogd, ln, loadkmap, logger, login, losetup, ls, lsmod, lzmacat, mesg, mkdir, mke2fs, mkfs.ext2, mkfs.ext3, mknod, mkswap, mount, mv, nslookup, openvt, passwd, ping, poweroff, pr, ps, pwd, readlink, reboot, reset, rm, rmdir, rmmod, route, sed, sh, sleep, sort, swapoff, swapon, sync, syslogd, tail, tar, test, top, touch, tr, true, tty, udhcpc, umount, uname, uncompress, unlzma, unzip, uptime, wc, which, whoami, yes, zcat via busybox. On top you get extensive help system, install scripts, mount scripts, configure scripts etc.
Comment by oso2k 5 days ago
Comment by jacquesm 5 days ago
Comment by oso2k 5 days ago
https://forum.tinycorelinux.net/index.php/topic,26713.0.html
I recommend asking on that forum. Folks are helpful.
Comment by jacquesm 5 days ago
Comment by anthk 5 days ago
All of the minilanguages exposed there will run on TC even with 32MB of RAM.
On TC, set IceWM as the default WM with no opaque moving/resizing, and get rid of that horrible dock.
Comment by bflesch 5 days ago
But can they please empower a user interface designer to simply improve the margins and paddings of their interface? With a bunch of small improvements it would look significantly better. Just fix the spacing between buttons and borders and other UI elements.
Comment by wild_egg 5 days ago
Any project that rejects those trends gets bonus points in my book.
Comment by linguae 5 days ago
In my opinion, I believe the Tiny Core Linux GUI could use some more refinement. It seems inspired by 90s interfaces, but when compared to the interfaces of the classic Mac OS, Windows 95, OS/2 Warp, and BeOS, there’s more work to be done regarding the fit-and-finish of the UI, judging by the screenshots.
To be fair, I assume this is a hobbyist open source project where the contributors spend time as they see fit. I don’t want to be too harsh. Fit-and-finish is challenging; not even Steve Jobs-era Apple with all of its resources got Aqua right the first time when it unveiled the Mac OS X Public Beta in 2000. Massive changes were made between the beta and Mac OS X 10.0, and Aqua kept getting refined with each successive version, with the most refined version, in my opinion, being Mac OS X 10.4 Tiger, nearly five years after the public beta.
Comment by oso2k 5 days ago
Comment by bflesch 5 days ago
I thought that would be immediately clear to the HN crowd but I might have overestimated your aesthetic senses.
Comment by delfinom 5 days ago
Too much information density is also disorienting, if not stressing. The biggest problem is finding that balance between multiple kinds of users and even individuals.
Comment by Perz1val 5 days ago
Comment by bflesch 5 days ago
I know that not everybody spent 10 years fiddling with CSS so I can understand why a project might have a skill gap with regards to aesthetics. I'm not trying to judge their overall competence, just wanted to say that there are so many quick wins in the design it hurts me a bit to see it. And due to nature of open source projects I was talking about "empowering" a designer to improve it because oftentimes you submit a PR for aesthetic improvements and then notice that the project leaders don't care about these things, which is sad.
Comment by ohhellnawman 5 days ago
Comment by crackernews1 5 days ago
Comment by grim_io 5 days ago
If you are trying to maximize for accessibility, that is.
Comment by bflesch 5 days ago
Comment by egormakarov 5 days ago
Comment by pbhjpbhj 5 days ago
I imagine the sign-off date of 2008, the lack of very simple to apply mobile css, and no https to secure the downloads (if it had it then it would probably be SSL).
This speaks to me of a project that's 'good enough', or abandoned, for/by those who made it. Left out to pasture as 'community dev submissions accepted'.
I've not bothered to look, but wouldn't surprise me if the UI is hardcoded in assembly and a complete ballache to try and change.
Comment by throwaway984393 5 days ago
Comment by hit8run 4 days ago
Comment by rcarmo 5 days ago
Comment by retube 5 days ago
Comment by roscas 5 days ago
Comment by mannycalavera42 5 days ago
Comment by lproven 3 days ago
I would much prefer its final desktop, from Xandros 4, to the Trinity (TDE) desktop fork of KDE 3.
Comment by alfiedotwtf 5 days ago
I remember booting Linux off a 1.44 MB floppy
Comment by jethro_tell 5 days ago
Comment by extraduder_ire 4 days ago
Comment by alfiedotwtf 5 days ago
Comment by vid 4 days ago
Comment by vid 2 days ago
Comment by theanonymousone 5 days ago
Comment by oso2k 5 days ago
Comment by thway15269037 5 days ago
Comment by sfarcolacul987 5 days ago
Comment by arschficknigger 5 days ago
Comment by deadbabe 5 days ago
Comment by lp0_on_fire 5 days ago
Comment by Y_Y 5 days ago
Comment by adrianN 5 days ago
Comment by SV_BubbleTime 5 days ago
Handmade parchment, or leather carvings if you don’t mind.
Comment by adrianN 5 days ago
Comment by wizzwizz4 5 days ago
Comment by ohhellnawman 5 days ago
Comment by jqpabc123 5 days ago