MinIO is now in maintenance-mode

Posted by hajtom 7 days ago


Comments

Comment by victormy 7 days ago

Big thanks to MinIO, RustFS, and Garage for their contributions. That said, MinIO closing the door on open source so abruptly definitely spooked the community. But honestly, fair play to them—open source projects eventually need a path to monetization.

I’ve evaluated both RustFS and Garage, and here’s the breakdown:

Release Cadence: Garage feels a bit slower, while RustFS is shipping updates almost weekly.

Licensing: Garage is on AGPLv3, but RustFS uses the Apache license (which is huge for enterprise adoption).

Stability: Garage currently has the edge in distributed environments.

With MinIO effectively bowing out of the OSS race, my money is on RustFS to take the lead.

Comment by ahepp 7 days ago

> open source projects eventually need a path to monetization

I guess I'm curious whether I'm understanding what you mean here, because it seems like there's a huge number of counterexamples: GNU coreutils, the Linux kernel, FreeBSD, NFS and iSCSI drivers for either of those kernels, cgroups in the Linux kernel.

If anything, it seems strange to expect to be able to monetize free-as-in-freedom software. GNU freedom number 0 is "The freedom to run the program as you wish, for any purpose". I don't see anything in there about "except for business purposes", or anything in there about "except for businesses I think can afford to pay me". It seems like a lot of these "open core" cloud companies just have a fundamental misunderstanding about what free software is.

Which isn't to say I have anything against people choosing to monetize their software. I couldn't afford to give all my work away for free, which is why I don't do that. However, I don't feel a lot of sympathy for people who surely use tons of actual libre software without paying for it, when someone uses their libre software without paying.

Comment by mikestorrent 7 days ago

I think, if anything, in this age of AI coding we should see a resurgence in true open-source projects where people are writing code how they feel like writing it and tossing it out into the world. The quality will be a mixed bag! and that's okay. No warranty expressed or implied. As the quality rises and the cost of AI coding drops - and it will, this phase of $500/mo for Cursor is not going to last - I think we'll see plenty more open source projects that embody the spirit you're talking about.

The trick here is that people may not want to be coding MinIO. It's like... just not that fun of a thing to work on, compared to something more visible, more elevator-pitchy, more sexy. You spend all your spare time donating your labour to a project that... serves files? I the lowly devops bow before you and thank you for your beautiful contribution, but I the person meeting you at a party wonder why you do this in particular with your spare time instead of, well, so many other things.

I've never understood it, but then, that's why I'm not a famous open-source dev, right?

Comment by Bombthecat 5 days ago

Yep, I've already published a few (I hope) useful plugins where I basically don't care what you do with them. Coded in a few days with AI and some testing.

I already have a few more ideas I want to code :)

But this might create the problem image models are facing: AI eating itself...

Comment by hobobaggins 7 days ago

you mean... like Linux? or gcc?

Comment by elzbardico 7 days ago

I don't think there's anyone still actively working on the Linux kernel without receiving a salary, and that's been true for the last two decades, more or less.

Comment by ahepp 7 days ago

Yeah, that's why I said maybe I'm misunderstanding OP. If that's what OP meant by "monetization" then sure, monetization is great.

Companies pay their employees to work on Linux because it's valuable to them. Intel wants their hardware well supported. Facebook wants their servers running fast. It's an ecosystem built around free-as-in-freedom software, where a lot of people get paid to make the software better, and everyone can use it for free-as-in-beer.

Compare that to the "open core" model where a company generally offers a limited gratis version of their product, but is really organized to funnel leads into their paid offering.

The latter is fine, but I don't really consider it some kind of charity or public service. It's just a company that's decided on a very radical marketing strategy.

Comment by pabs3 7 days ago

You would be incorrect: LWN tracks statistics about contributor employers for every Linux kernel release, and their latest post on that says that "(None)" (i.e. unpaid contributions) beat a number of large companies, including Red Hat by the lines-changed metric and SUSE by the changesets metric.

https://lwn.net/SubscriberLink/1046966/f957408bbdd4d388/

Comment by foota 7 days ago

Well yes, but the vast majority of changes (~95%, by either changesets or lines) seem to be from contributors supported by employers.

Comment by pabs3 7 days ago

Sure, but there is still "someone" contributing unpaid.

Comment by heartsavior 6 days ago

Definitely, individuals can start for their own reasons. It's questionable whether they can make contributions whose scope is a quarter of the work, including design, or even larger.

Comment by tomnipotent 6 days ago

Other than a few popular libraries, I'm unaware of any major open source project that isn't primarily supported by corporate employees working on it as part of their day job.

Comment by Blackthorn 4 days ago

What counts as a "major" open source project?

Comment by fragmede 6 days ago

Ghostty's obviously not a replicable model, but it would be cool if it were!

Comment by tonyhart7 7 days ago

I mean, let's be real here: if you're competent enough to contribute to the Linux kernel, then you're basically competent enough to get a job anywhere.

Comment by lima 6 days ago

Shipping updates almost weekly is the opposite of what I want for a complex, mission-critical distributed system. Building a production-ready S3 replacement requires careful, deliberate and rigorous engineering work (which is what Garage is doing[1]).

It's not clear if RustFS is even implementing a proper distributed consensus mechanism. Erasure Coding with quorum replication alone is not enough for partition tolerance. I can't find anything in their docs.

[1]: https://arxiv.org/pdf/2302.13798

Comment by snthpy 7 days ago

Thanks. I hadn't heard of RustFS. I've been meaning to migrate off my MinIO deployment.

I recently learned that Ceph also has an object store and have been playing around with microceph. Ceph also is more flexible than garage in terms of aggregating differently sized disks. Since it's also already integrated in Proxmox and has over a decade of enterprise deployments, that's my top contender at the moment. I'm just not sure about the level of S3 API compatibility.

Any opinions on Ceph vs RustFS?

Comment by lima 5 days ago

Ceph is quite expensive in terms of resource usage, but it is robust and battle-tested. RustFS is very new, very much a work in progress[1], and will probably eat your data.

If you're looking for something that won't eat your data in edge cases, Ceph (and perhaps Garage) are your only options.

[1]: https://github.com/rustfs/rustfs/issues/829

Comment by peterashford 7 days ago

"open source projects eventually need a path to monetization"

Why?

Comment by elzbardico 7 days ago

Human beings have this strange desire to be fed, have shelter and other such mundane stuff, all of those clearly less important than software in the big scheme of things, of course.

Comment by antman 7 days ago

Many open source projects are not core businesses but supporting layers of larger organisations getting free PRs. Others are pet projects that tried to do too many things, overextended themselves for little additional value, and failed any sort of sustainability logic. Others had a larger range of required features than the original dev was aware of.

Comment by drnick1 7 days ago

The beauty of open source is that there are all kinds of reasons for contributing to it, and all are valid. For some, it's just a hobby. For others, like Valve, it's a means of building their own platform. Hardware manufacturers like AMD (and increasingly Nvidia) contribute drivers to the kernel because they want to sell hardware.

Comment by victormy 7 days ago

I believe that, at the end of the day, open source enthusiasts still need to make a living.

Comment by _factor 6 days ago

God forbid a passion project stay just a passion project. You don't see this monetization perspective in the hobbyist 3D printing or airbrushing communities. This is a direct result of how much OSS is framed as a "time sink" instead of an enjoyable hobby. I don't like this narrative, and I don't think it's healthy.

Comment by cwyers 6 days ago

MinIO is absolutely not a passion project, it's a business.

Comment by boomskats 6 days ago

RustFS vs MinIO latest performance comparisons here: https://github.com/rustfs/rustfs/issues/73#issuecomment-3385...

Comment by raxxorraxor 6 days ago

> open source projects eventually need a path to monetization.

I don't think open source projects need a path to monetization in all cases; most don't have one. But if you make such a project your main income, you certainly need money.

If you then restrict the license, you are just developing commercial software, which has little to do with open source. Developing commercial software is completely fine, but it simply isn't open source.

There is also real open source software with a steady income; such projects are different from projects that turn into commercial software, and we should keep the terms distinct.

Comment by RealStickman_ 5 days ago

Last time I checked (about half a year ago), Garage didn't have a bunch of S3 features like object versioning and locking. Does RustFS have a list of the S3 features it supports?

Comment by antman 7 days ago

SeaweedFS with S3 API? Differentiates itself with claims of ease of use and small files optimization

Comment by lmc 7 days ago

Any idea who is behind RustFS?

Comment by gethly 7 days ago

There is https://github.com/seaweedfs/seaweedfs

I haven't used it, but it will likely be a good MinIO alternative for people who want to run a server and don't just use MinIO as an S3 client.

Comment by chrislusf 7 days ago

This is Chris, the creator of SeaweedFS. I am starting to work full time on SeaweedFS now. Just create issues on the SeaweedFS repo if you run into anything.

Recently SeaweedFS has been moving fast and has added a lot more features, such as:

* Server-Side Encryption: SSE-S3, SSE-KMS, SSE-C
* Object Versioning
* Object Lock & Retention
* IAM integration
* a lot of integration tests

Also, SeaweedFS's performance was the best in almost all categories in one user's test: https://www.repoflow.io/blog/benchmarking-self-hosted-s3-com... And since then, a recent architectural change has increased performance even more, with write latency reduced by 30%.

Comment by written-beyond 7 days ago

Congratulations on earning that opportunity!

Thank you for your work. I was in a position where I had to choose between MinIO and SeaweedFS, and though SeaweedFS was better in every way, the lack of an included dashboard/UI was a huge factor for me back then. I don't expect or even want you to make any roadmap changes, but I just wanted to let you know of a possible pain point.

Comment by chrislusf 6 days ago

Thanks! There is an admin UI already. AI coding makes this fairly easy.

Comment by written-beyond 6 days ago

I'm sorry, I probably missed it then. This was like 4 years ago, so I could be wrong.

Comment by lima 7 days ago

Is it stable now? Last time I checked, the amount of correctness bugs being fixed in the Git history wasn't very confidence-inspiring.

Comment by rednb 7 days ago

Since storage is a critical component, I closely watched and engaged with the project for about 2 years as I contemplated adding it to our project, but the project is still immature from a reliability perspective, in my opinion.

No test suite, plenty of regressions, and data-loss bugs on core code paths that should have been battle-tested after so many years. There are many moving parts, which is both its strength and its weakness, as anything can break, and does break. Even erasure coding/decoding has had problems, though a guy from Proton has contributed a lot of fixes in this area lately.

One of the big positives, in my opinion, is the maintainer. He is an extremely friendly and responsive gentleman. SeaweedFS is also the most lightweight storage system you can find; it is extremely easy to set up and can run on servers with very little hardware resources.

Many people are happy with it, but you'd better be ready to understand their file format to fix corruption issues by hand. As far as I am concerned, after watching all these bugs I realized that the idea of using SeaweedFS was causing me more anxiety than peace of mind. Since we didn't need to store billions of files yet, not even millions, we went with creating a file storage API in ASP.NET Core in an hour or two, hosted on a VPS, that we can replicate using rsync without problems. Since I made this decision, I have peace of mind and no longer think about my storage system. Simplicity is often better, and OSes have long been optimized to cache and serve files natively.

If you are not interested in contributing fixes and digging into the file format when a problem occurs, and if your data is important to you, then unless you operate at the billions-of-files scale where SeaweedFS shines, I'd suggest rolling your own boring storage system.
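The "boring storage" approach described above fits in a few dozen lines. Here is a hypothetical sketch in Python (the commenter's actual service was ASP.NET Core; the ROOT directory, the port, and the rsync destination are placeholders):

```python
# A deliberately boring file-storage service in the spirit described above:
# PUT /<key> writes bytes under a root directory, GET /<key> reads them back.
# Durability comes from the filesystem plus an rsync cron job, not from the
# service itself. Hypothetical sketch; ROOT and the port are placeholders.
import http.server
import pathlib

ROOT = pathlib.Path("store").resolve()  # placeholder data directory

class BoringStore(http.server.BaseHTTPRequestHandler):
    def _target(self):
        # Map the URL path to a file under ROOT, refusing path traversal.
        p = (ROOT / self.path.lstrip("/")).resolve()
        return p if p != ROOT and p.is_relative_to(ROOT) else None

    def do_PUT(self):
        p = self._target()
        if p is None:
            self.send_error(400)
            return
        length = int(self.headers.get("Content-Length", 0))
        p.parent.mkdir(parents=True, exist_ok=True)
        p.write_bytes(self.rfile.read(length))
        self.send_response(201)
        self.send_header("Content-Length", "0")
        self.end_headers()

    def do_GET(self):
        p = self._target()
        if p is None or not p.is_file():
            self.send_error(404)
            return
        body = p.read_bytes()
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

def serve(port=8080):
    # Blocking; replicate ROOT offsite with e.g. `rsync -az store/ backup:store/`
    http.server.ThreadingHTTPServer(("127.0.0.1", port), BoringStore).serve_forever()
```

The whole "storage system" is then the OS page cache, a directory tree, and a cron entry running rsync, which is exactly the simplicity argument being made.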

Comment by yahooguntu 7 days ago

We're in the process of moving to it, and it does seem to have a lot of small bugfixes flying around, but the maintainer is EXTREMELY responsive. I think we'll just end up doing a bit of testing before upgrading to newer versions.

For our use case (3 nodes, 61TB of NVMe) it seems like the best option out of what I looked at (GarageFS, JuiceFS, Ceph). If we had 5+ nodes I'd probably have gone with Ceph though.

Comment by nodesocket 7 days ago

I'm looking at deploying SeaweedFS, but the problem is cloud block storage costs. I need 3-4TB: Vultr costs $62.50/mo for 2.5TB, DigitalOcean $300/mo for 3TB, AWS using legacy magnetic EBS storage $150/mo, and GCP standard persistent disk $120/mo.

Any alternatives besides racking own servers?

*EDIT* Did a little ChatGPT and it recommended tiny t4g.micro then use EBS of type cold HDD (sc1). Not gonna be fast, but for offsite backup will probably do the trick.
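For comparison, here is the per-TB math on the figures quoted above. The sc1 line uses AWS's roughly $0.015/GB-month list price as an assumption; check current pricing before relying on it:

```python
# Rough $/TB/month comparison of the block-storage options quoted above.
# Prices are the commenter's quoted figures; sc1 assumes ~$0.015/GB-month
# list price (an assumption -- verify against current AWS pricing).
quotes = {
    "Vultr block storage":   (62.50, 2.5),   # ($/mo, TB)
    "DigitalOcean volumes":  (300.00, 3.0),
    "AWS EBS magnetic":      (150.00, 3.0),
    "GCP pd-standard":       (120.00, 3.0),
    "AWS EBS sc1 (assumed)": (0.015 * 3000, 3.0),  # ~$45/mo for 3 TB
}

# Print cheapest-first so the sc1 recommendation is easy to sanity-check.
for name, (per_month, tb) in sorted(quotes.items(),
                                    key=lambda kv: kv[1][0] / kv[1][1]):
    print(f"{name:24s} ${per_month / tb:7.2f}/TB/mo")
```

At these list prices, sc1 comes out around $15/TB/mo, well under the other block-storage options, which is why the slow-but-cheap backup plan pencils out.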

Comment by Bombthecat 5 days ago

Hetzner VM with mounted storage box.

https://www.hetzner.com/storage/storage-box/

It's not as fast as local storage of course, but it's cheap!

Comment by xyzzy123 7 days ago

I'm confused why you would want to turn an expensive thing (cloud block storage) into a cheaper thing (cloud object storage) with worse durability in a way that is more effort to run?

I'm not saying it's wrong since I don't know what it's for, I'm just wondering what the use-case could be.

Comment by nodesocket 7 days ago

I've quickly come to this conclusion. Essentially looking for offsite backup of my NAS and currently paying around $15-$20/mo to Backblaze. I thought I might be able to roll my own object store for cheaper but that was idiotic. :-)

Comment by xyzzy123 7 days ago

Totally fair. There are some situations where you can "undercut" cloud native object storage on a per TB basis (e.g. you have a big dedi at Hetzner with 50TB or 100TB of mirrored disk) but you pay a cost in operational overhead and durability vs managed object store. It's really hard to make the economics work at $20 price point, if you get up to a few $100 or more then there are some situations where it can make sense.

For backup to a dedi you don't really need to bother running the object store though.

Comment by huntaub 7 days ago

Shot you an email about how we can potentially help you with this.

Comment by hobofan 7 days ago

SeaweedFS has been our choice as a replacement for both local development and usage in our semi-ephemeral testing k8s cluster (in both cases for its S3 interface). The switch went very smoothly.

I can't really say anything about advanced features or operational stability though.

Comment by lionkor 6 days ago

Sadly there's nothing in the license of SeaweedFS that would stop the maintainer from pulling a MinIO -- and this time without breaking the terms (or at least the spirit) of the project's license.

Not an issue at all until they do.

Comment by spicymaki 7 days ago

Stallman was right. When will the developer community learn not to contribute to these projects with awful CLAs. The rug has been pulled.

Comment by pabs3 7 days ago

MinIO doesn't seem to have had a CLA though?

Comment by EgoIncarnate 7 days ago

MinIO had a de facto CLA. MinIO required contributors to license their code to the project maintainers (only) under Apache 2. Not as bad as copyright assignment, but still asymmetric (they can relicense for commercial use, but you only get AGPL). https://github.com/minio/minio/blob/master/.github/PULL_REQU...

Comment by woooooo 7 days ago

Isn't that standard protective boilerplate so that they can't get rugpulled themselves on a contribution two years later? I thought the ASF had something similar.

Comment by EgoIncarnate 7 days ago

Requiring AGPL on the contribution would also prevent a rugpull. MinIO went beyond that.

The wording gives an Apache license only to MinIO, not to people who use it. So MinIO can relicense the contributor code under a commercially viable license, but no one else can. Everyone else only has access to the contribution under the AGPL, as part of the whole project.

Comment by woooooo 7 days ago

Ah, I didn't realize there were 2 different licenses at play. Yeah, that's a little sus.

Comment by cuu508 7 days ago

This wording was added in the template in August 2023. What's the licensing situation for community contributions before then?

Comment by EgoIncarnate 7 days ago

Presumably they've either gotten explicit permission after the fact, rewritten it in the commercial product, or the contribution was too minor to be a concern. I don't think they could have put in the amount of thought needed to ensure they benefit from contributions in a way no one else can, and then also be unaware of license issues with any possible AGPL-only contributions.

Comment by dzogchen 3 days ago

Where does Stallman say anything about CLAs?

Comment by creatonez 7 days ago

Except... the FSF is actually on the extreme opposite end of this issue. They do formal copyright assignment from the GNU contributors to the FSF. This way, they have a centralized final say on enforcement that is resistant to copyleft trolls, but it ultimately allows the theoretical possibility of a rugpull.

Comment by reedciccio 6 days ago

The FSF can't pull the rug because of its bylaws

Comment by cantagi 7 days ago

They have been removing features from the open source version for a while.

The closest alternative seems to be RustFS. Has anyone tried it? I was waiting until they support site replication before switching.

Comment by bityard 7 days ago

Garage is a popular alternative to Minio. https://garagehq.deuxfleurs.fr

I hadn't heard of RustFS and it looks interesting, although I nearly clicked away based on the sheer volume of marketing wank on their main page. The GitHub repo is here: https://github.com/rustfs/rustfs

Comment by adamcharnock 7 days ago

We’ve done some fairly extensive testing internally recently and found that Garage is somewhat easier to deploy, but not as performant at high speeds. IIRC we could push about 5 gigabits of (not small) GET requests out of it, but something blocked it from reaching the 20-25 gigabits (on a 25G NIC) that MinIO could reach (MinIO also managed 50k STAT requests/s).

I don’t begrudge it that. I get the impression that Garage isn’t necessarily focussed on this kind of use case.

Comment by dalenw 7 days ago

I use Garage at home in a single-node setup. It's very easy and fast, and I'm happy with it. You're missing out on a UI for it, but Mountain Duck / Cyberduck solves that problem for me.
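For anyone curious what that single-node setup involves, Garage's quickstart config is roughly the following (a sketch based on the quickstart docs; key names vary between versions, e.g. `replication_mode` became `replication_factor` in Garage v1.0, and paths and the secret are placeholders, so verify against the current docs):

```toml
# /etc/garage.toml (sketch; check the docs for your Garage version)
metadata_dir = "/var/lib/garage/meta"
data_dir     = "/var/lib/garage/data"
db_engine    = "lmdb"

replication_mode = "none"   # single node; use "3" for a real cluster

rpc_bind_addr   = "[::]:3901"
rpc_public_addr = "127.0.0.1:3901"
rpc_secret      = "<output of `openssl rand -hex 32`>"   # placeholder

[s3_api]
s3_region     = "garage"
api_bind_addr = "[::]:3900"
root_domain   = ".s3.garage.localhost"
```

After starting the daemon you still assign the node a layout and create keys/buckets with the `garage` CLI; the config alone only brings up the S3 endpoint.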

Comment by redrove 7 days ago

I’ve been using this https://github.com/khairul169/garage-webui as a UI for Garage. It’s been solid.

After years of using Garage for S3 for the homelab I’d never pick anything else. Absolutely rock solid, no problem whatsoever. There isn’t ONE other piece of software I can say that about, not ONE.

Major kudos to the guys at deuxfleurs. Merci beaucoup!

Comment by eproxus 7 days ago

Yeah, that page is horrendous and looks super sketchy. It looks like a very professional phishing attempt to get unsuspecting developers to download malware.

They have a lot of obviously fake quotes from non-existent people, with job titles that don't even mention what company they're at. The photos don't match the quoted names' genders and even include pictures of kids.

Feels like the whole page is AI generated.

Comment by runiq 7 days ago

They have a CLA that assigns copyright to them: https://github.com/rustfs/rustfs/blob/5b0a3a07645364d998e3f5...

So, arguably worse than MinIO.

Comment by everfrustrated 7 days ago

The _only_ reason to require a CLA is because you expect to change the license in the future. RustFS has rug-pull written all over it.

Comment by jen20 2 days ago

Obviously this is not the only reason: even the Free Software Foundation requires IP assignment via a CLA.

Whether you can or will sign one is a different matter (I will not).

Comment by regularfry 7 days ago

Or to offer it under a commercial licence in parallel.

Comment by Jon_Lowtek 7 days ago

While that is the most common use case for CLAs, it is normally done by contributors granting a very permissive, but not exclusive, license to a legal entity like a company or foundation, in addition to the public license granted to everyone.

This is not that. This is not even a license. They want a full transfer of intellectual property ownership. Sure that enables them to use it in a commercial product, but it also enables them to sue if contributors contribute similarly to other projects. Obviously that would create a shit storm, and there is an exception with the public license, but riddle me this: can you legally make similar contributions to multiple projects that have this type of CLA?

Let us take a step back and instead look where such terms are more common: employment contracts.

Comment by runiq 7 days ago

That doesn't require full copyright assignment, though, right?

Comment by stormking 7 days ago

How would you run a project like this? People come and go. People do a one-time contribution and then you never hear from them again. People work on a project for years and then just go silent. Honestly, credit where credit is due, but how is a project like this supposed to manage this?

Comment by PunchyHamster 7 days ago

You can have CLA without assigning copyright to the project.

You don't need assignment to the project if you are not planning to change project's license.

You do need assignment to the project if you need to ever rugpull the community and close the code

Comment by speedgoose 7 days ago

You could pick a license and not plan to relicense later. Like Linux.

Comment by runiq 7 days ago

What do you mean by 'manage'? In your mind, what are you planning to do in the future that requires my full copyright as the owner of a change?

Comment by victormy 7 days ago

Without a valid CLA and a strong core team, you often end up with fragmentation or legal deadlock. Even the ASF isn't a silver bullet—projects without strong leadership die there all the time. The CLA exists to prevent that friction.

Comment by runiq 7 days ago

Then it's not the CLA that ensures project survivability. It's the strong core team you mentioned.

Comment by EgoIncarnate 7 days ago

MinIO had a de facto CLA. MinIO required contributors to license their code to the project maintainers (only) under Apache 2. Not as bad as copyright assignment, but still asymmetric (they can relicense for commercial use, but you only get AGPL). https://github.com/minio/minio/blob/master/.github/PULL_REQU...

Comment by yencabulator 5 days ago

That's so weird. Your contribution is a derived work based on AGPL code, so it must be AGPL...

The number of weird, incompetent things the MinIO people have done is surprisingly high.

Comment by victormy 7 days ago

Speaking as an open-source enthusiast, I’m actually really digging RustFS. Honestly, anything that can replace or compete with MinIO is a win for the users. Their marketing vibe feels pretty American, actually—they aren't afraid to be loud and proud, haha. You gotta give it to them though, they’ve got guts, and their timing is spot on.

Comment by cromka 7 days ago

I saw an article here not long ago where someone explained they were hosting their Kopia or Nextcloud over Garage, but I can't find it anymore.

This was going to be my next project, as I am currently storing my Kopia/Ente on MinIO in a non-distributed way. The MinIO project going to shi*s is a good reason to take on this project sooner rather than later.

Comment by nikeee 7 days ago

I maintain an S3 client that has a test matrix for the commonly used S3 implementations. RustFS regularly breaks it. Last time it did I removed it from the matrix because deleteObject suddenly didn't delete the object any more. It is extremely unstable in its current form. The website states that it is not in a production-ready state, which I can confirm.
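For context, the regression described (deleteObject no longer deleting) breaks the most basic S3 round-trip invariant: after a successful DeleteObject, a read of the same key must fail with a 404. A toy in-memory model of that invariant (plain Python for illustration; this is not the commenter's actual test matrix, which presumably runs against live S3 endpoints):

```python
# Toy in-memory model of the S3 invariant the comment describes: once
# delete_object succeeds, get_object on the same key must fail (S3: 404).
# Illustration only, not the commenter's real test suite.
class FakeBucket:
    def __init__(self):
        self._objects = {}

    def put_object(self, key, body):
        self._objects[key] = bytes(body)

    def get_object(self, key):
        return self._objects[key]        # KeyError stands in for a 404

    def delete_object(self, key):
        self._objects.pop(key, None)     # S3 deletes are idempotent

def delete_roundtrip_ok(bucket):
    """Put, delete, then verify the key is really gone."""
    bucket.put_object("probe.txt", b"x")
    bucket.delete_object("probe.txt")
    try:
        bucket.get_object("probe.txt")
        return False                     # still readable: the bug described
    except KeyError:
        return True

print(delete_roundtrip_ok(FakeBucket()))   # True for a conforming store
```

A matrix test runs this same round trip against each real implementation's endpoint; an implementation where the final read still succeeds fails the matrix, which matches the behavior described above.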

I'd take a look at garage (didn't try seaweed yet).

Comment by edude03 7 days ago

> I maintain an S3 client that has a test matrix for the commonly used S3 implementations.

Is it open to the public? I'd like to check it out

Comment by positisop 7 days ago

If it is not an Apache/CNCF/Linux Foundation project, it can be a rug pull aimed at using open source only to get people in the door. They were never open for commits, and now they have abandoned open source altogether.

Comment by victormy 7 days ago

The Good: Single-node is stable, and the team moves fast—most of my reported bugs get patched within a couple of weeks. The Bad: Distributed mode needs work. Bucket replication and lifecycle policies are still WIP (as noted in their roadmap) and not usable yet.

It's promising, but definitely check the roadmap before deploying at scale.

Comment by rbartelme 7 days ago

Might be coming soon based on this: https://docs.rustfs.com/features/replication/

Comment by maxloh 7 days ago

Although promising, RustFS is a Chinese product. This would be a non-starter for many.

Comment by jasonvorhe 7 days ago

Because they aren't thinking about all the chinese wetware they'd be writing down that decision with.

Comment by PunchyHamster 7 days ago

From what I've seen, it's still a very fresh project, to the point that running an out-of-date MinIO version will most likely be less problematic than the latest RustFS.

Comment by pankajdoharey 7 days ago

Sad to see these same people were behind GlusterFS.

Comment by mbreese 7 days ago

Well, maybe they are using that experience to build something better this time around? One can hope...

Comment by pankajdoharey 6 days ago

Sure, but trying to close-source what has been open source for a decade, or trying to reduce features, is very strange. I thought those people had higher standards.

Comment by js4ever 7 days ago

I recently made an open source alternative to MinIO Server & MinIO UI, also in Rust:

https://github.com/vibecoder-host/ironbucket/

https://github.com/vibecoder-host/ironbucket-ui

Comment by syabro 7 days ago

Probably just me but I would stay away from anything saying vibecoder in the repo

Comment by uroni 7 days ago

I've been working on https://github.com/uroni/hs5 as a replacement with similar goals to early minio.

The core is stable at this point, but the user/policy management and the web interface are still in the works.

Comment by giancarlostoro 7 days ago

Looks like you cleanly point out their violation of the AGPL. If I were a lawyer with nothing better to do, I'd definitely be suing the MinIO group; there's no way they can cleanly remove the AGPL code outsiders contributed.

Comment by mbreese 7 days ago

I don't think there would be an issue with removing AGPL contributed code. You can't force someone to distribute something they don't want to. IANAL, but I believe that what (all?) copyright in software is most concerned with is the active distribution of code -- not the removal of code.

That said, if there was contributed AGPL code, they couldn't change the license on that part of the code w/o a CLA. AGPL also doesn't necessarily mean you have to make the code publicly available, just available to those that you give the program to (I'm assuming AGPL is like the GPL in this regard).

So, what I'd be curious about is: (1) is there any contributed AGPL code in the current version? (2) what license is granted to customers of the enterprise version?

MinIO can use whatever license they want for their own code. But if there was contributed code without a CLA, then I'm not sure how a commercial/enterprise license would play with contributed AGPL code. It would be an interesting question to find out.

Comment by kragen 7 days ago

> AGPL also doesn't necessarily mean you have to make the code publicly available, just available to those that you give the program to (I'm assuming AGPL is like the GPL in this regard).

This is the crucial difference between the AGPL and the GPL: the AGPL requires you to make the code available to users for whom you run the code, as well as users you give the program to.

Comment by mbreese 7 days ago

But, for MinIO, the users aren't the public... the users are their enterprise customers (now). So, to fulfill the AGPL, they'd have to give the code to their users, but that doesn't necessarily mean to the public at large (via GitHub).

But what I don't know is: is there any other AGPL code that MinIO doesn't own, but that was otherwise contributed to MinIO? Because, presumably, they aren't actually giving their customers the MinIO program with an AGPL license; rather, they have whatever their enterprise license agreement is. If that's the case, and there is AGPL code that's not owned by MinIO, I can foresee problems in the future.

Comment by kragen 7 days ago

I agree with all of that.

Comment by giancarlostoro 7 days ago

That's definitely not how it's written or interpreted. Microsoft had to release code because they touched GPL code some years back; I think it was for Hyper-V. We're talking about a company with many lawyers at the ready not being able to skirt the GPL in any way, like undoing the code.

Additionally, in order to CHANGE the license, if others contributed code under that license, you would need their permission, on top of the fact, you cannot retroactively revoke the license for previous versions.

Comment by mbreese 7 days ago

What I'm really curious about is if their most recent enterprise versions/code must be released under AGPL. And if so, can they restrict customers from distributing AGPL'd code through an enterprise contract?

I can't see how this is a defensible position for Minio, but I'm not sure they really care that much at this point.

Comment by giancarlostoro 6 days ago

That would be a violation of the AGPL.

Comment by bityard 7 days ago

I don't see a contributor licensing agreement (CLA), so you may be right.

(I personally choose not to contribute to projects with CLAs, I don't want my contributions to become closed-source in the future.)

Comment by giancarlostoro 7 days ago

It's worse than I thought:

https://blog.min.io/weka-violates-minios-open-source-license...

They think they can revoke someone's AGPL license. That's not at all how that license works!

Comment by kstrauser 7 days ago

I think that's exactly how that license works. Basically, the license is the only thing that grants you rights to redistribute the licensed work. Copyright law otherwise forbids it. And the license itself only grants you the right to redistribute the work as long as you comply with its terms. If you violate them, the license no longer applies, and you no longer have any legal right to distribute the work or any derived works.

I have zero knowledge about the squabble between MinIO and Weka. I don't know, and don't care, if either of them is in the right. But if Weka isn't complying with the terms of the AGPL, then MinIO has the legal right to tell them they can no longer distribute MinIO's licensed work at all, because nothing else grants them that privilege.

If that weren't true, there'd be no teeth to the (A)GPL whatsoever.

Comment by asmor 7 days ago

MinIO the corporation is not the sole licensor of MinIO the source code. They could sue and probably force compliance, but they can't just revoke the license like it is an overly restrictive commercial EULA.

Comment by kstrauser 7 days ago

They absolutely can revoke the license on all their own code, or any code signed over to them with a CLA. But really, they don’t have to revoke anything. The license does that automatically: you’re only allowed to redistribute GPL/AGPL licensed software as long as you comply with the terms. If you stop complying, the license ceases granting you permission automatically.

Comment by kragen 7 days ago

Yes, it is. Although https://www.gnu.org/licenses/agpl-3.0.html says

> All rights granted under this License are granted for the term of copyright on the Program, and are irrevocable provided the stated conditions are met.

it also says

> You may not propagate or modify a covered work except as expressly provided under this License. Any attempt otherwise to propagate or modify it is void, and will automatically terminate your rights under this License (including any patent licenses granted under the third paragraph of section 11).

> However, if you cease all violation of this License, then your license from a particular copyright holder is reinstated (a) provisionally, unless and until the copyright holder explicitly and finally terminates your license, and (b) permanently, if the copyright holder fails to notify you of the violation by some reasonable means prior to 60 days after the cessation.

> Moreover, your license from a particular copyright holder is reinstated permanently if the copyright holder notifies you of the violation by some reasonable means, this is the first time you have received notice of violation of this License (for any work) from that copyright holder, and you cure the violation prior to 30 days after your receipt of the notice.

This is in common with the GPLv3. It is much longer than the corresponding terms of the GPLv2 to remedy a sort of fragility in the GPLv2 which says your license terminates permanently if you ever violate the GPL, even temporarily and by accident.

I have no knowledge of whether Weka did or didn't violate the license, but if they did violate it and refused to fix it, MinIO's revocation of their license is completely in accordance with the terms of the license as written. I don't think a GPL termination case has yet been litigated.

Comment by roblabla 7 days ago

> I don't think a GPL violation case has yet been litigated.

It has, though it has mainly been under the "breach of contract" approach and not under "copyright infringement" approach. See https://en.wikipedia.org/wiki/Open_source_license_litigation

Comment by kragen 7 days ago

Of course you're correct. I meant to say no GPL termination case, and I've corrected my comment to say that. By that I mean cases where the defendant had cured their breach of the GPL but continued exercising the rights the GPL would have given them but for the termination clause.

Comment by EgoIncarnate 7 days ago

MinIO had a de facto CLA. MinIO required contributors to license their code to the project maintainers (only) under Apache 2. Not as bad as copyright assignment, but still asymmetric (they can relicense for commercial use, but you only get AGPL). https://github.com/minio/minio/blob/master/.github/PULL_REQU...

Comment by uroni 7 days ago

I'm not a contributor to Minio. This is its own separate thing.

I do have a separate AGPL project (see my GitHub) where I hold nearly all of the copyright. At some point I looked into how one would enforce the license in the US, and it looked pretty bleak -- it's a civil suit where you have to show damages, etc. But IANAL.

I did not like the FUD they were spreading about AGPL at the time since it is a good license for end-user applications.

Comment by giancarlostoro 7 days ago

Oh, I didn't mean to imply yours was; yours is C++, theirs is Go. The AGPL is fine -- not a license for me, but it's fine. I'm more of an MIT license kind of guy. If you're going to do the AGPL thing and then try to secure funding, make sure you own the whole thing first.

Comment by sph 7 days ago

Good time to post a Show HN for your project then

Comment by bityard 7 days ago

Interesting! I like the relative simplicity and durability guarantees. I can see using this for dev and proof of concept. Or in situations where HA/RAID are handled lower in the stack.

What is the performance like for reads, writes, and deletes?

And just to play devil's advocate: What would you say to someone who argues that you've essentially reimplemented a filesystem?

Comment by uroni 7 days ago

It uses LMDB, so if the object mapping fits in memory, reads should be pretty close to optimal, while using the built-in Linux page cache rather than a separate one (important for testing use cases). Writes/deletes have a bit of write amplification due to the copy-on-write B-tree. I've implemented a separate, optional WAL for this, and also a mode where writes/deletes can be bundled in a transaction, but in practice I think the performance difference shouldn't matter.

W.r.t. the filesystem question: yes, I'm aware of this. I initially used minio, and also implemented the use case directly on XFS, and only hit problems with it at larger scales (that still fit on one machine). Ceph went in a similar direction with BlueStore (reimplementing the filesystem, but with RocksDB).

Comment by MrZander 7 days ago

I wish I knew about this last week. I spent way too long trying out MinIO alternatives before getting SeaweedFS to work, but it is overkill for my purposes.

Looks like a great alternative.

Comment by liviux 7 days ago

A fork under the Linux Foundation is incoming. Minio will revert in 1-2 years, but too late: the community will have moved on and never return, reputation lost forever.

Comment by orphea 7 days ago

  > you may be violating AGPLv3 if you are using MinIO to build commercial products or using it for commercial purposes
Yeah, this is bullshit. I wish the guy took his own advice and spoke to a lawyer :)

Comment by speedgoose 7 days ago

Oh no, I used MinIO once or twice for some unlicensed software.

Should I contact a MinIO salesman to purchase an enterprise license ASAP or is it fine if I license my kids and advent of code solutions under the AGPLv3 license ?

Comment by ahepp 7 days ago

Wait, what's the consensus on this? Are they saying that using object storage over a standard network API which they didn't even create, makes your application a derivative work of the object store?

Or just that the users would need to make minio sources, including modifications, freely available?

I guess that's kind of the big question inherent to the AGPL?

Comment by tetha 7 days ago

From my understanding, you would not be allowed to sell an "S3-compatible storage" service based on MinIO or another AGPL-licensed S3-compatible storage solution, especially if you modify the source code of MinIO in any way and then serve that to your customers.

If you use MinIO or another AGPL-licensed service internally to support your own product, without a customer ever touching its API, it should be fine.

Comment by lukaslalinsky 6 days ago

What in the AGPL prevents this? The AGPL only forces you to open source your modified version of MinIO/whatever. The GPL forces you to open source only if you actually distribute the modified version, which gets muddy in the context of network services; that is why the AGPL was created. If you want to build a commercial service based on AGPL software, there is nothing stopping you.

Comment by dzogchen 3 days ago

You can modify the source code, you can commercialize it. You just have to give access to the source code to users that interact with it over a network.

Comment by spapas82 7 days ago

Minio is more or less feature complete for most use cases. Actually, the last big update of minio removed a feature (the UI). I've been using minio for 5 years and haven't messed with it or used any new feature since I installed it; I only update to new versions.

So if the minio maintainers (or anybody that forks the project and wants to work it) can fix any security issues that may occur I don't see any problems with using it.

Comment by cromka 7 days ago

> Actually the last big update of minio removed features (the UI)

AFAIK they removed it only to move it to their paid version, didn't they?

Comment by spapas82 7 days ago

Well I didn't mind when they removed it and certainly I didn't consider their paid version which is way too expensive for most use cases.

The UI was useful when first configuring the buckets and permissions; once you've got it working (and don't need to change anything) you're good to go. Also, everything can be configured without the UI (not as easily, of course).

Comment by lionkor 7 days ago

yes

Comment by deeebug 7 days ago

> So if the minio maintainers (or anybody that forks the project and wants to work it) can fix any security issues that may occur I don't see any problems with using it.

The concerning language for me is this part that was added:

> Critical security fixes may be evaluated on a case-by-case basis

It seems to imply that fixes _may_ be merged in, but there are no guarantees.

Comment by spapas82 7 days ago

Yes, this is concerning for me too. Hopefully, if they don't fix/merge security issues, somebody will fork and maintain it. It shouldn't be too much work. I'd even do it myself if I were experienced in Golang.

Comment by fithisux 7 days ago

I used it for my experiments in Docker. I used the UI once or twice; I always connected through Python.

Comment by aftbit 7 days ago

Shocker... they abandoned POSIX compatibility, built a massively over-complicated product, then failed to compete with things like Ceph on the metal side or ubiquitous S3/R2/B2 on the cloud side.

Comment by PunchyHamster 7 days ago

No, they rebranded to AIStor and are now selling to AI companies.

Minio is/was a pretty solid product for places where a rack of servers for Ceph wasn't an option (Ceph has quite a bit higher memory requirements), or where you just need a bit of S3 (like the small local instances we run as a build cache for CI/CD).

But that's not where the money is.

Comment by throwaway894345 7 days ago

> they abandoned POSIX compatibility, built a massively over-complicated product

This is a wild sentence -- how can you criticize them for abandoning POSIX support *and* for building a massively over-complicated product? Making a reliable POSIX system is inherently very complex.

Comment by bee_rider 7 days ago

I think the criticism (just interpreting the post, don’t know anything about the technical situation) is that the complication is not necessary/worthwhile.

POSIX can be complicated, but it puts you in a nice ecosystem, so for some use-cases complex POSIX support is not over complicated. It is just… appropriately complicated.

Comment by throwaway894345 7 days ago

Sure, but then you can make that argument about any of the features in Minio, in which case the parent's argument about Minio as a whole being overcomplicated is invalidated. Probably the more sensible way to look at things is "value / complexity" or "bang for buck", but even there I think POSIX loses since it's relatively little value for a relatively large amount of complexity.

Comment by bee_rider 7 days ago

Yeah. I don’t actually know if they are right or wrong, it depends on the ecosystem the project wants to hook in to, right? I just want to reduce it from “wild” to “debatable,” haha.

Comment by ahepp 7 days ago

What would go into POSIX compatibility for a product like this that would make it complicated? The kind of thing that stands out to me is the use of Linux-specific syscalls like epoll/io_uring vs. traditional POSIX poll. That doesn't seem too complicated.

Comment by dark-star 7 days ago

S3 object names are not POSIX compatible.

"foo" and "foo/bar" are valid S3 object names that cannot coexist on a POSIX filesystem
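A tiny sketch of the clash, assuming the naive key-to-path mapping a filesystem backend would use (bucket directory and keys are made up):

```python
import os
import tempfile

# Naively map S3 keys onto paths under a bucket directory, the way a
# filesystem-backed store would. "foo" and "foo/bar" are both legal S3
# keys, but POSIX cannot hold a regular file and a directory both named "foo".
bucket = tempfile.mkdtemp()

def put_object(key, data):
    path = os.path.join(bucket, key)
    os.makedirs(os.path.dirname(path), exist_ok=True)  # create parent "dirs" from the key
    with open(path, "wb") as f:
        f.write(data)

put_object("foo", b"first")           # creates the regular file <bucket>/foo
try:
    put_object("foo/bar", b"second")  # needs <bucket>/foo to be a directory
except (FileExistsError, NotADirectoryError) as e:
    print("collision:", type(e).__name__)
```

So one of the two objects always wins, and any filesystem backend has to escape or shadow key names to store both.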

Comment by ahepp 6 days ago

So when we say "they abandoned posix compatibility", are we saying "They abandoned the POSIX filesystem storage backend"? I believe that's true, I used to use minio on a FreeBSD server but after an update I had to switch to just passing in zfs block devs.

Or are we saying that they no longer support running minio on POSIX systems at all, due to using linux specific syscalls or something else I'm not thinking of? I don't know whether they did this or not.

Those seem like two very different things to me, and when someone says "they don't support POSIX", I assume the latter

Comment by dark-star 6 days ago

Ah, yes, I didn't even think of that. I always understood it as "abandon POSIX filesystems (as a backend for S3)", because I knew about all these filename/directory clash issues.

I don't think they would abandon POSIX systems in general, because what sense would that make?

Comment by Dachande663 7 days ago

Does anyone have any recommendations for a simple S3-wrapper to a standard dir? I've got a few apps/services that can send data to S3 (or S3 compatible services) that I want to point to a local server I have, but they don't support SFTP or any of the more "primitive" solutions. I did use a python local-s3 thing, but it was... not good.

Comment by mcpherrinm 7 days ago

Versity Gateway looks like a reasonable option here. I haven't personally used it, but I know some folks who say it performs pretty great as a "ZFS-backed S3" alternative.

https://github.com/versity/versitygw

Unlike other options like Garage or Minio, it doesn't have any clustering, replication, erasure coding, ...

Your S3 objects are just files on disk, and Versity exposes it. I gather it exists to provide an S3 interface on top of their other project (ScoutFS), but it seems like it should work on any old filesystem.

Comment by pkoiralap 7 days ago

Versity is really promising. I got a chance to meet with Ben recently at the Super Computing conference in St. Louis and he was super chill about stuff. Big shout out to him.

He also mentioned that the minio-to-versity migration is a straightforward process. Apparently, you just read the data from minio's shadow filesystem and set it as an extended attribute on your file.

Comment by mbreese 7 days ago

I really like what I've (just now) read about Versity. I like that they are thinking about large scale deployments with tape as the explicit cold-storage option. It really makes sense to me coming from an HPC background.

Thanks for posting this, as it's the first I've come across their work.

Comment by zzyzxd 7 days ago

Garage has also decided not to implement erasure coding.

Comment by mr-karan 7 days ago

You could perhaps checkout https://garagehq.deuxfleurs.fr/

Comment by dardeaup 7 days ago

I've done some preliminary testing with garage and I was pleasantly surprised. It worked as expected and didn't run into any gotchas.

Comment by digikata 7 days ago

Garage is really good for core S3; the only thing I ran into was that it didn't support object tagging. It could be considered a more esoteric corner of the S3 API, but minio does support it. Especially if you're just standing in for a test API, object tagging is most likely an unneeded feature anyway.
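For concreteness, this is the XML body behind the S3 PutObjectTagging call (`PUT /<key>?tagging`) that the comment above says Garage leaves unimplemented; the tag names here are made up:

```python
import xml.etree.ElementTree as ET

# Build the PutObjectTagging request body from a dict of tags.
# The <Tagging><TagSet><Tag><Key>/<Value> structure is what the S3 API expects.
def tagging_body(tags):
    root = ET.Element("Tagging")
    tagset = ET.SubElement(root, "TagSet")
    for k, v in tags.items():
        tag = ET.SubElement(tagset, "Tag")
        ET.SubElement(tag, "Key").text = k
        ET.SubElement(tag, "Value").text = v
    return ET.tostring(root, encoding="unicode")

print(tagging_body({"env": "test"}))
```

A server that doesn't implement the `?tagging` sub-resource will typically answer such a request with an error like NotImplemented, which is what you'd hit when pointing a tagging-dependent app at Garage.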

It's a "Misc" endpoint in the Garage docs here: https://garagehq.deuxfleurs.fr/documentation/reference-manua...

Comment by topspin 7 days ago

"didn't support object tagging"

Thanks for pointing that out.

Comment by ralgozino 7 days ago

Do you want to serve already existing files from a directory or just that the backend is a directory on your server?

If the answer is the latter, seaweedfs is an option:

https://github.com/seaweedfs/seaweedfs?tab=readme-ov-file#qu...

Comment by trufas 7 days ago

s3proxy has a filesystem backend [0].

Possibly of interest: s3gw[1] is a modified version of ceph's radosgw that allows it to run standalone. It's geared towards kubernetes (notably part of Rancher's storage solution), but should work as a standalone container.

[0] https://github.com/gaul/s3proxy [1] https://github.com/s3gw-tech/s3gw

Comment by frellus 7 days ago

Check out aistore, from NVIDIA: https://github.com/NVIDIA/aistore

It's not a fully featured S3-compatible service like MinIO, but we used it to great success as a local on-prem S3 read/write cache with AWS as the backing S3 store. This avoided expensive network egress charges, since we wanted to process data both in AWS and in a non-AWS GPU cluster (i.e. a neocloud).

Comment by import 7 days ago

rclone serve s3 could be an option.

Comment by Zambyte 6 days ago

I just learned about the rclone serve subcommand the other day. Rclone is not exactly niche, but it feels like such an underrated piece of software.

Comment by spicypixel 7 days ago

This is the winner

Comment by dark-star 7 days ago

That is not easily possible. In S3, "foo" and "foo/bar" are valid and distinct object names that cannot be directly mapped onto a POSIX directory tree: as soon as you create one of those objects, you cannot create the other.

Comment by baq 7 days ago

please copy and paste outrage from previous discussions to not waste more time

https://news.ycombinator.com/item?id=45665452

Comment by st3fan 7 days ago

What a story. EOL the open source foundation of your commercial product, to which many people contributed, to turn it into a closed source "A-Ff*ing-I Store" .. seriously what the ...

Comment by nikeee 7 days ago

Didn't contribute to MinIO, but if they accepted external contributions without making them sign a CLA, they cannot change the license without asking every external contributor for consent to the license change. As it is AGPL, they still have to provide the source code somewhere.

IANAL, of course

Comment by lima 7 days ago

They required a "Community Contribution License" in each PR description, which licensed each contribution under Apache 2 as an inbound license.

Meanwhile, MinIO's own contributions and the distribution itself (outbound license) were AGPL licensed.

It's effectively a CLA, just a bit weaker, since they're still bound by the terms of Apache 2 vs. a full license assignment like most CLAs.

Comment by NewsaHackO 7 days ago

People underestimate the amount of fakeness a lot of these "open-core/source" orgs have. I guarantee from day one of starting the MinIO project, they had eyes on future commercialization, and of course made contributors sign away their rights knowing full well they are going to go closed source.

Comment by sieabahlpark 7 days ago

[dead]

Comment by smsm42 7 days ago

Well, you cannot have a product without having "AI" somewhere in the name anymore. It's the law.

Comment by alex-aizman 7 days ago

back in 2018, it didn't feel this way

Comment by daveguy 7 days ago

This is why I don't bother with AGPL released by a company (use or contribute).

Choosing AGPL with contributors giving up rights is a huge red flag for "hey, we are going to rug pull".

Just AGPL by companies without even allowing contributor rights is saying, "hey, we are going to attempt to squeeze profit out and don't want competition on our SaaS offering."

I wish companies would stop trying to get free code out of the open source community. There have been so many rug pulls it should be expected now.

Comment by btian 7 days ago

What's the problem? Surely people will fork it

Comment by binsquare 7 days ago

I still don't understand what the difference is.

What is an "AI Stor"? (The e is missing on purpose, because that is how it is branded: https://www.min.io/product/aistor)

Comment by everfrustrated 7 days ago

Might be because of this other storage product named that https://github.com/NVIDIA/aistore

Comment by singhrac 7 days ago

Does anyone use this? I was setting it up a few months ago but it felt very complicated compared to MinIO (or alternatives). Is there a sort of minikube-like tool I could use here?

Comment by 56kbr 7 days ago

There's a development/playground deployment for local K8s (e.g. Minikube, KinD): https://github.com/NVIDIA/aistore/tree/main/deploy/dev/k8s.

For production you'd need a proper cluster deployed via Helm, but for trying it out locally that setup is easy to get running.

Comment by paulddraper 7 days ago

It can store things for AI workloads (and non-AI workloads, but who’s counting…)

Comment by bigbuppo 7 days ago

About a billion dollars difference in valuation up until the bubble pops.

Comment by ljm 7 days ago

Looks like AI slop

    Replication

    A trusted identity provider is a
    key component to single sign on.
Uh, what?

It’s probably just Minio but it costs more money.

Comment by bananapub 7 days ago

For those looking for a simple and reliable self-hosted S3 thing, check out Garage[0]. It's much simpler -- no web UI, no fancy Reed-Solomon coding, no VC-backed AI company, just some French nerds making a very solid tool.

FWIW, while they do produce Docker containers for it, it's also extremely simple to run without them: it's a single binary, and running it with systemd is unsurprisingly simple[1].

0: https://garagehq.deuxfleurs.fr/

1: https://garagehq.deuxfleurs.fr/documentation/cookbook/system...

Comment by colesantiago 7 days ago

How do you sustain yourselves while developing this project?

What if the sponsorships run out?

Comment by prmoustache 7 days ago

What if a company changes its license, drops the project, or goes bankrupt?

You shouldn't expect guarantees of any kind.

Comment by colesantiago 7 days ago

> What if a company changes its license, drops the project, or goes bankrupt?

You can always fork the project, but then the question of sponsorship still remains.

Recently Ghostty became a nonprofit, which means it is guaranteed not to turn into a for-profit and rug-pull like MinIO has done.

Comment by prmoustache 7 days ago

That doesn't guarantee the devs stay motivated either.

In the end, open source allows motivated people to take over a project if you aren't willing to do it yourself, but projects can also die from a lack of motivated/paid contributors.

Comment by jdoe1337halo 7 days ago

I use this image on my VPS; it was the last release before they neutered the community version:

quay.io/minio/minio:RELEASE.2025-04-22T22-12-26Z

Comment by spapas82 7 days ago

That version is way too old. You should use a newer one instead, by downloading the source and building the binaries yourself.

Here's a simple script that does it automagically (you'll need Go installed):

> build-minio-ver.sh

  #!/bin/bash
  set -e

  VERSION=$(git ls-remote --tags https://github.com/minio/minio.git | \
  grep -Eo 'RELEASE\.[0-9T-]+Z' | sort | tail -n1)

  echo "Building MinIO $VERSION ..."

  rm -rf /tmp/minio-build
  # Clone the release tag directly; a shallow clone of the default branch
  # cannot reliably check out an arbitrary tag afterwards.
  git clone --depth 1 --branch "$VERSION" https://github.com/minio/minio.git /tmp/minio-build

  cd /tmp/minio-build

  echo "Building minio..."

  CGO_ENABLED=0 go build -trimpath \
  -ldflags "-s -w \
  -X github.com/minio/minio/cmd.Version=$VERSION \
  -X github.com/minio/minio/cmd.ReleaseTag=$VERSION \
  -X github.com/minio/minio/cmd.CommitID=$(git rev-parse HEAD)" \
  -o "$OLDPWD/minio"

  echo " Binary created at: $(realpath "$OLDPWD/minio")"

  "$OLDPWD/minio" --version

Comment by NietTim 7 days ago

Same here, since I'm the only one using my instance. But you should be aware that there is a CVE in that version that allows any account to escalate its own permissions to admin level, so it's inherently unsafe.

Comment by tiernano 7 days ago

Is this not the best thing that could happen? Now that it's in maintenance mode, it can be forked without any potential license change in the future, and without any new features falling under that changed license... This allows anyone to continue working on it, right? Or did I miss something?

Comment by jagged-chisel 7 days ago

> ... it can be forked without any potential license change in the future ...

It is useful to remember that one may fork at the commit before a license change.

Comment by phoronixrly 7 days ago

It is also useful to remember that MinIO has historically held to an absurd interpretation of the AGPL -- that it spreads (again, according to them) to software that communicates with MinIO via the REST API/CLI.

I assume forks, and software that uses them will be held to the same requirements.

Comment by ahepp 7 days ago

As long as I'm not the one who gets sued over this, I think it would be wonderful to have some case law on what constitutes an AGPL derivative work. It could be a great thing for free software, since people seem to be too scared to touch the AGPL at all right now.

Comment by NegativeK 7 days ago

They're not the only ones to claim that absurdity.

https://opensource.google/documentation/reference/using/agpl...

Comment by createaccount99 7 days ago

I thought that literally was the point of the AGPL. If not, what's the difference between it and GPLv3?

Comment by lukaslalinsky 6 days ago

AGPL changes what it means to "distribute" the software. With GPL, sending copies of the software to users is distribution. With AGPL, if the users can access it over network, it's distribution. The implication is that if you run a custom version of MinIO, you need to open source it.

Comment by Weryj 7 days ago

Pretty sure you can’t retroactively apply a restrictive license, so that was never a concern.

Comment by IgorPartola 7 days ago

You can, sort of, sometimes. Copyleft is still based on copyright. So in theory you can do a new license as long as all the copyright holders agree to the change. Take open source/free/copyleft out of it:

You create a proprietary piece of software. You license it to Google and negotiate terms. You then negotiate different terms with Microsoft. Nothing so far prevents you from doing this. You can't yank the license from Google unless your contract allows that, but maybe it does. You can in theory then go and release it under a different license to the public. If that license is perpetual and non-revocable, then presumably I can use it after you decide to stop offering that license. But if the license is non-transferable, then I can't pass on your software to someone else, whether by giving them a flash drive with it or by releasing it under a different license.

Several open source projects have been relicensed. The main obstacle is that in a popular open source or copyleft project you have many contributors, each of whom holds the copyright to their patches. So you end up with a mess of trying to relicense only some parts of your codebase and replace the parts belonging to people resisting the change, or those you can't reach. It's a messy process. For example, check out how the OpenStreetMap data got relicensed and what that took.

Comment by bilkow 7 days ago

I think you are correct, but you probably misunderstood the parent.

My understanding of what they meant by "retroactively apply a restrictive license" is to apply a restrictive license to previous commits that were already distributed using a FOSS license (the FOSS part being implied by the new license being "restrictive" and because these discussions are usually around license changes for previously FOSS projects such as Terraform).

As allowing redistribution under at least the same license is usually a requirement for a license to be considered FOSS, you can't really change the license of an existing version: anyone who has acquired that version under the previous license can still redistribute it under the same terms.

Edit: s/commit/version/, added "under the same terms" at the end, add that the new license being "restrictive" contributes to the implication that the previous license was FOSS

Comment by IgorPartola 7 days ago

Right but depending on the exact license, can the copyright holder revoke your right to redistribute?

Comment by bilkow 7 days ago

It's probable that licenses that explicitly allow revocation at will would not be approved by the OSI or the FSF.

Copyright law is also a complex matter which differs by country, and I am not a lawyer, so take this with a grain of salt, but there seem to be "edge cases" where a license can be revoked, as seen in the StackExchange page below.

See:

https://lists.opensource.org/pipermail/license-discuss_lists...

https://opensource.stackexchange.com/questions/4012/are-lice...

Comment by Havoc 7 days ago

I thought they were pivoting towards closing it up and trying to monetize it?

That got backlash, so now it's just getting dropped entirely?

People get to do whatever they want, but it's a bit jarring to go from "this is worth something people will pay for" to maintenance mode in quick succession.

Comment by embedding-shape 7 days ago

> I thought they were pivoting towards close it and trying to monetize this?

That's literally what the commit shows that they're doing?

> *This project is currently under maintenance and is not accepting new changes.*

> For enterprise support and actively maintained versions, please see MinIO SloppyAISlop (not actual name)

Comment by this_user 7 days ago

Their marketing had been shifting to pushing an AI angle for some time now. For an established project or company, that's usually a sign that things aren't going well.

Comment by ocdtrekkie 7 days ago

They cite a proprietary alternative they offer for enterprises. So yes they pivoted to a monetized offering and are just dropping the open source one.

Comment by itopaloglu83 7 days ago

So they’re pulling an OpenAI.

Start open source to get free advertising and community programmers, then dump it all for commercial licensing.

I think n8n is next, because they've finished the release candidate for version 2.0, but there are no changelogs.

Comment by candiddevmike 7 days ago

It sucks that S3 somehow became the de facto object storage interface; the API is terrible, IMO. Too many headers, too many unknowns with support. WebDAV isn't any better, but I feel like we missed an opportunity here for a standardized interface.

Comment by tlarkworthy 7 days ago

?

It's like GET <namespace>/object, PUT <namespace>/object. To me it's the most obvious mapping of HTTP onto immutable object key-value storage you could imagine.

It is bad that the control-plane responses can be malformed XML (e.g. keys are not escaped right if you put XML control characters in object paths), but that can be forgiven as an oversight.

It's not perfect, but I don't think it's a strange API at all.
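The malformed-XML point is easy to demonstrate: XML 1.0 simply has no representation for most control characters, so a listing response that embeds such a key verbatim is not well-formed no matter how the server escapes it. A sketch, with a made-up key name:

```python
import xml.etree.ElementTree as ET
from xml.sax.saxutils import escape

# "\x01" is a legal byte in an S3 object key, but escape() only handles
# <, > and & -- control characters pass through untouched, and XML 1.0
# forbids them, so a ListObjects-style response embedding this key is
# rejected by any conforming parser.
key = "report\x01.csv"
fragment = "<Key>%s</Key>" % escape(key)

try:
    ET.fromstring(fragment)
    print("parsed OK")
except ET.ParseError:
    print("malformed XML")
```

The only clean fixes are an extra encoding layer (S3 offers `encoding-type=url` on list calls) or rejecting such keys outright.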

Comment by jerf 7 days ago

That may be what S3 is like, but what the S3 API is is this: https://pkg.go.dev/github.com/aws/aws-sdk-go-v2/service/s3

My browser prints that out to 413 pages with a naive print preview. You can squeeze it to 350 pretty reasonably with a bit of scaling before it starts getting to awfully small type on the page.

Yes, there's a simple API with simple capabilities struggling to get out there, but pointing that out is merely the first step on the thousand-mile journey of determining what, exactly, that is. "Everybody uses 10% of Microsoft Word, the problem is, they all use a different 10%", basically. If you sat down with even 5 relevant stakeholders and tried to define that "simple API" you'd be shocked what you discover and how badly Hyrum's Law will bite you even at that scale.

Comment by zokier 7 days ago

> That may be what S3 is like, but what the S3 API is is this: https://pkg.go.dev/github.com/aws/aws-sdk-go-v2/service/s3

> My browser prints that out to 413 pages with a naive print preview. You can squeeze it to 350 pretty reasonably with a bit of scaling before it starts getting to awfully small type on the page.

idk why you link to Go SDK docs when you can link to the actual API reference documentation: https://docs.aws.amazon.com/AmazonS3/latest/API/API_Operatio... and its PDF version: https://docs.aws.amazon.com/pdfs/AmazonS3/latest/API/s3-api.... (just 3874 pages)

Comment by tlarkworthy 7 days ago

It's better to link to a leading S3 compatible API docs page. You get a better measure of the essential complexity

https://developers.cloudflare.com/r2/api/s3/api/

It's not that much; most of the weirder S3 APIs are optional, orthogonal APIs, which is good design.

Comment by jerf 6 days ago

Because it had the best "on one HTML page" representation I found in the couple of languages I looked at.

Comment by eproxus 7 days ago

That page crashes Safari for me on iOS.

Comment by PunchyHamster 7 days ago

It gets complex with ACLs for permissions, lifecycle controls, header controls and a bunch of other features that are needed on S3 scale but not at smaller provider scale.

And many S3-compatible alternatives (probably most, except the big ones like Ceph) don't implement all of the features.

For example, for lifecycles, Backblaze has a completely different JSON syntax.

Comment by perbu 7 days ago

Last I checked the user guide to the API was 3500 pages.

3500 pages to describe upload and download, basically. That is pretty strange in my book.

Comment by nine_k 7 days ago

Even download and upload get tricky if you consider stuff like serving buckets as static sites, or signed upload URLs.

Now with the trivial part off the table, let's consider storage classes, security and ACLs, lifecycle management, events, etc.

Comment by candiddevmike 7 days ago

Everything uses poorly documented, sometimes inconsistent HTTP headers that read like afterthoughts/tech debt. An S3 standard implementation has to have amazon branding all over it (x-amz) which is gross.

Comment by drob518 7 days ago

I suspect they learned a lot over the years and the API shows the scars. In their defense, they did go first.

Comment by christina97 7 days ago

I mean… it’s straight up an Amazon product, not like it’s an IETF standard or something.

Comment by paulddraper 7 days ago

!!!

I’ve seen a lot of bad takes and this is one of them.

Listing keys is weird (is it V1 or V2?).

The authentication relies on an obtuse and idiosyncratic signature algorithm.

And S3 in practice responds with malformed XML, as you point out.

Protocol-wise, I have trouble liking it over WebDAV. And that's depressing.
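For reference, the signature scheme in question (AWS Signature Version 4) derives a per-day signing key through a chain of HMAC-SHA256 calls. A stdlib-only sketch of just that key-derivation step, per the public spec (the full protocol also canonicalizes the request and signs a string-to-sign with this key, which is where most of the pain lives):

```python
import hashlib
import hmac

def _h(key: bytes, msg: str) -> bytes:
    """One HMAC-SHA256 step in the derivation chain."""
    return hmac.new(key, msg.encode(), hashlib.sha256).digest()

def sigv4_signing_key(secret: str, date: str, region: str, service: str) -> bytes:
    """Derive the SigV4 signing key: four chained HMAC-SHA256 steps
    over the date (YYYYMMDD), region, service, and a fixed suffix."""
    k_date = _h(("AWS4" + secret).encode(), date)
    k_region = _h(k_date, region)
    k_service = _h(k_region, service)
    return _h(k_service, "aws4_request")
```

The key is scoped to a single day, region, and service, which is the design rationale for the chain, but it does make hand-rolling a client tedious.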

Comment by KaiserPro 7 days ago

HTTP isn't really a great back plane for object storage.

Comment by ssimpson 7 days ago

I thought the openstack swift API was pretty clean, but i'm biased.

Comment by giancarlostoro 7 days ago

To be fair. We still have an opportunity to create a standardized interface for object storage. Funnily enough when Microsoft made their own they did not go for S3 compatible APIs, but Microsoft usually builds APIs their customers can use.

Comment by mbreese 7 days ago

It was better. When it first came out, it was a pretty simple API, at least simpler than alternatives (IIRC, I could just be thinking with nostalgia).

I think it's only gotten as complicated as it has as new features have been organically added. I'm sure there are good use cases for everything, but it does beg the question -- is a better API possible for object storage? What's the minimal API required? GET/POST/DELETE?

Comment by bostik 7 days ago

I suspect there is no decent "minimal" API. Once you get to tens of millions of objects in a given prefix, you need server side filtering logic. And to make it worse, you need multiple ways to do that.

For example, did you know that date filtering in S3 is based on string prefix matching against an ISO8601/RFC3339 style string representation? Want all objects created between 2024-01-01 and 2024-06-30? You'll need to construct six YYYY-MM prefixes (one per month) for datetime and add them as filter array elements.

As a result the service abbreviation is also incorrect these days. Originally the first S stood for "Simple". With all the additions they've had to bolt on, S2 would be far more appropriate a name.
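The month-prefix construction described above is mechanical enough to script; a sketch, assuming keys carry a YYYY-MM-... date component as the parent describes:

```python
from datetime import date

def month_prefixes(start: date, end: date) -> list[str]:
    """Build the YYYY-MM string prefixes covering [start, end]:
    one prefix per calendar month, usable as filter elements."""
    prefixes = []
    y, m = start.year, start.month
    while (y, m) <= (end.year, end.month):
        prefixes.append(f"{y:04d}-{m:02d}")
        m += 1
        if m > 12:
            y, m = y + 1, 1
    return prefixes
```

So the 2024-01-01 to 2024-06-30 example above yields the six prefixes "2024-01" through "2024-06".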

Comment by everfrustrated 7 days ago

Like everything, it started off simple, but slowly, with every feature added over 19 years, Simple Storage it is not.

S3 has 3 independent permissions mechanisms.

Comment by dathinab 7 days ago

S3 isn't JSON

it's storing a [utf8-string => bytes] mapping with some very minimal metadata. But that can be whatever you want. JSON, CBOR, XML, actual document formats etc.

And its default encoding for listing, management operations, and the like is XML...

> but I feel like we missed an opportunity here for a standardized interface.

except S3 _is_ the de-facto standard interface which most object storage system speaks

but I agree it's kinda a pain

and it's commonly implemented only partially (both feature-wise and partially wrong). E.g. S3 stores utf8 strings, not utf8 file paths (which is what e.g. minio uses); that difference seems harmless but can lead to a lot of problems (not just incompatibility with some applications but also unexpected performance characteristics for others), making such systems only partially S3 compatible. Similarly, implementations missing features like bulk delete or support for `If-Match`/`If-None-Match` headers can be S3 incompatible for some use cases.

So yeah, a new external standard which makes it clear what you should expect to be supported to be standard compatible would be nice.
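The keys-versus-paths mismatch mentioned above is easy to demonstrate: distinct, perfectly legal S3 object keys can collide once a backend naively maps them to filesystem paths (a toy sketch of the failure mode, not any particular implementation's behavior):

```python
import posixpath

def as_fs_path(key: str) -> str:
    """Naively map an object key to a relative filesystem path, the way
    a file-per-object backend might. Path normalization is where
    distinct keys start to collide."""
    return posixpath.normpath(key)

# "a/b", "a//b" and "a/./b" are three distinct S3 object keys, but
# they all normalize to the same path, so a path-backed store cannot
# round-trip them independently.
distinct_keys = ["a/b", "a//b", "a/./b"]
collided = {as_fs_path(k) for k in distinct_keys}
```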

Comment by jsiepkes 7 days ago

There is also Ambry ( https://github.com/linkedin/ambry ) as an alternative. The blob store open-sourced, created and maintained by LinkedIn. It also has an S3 compatible interface.

I think it is about 10 years old now and it is really stable.

Comment by hintymad 7 days ago

I suspect that Clickhouse will go down the same path. They already changed their roadmap a bit two years ago[1], and had good reasons: if the open-source version does too well, it will compete with their cloud business.

[1] https://news.ycombinator.com/item?id=37608186

Comment by ncrmro 7 days ago

As a note, Ceph (Rook on Kubernetes), which is distributed block storage, has built-in S3 endpoint support.

Comment by Joel_Mckay 7 days ago

Like many smart people, they focused on telling people the "how", and assumed visitors to their wall of "AI"/hype text already understand the use-case "why".

1. I like that it is written in Go

2. I saw nothing above what Apache Spark+Hadoop with _consistent_ object stores already offers on Amazon (S3), Google Cloud (GCS), and or Microsoft (Azure Storage, ADLS Gen2)

Best of luck, maybe folks should look around for that https://donate.apache.org/ button before the tax year concludes =3

Comment by PunchyHamster 7 days ago

> I saw nothing above what Apache Spark+Hadoop with _consistent_ object stores already offers on Amazon (S3), Google Cloud (GCS), and or Microsoft (Azure Storage, ADLS Gen2)

it was very simple to set up, and even if you just leased a bunch of servers off, say, OVH, far FAR cheaper to run your own than paying any of the big cloud providers.

It also had pretty low requirements; Ceph can do all that, but the setup is more complex and the RAM requirements are far, far higher.

Comment by Joel_Mckay 7 days ago

MinIO still makes no sense, as Ceph is fundamentally already RADOS at its core (fully compatible with S3 API.)

For a proper Ceph setup, even the 45drives budget configuration is still not "hobby" grade.

I will have to dive into the MinIO manual at some point, as the value proposition still seems like a mystery. Cheers =3

Comment by PunchyHamster 7 days ago

MinIO is far less complex than getting same functionality on Ceph stack.

But that's kind of advantage only on the small companies and hobbyist market, big company either have enough needs to run big ceph cluster, or to buy it as a service.

Minio is literally "point it at storage(s), done". And at far smaller RAM usage.

Ceph is mon servers, OSD servers, then a RADOS gateway server on top of that.

Comment by Joel_Mckay 7 days ago

It sounds a lot like SwiftOnFile with GlusterFS, but I would need to look at it more closely on personal time. =3

Comment by dardeaup 7 days ago

"Ceph is fundamentally already RADOS at its core (fully compatible with S3 API.)"

Yes, Ceph is RADOS at its core. However, RADOS != S3. Ceph provides an S3 compatible backend with the RADOS Gateway (RGW).

Comment by Joel_Mckay 7 days ago

My point was even 45drives virtualization of Ceph host roles to squeeze the entire setup into a single box was not a "hobby" grade project.

I don't understand yet exactly what MinIO would add on top of that to make it relevant at any scale. I'll peruse the manual on the weekend, because their main site was not helpful. Thanks for trying though ¯\_(ツ)_/¯

Comment by dardeaup 7 days ago

What I tried to say (perhaps not successfully) was that core Ceph knows nothing about S3. One gets S3 endpoint capability from the radosgw which is not a required component in a ceph cluster.

Comment by Joel_Mckay 7 days ago

The risk with mixing different subjects per thread. Cheers =3

https://docs.ceph.com/en/latest/radosgw/s3/

Comment by killme2008 7 days ago

I can't believe they made this decision. It's detrimental to the open-source ecosystem and MinIO users, and it's not good for them either; just look at the Elasticsearch case.

Comment by frellus 7 days ago

Comment by positisop 7 days ago

github.com/NVIDIA/aistore

At the 1 billion valuation from the previous round, achieving a successful exit requires a company with deep pockets. Right now, Nvidia is probably a suitable buyer for MinIO, which might explain all the recent movements from them. Dell, Broadcom, NetApp, etc, are not going to buy them.

Comment by thway15269037 7 days ago

So, when anyone will fork in? Call it MaxIO or whatever. I might even submit couple of small patches.

My only blocker is for a fork to maintain compatibility and a path to upgrade from earlier versions.

Comment by 12_throw_away 7 days ago

To be fair, their previous behavior and attitude towards the open source license suggests that minio would possibly engage in at least a little bumptious legal posturing against whoever chose to fork it.

Comment by valyala 7 days ago

What is the purpose of MinIO, Seaweedfs and similar object storage systems? They lack the durability guarantees provided by S3 and GCS. They lack the "infinite" storage promise of S3 and GCS. They lack "infinite" bandwidth, unlike S3 and GCS. And they are more expensive than storage options like S3 and GCS.

Comment by cortesoft 7 days ago

We use it because we are already running our own k8s clusters in our datacenters, and we have large storage requirements for tools that have native S3 integration, and running our own minio clusters in the same datacenter as the tools that generate and consume that data is a lot faster and cheaper than using S3.

For example, we were running a 20 node k8s cluster for our Cortex (distributed Prometheus) install, monitoring about 30k servers around the world, and it was generating a bit over a TB of data a day. It was a lot more cost effective and performant to create a minio cluster for that data than to use S3.

Also, you can get durability with minio with multi cluster replication.

Comment by valyala 6 days ago

Consider migrating to VictoriaMetrics and saving on storage costs and operations costs. You also won't need MinIO, since it stores data to local filesystem (aka to regular persistent volumes). See real-world reports from happy users who saved costs on a large-scale Prometheus-compatible monitoring - https://docs.victoriametrics.com/victoriametrics/casestudies...

Comment by cortesoft 5 days ago

I can't imagine switching at this point. We spent quite a while building up our Cortex and Minio infrastructure management, as well as our alerting and inventory management systems, and it is all very stable right now. We don't really touch it anymore, it just hums along.

We have already worked through all the pain points and it all works smoothly. No reason to change something that isn't a problem.

Comment by onionisafruit 7 days ago

I haven't used it in a while, but it used to be great as a test double for s3

Comment by wasmitnetzen 7 days ago

S3 is a widely supported API schema, so if you need something on-prem, you use these.

Comment by valyala 6 days ago

But what's the point of using these DIY object storage systems, when they do not provide the durability and other important guarantees provided by S3 and GCS?

Comment by lima 6 days ago

When you want just the API for compatibility, I guess?

Self-hosted S3 clones with actual durability guarantees exist, but the only properly engineered open source choices are Ceph + radosgw (single-region, though) or Garage (global replication based on last-writer-wins CRDT conflict resolution).

Comment by maartin0 7 days ago

It's great for a prototype which doesn't need to store a huge amount of data; you can run it on the same VM as a node server behind Cloudflare and get a fairly reliable setup going.

Comment by spapas82 7 days ago

Minio allows you to have an s3 like interface when you have your own servers and storage.

Comment by valyala 6 days ago

MinIO also allows losing your data, since it doesn't provide the high durability guarantees of S3 and GCS.

Comment by lynn_xx 7 days ago

MinIO is a great open-source project. I’m familiar with it because I previously worked with Longhorn. But for any project to sustain long-term development, it needs a viable business model to support ongoing investment and growth.

Comment by ecshafer 7 days ago

Is this just the open source portion? Minio is now a fully paid product then?

Comment by 0x073 7 days ago

"For enterprise support and actively maintained versions, please see MinIO AIStor."

Probably yes.

Comment by margorczynski 7 days ago

Basically officially killing off the open source version.

Comment by pabs3 7 days ago

Anyone know if MinIO AIStor is legal? AFAICT MinIO didn't have a CLA and there are 559 non-@minio.io commit authors in the git history, which could be an AGPL violation if they didn't get contributor approval for the license change. Or is AIStor a fresh codebase written from scratch?

Edit: some discussion of this here: https://news.ycombinator.com/item?id=46136871

Comment by EgoIncarnate 7 days ago

MinIO had a de facto CLA. MinIO required contributors to license their code to the project maintainers (only) under Apache 2. Not as bad as copyright assignment, but still asymmetric (they can relicense for commercial use, but you only get AGPL). https://github.com/minio/minio/blob/master/.github/PULL_REQU...

Comment by Eikon 7 days ago

I've been using Minio in ZeroFS' [0] CI (a POSIX compliant filesystem that works on top of s3). I guess I'll switch to MicroCeph [1].

[0] https://github.com/Barre/ZeroFS

[1] https://canonical-microceph.readthedocs-hosted.com/stable/

Comment by ahepp 7 days ago

What is the use case for implementing a POSIX filesystem on top of an object store? I remember reading this article a few years ago, which happens to be by the minio folks: https://blog.min.io/filesystem-on-object-store-is-a-bad-idea...

Comment by Eikon 7 days ago

> What is the use case for implementing a POSIX filesystem on top of an object store?

The use case is fully stateless infrastructure: your file/database servers become disposable and interchangeable (no "pets"), because all state lives in S3. This dramatically simplifies operations, scaling, and disaster recovery, and it's cheap since S3 (or at least, S3 compatible services) storage costs are very low.

The MinIO article's criticisms don't really apply here because ZeroFS doesn't store files 1:1 to S3. It uses an LSM-tree database backed by S3, which allows it to implement proper POSIX semantics with actual performance.
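A rough illustration of the general idea (a hypothetical layout, not ZeroFS's actual format): chunk file contents into fixed-size blocks keyed by inode and block index, so an overwrite touches a few small keyed records in the LSM tree instead of rewriting one monolithic object:

```python
BLOCK = 128 * 1024  # hypothetical block size, chosen for illustration

def chunk_file(inode: int, data: bytes, block: int = BLOCK) -> list[tuple[str, bytes]]:
    """Split a file's contents into fixed-size blocks keyed by
    (inode, block index). Overwriting a byte range then only
    rewrites the affected blocks."""
    return [
        (f"{inode}/{i}", data[pos:pos + block])
        for i, pos in enumerate(range(0, len(data), block))
    ]
```

This is why the "filesystem on an object store" criticisms about whole-object rewrites don't bite: POSIX writes map to small keyed records, and the LSM tree batches them into S3-sized segments.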

Comment by ahepp 7 days ago

It makes sense that some of the criticisms wouldn't apply if you're not storing the files 1:1.

What about NFS or traditional filesystems on iSCSI block devices? I assume you're not using those because managing/scaling/HA for them is too painful? What about the openstack equivalents of EFS/EBS? Or Ceph's fs/blockdev solutions (although looking into it a bit, it seems like those are based on its object store)?

Comment by rowanseymour 7 days ago

What's the simplest replacement for mocking S3 in CI? We don't care about performance or reliability... it's just gotta act like S3.

Comment by onei 7 days ago

I've used localstack in the past which worked pretty well.

https://github.com/localstack/localstack

Comment by rodwyersoftware 7 days ago

localstack, 100%

Comment by paulddraper 7 days ago

Open source is not a sustainable business model.

There are two ways open source projects continue.

1. The creator has a real, solid way to make money (React by Facebook, Go by Google).

2. The project is extremely popular (Linux, PostgreSQL).

Is it possible for people to reliably keep working for ~free? In theory yes, but if you expect that, you have a very bad understanding of 98% of human behavior.

Comment by conqrr 7 days ago

They are making a lot of enterprise bucks though. And they did start as open source. Killing it midway for their own convenience is the issue.

There's also a tonne of open source that isn't as popular but serves niche communities. It's definitely harder but not impossible. OS core plus paid hosting with bells and whistles has proven to be a good, sustainable model.

Comment by paulddraper 6 days ago

> OS core and paid hosting with bells and whistles has proven to be a good sustainable model

Redis, Elasticsearch, Terraform, MongoDB, CockroachDB have all changed their OSS licenses in recent years.

Comment by mschuster91 7 days ago

There are actually three ways, the third being academia picking up the bill, which is how we got the mess that is OpenStack.

Also, Debian has been around for a few decades, although I do admit that - like the Linux kernel - that wouldn't have been possible without a lot of companies contributing back to the ecosystem.

Comment by aranw 7 days ago

I've been using the minio-go client for S3-compatible storage abstraction in a project I'm working on. This new change putting the minio project into maintenance mode means no new features or bug fixes, which is concerning for something meant to be a stable abstraction layer

Need to start reconsidering the approach now and looking for alternatives

Comment by johnmaguire 7 days ago

Any good alternatives?

Comment by xrd 7 days ago

I saw this referenced a few days ago. Haven't investigated it at all.

https://garagehq.deuxfleurs.fr/

Edit: jeez, three of us all at once...

Comment by phpdave11 7 days ago

If you just need a simple local s3 server (e.g. for developing and testing), I recommend rclone.

    rclone serve s3 path/to/buckets --addr :9000 --auth-key <key-id>,<secret>

Comment by import 7 days ago

Seaweed and garage (tried both, still using seaweed)

Comment by ecshafer 7 days ago

A lot of them, actually. Ceph I've used personally. But there's a ton, some open source, some paid. Backblaze has a product, Buckets or something. Dell PowerScale. Cloudian has one. Nutanix has one.

Comment by dardeaup 7 days ago

Ceph is awesome for software defined storage where you have multiple storage nodes and multiple storage devices on each. It's way too heavy and resource intensive for a single machine with loopback devices.

Comment by coredog64 7 days ago

I've been looking at microceph, but the requirement to run 3 OSDs on loopback files plus this comment from the docs gives me pause:

`Be wary that an OSD, whether based on a physical device or a file, is resource intensive.`

Can anyone quantify "resource intensive" here? Is it "takes an entire Raspberry Pi to run the minimum set" or is it "takes 4 cores per OSD"?

Edit: This is the specific doc page https://canonical-microceph.readthedocs-hosted.com/stable/ho...

Comment by dardeaup 7 days ago

Ceph has multiple daemons that would need to be running: monitor, manager, OSD (1 per storage device), and RADOS Gateway (RGW). If you only had a single storage device it would still be 4 daemons.

Comment by dathinab 7 days ago

ceph depends a lot on your use case

minio was also suited for some smaller use cases (e.g. running a partially S3-compatible storage for integration tests). Ceph isn't really good for that.

But if you ran large minio clusters in production ceph might be a very good alternative.

Comment by grimblee 7 days ago

If you just need an S3 endpoint for some services, look up Garage.

Comment by pezgrande 7 days ago

This one is usually the most recommended: https://garagehq.deuxfleurs.fr/

Comment by nullify88 7 days ago

https://www.versity.com/products/versitygw/

I haven't tried it though. Seems simple enough to run.

Comment by mlnj 7 days ago

Have heard good things about Garage (https://garagehq.deuxfleurs.fr/).

Am forced to use MinIO for certain products now but will eventually move to something better. Garage is high on my list of alternatives.

Comment by SteveNuts 7 days ago

RustFS is good, but still pretty immature IMO

Comment by itodd 7 days ago

seaweedfs

Comment by lousken 7 days ago

wasn't there a fork with the UI?

Comment by atemerev 7 days ago

How does that make sense? If they are no longer open source and are S3/cloud only, I'll just use S3.

Comment by createaccount99 6 days ago

So why exactly did they close the source? What were they losing by being AGPL? I thought AGPL plus selling private licenses to corps was a fantastic way of getting some income for an open source offering.

Comment by rzerowan 6 days ago

The moves they have been making look like what one would see if the VC money was getting tight, or alternatively if they were bought out by a private equity firm.

Similar to what Broadcom did with VMware: hiking prices astronomically for their largest clients and basically killing the SME offering.

Comment by nazcan 7 days ago

I'm quite interested in a k8s-native file-system that makes use of local persistent volumes. I'm running cockroachDB in my cluster (not yet with local persistent volumes.. but getting closer).

Anyone have any suggestions?

Comment by snickell 7 days ago

Any efforts to consolidate around a community fork yet?

Comment by souenzzo 7 days ago

The best software is the one that doesn't change.

Comment by apexalpha 7 days ago

So how are HN reviews of GarageHQ? Or any others?

Comment by realreality 7 days ago

Garage works well for its limited feature set, but it doesn't have very active development. Apparently they're working on a management UI.

Seaweedfs is more mature and has many interfaces (S3, webdav, SFTP, REST, fuse mount). It's most appropriate for storing lots of small files.

I prefer the command line interface and data/synchronization model of Garage, though. It's easier to manage, probably because the developers aren't biting off more than they can chew.

Comment by speedgoose 7 days ago

I haven't tested it in a while, but it was pretty good and a lot simpler than MinIO.

Like in the old MinIO days, an S3 object is a file on the filesystem, not some replicated blocks. You could always rebuild the full object store content with a few rsyncs. I appreciate the simplicity.

My main concern was that you couldn't configure it easily through files; you had to use the CLI, which wasn't very convenient. I hope this has changed.

Comment by realreality 7 days ago

Objects in Garage are broken up into 1MB (default) blocks, and compressed with zstandard. So, it would be difficult to reconstruct the files. I don't know if that was a recent change since you looked at it.

Configuration is still through the CLI, though it's fairly simple. If your usecase is similar to the way that the Deuxfleurs organization uses it -- several heterogeneous, geographically distributed nodes that are more or less set-it-and-forget-it -- then it's probably a good fit.

Comment by speedgoose 7 days ago

I guess this change was inevitable. But I like the possibility to reconstruct a broken distributed file storage system. GlusterFS also allowed this.

My use case is relatively common: I want small S3-compatible object stores that can be deployed in Kubernetes without manual intervention. The CLI part was a bit in the way last time; it could have been automated, but it wasn't straightforward.

Comment by cies 7 days ago

I use Supabase Storage. It does S3-style signed download links (so I can switch to any S3 service if I like later).

Comment by ibgeek 7 days ago

Time to fork and bring back removed features. :). An advantage of it being AGPL licensed.

Comment by vanschelven 7 days ago

Is there a good overview of recent Open Source Rugpulls in the vein of killedbygoogle.com somewhere?

Comment by positisop 7 days ago

Raising 100 mil at 1 B valuation and then trying for an exit is a bitch!

Comment by zerofor_conduct 7 days ago

“The real hell of life is everyone has his reasons.” ― Jean Renoir

Comment by bouk 7 days ago

big L for all the cloud providers that made the mistake of using it instead of forging their own path, they're kind of screwed now

Comment by tehjoker 7 days ago

How are they screwed if they can adopt the source and continue patching it? Writing their own would incur a greater cost.

Comment by dardeaup 7 days ago

Hopefully no one is shocked or surprised.

Comment by giancarlostoro 7 days ago

I'm both shocked and not surprised. Lots of questions: Are they doing that badly after the outcry? Or are they just keeping a private version and going completely commercial only? If so, how do they bypass the AGPL in doing so? I assume they had contributions under the AGPL.

Comment by 0x073 7 days ago

"For enterprise support and actively maintained versions, please see MinIO AIStor."

Commercial only, they will replace the agpl contributions from external people. (Or at least they will say that)

Comment by bogomipz 6 days ago

It's worth noting their enterprise support is a joke. As is their whole pivot to "AI." Their pitch is that they are an AI company now. Good riddance. I look forward to a good community fork.

Comment by Kerrick 7 days ago

I don't understand. They've seen the contributions. How can they possibly do a clean-room implementation to avoid copyright infringement? (Let alone how tangled up in the history of the codebase they must be...)

Comment by giancarlostoro 7 days ago

I hope some contributors get together and sue. ;)

Comment by tempest_ 7 days ago

It doesn't matter unless someone takes them to court over it.

Comment by adriatp 7 days ago

I had a minio server in my homelab and I have to replace it after the 15v because they capped almost all settings. So sad...

Comment by dbacar 7 days ago

Disgusting. Build a product, make it open-source to gain traction, and when you are done completely abandon it. Shame on me that I have put this ^%^$hit on a project and advocated it.

Comment by stronglikedan 7 days ago

That can happen to any project, hence why Plan B should be implemented right alongside Plan A whenever humanly possible.

Comment by Aurornis 7 days ago

> For enterprise support and actively maintained versions, please see [MinIO AIStor]

Naming the product “AIStor” is one of the most blatant forced AI branding pivots I’ve seen.

Comment by evil-olive 7 days ago

for maximum performance with MinIO AIStor, make sure to use one of Seagate's "AI hard drives":

https://www.seagate.com/products/video-analytics/skyhawk-ai-...

Comment by bityard 7 days ago

And the naming conflicts with NVidia's AIStore (https://github.com/NVIDIA/aistore). The two products are extremely similar. I don't know which came first, but Minio is going to want to do another pivot very soon if they want to survive. I doubt they have the resources to stand up to NVidia's army of extremely well-paid IP lawyers.

Comment by Natfan 7 days ago

this is no different than grok vs groq imo. aistor and aistore are different names, even if they're pronounced similarly.

Comment by 0x1ch 7 days ago

> Kill open source features.

> Gaslight community when rightfully annoyed

> Kill off primary product

> Offer same product with AI slapped on the name to enterprise customers.

Good riddance Minio, and goodbye!

Comment by ta9000 7 days ago

[dead]

Comment by theideaofcoffee 7 days ago

[flagged]

Comment by nkmnz 7 days ago

Downvoted because nobody knows how far a distance 39.5 feet is.

Comment by stronglikedan 7 days ago

they do if they know the shoe size of the person who measured it

Comment by deathanatos 7 days ago

It's a reference to a fairly widely known, and presently topical, song:

  Your brain is full of spiders
  You've got garlic in your soul, Mr. Grinch
  I wouldn't touch you with a
  39 and a half foot pole!