FFmpeg at Meta: Media Processing at Scale
Posted by sudhakaran88 1 day ago
Comments
Comment by dewey 1 day ago
Some comments seem to gloss over the fact that they did give back, and that they are not the only ones benefiting from this. Could they give more? Sure, but this is exactly one of the benefits of open source: everyone benefits from changes that are upstreamed or financially supported by one entity, instead of every company re-implementing them internally.
Comment by sergiotapia 1 day ago
We're using React Native, hello!?
We're using React!
Tons of projects, we should be very grateful they give so much tbh.
Comment by kindkang2024 1 day ago
Those who benefit others deserve to be benefited in return — and if we could, we should help make them more fit.
Comment by arjvik 1 day ago
Comment by jcul 1 day ago
Comment by popalchemist 1 day ago
Comment by sergiotapia 1 day ago
Comment by beachy 1 day ago
When your business is pushing ads to people while they watch cat videos, then video processing software is your complement, and you want it to be as cheap as possible.
[0] https://www.joelonsoftware.com/2002/06/12/strategy-letter-v/
Comment by whattheheckheck 1 day ago
Comment by lofaszvanitt 2 hours ago
Comment by j45 1 day ago
Still, Meta has also put a lot out there in open source, from a differentiation perspective it doesn't seem to go unnoticed.
Comment by vmaurin 1 day ago
Comment by kccqzy 1 day ago
Big tech companies can easily hire manpower to make proprietary versions of software, or just pay licensing fees for other proprietary software. They don’t rely on open source. Microsoft bought 86-DOS to produce MS-DOS; Microsoft paid the Unix license to produce Xenix; and when Microsoft hired former DEC people to make NT, it later paid DEC.
Modern startups, by contrast, wouldn't exist without open source.
Comment by golfer 1 day ago
Comment by ok123456 1 day ago
Comment by cedws 1 day ago
Comment by izacus 1 day ago
Take a glance at the contributor lists for your projects sometime.
Comment by dirasieb 1 day ago
Comment by rcxdude 12 hours ago
Comment by OKRainbowKid 22 hours ago
Comment by EdNutting 1 day ago
But personally, I took issue with the tone of the blog post, characterised by this opening framing:
>For many years we had to rely on our own internally developed fork of FFmpeg to provide features that have only recently been added to FFmpeg
Could they not have upstreamed those features in the first place? They didn't integrate with upstream and now they're trying to spin this whole thing as a positive? It doesn't seem to acknowledge that they could've done better (e.g. the mantra of 'upstream early; upstream often').
The attempt to spin it ("bringing benefits to Meta, the wider industry, and people who use our products") just felt tone-deaf. The people reading this post are engineers; I don't like it when marketing fluff gets shoehorned into a technical blog post, especially when it's trying to put lipstick on a story that is a mix of good and not-so-good things.
So yeah, you're right, they've contributed to OSS, which is good. But the communication of that contribution could have been different.
Comment by pdpi 1 day ago
This is the gold standard, sure. In practice, you end up maintaining a branch simply because upstream isn't merging your changes on your timescale, or because you don't quite match their design — this is completely reasonable on both sides, because they have different priorities.
Comment by dewey 1 day ago
Hard to say without being there, but in my experience it's very easy to go from "we'll just patch this thing quickly for this use case" to applying a bunch of hacks in various places, and then to ending up with an out-of-sync fork. As a developer I've been there many times.
It's a big step to go from patching one specific company internal use case to contributing a feature that works for every user of ffmpeg and will be accepted upstream.
Comment by EdNutting 1 day ago
However, my interpretation of the article was that they did a lot more than just patching pieces. They, perhaps, could have taken a much earlier opportunity to work with the core maintainers of ffmpeg to help define its direction and integrate improvements, rather than having to assist with a significant overhaul now (years later).
Comment by Aurornis 1 day ago
The typical situation is that you need to write a proof of concept internally and get it deployed fast. Then you can iterate on it and improve it through real world use. Once it matures you can start working on aligning with upstream, which may take a lot of effort if upstream has different ideas about how it should be designed.
I’ve also had cases where upstream decided that the feature was good but they didn’t want it. If it doesn’t overlap with what the maintainers want for the project then you can’t force them to take it.
Upstreaming is a good goal to aim toward but it can’t be a default assumption.
Comment by summerlight 1 day ago
Comment by kevincox 1 day ago
Comment by EdNutting 1 day ago
But corporate blog posts often go this way. I'm not mad at them or anything. Just a mild dislike ;)
Comment by kevincox 1 day ago
Comment by pyrolistical 1 day ago
But you can use that to steer Meta. Explain how doing x (which also helps the community) makes them more money.
Comment by p-o 1 day ago
I really wonder if they couldn't have run the fork as an open source project. They present their options as binary when in fact they had many options from the get-go. They could have run the fork in an open-source fashion, so that FFmpeg's developers could see their work and understand which features they were building.
Keeping everything closed source and then contributing back X amount of years later feels a little bit disingenuous.
Comment by zer0zzz 1 day ago
Often when you are working on a downstream code base either you are inheriting the laziness of non-upstreaming of others or you are dealing with an upstream code base that’s really opinionated and doesn’t want many of your teams patches. It can vary, and I definitely empathize.
Comment by xienze 1 day ago
This can be harder than you think. Some time ago I worked at $BIGCORP, and internally we used an open source library with some modifications to allow it to fit better into our architecture. In order to get things upstreamed we had to become official contributors AND lobby to get everyone involved to see the usefulness of what we were trying to do. This took a lot of back-and-forth and rethinking the design to make it less specific to OUR needs and more generally applicable to everyone. It's a process. I'm not surprised that Facebook's initial approach would be an internal fork instead of trying to play the political games necessary to get everything upstreamed right off the bat. That's exactly the situation we were in, so I get it.
Comment by HumblyTossed 1 day ago
Comment by neutrinobro 1 day ago
While it is good that they worked to get their internal improvements upstream, and this is certainly better behavior than that of some other unmentioned tech giants, it makes one wonder (since they are presumably running it tens of billions of times per day) whether they were involved in supporting these improvements all along. If not, why not?
Comment by ebbflowgo 1 day ago
Comment by tcbrah 1 day ago
Comment by MisterPea 21 hours ago
A silver lining of current RAM prices is that they change the cost-benefit analysis of improving the underlying software, which (hopefully) gets implemented in open source.
Comment by kevincox 1 day ago
This makes a lot of sense for the live-streaming use case, and some sense for just generally transcoding a video into multiple formats. But I would love to see time-axis parallelization in ffmpeg. Basically quickly split the input video into keyframe chunks then encode each keyframe in parallel. This would allow excellent parallelization even when only producing a single output. (And without lowering video quality as most intra-frame parallelization does)
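The chunked approach described above can be sketched roughly as follows. This is a hypothetical Python helper that only builds the ffmpeg command lines (it does not run them); the segment muxer cuts at keyframes when stream-copying, so each chunk can be encoded independently and concatenated back together. The filenames, codec, and CRF value are illustrative assumptions, not anything from the article or from ffmpeg's internals.

```python
# Sketch of keyframe-chunk ("time-axis") parallel encoding:
# 1) split at keyframes with stream copy, 2) encode chunks in parallel,
# 3) reassemble losslessly with the concat demuxer.

def split_cmd(src, pattern="chunk_%03d.mkv"):
    # -f segment with -c copy cuts only at keyframes, so every chunk
    # starts on an I-frame and can be encoded on its own.
    return ["ffmpeg", "-i", src, "-c", "copy", "-f", "segment", pattern]

def encode_cmd(chunk, out):
    # One encoder process per chunk; these are what you fan out in parallel.
    return ["ffmpeg", "-i", chunk, "-c:v", "libx264", "-crf", "23", out]

def concat_cmd(list_file, out):
    # Reassemble the encoded chunks without re-encoding.
    return ["ffmpeg", "-f", "concat", "-safe", "0", "-i", list_file,
            "-c", "copy", out]

if __name__ == "__main__":
    print(split_cmd("input.mp4"))
    print(encode_cmd("chunk_000.mkv", "enc_000.mkv"))
    print(concat_cmd("chunks.txt", "output.mkv"))
```

In a real pipeline the per-chunk encode commands would be dispatched to a process pool or a fleet of workers, and the concat list file would enumerate the encoded chunks in order. The trade-off is that keyframe placement in the source bounds your parallelism, and rate control is per-chunk rather than global.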
Comment by solid_fuel 22 hours ago
Comment by infogulch 1 day ago
Comment by Melatonic 1 day ago
Comment by infogulch 1 day ago
Comment by dv35z 1 day ago
Comment by hparadiz 1 day ago
Oof. That is so relatable.
Also, ffmpeg 8 is finally handling HDR-to-SDR color mapping perfectly, as of my last recompile on Gentoo :)
Comment by comrade1234 1 day ago
Comment by ecshafer 1 day ago
Comment by Maxious 1 day ago
Comment by dewey 1 day ago
Comment by petcat 1 day ago
Comment by BonoboIO 1 day ago
Comment by flipped 1 day ago
Comment by touwer 1 day ago
Comment by EdNutting 1 day ago
[Edit: Why is anyone downvoting me linking to the previous post of this? What possible objection could you have to this particular comment?]
Comment by WalterGR 1 day ago
Comment by EdNutting 1 day ago
Comment by EdNutting 1 day ago
It's completely the opposite of HN "assume good faith" policy. Sigh.
Comment by flipped 1 day ago
Comment by jamesnorden 1 day ago
Comment by hrmtst93837 1 day ago
Concrete things that actually reduce risk are paying for continuous fuzzing with OSS-Fuzz on libavcodec, funding multi-arch CI that covers macOS, Windows, ARM and Nvidia GPU tests, and committing to upstream fixes instead of maintaining an internal fork. If a company does those three things you'll likely see fewer regressions, fewer security surprises, and much lower downstream maintenance cost than from a one-off bank transfer and a press release.
Comment by mghackerlady 1 day ago
Comment by thiago_fm 1 day ago
Comment by cheema33 1 day ago
Comment by PrathamJain3903 12 hours ago
Comment by raphaelmolly8 1 day ago
Comment by boxingdog 1 day ago
Comment by casey2 5 hours ago
Comment by BorisMelnik 1 day ago
Comment by randall 1 day ago
I worked at fb, and I'm 100% certain we sponsored VLC and OBS at the time. It would be strange if we didn't sponsor FFMPEG, but regardless (as the article says) we definitely got out of our internal fork and upstreamed a lot of the changes.
I worked on live, and everyone in the entire org worships ffmpeg.
Comment by BorisMelnik 1 day ago
And I know the teams love ffmpeg; there are some great folks at Meta, just not a lot in the C-suite.
Comment by flipped 1 day ago
Comment by Suckseh 1 day ago
It doesn't matter how much you worship ffmpeg when a company that makes billions by destroying our society gives a little bit of a handout back.
So good for you? Bad for ffmpeg, society, and the rest of the world.
Comment by tt24 1 day ago
Comment by tsumnia 1 day ago
Comment by Suckseh 1 day ago
Comment by JambalayaJimbo 1 day ago
Comment by qalmakka 1 day ago
Comment by semiquaver 1 day ago
If you get mad when a company makes good use of open source and contributes to a project’s betterment, you do not understand the point of open source, you’re just fumbling for a pitchfork.
Comment by golfer 1 day ago
Comment by gruez 1 day ago
The analogy fails because free samples cost Costco (or whatever the vendor is) money. Raking Meta over the coals for using ffmpeg instead of paying for some proprietary alternative makes as much sense as raking every tech company over the coals for using Linux. Or maybe you'd do that too, I can't tell.
Comment by theultdev 1 day ago
They bet on open source and they open source a lot of technology.
It's one of the best companies when it comes to open source.
I don't know how much total they donate, but I've seen tons of grants given to projects from them.
Comment by pmontra 1 day ago
Comment by dotancohen 1 day ago
Comment by theultdev 1 day ago
But PHP wouldn't be here today if it wasn't for Meta and its support.
Comment by pmontra 1 day ago
Actually, Facebook worked against WordPress and the adoption of PHP, because a number of people who could have used a WP instance to blog or to market a product started using a FB page instead. Ecommerce went from self-hosted (Magento, WooCommerce, PrestaShop) to hosted, or to Amazon, and also to FB.
Comment by theultdev 23 hours ago
WordPress did nothing to help further PHP other than adoption (which is still important, but not as important).
Comment by ianhawes 1 day ago
This could not be more wrong. Meta is still using PHP AFAIK but I'm not sure it's modern. They created the Hack programming language ~10 years ago but it doesn't look like it's been updated in several years. Most of the improvements they touted were included in PHP 7 years ago.
Comment by theultdev 1 day ago
But when the backend world was either Java or ASP, FB chose PHP and helped us other small companies out.
They eventually went Hack, the rest went Node for the most part.
But during those PHP years they gave us HHVM and many PHP improvements to get us through.
Comment by captn3m0 23 hours ago
Comment by theultdev 21 hours ago
Of course it wasn't merged in; it was a separate compiler. It certainly inspired future optimizations, though.
But the point is, it was a very useful stop-gap solution for the community.
Also would like to highlight that they have contributed a lot to PHP upstream in addition to that.
Comment by righthand 1 day ago
Comment by cheema33 1 day ago
I am guessing the world moved to React because the developer community in general does not feel the same way.
Comment by righthand 1 day ago
Comment by ecshafer 1 day ago
Comment by theultdev 1 day ago
Been doing this for 20 years. React/JSX is the easiest (for me)
Comment by embedding-shape 1 day ago
React and JSX really did help a lot compared to how it used to be, which was pretty unmanageable.
Comment by DonHopkins 18 hours ago
If you, for some inexplicable reason, judge companies "the best" only based on their open source software and totally ignore everything else they do to society, while totally ignoring all the other companies who support open source software so much better, without doing all the evil shit that Facebook does (like React).
The rest of us don't bend over backwards so far and blindfold ourselves to harsh reality just to lick Zuckerberg's boots.
Comment by theultdev 6 hours ago
Not defending the company in any other regard, nor do I even like social media platforms; I'd rather we had forums only again as a society.
Feel free to continue to follow me around and perform bad takes, it's funny.
Comment by acedTrex 1 day ago