Thus, as Mark Atwood, an open source policy expert, pointed out on Twitter, he had to keep telling Amazon not to do things that would mess up FFmpeg. He had to keep explaining to his bosses that “They are not a vendor, there is no NDA, we have no leverage, your VP has refused to help fund them, and they could kill three major product lines tomorrow with an email. So, stop, and listen to me … ”
I agree with the headline here. If Google can pay someone to find bugs, they can pay someone to fix them. How many times have managers said "Don't come to me with problems, come with solutions"?
I've been a proponent of upstreaming fixes for open source software.
Why?
- It makes continued downstream consumption easier: you don't have to rely on fragile secret patches.
- It gives back to projects that helped you to begin with, it's a simple form of paying it forward.
- It all around seems like the "ethical" and "correct" thing to do.
Unfortunately, in my experience, there are often a lot of barriers within companies to upstreaming. The reasons can be anything from compliance to processes, you name it... It's unfortunate.
I have a very distinct recollection of talks about hardware aspirations and upstreaming software fixes at a large company. The cultural response was jarring.
As yet, Valve is the only company I know of doing this, and it's paying dividends both for Linux and for Valve. In just five-ish years of Valve investing people and money into Linux (specifically Mesa and WINE), Linux has gone from being kind of shaky with Windows software to "I can throw a Windows program or game at it and it usually works". Imagine how much further the OSS ecosystem would be if open source hadn't existed, only FOSS, and companies were legally obligated to either publish source code or otherwise invest in the ecosystem.
WINE, CodeWeavers, Mesa, Red Hat, and plenty of others have been pumping money into the Linux graphics subsystems for a very long time. It's cool that Valve was able to use its considerable wealth to build a business off of it. But they came in at a pretty opportune time.
Windows support had gotten a boost from .NET going open source as well as other stuff MS began to relax about. It also helped that OpenGL was put to rest and there was a new graphics API that could reasonably emulate DirectX. I don't know much about the backstory of Mesa, but it's pretty cool tech that has been developing for a long time.
Valve is so successful because it is a private company, and the CEO is the CTO and he is essentially the corporate equivalent of a religious monk. How else can you get 20+ years to slowly build a software business?
As a side note, YC and tech startups themselves have become reality TV. Your goal should be Valve! You should be Gabe Newell! You don't need to be famous! Just build something valuable and be patient.
Ironically, Gabe is more famous than the rest of whoever you're talking about, not because he seeks fame but just because he generally does right by his customers and makes himself accessible. Telling gamers to email him with questions, concerns, comments, anything, and then actually responding. Even though he's apparently spending most of his time hanging out on yachts, people love him because he makes an effort to be tuned in to what his customers want. If you do that, you'll be famous in a better way than what you can get from reality TV.
Steam is the most dominant game distribution platform on the planet, and it landed when there was not yet a market for it. Very few other projects will get to its level of success in any sector, anywhere.
GabeN was also an MS developer back in the day and likely would have been well off regardless, but he didn't need to play the YC A-B-let's-shoehorn-AI bullshit games that are 100% de rigueur for all startups in 2025.
From what I understand, Gabe/Valve almost went bust during Half-Life's development. His gamble paid off when that turned into a runaway success, but he still could have lost it when he bet again on HL2 and Steam; at the time it was extremely controversial to make those a package deal. If Half-Life 2 had been not quite as good as it turned out to be, Valve could have ended up a one-hit-wonder studio that burned its goodwill with a sketchy DRM scheme on its second game.
> How else can you get 20+ years to slowly build a software business?
It used to be normal to build a business slowly over 20 years. Now everyone grabs for the venture capital, grows so fast they almost burst, and the venture capital inevitably ends in enshittification as companies are forced by shareholders to go against their business model and shit over their customers in order to generate exponential profit margins.
WINE was a thing for years and generally worked okay for a lot of things.
I was playing Fallout 3 on WINE well before Valve got involved with minimal tweaks or DIY effort.
Proton with Steam works flawlessly for most things, including AAA games like RDR2, and it's great, but don't forget that WINE was out there making it work for a while.
> WINE was a thing for years and generally worked okay for a lot of things.
Yes, but Valve's involvement handled the "the last 10% takes 90% of the time" part of WINE, and that's a great impact, IMHO.
Trivia: I remember the WINE guys laughing at the WMF cursor exploit, then finding the exploit worked on WINE too and fixing it in a panic, and then bragging about bug-for-bug compatibility with Windows. It was hilarious.
Also, WINE allowed Linux systems to be carriers for Windows USB flash drive viruses for many years without being affected by them.
Can't you just give the information you are hinting at? People other than the OP read this. You're basically telling me to go read thousands of messages on a mailing list just to answer your rhetorical question. (Answer: Intel, Red Hat, Meta, Google, SUSE, Arm, and Oracle. There are much more efficient ways to find this.) Yes, they are the main kernel contributors and have been for many years. I'm still not sure I understand the comment.
I think GP answered as they did because there are so many examples it's hard to know where to start.
It's not entirely unlike if someone said "the only person I know writing books successfully is Brandon Sanderson." I do think "you ought to go check out your local book store" would be a valid response.
I'd say as a counterpoint that just because someone works at, say, Meta or Oracle, and also contributes to OSS projects, that doesn't equate to the company they work at funding upstream projects (at least not by itself).
I don't even have to link the xkcd comic because everyone already knows which one goes here.
Everyone I know who contributes to Linux upstream is paid to do it. It's not really worth the hassle to bother trying if you aren't getting paid. It's also very easy to find companies that will pay you to work on Linux and upstream.
To be clear, both of those are closed source, proprietary games owned by Valve. It makes sense for them to want to consolidate their player base in one game.
> Valve is the only company I know of [upstreaming fixes for open source software]
Sorry, that's ridiculous. Basically every major free software dependency of every major platform or application is maintained by people on the payroll of one or another tech giant (edit: or an entity like LF or Linaro funded by the giants, or in a smaller handful of cases a foundation like the FSF with reasonably deep industry funding). Some are better than others, sure. Most should probably be doing more. FFmpeg in particular is a project that hasn't had a lot of love from platform vendors (most of whom really don't care about software codecs or legacy formats anymore), and that's surely a sore point.
But to pretend that SteamOS is the only project working with upstreams is just laughable.
From my time working at a Fortune 100 company, if I ever mentioned pushing even small patches to libraries we effing used, I'd just be met with "try to focus on your tickets". Their OSS library policies were also super byzantine, seemingly requiring review of everything you'd release, but the few times I tried to do it the official way, I never heard anything back from the black-hole mailing list you were supposed to contact.
Yes, I've also worked on OpenStack components at a university, and there I see Red Hat or IBM employees pushing up loads of changes. I don't know if I've ever seen a Walmart, UnitedHealth, Chase Bank, or Exxon Mobil (to pick some of the largest companies) email address push changes.
I don't know about ExxonMobil but Walmart, UnitedHealth Group, and JPMorganChase employees do actively contribute to open source projects. Maybe just not the ones you used. They have also published some of their own.
To steelman this: I've never worked at any of the companies you listed, but most likely Red Hat and IBM employees (is there still a difference?) are being paid specifically to work on OpenStack, as they get money from support contracts. When Walmart or Chase use OpenStack, there is a rather small team implementing OpenStack to be used as a platform, and they are then paying IBM/Red Hat for that support. There probably isn't the expertise in the OpenStack team at Walmart to be adding patches. Some companies spend more on in-house technology than others, and then open source it.
Check again. The Optum unit of UnitedHealth Group has huge revenue from software and technical services. If just that part of the business was spun out it would be one of the top 20 US tech companies.
What a lot of people don't realize is that it's mostly employer HR departments running the "death panels". UHG and its competitors would be happy to sell insurance policies that cover absolutely everything with no questions asked: this would be easier for them to administer without the hassles of utilization management and claim edits. But customers — mainly large employers — demand that insurers (or third-party administrators) impose more restrictive coverage rules in order to hold down medical costs.
Ultimately there will always be some healthcare rationing. This happens in every country. For example, the UK NHS has death panels which decide that certain treatments won't be covered at all because they're not cost effective. Resources are limited and demand is effectively infinite. So the only real question is how we do the rationing.
> UHG and its competitors would be happy to sell insurance policies that cover absolutely everything with no questions asked...But customers — mainly large employers — demand that insurers (or third-party administrators) impose more restrictive coverage rules in order to hold down medical costs.
UHG has been caught denying claims for things that employers already paid them to cover for their employees. You can't blame HR departments for that. You also can't blame HR for UHG upcoding/overbilling which eats into the limited resources of hospitals and the limited resource of taxpayer money ultimately resulting in fewer people able to get the healthcare they need just so that UHG can line their own pockets.
While HR departments do have their own issues, they're nowhere near the level of pure evil that UHG is.
FWIW, when working at a major Silicon Valley tech company in the mid 2010s, my team made significant contributions to OSS projects including OpenStack and the Linux kernel as a core part of our work for Walmart.
The work to upstream our changes was included in the Statements of Work which Walmart signed off on, and our time spent on those efforts was billed to them.
The stats for those projects will have recorded my former employer as the direct source of those contributions - but they wouldn't have existed had it not been for Walmart.
Sure, but the parent’s comment hits on something perhaps. All the tech giants contribute more haphazardly and for their own internal uses.
Valve does seem to be in a somewhat rare position of making a proper Linux distro work well with games. Google's Chromebooks don't contribute to the Linux ecosystem in the same holistic fashion, it seems.
> Unfortunately, in my experience, there's often a lot of barriers within companies to upstream. Reasons can be everything from compliance, processes, you name it... It's unfortunate.
I’ve been at several companies where upstreaming was encouraged for everything. The fewer internal forks we could maintain, the better.
What surprised me was how many obstacles we'd run into in some of the upstream projects. The amount of time we lost trying to appease a few maintainers who were never happy with code unless they wrote it themselves was mind-boggling.
For some projects you can basically forget about upstreaming anything other than an obvious and urgent bug fix because the barriers can be so high.
While there are sometimes maintainer prima-donna egos to contend with, there's also this:
Any patch sent in also needs to be maintained into the future, and most of the time it's the maintainers that need to do that, not the people contributing the patch. Therefore any feature-patches (as opposed to simple bugfixes) are quite often refused, even if they add useful functionality, because the maintainers conclude they will not be able to maintain the functionality into the future (because no one on the maintaining team has experience in a certain field, for example).
The quality bar for a 'drive by patch' which is contributed without the promise of future support is ridiculously high and it has to be. Other peoples' code is always harder to maintain than your own so it has to make up for that in quality.
Not every patch. Sometimes there are patches that don't explicitly fix defects but, for example, surface a boolean setting that some upstream library started to expose. That setting is exactly like a dozen other settings already there. It's made using the same coding style and has all the requisite things other settings have.
Maybe the developer intends to some day change the internal implementation, such that that particular boolean flag wouldn't make sense any more. Or they're considering taking out the option entirely, and thus simplifying the codebase by making it so it only works one way.
Maybe the developer just doesn't care about your use case. If I have a project that works fine for what I do with it, why should I also care about some other use case you have for my work? I'm not your employee. Your product doesn't put a roof over my head.
I don't want a job where I do free work for a bunch of companies who all make money off my work. That's a bad deal. It's a bad deal even if my code gets better as a result. I have 150 projects on GitHub. I don't want to be punished if any of those projects become popular.
We can't go around punishing projects like ffmpeg or ruby on rails for the crime of being useful.
> Maybe the developer just doesn't care about your use case. If I have a project that works fine for what I do with it, why should I also care about some other use case you have for my work?
Then say you don't expect contributions at all. That's fair game, I'm OK with it. I will then exercise my rights granted by your license in another way (forking and making my own fix, most likely). My gripe is with projects that write prominently "PRs welcome", and then put up enough red tape to signal that nah, not really.
The pattern I have seen is that if you want to contribute a fix into a project, you are expected to "engage with the community", wear their badge, invest into the whole thing. I don't want to be in your community, I want to fix a bug in a thing I'm using and go on with my life.
Given the usual dynamics of online communities, which somehow keep getting more prone to drama, toxicity, tribalism, and polarization, I increasingly want no part in them most of the time.
I think many projects would be better for having a lane for drive-by contributors who could work on fixing bugs that prevent their day-to-day from working without expectations of becoming full-time engaged. The project could set an expectation that "we will rewrite your patch as we see fit so we could integrate and maintain it, if we need/want to". I wouldn't care as long as the problem is taken care of in some way.
In my experience simple bugfixes are nearly always accepted without fuss (in active projects, that is. Some project in maintenance mode where the last commit was 3 months ago is a different story, because then probably just no-one can be arsed to look at the patch).
Exposing a simple setting like you describe can sometimes go in without a fuss, or it can stall; that depends on a lot of factors. Like the other reply states: it could go against future plans. Or it could be difficult for the maintainer to see the ramifications of a simple-looking change. It sucks that it is that way (I have sent in a few patches for obscure CUPS bugs which have stayed in limbo, so I know the feeling ;-) ) but it is hardly surprising. From a project's point of view, drive-by patches very often cost more than they add, so to get something included you often need to do a very thorough writeup as to why it's a good idea.
> I just as increasingly want to have no part in them most of the time.
If all the people you meet are assholes... ;-P Not to say you are an asshole, or at least not more than most people, but I have been in this situation myself more than once, and it really pays to stay (overly) polite and not let your annoyance about being brushed off slip through the mask. The text-only nature of this kind of communication makes it very sensitive to misinterpretations and annoyances.
It would be nice if all you needed for a patch to be included somewhere was for it to be useful. But alas, there's a certain amount of social engineering needed as well, and IMHO this has always been the case. If you feel it's getting increasingly hostile, that's probably your own developer burnout speaking (boy do I know that one :-P )
“Pay my way or take the highway” is as close to the closed-source ethos as you can possibly get. Collaboration is not feasible if the barrier of entry is too high and those involved make no effort to foster a collaborative environment.
> The amount of time we lost to trying to appease a few maintainers who were never happy with code unless they wrote it themselves was mind boggling.
That brings us full circle to the topic because one important thing that gets people motivated into accepting other people's changes to their code is being paid.
If you work in FOSS side projects as well as a proprietary day job, you know it: you accept changes at work that you wouldn't in those side projects.
In the first place, you write the code in ways you wouldn't, due to conventions you disagree with, in some crap language you wouldn't use voluntarily, and so it goes.
People working on their own FOSS project want everything their way, because that's one of the benefits of working on your own FOSS project.
I've literally had my employer's attorneys tell me I can't upstream patches because it would put my employer's name on the project, and they don't want the liability.
No, it didn't help giving them copies of licenses that have the usual liability clauses.
It seems a lot of corporate lawyers fundamentally misunderstand open source.
Corporate counsel will usually say no to anything unusual because there's no personal upside for them to say yes. If you escalate over their heads with a clear business case then you can often get a senior executive to overrule the attorneys and maybe even change the company policy going forward. But this is a huge amount of extra unpaid work, and potentially politically risky if you don't have a solid management chain.
I don't know if it would work, but sometimes I consider a "moochers" rule wrt opensource code.
Like, here's the deal: The work is proper, legit opensource. You can use it for free, with no obligations.
But if your company makes a profit from it, you're expected to either donate money to the project or contribute code back in kind. (Eg security patches, bug fixes, or contribute your own opensource projects to the ecosystem, etc).
If you don't, all issues you raise and PRs get tagged with a special "moocher" status. They're automatically - by default - ignored or put in a low priority bin. If your employees attend any events, or join a community discord or anything like that, you get a "moocher" badge, so everyone can see that you're a parasite or you work for parasites. Thats ok; opensource licenses explicitly allow parasites. I'm sure you're a nice person. But we don't really welcome parasites in our social spaces, or allow parasites to take up extra time from the developers.
I've spent the last 32 years pushing every employer I've had to contribute back to open source. Because of the sector I work in, more often than not I'm constrained by incredibly tight NDAs.
I can usually stop short of providing code and file a bug that explains the replication case and how to fix it. I've taken patches and upstreamed them pseudonymously on my own time when the employer believed the GPL meant they couldn't own the modifications.
If after all that you still want to label me a moocher at cons, that's your choice.
It goes even further sometimes. I've seen someone in the Go community Slack announce they were going to dial back their activity because of Very Serious Clauses in their Apple contract.
That seems to imply that Apple employees are prohibited from being good internet citizens and e.g. helping people out with any kind of software issue. This presumably includes contributing to open source, although I'm sure they can get approval for that. But the fact they have to get approval for it is already a chilling effect.
Why would they invest resources - scarce, expensive time of attorneys - in researching and solving this problem? The attorneys' job is to help the company profit, to maximize ROI for legal work. Where is the ROI here? And remember, just positive ROI is unacceptable; they want maximum ROI per hour worked. When the CEO asks them how this project maximized ROI, what do they say?
I believe in FOSS and can make an argument that lots of people on HN will accept, but many outside this context will not understand it or care.
If you fixed something in an open source library you use, and you don't push that upstream, you are bound to re-apply that patch with every library update you do. And today's compliance rules require you to essentially keep all libraries up to date all the time, or your CVE scanners will light up. So fixing this upstream in the original project has a measurable impact on your "time spent on compliance and updates" KPI.
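A minimal sketch of what that fragility looks like in practice (library and file names are made up): the never-upstreamed fix lives as a patch file that must be re-applied on every vendor bump, and it breaks the moment upstream touches the surrounding lines.

```shell
# Hypothetical vendored library with a bug you fixed locally.
mkdir -p vendor/libfoo patches
printf 'int frobnicate(void) { return -1; }\n' > vendor/libfoo/foo.c

# The fix that was never sent upstream, kept as a patch file.
cat > patches/0001-fix-frobnicate.patch <<'EOF'
--- a/vendor/libfoo/foo.c
+++ b/vendor/libfoo/foo.c
@@ -1 +1 @@
-int frobnicate(void) { return -1; }
+int frobnicate(void) { return 0; }
EOF

# After every library update, the same patch has to be re-applied.
# It fails the moment upstream changes the context lines.
patch -p1 --forward < patches/0001-fix-frobnicate.patch
grep 'return 0' vendor/libfoo/foo.c
```

Upstreaming the one-line fix deletes this whole maintenance loop, which is exactly the KPI argument above.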
This touches on what I ended up telling them: maintaining a local patchset is expensive and fragile. Running customized versions of things is a self-inflicted compliance problem.
Sounds like your employer's attorneys need to be brought to heel by management. Like most things, this is a problem of management not understanding that details matter.
I upstreamed a 1-line fix, plus tests, at my previous company. I had to go through a multi-month process of red tape and legal reviews to make it happen. That was a discouraging experience to say the least.
My favorite is when while you were working through all that, the upstream decided they need a CLA. And then you have to go through another round of checking to see if your company thinks it's ok for you to agree to sign that for a 1 line change.
Certainly easier to give a good bug report and let upstream write the change, if they will.
I found a tiny bug in a library. A single, trivial, “the docs say this utility function does X, but it actually does Y”. I’m not even allowed to file a bug report. It took me some time to figure out how to even ask for permission, and they referred it to some committee where it’s in limbo.
One of my past employers in the UK added a policy that all software an employee writes during their employment (e.g. on weekends, on personal hardware) is owned by the company.
Several software engineers left, several didn't sign it.
Yes, the company was very toxic apart from that. Yeah, I should name and shame, but I won't be doxxing myself.
Many years ago an employer tried to do that and everyone... just refused to sign the new contracts. The whole thing sat in standoff limbo for months until the dotcom crash happened and the issue became moot when we were all made redundant.
This is what I've done in those rare cases I've had to fix a bug in a tool or a library I've used professionally. I've also made sure to do that using online identities with no connection to my employer so that any small positive publicity for the contribution lands on my own CV instead of the bureaucratic company getting the bragging rights.
Even at places that are permissive about hobby code, a company ought to want to put its name on open source contributions. These build awareness in the programming community of the company and can possibly serve as a channel for recruitment leads. But the (usually false) perception of legal risk and misguided ideas about what constitutes productivity usually sink any attempts.
It is amazing how companies want this "marketing" but don't want to put the actual effort to make it possible.
A tech company I worked at once had a "sponsorship fund" to "sponsor causes" that employees wanted. It was actually good money, though a drop in the bucket for the company. A lot of employees voted for sponsoring Vue.js, which is what we used. Eventually, after months of silence, legal/finance decided it was too much work.
But hey, it wasn't an exception. The local animal shelter was the second most voted, and legal/finance also couldn't figure out how to donate.
In the end the money went to nowhere.
The only "developer marketing" they were doing was sending me in my free time to do panels with other developers in local universities and conferences. Of course it was unpaid, but in return I used it to get another job.
Maybe I've just gotten lucky, but at companies I've worked for I've usually gotten the go-ahead to contribute upstream on open source projects so as long as it's something important for what we are working on. The only reason I didn't do a whole lot as part of my work at Google is because most of the open source projects I contributed to at Google were Google projects that I could contribute to from the google3 side, and that doesn't count.
> Unfortunately, in my experience, there's often a lot of barriers within companies to upstream. Reasons can be everything from compliance, processes, you name it...
True. In my case I literally had to fight for it. Our lawyers were worried about a weakened patent portfolio and whatnot. In my case at least I won and now we have a culture of upstreaming changes. So don't give up the fight, you might win.
> Unfortunately, in my experience, there's often a lot of barriers within companies to upstream. Reasons can be everything from compliance, processes, you name it... It's unfortunate.
I sympathize and understand those issues for small companies, but after a certain size those excuses stop being convincing.
Especially for a software company like Google who runs dozens of open source projects, employs an army of lawyers to monitor compliance, and surely has to deal with those issues on a daily basis anyway.
At some point there needs to be pushback. Companies get a huge amount of value from hobbyist open source projects, and eventually they need to start helping out or be told to go away.
This is why all open source software should be copyleft. No discussion to be had: either you upstream changes, or that open source developer's going to get funded via the courts.
I think if you look a bit deeper, all product lines from said trillion-dollar company rely on open source to some degree. They should be spending hundreds of millions in sponsorship of OSS projects. They should put the maintainers on their payroll: not even reporting to a manager, just pay them a salary for their OSS work.
In a follow-up tweet, Mark Atwood elaborates: "Amazon was very carefully complying with the licenses on FFmpeg. One of my jobs there was to make sure the company was doing so. Continuing to make sure the company was was often the reason I was having a meeting like that inside the company."
I interpret this as meaning there was an implied "if you screw this up" at the end of "they could kill three major product lines with an email."
Are you interpreting that as "if we violate the license, they can revoke our right to use the software"? And they use it in three products, so that would be really bad. That would make it sensible to have a compliance person.
Yeah - Amazon Elastic Transcoder, which they just shut down and replaced with Elemental MediaConvert, is almost certainly just managed "ffmpeg as a Service" under the hood.
Twitch, definitely. This whole brouhaha has been brewing for a while and can be traced back to a spat between Theo and ffmpeg.
In the now-deleted tweet, Theo trashed VLC codecs, to which ffmpeg replied basically "send patches, but you wouldn't be able to". The reply to that was:

> You clearly have no idea how much of my history was in ffmpeg. I built a ton of early twitch infra on top of yall.
This culminated in Theo offering a $20k bounty to ffmpeg if they removed the people running the ffmpeg Twitter account, which prompted a lot of heated discussion.
So when Google Project Zero posted their bug... ffmpeg went understandably ballistic
The company that I work at makes sure anything that uses a third-party library, whether in internal tools, shipped products, or hosted products, goes through legal review. And you'd better comply with whatever the legal team asks you to do. Unless you and everyone around you are as dumb as a potato, you are not going to do things that blatantly violate licenses, like shipping a binary with modified but undisclosed GPL source code. And you can be sure that (1) it's hard to use anything GPL or LGPL in the first place and (2) even if you are allowed to, someone will tell you to be extra careful and do exactly what you are told to (or not to).
And as long as Amazon is complying with ffmpeg's LGPL license, ffmpeg can't just stop licensing existing code via an email. Of course, unless there is some secret deal, but again, in that case, someone in the giant corporation will make sure you follow what's in the contract.
Basically, at companies like Amazon where there are functional legal teams, the chance of someone "screwing up" is very small.
Easy: ffmpeg discontinues or relicenses some ffmpeg functionality that AWS depends on for those product lines and AWS is screwed. I've seen that happen in other open source projects.
And then the argument for refusing to just pay ffmpeg developers gets even more flimsy.
The entire point here is to pay for the fixes/features you keep demanding, else the project is just going to do as it desires and ignore you.
More and more OSS projects are getting to this point as large enterprises (especially in the SaaS/PaaS spheres) continue to take advantage of those projects and treat them like unpaid workers.
Not really. Their whole reason for not funding open source is that it essentially funds their competitors who use the same projects. That's why they'd rather build a closed fork in-house than just hand money to ffmpeg.
It's a dumb reason, especially when there are CVE bugs like this one, but that's how executives think.
> Their whole reason for not funding open source is it essentially funds their competitors who use the same projects. That's why they'd rather build a closed fork in-house than just hand money to ffmpeg.
So the premise here is that AWS should waste their own money maintaining an internal fork in order to try to make their competitors do the same thing? But then Google or Intel or someone just fixes it a bit later and wisely upstreams it so they can pay less than you by not maintaining an internal fork. Meanwhile you're still paying the money even though the public version has the fix because now you either need to keep maintaining your incompatible fork or pay again to switch back off of it. So what you've done is buy yourself a competitive disadvantage.
> that's how executives think.
That's how cargo cult executives think.
Just because you've seen someone else doing something doesn't mean you should do it. They might not be smarter than you.
It's the tragedy of the commons all over again.
You can see it in action everywhere people or communities should cooperate for the common good but don’t, because many either fear being taken advantage of or quietly try to exploit the situation for their own gain.
The tragedy of the commons is actually something else. The problem there comes from one of two things.
The first is that you have a shared finite resource, the classic example being a field for grazing which can only support so many cattle. Everyone then has the incentive to graze their cattle there and over-graze the field until it's a barren cloud of dust because you might as well get what you can before it's gone. But that doesn't apply to software because it's not a finite resource. "He who lights his taper at mine, receives light without darkening me."
The second is that you're trying to produce an infinite resource, and then everybody wants somebody else to do it. This is the one that nominally applies to software, but only if you weren't already doing it for yourself! If you can justify the effort based only on your own usage then you don't lose anything by letting everyone else use it, and moreover you have something to gain, both because it builds goodwill and encourages reciprocity, and because most software has a network effect so you're better off if other people are using the same version you are. It also makes it so the effort you have to justify is only making some incremental improvement(s) to existing code instead of having to start from scratch or perpetually pay the ongoing maintenance costs of a private fork.
This is especially true if your company's business involves interacting with anything that even vaguely resembles a consolidated market, e.g. if your business is selling or leasing any kind of hardware. Because then you're in "Commoditize Your Complement" territory where you want the software to be a zero-margin fungible commodity instead of a consolidated market and you'd otherwise have a proprietary software company like Microsoft or Oracle extracting fees from you or competing with your hardware offering for the customer's finite total spend.
Google, AWS, Vimeo, etc can demand all they want. But they’re just another voice without any incentives that aid the project. If they find having an in-house ffmpeg focused on their needs to be preferable, go for it; that’s OSS.
But given its license, they’re going to have to reveal those changes anyways (since many of the most common codecs trigger the GPL over LGPL clause of the license) or rewrite a significant chunk of the library.
They COULD, but history has shown they would rather start and maintain their own fork.
It might not make sense morally, but it makes total sense from a business perspective… if they are going to pay for the development, they are going to want to maintain control.
I always like to point out that "Open Source" was a deliberate watering-down of the moralizing messaging of Free Software to try and sell businesses on the benefits of developing software in the open.
> We realized it was time to dump the confrontational attitude that has been associated with "free software" in the past and sell the idea strictly on the same pragmatic, business-case grounds that motivated Netscape.
I like FS, but it's always had kind of nebulous morality, though. It lumps in humans with companies, which cannot have morals, under the blanket term "users".
This is the same tortured logic as Citizens United and Santa Clara Co vs Southern Pacific Railroad, but applied to FS freedoms instead of corporate personhood and the 1st Amendment.
I like the FS freedoms, but I favor economic justice more, and existing FS licenses don't support that well in the 21st century. This is why we get articles like this every month about deep-pocketed corporate free riders.
Agree in some ways. Still, discussing the nitty gritty is superfluous, the important underlying message you are making is more existential.
Open source software is critical infrastructure at this point. Maintainers should be helped out, at least by their largest users. If free riding continues, and maintainers' burden becomes too large, supply chain attacks are bound to happen.
> Agree in some ways. Still, discussing the nitty gritty is superfluous, the important underlying message you are making is more existential.
It's an important conversation to have.
I remember a particular developer... I'll be honest, I remember his name, but I remember him being a pretty controversial figure here, so I'll pretend not to know him to avoid reflexive downvotes... but this developer made a particular argument that I always felt was compelling.
> If you do open source, you’re my hero and I support you. If you’re a corporation, let’s talk business.
The developer meant this in the context of preferring the GPL as a license, but the problem with the GPL is that it still treats all comers equally. It's very possible for a corporation to fork a GPL project and simply crush the original project by throwing warm bodies at their projects.
Such a project no longer represents the interests of the free software community as a whole, but its maintainers specifically. I also think that this can apply to projects that are alternatives to popular GPL projects, except for the license being permissive.
We need to revisit the four freedoms, because I no longer think they are fit for purpose.
There should be a "if you use this product in a for-profit environment, and you have a yearly revenue of $500,000,000,000+ ... you can afford to pay X * 100,000/yr" license.
That's the Llama license and yeah, a lot of people prefer this approach, but many don't consider it open source. I don't either.
In fact, we are probably just really lucky that some early programmers were kooky believers in the free software philosophy. Thank God for them. So much of what I do owes to the resulting ecosystem that was built back then.
I reckon this is an impedance mismatch between "Open Source Advocacy" and Open Source as a programming hobby/lifestyle/itch-to-scratch that drives people to write and release code as Open Source (of whatever flavour they choose, even if the FSF and/or OSI don't consider that license to qualify as "Open Source").
I think Stallman's ideological "allowing users to run, modify, and share the software without restrictions" stance is good, but I think for me at least that should apply to "users" as human persons, and doesn't necessarily apply to "corporate personhood" and other non-human "users". I don't see a good way to make that distinction work in practice, but I think it's something that is going to become more and more problematic as time goes on, and LLM slop contributions and bug reports somehow feed into this too.
I was watching MongoDB's and Redis Labs' experiments with non-OSI-approved licences clearly targeted at AWS "abusing" those projects, but sadly neither of those cases seemed to work out in the long term. Also sadly, I do not have any suggestions of how to help...
A fork is more expensive to maintain than funding/contributing to the original project. You have to duplicate all future work yourselves, third party code starts expecting their version instead of your version, etc.
Funding ffmpeg also essentially funds their competitors, but a closed fork in-house doesn't. Submitting bugs costs less than both, hence why they still use ffmpeg in the first place.
Yes, definitely. I was just saying that if the license ever did change, they would move to an in-house library. In fact, they would probably release the library for consumer use as an AWS product.
Something more dangerous would be "Amazon is already breaking the license, but the maintainers haven't yet put in the work to stop the infringement".
Relicensing isn't necessary. If you violate the GPL with respect to a work you automatically lose your license to that work.
It's enough if one or two main contributors assert their copyrights. Their contributions are so tangled with everything else after years of development that it can't meaningfully be separated away.
I don’t know about ffmpeg, but plenty of OSS projects have outlined rules for who/when a project-wide/administrative decision can be made. It’s usually outlined in a CONTRIB or similar file.
If you breach the LGPLv2/GPLv2 licence then you lose all rights to use the software.
There's no penalty clause, there's no recovery clause. If you don't comply with the licence conditions then you don't have a licence. If you don't have a licence then you can't use the program, any version of the program. And if your products depend on that program then you lose your products.
The theoretical email would be a notification that they had breached the licence and could no longer use the software. The obvious implication being that AWS was wanting to do something that went contrary to the restrictions in the GPL, and he was trying to convince them not to.
And? How does that give the ffmpeg authors a power over Amazon? (Hint: it doesn’t and the guy we’re discussing is spewing nonsense for maximum retweets)
I'd guess Prime Video heavily relies on ffmpeg, then you got Elastic Transcode and the Elemental Video Services. Probably Cloudfront also has special things for streaming that rely on ffmpeg.
The "kill it with an email" probably means that whoever said this is afraid that some usecase there wouldn't stand up to an audit by the usual patent troll mothercluckers. The patents surrounding video are so complex, old and plentiful that I'd assume full compliance is outright impossible.
AWS MediaConvert as well, which is a huge API (in the surface it covers); it sits under Elemental but is kinda its own thing. Willing to bet (though I don't know) that that is ffmpeg somewhere underneath.
The API manual for it is nearly 4000 pages and it can do insane stuff[1].
I had to use it at my last job(TM); it's not terrible API-wise.
As a Googler, I wish I was as optimistic as you. There is an internal sentiment that valuable roles are being removed that aren't aligned with strategic initiatives, even roles that are widely believed to improve developer productivity. See the entire python maintainers team being laid off: https://www.reddit.com/r/AskProgramming/comments/1cem1wk/goo...
Roles fixing FFmpeg bugs would be a hard sell in this environment, imho.
> "Don't come to me with problems, come with solutions"
The problem is, the issue in the article is explicitly named as "CVE slop", so if the patch is of the same quality, it might require quite some work anyway.
The linked report seems to me to be the furthest thing from "slop". It is an S-tier bug report that includes a complete narrative, crash artifacts, and detailed repro instructions. I can't believe anyone is complaining about what is tied for the best bug report I have ever seen. https://issuetracker.google.com/issues/440183164?pli=1
But it's also a bug report about the decoder for "SANM ANIM v0" - a format so obscure almost all the search results are the bug report itself. Possibly a format exclusive to mid-1990s LucasArts games [1]
Pretty crazy that ffmpeg supports the codec in the first place, IMHO.
I can understand volunteers not wanting to sink time into maintaining a codec to play a video format that hasn't been used since the Clinton administration. gstreamer divides their plugins into 'good', 'bad' and 'ugly' to give them somewhere to stash unmaintained codecs.
It's a codec that is enabled by default at least on major Linux distributions, and that will be processed by ffmpeg without any extra flags. Anyone playing an untrusted video file without explicitly overriding the codec autodetection is vulnerable.
The format being obscure and having no real usage doesn't help when it's the attackers creating the files. The obscure formats are exposing just as much attack surface as the common ones.
> Pretty crazy that ffmpeg supports the codec in the first place, IMHO.
Sure, it's a valid bug report. But I don't understand why there has been so much drama over this when all the ffmpeg folks have to do is say "sorry, this isn't a priority for us so we'll get to it as soon as we can" and put the issue in the backlog as a low priority. If Google wants the issue fixed faster, they can submit a fix. If they don't care enough to do that, they can wait. No big deal either way. Instead, ffmpeg is getting into a public tiff with them over what seems to be a very easily handled issue.
Yes, you're very right. They could simply have killed a codec that no one uses anymore. Or put it behind a compile flag, so if you really want, you can still enable it.
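For what it's worth, ffmpeg's build system already supports this kind of opt-out; a sketch (the decoder name `sanm` is my assumption based on the name ffmpeg lists for the LucasArts SANM/Smush decoder - check `ffmpeg -decoders` on your build):

```shell
# Sketch: build ffmpeg without the SANM decoder.
# --disable-decoder=NAME is a standard configure option.
./configure --disable-decoder=sanm
make

# Or invert the policy: disable everything, then allowlist only the
# components your service actually needs.
./configure --disable-everything \
            --enable-decoder=h264 --enable-decoder=aac \
            --enable-demuxer=mov --enable-protocol=file
```

The allowlist approach is roughly what a security-conscious distro or service could do, at the cost of breaking obscure workflows that depended on the long tail of codecs.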
But no. Intentionally or not, there was a whole drama created around it [1], with folks being criticized [2] for saying exactly what you said above, because of their past (!) employers.
Instead of using the situation to highlight the need for more corporate funding for opensource projects in general, it became a public s**storm, with developers questioning their future contributions to projects. Shameful.
That behaviour is indeed totally unacceptable. At your job. Where they're paying you, and especially if they're paying you at FAANG type pay scales.
If you're an unpaid volunteer? Yeah - nah. They can tell you "Sorry, I'm playing with my cat for the next 3 months, maybe I'll get to it after that?", or just "Fuck off, I don't care."
(I'm now playing out a revenge fantasy in my head where the ffmpeg team does nothing, and Facebook or Palantir or someone similar gets _deeply_ hacked via the exploit Google published, and that starts the planet's biggest ever pointless lawyers-chasing-the-deepest-pockets fight.)
Or perhaps you’re a FAANG security researcher and your time will be better spent serving the OSS community as a whole by submitting as many useful bug reports as possible, instead of slightly fewer reports with patches included.
In this particular case it’s hardly obvious which patch you should submit. You could fix this particular bug (and leave in place the horrible clunky codec that nobody ever uses) OR you could just submit a patch that puts it behind a compile flag. This is really a decision for the maintainers, and submitting the latter (much better!) patch would not save the maintainers any meaningful amount of time anyway.
I don’t understand how it helps the community to publicly release instructions for attacking people, unless you’re trying to incentivize a company to fix their crap. In this case, there is no company to incentivize, so just report it privately.
You can say publicly that “there is an ABC class vulnerability in XYZ component” so that users are aware of the risk.
It’s OSS so somebody who cares will fix it, and if nobody cares then it doesn’t really matter.
This also informs users that it’s not safe to use ffmpeg or software derived from it to open untrusted files, and perhaps most importantly releasing this tells the distro package maintainers to disable the particular codec when packaging.
This bug might lead to a vulnerability, and that's enough. It makes no sense to waste a lot of time researching whether an exploit is possible or not - it is faster to remove the buggy codec nobody needs, or to make a fix.
I get that the ffmpeg people have limited time and resources, and I get that it would be nice if Google (or literally anyone else) patched this themselves and submitted that upstream. But "everyone downstream of us should compile out our security hole" is a terrible way to go about things. If this is so obscure a bug that there's no real risk, then there's no need for anyone to worry that the bug has been reported and will be publicized. On the other hand, if it's so dangerous that everyone should be rebuilding ffmpeg from source and compiling it out, then it really needs to be fixed upstream.
Edit: And also, how is anyone supposed to know they should compile the codec out unless someone makes a bug report and makes it public in the first place?
> But "everyone downstream of us should compile out our security hole" is a terrible way to go about things.
Is that somehow _less_ of a terrible way to think than "someone who's contributed their time as a volunteer to an open source software project that we have come to rely on, now has some sort of an obligation to drop everything and do more unpaid work for a trillion dollar company"?
> it really needs to be fixed upstream
Lots of people love using "Free Software" that they didn't have to write as essential parts of their business.
Way too many of them seem to blink right when they get to this bit of the licence they got it with:
SHOULD THE LIBRARY PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
> someone who's contributed their time as a volunteer to an open source software project that we have come to rely on, now has some sort of an obligation to drop everything and do more unpaid work for a trillion dollar company
If you could highlight the relevant part of the bug report that demanded the developers "drop everything" and do "unpaid work for a trillion dollar company", that would be great because I'm having trouble finding it. I see "hey, we found this bug, we found where we think the issue is in the code and here's a minimal reproduction. Also FYI we've sent you this bug report privately, but we will also be filing a public bug report after 90 days." And no, I don't think having a policy of doing a private bug report followed by a public report some time later qualifies as a demand. They could have just made a public report from the get go. They could also have made a private report and then surprised them with a public bug report some arbitrary amount of time later. Giving someone a private heads up before filing a public bug report is a courtesy, not a demand.
And it's really funny to complain about Google expecting "unpaid work for a trillion dollar company", when the maintainers proudly proclaim that the likes of no less than Google ARE paying them for consulting work on ffmpeg[1][2][3]
So when someone finds a bug in software, in your mind the only acceptable options are:
1) Fix it yourself
2) Sit on it silently until the maintainers finally get some time to fix it
That seems crazy to me. For one, not everyone who discovers a bug can fix it themselves. But also, a patch doesn't fix anything until it's merged. If filing a public bug report is expecting the maintainers to "drop everything and do free labor", then surely dropping an unexpected PR with new code that makes heretofore unseen changes for a claimed security vulnerability must be a much stronger demand that the maintainers "drop everything" and do the "free labor" of validating the bug, validating the patch, merging the patch, etc. So if the maintainers don't have time to patch a bug from a highly detailed bug report, they probably don't have time to review an unexpected patch for the same. So then what? Do people sit on that bug silently until someone finally gets around to having the time to review the PR? Or are they allowed to go public with the PR even though that's far more clearly a "demand to drop everything and come fix the issue NOW"?
I for one am quite happy the guy who found the XZ backdoor went public before a fix was in place. And if tomorrow someone discovers that all Debian 13 releases have a vulnerable SSH installation that allows root logins with the password `12345`, I frankly don't give a damn how overworked the SSH or Debian maintainers are, I want them to go public with that information too so the rest of us can shut off our Debian servers.
xz was a fundamentally different problem, it was code that had been maliciously introduced to a widespread library and the corrupted version was in the process of being deployed to multiple distributions. The clock was very much ticking.
The clock is always ticking. You have no idea when you find a vulnerability who knows about it or how or whether it is currently being actively exploited. A choice to delay disclosure is a choice to take on the risk that the bug is being actively exploited in order to reduce the gap (and risk in that gap) between public disclosure and remediations being available. But critically, it is a risk that is being forced on the users of the software. They are unable to make an informed decision about accepting the risk because they don't know there is a risk. Public disclosure, sooner rather than later MUST be the goal of all bug reports, no matter how serious and no matter how overworked the maintainers.
Responsible disclosure policies for contributor-driven projects can differ from commercial projects. Also, if Google has the funds to pay for bug finding, they also have the funds for bug fixing the community projects they depend on.
> Responsible disclosure policies for contributor-driven projects can differ from commercial projects.
They can, but there's not an obvious reason why they should. If anything, public disclosure timelines for commercial closed source projects should be much, much longer than for contributor-driven projects, because once a bug is public ANYONE can fix it in the contributor-driven project, whereas for a commercial project you're entirely at the mercy of the commercial entity's timelines.
> Also, if Google has the funds to pay for bug finding, they also have the funds for bug fixing the community projects they depend on.
They do. And they do. They literally hire the ffmpeg maintainers via the maintainers' consulting business (fflabs.eu) and they routinely contribute code to the ffmpeg project.
> They can, but there's not an obvious reason why they should.
Of course there are obvious reasons: corporations have the resources and incentives to fix them promptly once threatened with disclosure. Corporations don't respond well otherwise. None of these apply to volunteer projects.
> They literally hire the ffmpeg maintainers via the maintainers' consulting business (fflabs.eu) and they routinely contribute code to the ffmpeg project.
Great, then they should loop in the people they're paying on any notification of a vulnerability.
Of course, if that had truly been the case, then nobody would have heard of this debacle.
How so? Volunteer projects have maintainers assigned to the project writing code. The "resources" to fix a bug promptly are simply choosing to allocate your developer resources to fixing the bug. Of course, volunteers might not want to do that, but then again, a company might not want to allocate their developers to fixing a bug either. But in either case the solution is to prioritize spending developer hours on the bug instead of on some other aspect of your project. In fact, volunteer driven projects have one huge resource that corporations don't, a theoretically infinite supply of developers to work on the project. Anyone with an interest can pick up the task of fixing the bug. That's the promise of open source right? Many eyes making all bugs shallow.
As for incentives, apparently both corporations and volunteer projects are "incentivized" to preserve their reputation. If volunteer projects weren't, we wouldn't be having this insane discussion where some people are claiming filing a bug report is tantamount to blackmail.
The only difference between the volunteer project and the corporation is even the head of a volunteer project can't literally force someone to work on an issue under the threat of being fired. I guess technically they could threaten to expel them from the project and I'm sure some bigger projects could also deny funding from their donation pool to a developer that refuses to play ball, but obviously that's not quite the same as being fired from your day job.
> Great, then they should loop in the people they're paying on any notification of a vulnerability.
If only there was some generally agreed upon and standardized way of looping the right people in on notifications of a bug. Some sort of "bug report" that you could give a team. It could include things like what issue you think you've found, places in the code that you believe are the cause of the issue, possibly suggested remediations, maybe even a minimum test case so that you can easily reproduce and validate the bug. Even better if there were some sort email address[1] that you could send these sorts of reports to if you didn't necessarily want to make them public right away. Or maybe there could be a big public database you could submit the reports to where anyone could see things that need work and could pick up the work[2] even if the maintainers themselves didn't. That would be swell, I'm sure some smart person will figure out a system like that one day.
Here’s where I’m coming from: it would really suck if the outcome of all this was for ffmpeg to drop support for niche codecs.
It may be the case that ffmpeg cannot reasonably support every format while maintaining the same level of security. In that case, it makes sense for distros to disable some formats by default. I still think it’s great that they’re supported by the ffmpeg project.
I agree there would probably need to be some unified guidance about which formats to enable.
I agree, it would suck if ffmpeg dropped support for niche codecs altogether. But that's orthogonal to whether or not the bug reports should be made public. And realistically, the only way distros (or anyone) can know whether they should disable some formats by default is if the issues with those formats are public knowledge, so people can make informed decisions. Otherwise you're just arbitrarily picking some formats to enable and some not to, based on age or some other less useful criteria.
There are dozens if not hundreds of issues just like this one in ffmpeg, except for codecs that are infinitely more common. Google has been running all sorts of fuzzers against ffmpeg for over a decade at this point and it just never ends. It's a 20 year old C project maintained by poorly funded volunteers that mostly gives every media file ever the be-liberal-in-what-you-accept treatment, because people complain if it doesn't decode some bizarrely non-standard MPEG4 variant recorded with some Chinese plastic toy from 2008. Of course it has all of the out-of-bounds bugs. I poked around on the issue tracker for like 5 minutes and found several "high impact" issues similar to the one in TFA just from the last two or three months, including at least one that hasn't passed the 90 day disclosure window yet.
Nobody who takes security even remotely seriously should decode untrusted media files outside of a sandboxed environment. Modern media formats are in themselves so complex one starts wondering if they're actually Turing complete, and in ffmpeg the attack surface is effectively infinitely large.
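A minimal illustration of what "sandboxed" can mean here, assuming firejail is installed (the same idea works with bwrap, systemd-run, or a container runtime; the flags are from firejail's man page): decode with no network access and a disposable home directory, so a decoder exploit can't reach anything interesting.

```shell
# Stage the untrusted file into a throwaway directory, then decode it
# with networking disabled and that directory as the only home.
# "-f null -" decodes fully and discards the output, which is enough
# to trigger decoder bugs without writing anything.
mkdir -p /tmp/decode && cp untrusted.bin /tmp/decode/
firejail --net=none --private=/tmp/decode --quiet \
  ffmpeg -i ~/untrusted.bin -f null -
```

This is defense in depth, not a fix: the process can still crash or be hijacked, but the blast radius is a jailed, network-less process.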
The issue is CVE slop because it just doesn't matter if you consider the big picture.
I don't get why you think linking to multiple legitimate and high quality bug reports with detailed analysis and precise reproduction instructions demonstrates "slop". It is the opposite.
This is software that is directly or indirectly run by millions of people on untrusted media files without sandboxing. It's not even that they don't care about security, it's that they're unaware that they should care. It should go without saying that they don't deserve to be hacked just because of that. Big companies doing tons of engineering work to add defense in depth for use cases on their own infrastructure (via sandboxing or disabling obsolete codecs) doesn't help those users. Finding and fixing the vulnerabilities does.
All of these reports are effectively autogenerated by Big Sleep from fuzzing.
Again, Google has been doing this sort of thing for over a decade and has found untold thousands of vulnerabilities like this one. It is not at all clear to me that their doing so has been all that valuable.
Google fuzzing open source projects has eliminated a lot of low hanging fruit from being exploited. I am surprised you think that finding these vulnerabilities so they can be fixed has not been valuable.
> If you used the scan to pdf functionality of a [Xerox] like this a decade ago, your PDF likely had a JBIG2 stream in it.
That's not an obscure format, that's an old format. Meanwhile with ffmpeg we're talking about
> decoding LucasArts Smush codec, specifically the first 10-20 frames of Rebel Assault 2, a game from 1995.
That's both old and obscure.
Your point is still taken, but just to clarify: these are different situations. JBIG2 is included for legacy reasons. The LucasArts codec is included for... completion's sake(?)
The problem is that if you have a process using ffmpeg and an attacker feeds it a video with this codec, ffmpeg will proceed to auto-detect the codec, attempt to decode it, and then break everything.
If the format is old and obscure, and the implementation is broken, it shouldn't be on by default.
Sorry, I probably wasn't clear enough in my comment. I was trying to say that being old gives some legitimacy for existing. Just because it is old doesn't mean it isn't used. Though yes, this should be better determined to make sure it isn't breaking workflows you don't know about.
But old AND obscure, well it's nice that it is supported but enabled by default? Fully with you there.
Yeah but as you can see from the bug report ffmpeg automatically triggers the codec based on file magic, so it is possible that if you run some kind of network service or anything that handles hostile data an attacker could trigger the bug.
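To make the file-magic point concrete, here's a toy sniffer (not ffmpeg's actual probing code; the signatures are just well-known container magics) showing why a file's name or extension is irrelevant to which demuxer and decoder get picked:

```python
def sniff_container(data: bytes) -> str:
    """Toy content sniffer: identify a container by its magic bytes,
    ignoring the file name entirely (as real format probing does)."""
    if data[:4] == b"RIFF" and data[8:12] == b"AVI ":
        return "avi"
    if data[4:8] == b"ftyp":            # MP4/MOV family
        return "mp4"
    if data[:4] == b"\x1aE\xdf\xa3":    # EBML header (Matroska/WebM)
        return "matroska"
    return "unknown"

# A file named "notes.txt" still probes as AVI if the bytes say so:
payload = b"RIFF" + (1000).to_bytes(4, "little") + b"AVI LIST"
print(sniff_container(payload))  # avi
```

So renaming or filtering by extension buys nothing; if a hostile file carries the right magic, the matching code path runs.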
It feels like maybe people do not realize that Google is not the only company that can run fuzzers against ffmpeg? Attackers are also highly incentivized to do so and they will not do you the courtesy of filing bug reports.
The actual best response would be to run any "unsupported" codecs in a WASM sandbox. That way you are not throwing away work, Google can stop running fuzzers against random formats from 1995, and you can legitimately say that the worst that can happen with these formats is a process crash. Everybody wins.
Hmmmm. There's probably just one guy who wrote the ffmpeg code for that format. _Maybe_ one or two more who contributed fixes or enhancements?
The ffmpeg project needs to get in touch and get them to assign copyright to the ffmpeg project, then delete that format/decoder from ffmpeg. Then go back to Google with an offer to license them a commercial version of ffmpeg with the fixed SANM ANIM v0 decoder, for the low low price of only 0.0001% of YouTube's revenue every year. That'd likely make them the best funded open source project ever, if they pulled it off.
A human at Google manually investigates all of the bugs that fuzzers and AI find, and manually writes bug reports for upstream with more analysis. They are certainly paid to do that. They are also paid to develop tooling to find bugs.
I'm not sure what you think you mean when you say "running AIs indiscriminately". It's quite expensive to run AI this way, so it needs to be done with very careful consideration.
I’m an open source maintainer, so I empathize with the sentiment that large companies appear to create labor for unpaid maintainers by disclosing security issues. But “appear” is the operative word: a security issue is something that I (as the maintainer) would need to fix regardless of who reports it, or would otherwise need to accept the reputational hit that comes with not triaging security reports. That’s sometimes perfectly fine (it’s okay for projects to decide that security isn’t a priority!), but you can’t have it both ways.
If google bears no role in fixing the issues it finds and nobody else is being paid to do it either, it functionally is just providing free security vulnerability research for malicious actors because almost nobody can take over or switch off of ffmpeg.
I don’t think vulnerability researchers are having trouble finding exploitable bugs in FFmpeg, so I don’t know how much this actually holds. Much of the cost center of vulnerability research is weaponization and making an exploit reliable against a specific set of targets.
(The argument also seems backwards to me: Google appears to use a lot of not-inexpensive human talent to produce high quality reports to projects, instead of dumping an ASan log and calling it a day. If all they cared about was shoveling labor onto OSS maintainers, they could make things a lot easier for themselves than they currently do!)
Internally, Google maintains their own completely separate FFMpeg fork as well as a hardened sandbox for running that fork. Since they keep pace with releases to receive security fixes, there’s potentially lots of upstreamable work (with some effort on both sides…)
My understanding from adjacent threads in this discussion is that Google does in fact make significant upstream contributions to FFmpeg. Per policy those are often made with personal emails, but multiple people have said that Google’s investment in FFmpeg’s security and codec support have been significant.
(But also, while this is great, it doesn’t make an expectation of a patch with a security report reasonable! Most security reports don’t come with patches.)
Yeah it's more effort, but I'd argue that security through obscurity is a super naive approach. I'm not on Google's side here, but so much infrastructure is "secured" by gatekeeping knowledge.
I don't think you should invoke the idea of naivete when you fail to address the unhappy but perfectly simple reality: the ideal option doesn't exist, it's a fantasy that isn't actually available, and among the available options, none of them good, one is still worse than another.
"obscurity isn't security" is true enough, as far as it goes, but is just not that far.
And "put the bugs that won't be fixed soon on a billboard" is worse.
The super naive approach is ignoring that and thinking that "fix the bugs" is a thing that exists.
More fantasy. Presumes the bug only exists in some part of ffmpeg that can be disabled at all, and that you don't need, and that you are even in control over your use of ffmpeg in the first place.
Sure, in maybe 1 special lucky case you might be empowered. And in 99 other cases you are subject to a bug without being in the remotest control over it, since it's buried away within something you use: you don't even have the option not to use the surface service or app, let alone control its subcomponents.
It's a heck of a lot better than being unaware of it.
(To put this in context: I assume that on average a published security vulnerability is known about to at least some malicious actors before it's published. If it's published, it's me finding out about it, not the bad actors suddenly getting a new tool)
it's only better if you can act on it as quickly as the bad guys. If the bad guys get to act on it before you, or before some other good guys do on your behalf, then no, it's not better
remember we're not talking about keeping a bug secret, we're talking about using a power tool to generate a fire hose of bugs and only doing that, not fixing them
"The bug" in question refers to the one found by the bug-finding tool that, per the article, triggered the latest episode of debate. Nobody is claiming it's the only bug, just that this triggering bug was highlighted as a clear example of where there actually is such a clear-cut line.
Google does contribute some patches for codecs they actually consume e.g. https://github.com/FFmpeg/FFmpeg/commit/b1febda061955c6f4bfb..., the bug in question was just an example of one the bug finding tool found that they didn't consume - which leads to this conversation.
Given that Google is both the company generating the bug reports and one of the companies using the buggy library, while most of the ffmpeg maintainers presumably aren't using their libraries to run companies with a $3.52 trillion market cap, would you argue that going public with vulnerabilities that affect your own product before you've fixed them is also a naive approach?
Sorry, but this states a lot of assumptions as fact to ask a question which only makes sense if they're all true. I feel Google should assist the project more financially given how much they use it, but I don't think it's a reasonable guess that Google ships products using every codec their open source fuzzer project finds bugs for. I certainly doubt YouTube lets you upload this LucasArts format, or that Chrome compiles ffmpeg with it, as an example. For security issues relevant to their usage via Chrome CVEs etc., they seem to contribute fixes as needed. E.g. here is one via fuzzing for a codec they use and work on internally https://github.com/FFmpeg/FFmpeg/commit/b1febda061955c6f4bfb...
As to whether it's a bad idea to publicly document security concerns you find regardless of whether you plan on fixing them, it often depends on whether you ask the product manager what they want for their product or the security-minded folks what they want for every product :).
> it functionally is just providing free security vulnerability research for malicious actors because almost nobody can take over or switch off of ffmpeg
At least, if this information is public, someone can act on it and sandbox ffmpeg for their use case, if they think it's worth it.
I personally prefer to have this information be accessible to all users.
This is a weird argument. Basically condoning security through obscurity: If nobody reports the bug then we just pretend it doesn’t exist, right?
There are many groups searching for security vulnerabilities in popular open source software who deliberately do not disclose them. They do this to save them for their own use or even to sell them to bad actors.
It’s starting to feel silly to demonize Google for doing security research at this point.
The timeline is industry standard at this point. The point is to make sure folks take security more seriously. If you start deviating from the script, others will expect the same exceptions and it would lose that ability. Sometimes it's good to let something fail loudly to show there is a problem. If ffmpeg doesn't have enough maintainers, then it should fail and let downstream customers see that, so there is more pressure on them to contribute resources. Playing Superman and trying to prevent them from seeing the problem will just lead to burnout.
Is it industry standard to run automatic AI tools and spam the upstream with bug reports? To then expect the bugs to be fixed within 90 days is a bit much.
It's not some lone report of an important bug, it's AI spam that put forth security issues at a speed greater than they have resources to fix it.
I guess the question for a person at Google who discovers a bug they don’t personally have time to fix is: should they report the bug at all? They don’t necessarily know if someone else will be able to pick it up. So the current “always report” rule makes sense, since you don’t have to figure out if someone can fix it.
The same question applies if they have time to fix it in six months, since that presumably still gives attackers a large window of time.
In this case the bug was so obscure it’s kind of silly.
It’s possible that this is a more efficient use of their time when it comes to open source security as a whole, most projects do not have a problem with reports like this.
If not pumping out patches allows them to get more security issues fixed, that’s fine!
From the perspective of Google maybe, but from the perspective of open source projects, how much does this drain them?
Making open source code more secure and at the same time less prevalent seems like a net loss for society. And if those researchers could spare some time to write patches for open source projects, that might benefit society more than dropping disclosure deadlines on volunteers.
My takeaway from the article was not that the report was a problem, but a change in approach from Google that they’d disclose publicly after X days, regardless of if the project had a chance to fix it.
To me it’s okay to “demand” that a for-profit company (e.g. Google) fix an issue fast, because they have resources. But to “demand” that an OSS project fix something within a certain (possibly tight) timeframe... well, I’m sure you know better than me that that’s not how volunteering works
On the other hand as an ffmpeg user do you care? Are you okay not being told a tool you're using has a vulnerability in it because the devs don't have time to fix it? I mean someone could already be using the vulnerability regardless of what Google does.
>Are you okay not being told a tool you're using has a vulnerability in it because the devs don't have time to fix it?
Yes? It's in the license
>NO WARRANTY
>15. BECAUSE THE LIBRARY IS LICENSED FREE OF CHARGE, THERE IS NO
> WARRANTY FOR THE LIBRARY, TO THE EXTENT PERMITTED BY APPLICABLE LAW.
> EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR
> OTHER PARTIES PROVIDE THE LIBRARY "AS IS" WITHOUT WARRANTY OF ANY
> KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE
> IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
> PURPOSE.
If I really care, I can submit a patch or pay someone to. The ffmpeg devs don't owe me anything.
Not being told the existence of bugs is different from having a warranty on software. How would you submit a patch on a bug you were not aware of?
Google should provide a fix but it's been standard to disclose a bug after a fixed time because the lack of disclosure doesn't remove the existence of the bug. This might have to be rethought in the context of OSS bugs but an MIT license shouldn't mean other people can't disclose bugs in my project.
Google publicly disclosing the bug doesn't only let affected users know. It also lets attackers know how they can exploit the software.
Holding public disclosure over the heads of maintainers if they don't act fast enough is damaging not only to the project, but to end users themselves also. There was no pressing need to publicly disclose this 25 year old bug.
How is having a disclosure policy so that you balance the tradeoffs between informing people and leaving a bug unreported "holding" anything over the heads of the maintainers? They could just file public bug reports from the beginning. There's no requirement that they file non-public reports first, and certainly not everyone who does file a bug report is going to do so privately. If this is such a minuscule bug, then whether it's public or not doesn't matter. And if it's not a minuscule bug, then certainly giving some private period, but then also making a public disclosure is the only responsible thing to do.
That license also doesn't give the ffmpeg devs the right to dictate which bugs you're allowed to find, disclose privately, or disclose publicly. The software is provided as-is, without warranty, and I can do what I want with it, including reporting bugs. The ffmpeg devs can simply not read the bug reports, if they hate bug reports so much.
Sorry to put it this bluntly, but you are not going to get what you want unless you do it yourself or you can convince, pay, browbeat, or threaten somebody to provide it for you.
Have you ever used a piece of software that DID make guarantees about being safe?
Every software I've ever used had a "NO WARRANTY" clause of some kind in the license. Whether an open-source license or a EULA. Every single one. Except, perhaps, for public-domain software that explicitly had no license, but even "licenses" like CC0 explicitly include "Affirmer offers the Work as-is and makes no representations or warranties of any kind concerning the Work ..."
I don't know what our contract terms were for security issues, but I've certainly worked on a product where we had 5 figure penalties for any processing errors or any failures of our system to perform its actions by certain times of day. You can absolutely have these things in a contract if you pay for it, and mass market software that you pay for likely also has some implied merchantability depending on jurisdiction.
But yes things you get for free have no guarantees and there should be no expectations put in the gift giver beyond not being actively intentionally malicious.
Point. As part of a negotiated contract, some companies might indeed put in guarantees of software quality; I've never worked in the nuclear industry or any other industries where that would be required, so my perspective was a little skewed. But all mass-distributed software I've ever seen or heard of, free or not, has that "no warranty" clause, and only individual contracts are exceptions.
Also, "depending on jurisdiction" is a good point as well. I'd forgotten how often I've seen things like "Offer not valid in the state of Delaware/California/wherever" or "If you live in Tennessee, this part of the contract is preempted by state law". (All states here are pulled out of a hat and used for examples only, I'm not thinking of any real laws).
This program discloses security issues to the projects and only discloses them publicly after the projects have had a "reasonable" chance to fix them, and projects can request extensions before disclosure if they plan to fix an issue but need more time.
Google runs this security program even on libraries they do not use at all, where it's not a demand, it's just whitehat security auditing. I don't see the meaningful difference between Google doing it and some guy with a blog doing it here.
Great, so Google is actively spending money on making open source projects better and more secure. And for some reason everyone is now mad at them for it because they didn't also spend additional money making patches themselves. We can absolutely wish and ask that they spend some money and resources on making those patches, but this whole thing feels like the message most corporations are going to take is "don't do anything to contribute to open source projects at all, because if you don't do it just right, they're going to drag you through the mud for it" rather than "submit more patches"
Why should Google not be expected to also contribute fixes to a core dependency of their browser, or to help funding the developers? Just publishing bug reports by themselves does not make open source projects secure!
It doesn't if you report lots of "security" issues (like this 25 years old bug) and give too little time to fix them.
Nobody is against Google reporting bugs, but they use automatic AI to spam them and then expect a prompt fix. If you can't expect the maintainers to fix the bug before disclosure, then it is a balancing act: Is the bug serious enough that users must be warned and avoid using the software? Will disclosing the bug now allow attackers to exploit it because no fix has been made?
In this case, this bug (imo) is not serious enough to warrant a short disclosure time, especially if you consider *other* security notices that may have a bigger impact. The chances of an attacker finding this on their own and exploiting it are low, but now everybody is aware and you have to rush to update.
The bug exists whether or not google publishes a public bug report. They are no more making the project less secure than if some retro-game enthusiast had found the same bug and made a blog post about it.
Publishing bugs that the project has so that they can be fixed is actively making the project more secure. How is someone going to do anything about it if Google didn’t do the research?
Did you see how the FFMPEG project patched a bug for a 1995 console? That's not a good use for the limited amount of volunteers on the project. It actively makes it less secure by taking away from more pertinent bugs.
The codec can be triggered to run automatically by adversarial input. The irrelevance of the format is itself irrelevant when ffmpeg has it on by default.
Publicizing vulnerabilities is the problem though. Google is ensuring obscure or unknown vulnerabilities will now be very well known and very public.
This is significant when they represent one of the few entities on the planet likely able to find bugs at that scale due to their wealth.
So funding a swarm of bug reports, for software they benefit from, using a scale of resources not commonly available, while not contributing fixes and instead demanding timelines for disclosure, seems a lot more like they'd just like to drive people out of open source.
I think most people learned about this bug from FFmpeg's actions, not Google's. Also, you are underestimating adversaries: Google spends quite a bit of money on this, but not a lot given their revenue, because their primary purpose is not finding security bugs. There are entities that are smaller than Google but derive almost all their money from finding exploits. Their results are broadly comparable but they are only publicized when they mess up.
> so Google is actively spending money on making open source projects better and more secure
It looks like they are now starting to flood OSS with issues because "our AI tools are great", but don't want to spend a dime helping to fix those issues.
According to the ffmpeg maintainer's own website (fflabs.eu) Google is spending plenty of dimes helping to fix issues in ffmpeg. Certainly they're spending enough dimes for the maintainers to proudly display Google's logo on their site as a customer of theirs.
The user is vulnerable while the problem is unfixed. Google publishing a vulnerability doesn't change the existence of the vulnerability. If Google can find it, so can others.
Making the vulnerability public makes it easy to find to exploit, but it also makes it easy to find to fix.
If it is so easy to fix, then why doesn't Google fix it? So far they've spent more effort in spreading knowledge about the vulnerability than fixing it, so I don't agree with your assessment that Google is not actively making the world worse here.
I didn't say it was easy to fix. I said a publication made it easy to find it, if someone wanted to fix something.
If you want to fix up old codecs in ffmpeg for fun, would you rather have a list of known broken codecs and what they're doing wrong; or would you rather have to find a broken codec first.
What a strange sentence. Google can do a lot of things that nobody can do. The list of things that only Google, a handful of nation states, and a handful of Google-peers can do is probably even longer.
Sure, but running a fuzzer on ancient codecs isn't that special. I can't do it, but if I wanted to learn how, codecs would be a great place to start. (in fact, Google did some of their early fuzzing work in 2012-2014 on ffmpeg [1]) Media decoders have been the vector for how many zero interaction, high profile attacks lately? Media decoders were how many of the Macromedia Flash vulnerabilities? Codecs that haven't gotten any new media in decades but are enabled in default builds are a very good place to go looking for issues.
Google does have immense scale that makes some things easier. They can test and develop congestion control algorithms with world wide (ex-China) coverage. Only a handful of companies can do that; nation states probably can't. Google isn't all powerful either, they can't make Android updates really work even though it might be useful for them.
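The kind of fuzzing loop described above can be sketched in a toy form (the `parse_header` function is a made-up stand-in for a codec's header parser, with a deliberately planted bug; real fuzzers like those used on ffmpeg are coverage-guided and far more sophisticated):

```python
import random

def parse_header(data: bytes) -> int:
    # Hypothetical codec header parser with a planted bug:
    # it trusts a length field taken straight from the input.
    if len(data) < 2:
        raise ValueError("truncated")
    declared_len = data[0]
    payload = data[1:]
    # Bug: declared_len == 0 is never rejected, so the
    # division below can blow up on adversarial input.
    return sum(payload[:declared_len]) // declared_len

def fuzz(iterations: int = 10_000, seed: int = 1234):
    # Throw random byte strings at the parser and collect any
    # inputs that trigger unexpected crashes (anything other
    # than the expected ValueError rejection).
    rng = random.Random(seed)
    crashes = []
    for _ in range(iterations):
        data = bytes(rng.randrange(256) for _ in range(rng.randrange(16)))
        try:
            parse_header(data)
        except ValueError:
            pass  # expected rejection of malformed input
        except Exception as exc:
            crashes.append((data, exc))
    return crashes

if __name__ == "__main__":
    found = fuzz()
    print(f"found {len(found)} crashing inputs")
```

Even this naive loop finds the planted bug within a few thousand iterations, which is why default-enabled but rarely exercised decoders are such productive fuzzing targets.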
you'd assume that a bad actor would have found the exploit and kept it hidden for their own use. To assume otherwise is fundamentally flawed security practice.
which bad actors would have more of, as they'd have a financial incentive to make use of the found vulnerabilities. White hats don't get anything in return (financially) - it's essentially charity work.
In this world and the alternate universe both, attackers can also use _un_published vulnerabilities because they have high incentive to do research. Keeping a bug secret does not prevent it from existing or from being exploited.
As clearly stated, most users of ffmpeg are unaware they're using it. Even if they knew about a vulnerability in ffmpeg, they wouldn't know they are affected.
Really, the burden is on those shipping products that depend on ffmpeg: they are the ones who have to fix the security issues for their customers. If Google is one of those companies, they should provide the fix in the given time.
But how are those companies supposed to know they need to do anything unless someone finds and publicly reports the issue in the first place? Surely we're not advocating for a world where every vendor downstream of the ffmpeg project independently discovers and patches security vulnerabilities without ever reporting the issues upstream right?
If they both funded vulnerability scanning and vulnerability fixing (if they don't want to do it in-house, they can sponsor the upstream team), which is to me the obvious "how", I am not sure why you believe there is only one way to do it.
It's about accountability! Once those who ship it to customers care, who actually does the work is on them to figure out (though note that maintainers will bear some burden to review, integrate, and maintain the change anyway).
They regularly submit code and they buy consulting from the ffmpeg maintainers according to the maintainer's own website. It seems to me like they're already funding fixes in ffmpeg, and really everyone is just mad that this particular issue didn't come with a fix. Which is honestly not a great look for convincing corporations to invest resources into contributing to upstream. If regular patches and buying dev time from the maintainers isn't enough to avoid getting grief for "not contributing" then why bother spending that time and money in the first place?
I have about 100x as much sympathy for an open source project getting time to fix a security bug than I do a multibillion dollar company with nearly infinite resources essentially blackmailing a small team of developers like this. They could -easily- pay a dev to fix the bug and send the fix to ffmpeg.
Since when are bug reports blackmail? If some retro game enthusiast discovered this bug and made a blog post about it that went to the front page of HN, is that blackmail? If someone running a fuzzer found this bug and dumped a public bug report into github is that blackmail? What if google made this report privately, but didn't say anything about when they would make it public and then just went public at some arbitrary time in the future? How is "heads up, here's a bug we found, here's the reproduction steps for it, we'll file a public bug report on it soon" blackmail?
In my case, yes, but my pipeline is closed. Processes run on isolated instances that are terminated promptly as soon as the workflow ends. Even if uncaught fatal errors occur, janitor scripts run to ensure instances are terminated on a fast schedule. This isn't something running on my personal device with random content provided by an unknown someone on the interwebs.
So while this might be a high security risk because it possibly could allow RCE, the real-world risk is very low.
Let's say that FFMPEG has a CVSS 10 CVE where a very easy stream can cause it to RCE. So what?
We are talking about software commonly deployed for end users to encode their own media, something that rarely comes in untrusted forms. For an exploit to happen, you need a situation where an attacker gets out an exploited media file which people commonly transcode via FFMPEG. Not an easy task.
This sure does matter to the likes of google assuming they are using ffmpeg for their backend processing. It doesn't matter at all for just about anyone else.
You might as well tell me that `tar` has a CVE. That's great, but I don't generally go around tarring or untarring files I don't trust.
AIUI, (lib)ffmpeg is used by practically everything that does anything with video, including such definitely-security-sensitive things as Chrome, which people use to play untrusted content all the time.
Ffmpeg is a versatile toolkit used in lot of different places.
I would be shocked if any company working with user-generated video, from the likes of Zoom or TikTok or YouTube down to small apps all over, did not have it in their pipeline somewhere.
There are alternatives such as gstreamer and proprietary options. I can’t give names, but can confirm at least two moderately sized startups that use gstreamer in their media pipeline instead of ffmpeg (and no, they don’t use gst-libav).
One because they are a rust shop and gstreamer is slightly better supported in that realm (due to an official binding), the other because they do complex transformations with the source streams at a basal level vs high-level batch transformations/transcoding.
There are certainly features and use cases where gstreamer is better fit than ffmpeg.
My point was it would be hard to imagine eschewing ffmpeg completely, not that there is no value for other tools and ffmpeg is better at everything. It is so versatile and ubiquitous it is hard to not use it somewhere.
In my experience there are almost always some scenarios in the stack where throwing in ffmpeg for a step is simpler and easier, even if there's no proper language binding etc., for some non-core step or other.
From a security context that wouldn't matter: as long as it touches data, security vulnerabilities would be a concern.
It would be surprising, not impossible, to forgo ffmpeg completely. It would be just like this site being written in Lisp: not something you would typically expect, but not impossible.
I wasn’t countering your point, I just wanted to add that there are alternatives (well, an alternative in the OSS sphere) that are viable and well used outside of ffmpeg despite its ubiquity.
> On the other hand as an ffmpeg user do you care? Are you okay not being told a tool you're using has a vulnerability in it because the devs don't have time to fix it?
Yes, because publicly disclosing the vulnerability means someone will have enough information to exploit it. Without public disclosure, the chance of that is much lower.
If you use a trillion dollar AI to probe open source code in ways that no hacker could, you're kind of unearthing the vulnerabilities yourself if you disclose them.
It's standard practice for commercially-sponsored software, and it doesn't necessarily fit volunteer maintained software. You can't have the same expectations.
Vulnerabilities should be publicly disclosed. Both closed and open source software are scrutinized by the good and the bad people; sitting on vulnerabilities isn't good.
Consumers of closed source software have a pretty reasonable expectation that the creator will fix it in a timely manner. They paid money, and (generally) the creator shouldn't put the customer in a nasty place because of errors.
Consumers of open source software should have zero expectation that someone else will fix security issues. Individuals should understand this; it's part of the deal for us using software for free. Organizations that are making money off of the work of others should have the opposite of an expectation that any vulns are fixed. If they have or should have any concern about vulnerabilities in open source software, then they need to contribute to fixing the issue somehow. Could be submitting patches, paying a contractor or vendor to submit patches, paying a maintainer to submit patches, or contributing in some other way that betters the project. The contribution they pick needs to work well with the volunteers, because some of the ones I listed would absolutely be rejected by some projects -- but not by others.
The issue is that an org like Google, with its absolute mass of technical and financial resources, went looking for security vulnerabilities in open source software with the pretense of helping. But if Google (or whoever) doesn't finish the job, then they're being a piece of shit to volunteers. The rest of the job is reviewing the vulns by hand and figuring out patches that can be accepted with absolutely minimal friction.
To your point, the beginning of the expectation should be that vulns are disclosed, since otherwise we have known insecure software. The rest of the expectation is that you don't get to pretend to do a nice thing while _knowing_ that you're dumping more work on volunteers that you profit from.
In general, wasting the time of volunteers that you're benefiting from is rude.
Specifically, organizations profiting off of volunteer work and wasting their time makes them an extractive piece of shit.
Why are the standards and expectations different for Google vs an independent researcher? Just because they are richer doesn't mean they should be held to a standard that isn't applied to an independent researcher.
The OSS maintainer has the responsibility to either fix, or declare they won't fix - both are appropriate actions, and they are free to make this choice. The consumer of OSS should have the right to know what vulns/issues exist in the package, so that they make as informed a decision as they can (such as adding defense in depth for vulns that the maintainers chooses not to fix).
Google makes money off ffmpeg in general but not this part of the code. They're not getting someone else to write a patch that helps them make money, because google will just disable this codec if it wasn't already disabled in their builds.
Also in general Google does investigate software they don't make money off.
> independent researchers don't make money off the projects that they investigate
but they make money off the reputational increase they earn for having their name attached to the investigation. Unless the investigation and report is anonymous and their name not attached (which, could be true for some researchers), i can say that they're not doing charity.
That's a one-time bonus they get for discovering a bug, not from using the project on production. Google also gets this reward by the way. Therefore it's still imbalanced.
You disclose so that users can decide what mitigations to take. If there's a way to mitigate the issue without a fix from the developers the users deserve to know. Whether the developers have any obligation to fix the problem is up to the license of the software, the 90 day concession is to allow those developers who are obligated or just want to issue fixes to do so before details are released.
The entire conflict here is that norms about what's considered responsible were developed in a different context, where vulnerability reports were generated at a much lower rate and dedicated CVE-searching teams were much less common. FFmpeg says this was "AI generated bug reports on an obscure 1990s hobby codec"; if that's accurate (I have no reason to doubt it, just no time to go check), I tend to agree that it doesn't make sense to apply the standards that were developed for vulnerabilities like "malicious PNG file crashes the computer when loaded".
The codec is compiled in, enabled by default, and auto detected through file magic, so the fact that it is an obscure 1990s hobby codec does not in any way make the vulnerability less exploitable. At this point I think FFmpeg is being intentionally deceptive by constantly mentioning only the ancient obscure hobby status and not the fact that it’s on by default and autodetected. They have also rejected suggestions to turn obscure hobby codecs off by default, giving more priority to their goal of playing every media format ever than to security.
I think the discussion on what standard practice should be does need to be had. This seems to be throwing blame at people following the current standard.
If the obscure codec is not included by default or cannot be triggered by any means other than being explicitly asked for, then it would be reasonable to tag it Won't Fix. If it can be triggered by other means, such as auto file type detection on a renamed file, then it doesn't matter how obscure the feature is; the exploit would affect all.
What is the alternative to a time-limited embargo? I don't particularly like the idea of groups of people sitting on exploits they have known about for ages that haven't been publicly disclosed. That is the kind of information that finds its way into the wrong hands.
Of course companies should financially support the developers of the software they depend upon. Many do this for OSS in the form of having a paid employee that works on the project.
Specifically, FFMPEG seems to have a problem in that much of its resource shortage comes from alienating contributors. This isn't isolated to just this bug report.
FFMPEG does autodetection of what is inside a file, the extension doesn't really matter. So it's trivial to construct a video file that's labelled .mp4 but is really using the vulnerable codec and triggers its payload upon playing it. (Given ffmpeg is also used to generate thumbnails in Windows if installed, IIRC, just having a trapped video file in a directory could be dangerous.)
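The extension-agnostic detection described here can be sketched generically. This is only an illustration of magic-byte probing; the signature table, format names, and `probe_format` function are all made up for the example and are not ffmpeg's actual probe code:

```python
# Toy sketch of content-based format sniffing: the demuxer is chosen by the
# bytes at the start of the file, and the filename extension is ignored.
# The signature table is hypothetical and vastly simplified.

MAGIC_SIGNATURES = {
    b"ANIM": "smush",            # stand-in marker for a Smush-style container
    b"RIFF": "avi",
    b"\x1a\x45\xdf\xa3": "matroska",
}

def probe_format(payload: bytes, filename: str = "") -> str:
    """Pick a format by content; `filename` is accepted but never consulted."""
    for magic, fmt in MAGIC_SIGNATURES.items():
        if payload.startswith(magic):
            return fmt
    return "unknown"

# A file named video.mp4 whose bytes start with the Smush marker is still
# routed to the Smush code path, which is the risk described above.
print(probe_format(b"ANIM" + b"\x00" * 16, filename="video.mp4"))
```

The point of the sketch is that renaming a file changes nothing about which code path handles it; only the leading bytes matter.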
Silly nitpick, but you search for vulnerabilities not CVEs. CVE is something that may or may not be assigned to track a vulnerability after it has been discovered.
Most security issues probably get patched without a CVE ever being issued.
It is accurate. This is a codec that was added for archival and digital preservation purposes. It’s like adding a Unicode block for some obscure 4000 year old dead language that we have a scant half dozen examples of writing.
Why is Google deliberately running an AI process to find these bugs if they're just going to dump them all on the FFmpeg team to fix?
They have the option to pay someone to fix them.
They also have the option to not spend resources finding the bugs in the first place.
If they think these are so damn important to find that it's worth devoting those resources to, then they can damn well pay for fixing them too.
Or they can shut the hell up and let FFmpeg do its thing in the way that has kept it one of the https://xkcd.com/2347/ pieces of everyone's infrastructure for over 2 decades.
Google is a significant contributor to ffmpeg by way of VP9/AV1/AV2. It's not like it's a gaping maw of open-source abuse, the company generally provides real value to the OSS ecosystem at an even lower level than ffmpeg (which is saying a lot, ffmpeg is pretty in-the-weeds already).
As to why they bother finding these bugs... it's because that's how Google does things. You don't wait for something to break or be exploited; you load your compiler up with sanitizers and go hunting for bugs.
Yeah this one is kind of trivial, but if the bug-finding infrastructure is already set up it would be even more stupid if Google just sat on it.
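The "load up sanitizers and go hunting" workflow can be sketched as a toy mutation fuzzer. Everything below is a made-up illustration of the technique, not Google's actual tooling: `fragile_parser` stands in for a decoder with a planted bug, and catching the exception stands in for a sanitizer flagging a memory error:

```python
import random

def fragile_parser(data: bytes) -> None:
    """Stand-in for a decoder with a lurking bug on a rare input shape."""
    if len(data) > 3 and data[0] == 0xFF and data[2] == 0x00:
        raise IndexError("crash: bad frame header")  # simulated memory error

def fuzz(seed: bytes, rounds: int = 10_000) -> list:
    """Randomly mutate a seed input and collect inputs that crash the parser."""
    rng = random.Random(0)                 # fixed seed for reproducibility
    crashes = []
    for _ in range(rounds):
        data = bytearray(seed)
        for _ in range(rng.randint(1, 4)):            # flip a few random bytes
            data[rng.randrange(len(data))] = rng.randrange(256)
        try:
            fragile_parser(bytes(data))
        except Exception:
            crashes.append(bytes(data))    # a sanitizer build would report this
    return crashes

crashes = fuzz(b"\x00" * 8)
print(f"found {len(crashes)} crashing inputs")
```

Once this loop is wired up against a real decoder with sanitizers enabled, it keeps producing reports essentially for free, which is why sitting on the infrastructure would be the stranger choice.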
So to be clear, if Google doesn't include patches, you would rather they don't make bugs they find in software public so other people can fix them?
That is, you'd rather a world where Google either does know about a vulnerability and refuses to tell anyone, or just doesn't look for them at all, over a world where google looks for them and lets people know they exist, but doesn't submit their own fix for it.
Why do you want that world? Why do you want corporations to reduce the already meager amounts of work and resources they put into open source software even further?
Many people are already developing and fixing FFmpeg.
How many people are actively looking for bugs? Google, and then the other guys who don't share their findings but perhaps sell them to the highest bidder. Seems like Google is doing some good work by just picking big, popular open source projects and seeing if they have bugs, even if they don't intend to fix them. And I doubt Google was actually using the LucasArts video format their latest findings were about.
However, in my mind the discussion whether Google should be developing FFmpeg (beyond the codec support mentioned elsewhere in the thread) or other OSS projects is completely separate from whether they should be finding bugs in them. I believe most everyone would agree they should. They are helping OSS in other ways though, e.g. https://itsfoss.gitlab.io/post/google-sponsors-1-million-to-... .
I would love to see Google contribute here, but I think that's a different issue.
Are the bug reports accurate? If so, then they are contributing just as if I found them and sent a bug report, I'd be contributing. Of course a PR that fixes the bug is much better than just a report, but reports have value, too.
The alternative is to leave it unfound, which is not a better alternative in my opinion. It's still there and potentially exploitable even when unreported.
The actual real alternative is that the ffmpeg maintainers quit, just like the libxml2 maintainer did.
A lot of these core pieces of infrastructure are maintained by one to three middle-aged engineers in their free time, for nothing. Meanwhile, billion dollar companies use the software everywhere, and often give nothing back except bug reports and occasional license violations.
I mean, I love "responsible disclosure." But the only result of billion dollar corporations drowning a couple of unpaid engineers in bug reports is that the engineers will walk away and leave the code 100% unmaintained.
And yeah, part of the problem here is that C-based data parsers and codecs are almost always horrendously insecure. We could rewrite it all in Rust (and I have in fact rewritten one obscure codec in Rust) or WUFFS. But again, who's going to pay for that?
The other alternative is for the ffmpeg developers to change the text on their "about" screen from "Security is a high priority and code review is always done with security in mind. Though due to the very large amounts of code touching untrusted data security issues are unavoidable and thus we provide as quick as possible updates to our last stable releases when new security issues are found." to something like "Security is a best-effort priority. Code review is always done with security in mind. Due to the very large amounts of code touching untrusted data security issues are unavoidable. We attempt to provide updates to our last stable releases when new security issues are found, but make no guarantees as to how long this may take. Priority will be given to reports including a proof-of-concept exploit and a patch that fixes the security bug."
Then point to the "PoC + Patch or GTFO" sign when reports come in. If you use a library with a "NO WARRANTY" license clause in an application where you're responsible for failures, it's on you to fix or mitigate the issues, not on the library authors.
But FFmpeg does not have the resources to fix these at the speed Google is finding them.
It's just not possible.
So Google is dedicating resources to finding these bugs
and feeding them to bad actors.
Bad actors who might, hypothetically have had the information before, but definitely do once Google publicizes them.
You are talking about an ideal situation; we are talking about a real situation that is happening in the real world right now, wherein the option of Google reports bug > FFmpeg fixes bug simply does not exist at the scale Google is doing it at.
A solution definitely ought to be found. Google putting up a few millionths of a percent of their revenue or so towards fixing the bugs they find in ffmpeg would be the ideal solution here, certainly. Yet it seems unlikely to actually occur.
I think the far more likely result of all the complaints is that Google simply disengages from ffmpeg entirely and stops doing any security work on it. I think that would be quite bad for the security of the project. If Google can trivially find bugs fast enough to overwhelm the ffmpeg developers, bad actors can presumably find those same vulnerabilities too. And if they know those vulnerabilities very much exist, but that Google has stopped searching for them at the ffmpeg project's request, they would have extremely high motivation to go looking in a place where they can be almost certain of finding unreported/unknown vulnerabilities. The likely result would be a lot more 0-day attacks involving ffmpeg, which I do not think anyone regards as a good outcome. (Personally, I would consider "Google publishes a bunch of vulnerabilities ffmpeg hasn't fixed so that everyone knows about them" to be a much preferable outcome.)
Now, you might consider that possibility fine: after all, the ffmpeg developers have no obligation to work on the project, and thus to e.g. fix any vulnerabilities in it. But if that's fine, then simply ignoring the reports Google currently makes is presumably also fine, no?
I really don't understand this whole us-vs-them discourse. Why should it be only Google fixing the bugs? If the volunteers aren't enough, maybe more volunteers can step up and help FFmpeg, via direct patches or by directly lobbying companies to fund the project.
In my opinion, if the problem is money and they cannot raise enough, then somebody should help them with that. Shouldn't they?
If widely deployed infrastructure software is so full of vulnerabilities that its maintainers can't fix them as fast as they're found, maybe it shouldn't be widely deployed, or they shouldn't be its maintainers. Disabling codecs in the default build that haven't been used in 30 years might be a good move, for example.
Either way, users need to know about the vulnerabilities. That way, they can make an informed tradeoff between, for example, disabling the LucasArts Smush codec in their copy of ffmpeg, and being vulnerable to this hole (and probably many others like it).
I mean, yes, the ffmpeg maintainers are very likely to decide this on their own, abandoning the project entirely. This is already happening for quite a few core open source projects that are used by multiple billion-dollar companies and deployed to billions of users.
A lot of the projects probably should be retired and rewritten in safer system languages. But rewriting all of the widely-used projects suffering from these issues would likely cost hundreds of millions of dollars.
The alternative is that maybe some of the billion-dollar companies start making lists of all the software they ship to billions of users, and hire some paid maintainers through the Linux or Apache Foundations.
> But FFmpeg does not have the resources to fix these at the speed Google is finding them.
Google submitting a patch does not address this issue. The main work for maintainers here is making the decision whether or not they want to disable this codec, whether or not Google submits a patch to do that is completely immaterial.
> Why is Google deliberately running an AI process to find these bugs if they're just going to dump them all on the FFmpeg team to fix?
This is called fuzzing and it has been standard practice for over a decade. Nobody has had any problem with it until FFmpeg decided they didn’t like that AI filed a report against them and applied the (again, mostly standard at this point) disclosure deadline. FWIW, nobody would have likely cared except they went on their Twitter to complain, so now everyone has an opinion on it.
> My takeaway from the article was not that the report was a problem, but a change in approach from Google that they’d disclose publicly after X days, regardless of if the project had a chance to fix it.
That is not an accurate description. Project Zero has used a 90-day disclosure policy from the start, so for over a decade.
What changed[0] in 2025 is that they disclose earlier than 90 days that there is an issue, but not what the issue is. And actually, from [1] it does not look like that trial policy was applied to ffmpeg.
> To me its okay to “demand” from a for profit company (eg google) to fix an issue fast. Because they have ressources. But to “demand” that an oss project fix something with a certain (possibly tight) timeframe.. well I’m sure you better than me, that that’s not who volunteering works
You clearly know that no actual demands or even requests for a fix were made, hence the scare quotes. But given you know it, why call it a "demand"?
When you publicize a vulnerability you know someone doesn't have the capacity to fix according to the requested timeline, you are simultaneously increasing the visibility of the vulnerability and name-calling the maintainers. All of this increases the pressure on the maintainers, and it's fair to call that a "demand" (quotes-included). Note that we are talking about humans who will only have their motivation dwindle: it's easy to say that they should be thick-skinned and ignore issues they can't objectively fix in a timely manner, but it's demoralizing to be called out like that when everyone knows you can't do it, and you are generally doing your best.
It's similar to someone cooking a meal for you, and you go on and complain about every little thing that could have been better instead of at least saying "thank you"!
Here, Google is doing the responsible work of reporting vulnerabilities. But any company productizing ffmpeg usage (Google included) should sponsor a security team to resolve issues in high profile projects like these too.
Sure, the problem is that Google is a behemoth and their internal org structure does not cater to this scenario, but this is what the complaint is about: make your internal teams do the right thing by both reporting, but also helping fix the issue with hands-on work. Who'd argue against halving their vulnerability finding budget and using the other half to fund a security team that fixes highest priority vulnerabilities instead?
> When you publicize a vulnerability you know someone doesn't have the capacity to fix according to the requested timeline
My understanding is that the bug in question was fixed about 100 times faster than Project Zero's standard disclosure timeline. I don't know what vulnerability report your scenario is referring to, but it certainly is not this one.
> and name-calling the maintainers
Except Google did not "name-call the maintainers" or anything even remotely resembling that. You just made it up, just like the GP made up the "demands". It's pretty telling that all these supposed misdeeds are just total fabrications.
"When you publicize... you are ... name-calling": you are taking partial quotes out of context, where I claimed that publicizing is effectively doing something else.
> When you publicize a vulnerability you know someone doesn't have the capacity to fix according to the requested timeline, you are simultaneously increasing the visibility of the vulnerability and name-calling the maintainers.
So how long should all bug reporters wait before filing public bugs against open source projects? What about closed source projects? Anyone who works in software knows to ship software is to always have way more things to do than time to do it in. By this logic, we should never make bug reports public until the software maintainers (whether OSS, Apple or Microsoft) has a fix ready. Instead of "with enough eyeballs, all bugs are shallow" the new policy going forward I guess will be "with enough blindfolds, all bugs are low priority".
It's funny you come up with that suggestion when I clearly offer a different solution: "make your internal teams do the right thing by both reporting, but also helping fix the issue with hands-on work".
It's a call not to stop reporting, but to equally invest in fixing these.
Hands on work like filing a detailed bug report with suspected line numbers, reproduction code and likely causes? Look, I get it. It would be nice if Google had filed a patch with the bug. But also not every bug report is going to get a patch with it, nor should that be the sort of expectation we have. It's hard enough getting corporations to contribute time and resources to open source projects as it is, to set an expectation that the only acceptable corporate contribution to open source is full patches for any bug reports is just going to make it that much harder to get anything out of them.
In the end, Google does submit patches and code to ffmpeg, they also buy consulting from the ffmpeg maintainers. And here they did some security testing and filed a detailed and useful bug report. But because they didn't file a patch with the bug report, we're dragging them through the mud. And for what? When another corporation looks at what Google does do, and what the response this bug report has gotten them, which do you think is the most likely lesson learned?
1) "We should invest equally in reporting and patching bugs in our open source dependencies"
2) "We should shut the hell up and shouldn't tell anyone else about bugs and vulnerabilities we discover, because even if you regularly contribute patches and money to the project, that won't be good enough. Our name and reputation will get dragged for having the audacity to file a detailed bug report without also filing a patch."
> would they stop to consider what happens if everybody does that?
It’s almost like bitching about the “free labor” open source projects are getting from their users, especially when that labor is of good quality and comes from a user that is actively contributing both code and money to the project, is a losing strategy for open source fans and maintainers.
> All I am saying is that you should be as mindful to open source maintainers as you are to the people at companies.
And all I’m saying is there is nothing “un-mindful” about reporting real bugs to an open source project, whether that report is public or not. And especially when that report is well crafted and actionable. If this report were for something that wasn’t a bug, if this report were a low-quality “foo is broke, plz to fix” report with no actionable information, or if the report actually came with demands for responses and commitment timelines, then it would be a different matter. But ffmpeg runs a public bug tracker. To say then that making public bug reports is somehow disrespectful of the maintainers is ridiculous.
The fact that details of the issue _will_ be disclosed publicly is an implicit threat. Sure it's not an explicit threat, but it's definitely an implicit threat. So the demand, too, is implicit: fix this before we disclose publicly, or else your vulnerability will be public knowledge.
You should not be threatened by the fact that your software has security holes in it being made public knowledge. If you are, then your goals are fundamentally misaligned with making secure software.
I don't think that you understand the point of the delayed public disclosure. If it wasn't a threat, then there'd be no need to delay -- it would be publicly disclosed immediately.
No, publishing the vulnerability is the right thing to do for a secure world, because anyone can find this stuff, including nation states that weaponize it. This is a public service. Giving the dev a 90-day pre-warning is a courtesy.
Expecting a reporter to fix your security vulnerabilities for you is entitlement.
If your reputation is harmed by your vulnerable software, then fix the bugs. They didn’t create the hazard, they discovered it. You created it, and acting like you’re entitled to the free labor of those that gave you the heads up is insane, and trying to extort them for their labor is even worse.
You don't get to decide that lmao. Telling everyone this project doesn't care about security if they ignore my CVE is obviously a demand, and your traditions cannot change that
> Telling everyone this project doesnt care about security
Google did nothing like this.
If people infer that a hypothetical project doesn't care about security because they didn't fix anything, then they're right. It's not google's fault they're factually bad at security. Making someone look bad is not always a bad action.
Drawing attention to that decision by publicly reporting a bug is not a demand for what the decision will be. I could imagine malicious attention-getting but a bug report isn't it.
If merely publishing a bug they found, and doing nothing else, would qualify by your definition as "telling everyone this project doesn't care about security", then there is absolutely nothing wrong with doing that "telling".
If the FFmpeg team does not want people to file bug reports, then they should close their public issue tracker. This is not something that I decided but a choice that they made.
Nobody is demanding anything. Google is just disclosing issues.
This opens up transparency of ffmpeg’s security posture, giving others the chance to fix it themselves, isolate where it’s run or build on entirely new foundations.
All this assuming the reports are in fact pointing to true security issues. Not talking about AI-slop reports.
> The latest episode was sparked after a Google AI agent found an especially obscure bug in FFmpeg. How obscure? This “medium impact issue in ffmpeg,” which the FFmpeg developers did patch, is “an issue with decoding LucasArts Smush codec, specifically the first 10-20 frames of Rebel Assault 2, a game from 1995.”
This doesn't feel like a medium-severity bug, and I think "Perhaps reconsider the severity" is a polite reading. I get that it's a bug either way, but this leaves me with a vague feeling of the ffmpeg maintainer's time being abused.
On the other hand, if the bug doesn't get filed, it doesn't get fixed. Sure Google could spend some resources on fixing it themselves, but even if they did would we not likely see a complaint about google flooding the maintainers with PR requests for obscure 30 year old codec bugs? And isn't a PR even more of a demand on the maintainer's time because now there's actual code that needs to be reviewed, tests that need to be run and another person waiting for a response on the other end?
"Given enough eyeballs, every bug is shallow" right? Well, Google just contributed some eyeballs, and now a bug has been made shallow. So what's the actual problem here? If some retro game enthusiast had filed the same bug report, would that be "abusing" the maintainer's time? I would think not, but then we're saying that a bug report can be "abusive" simply by virtue of who submits it. And I'm really not sure "don't assign employees to research bugs in your open source dependencies, and if you do, certainly don't submit bug reports on what you find because that's abusive" is the message we want to be sending to corporations that are using these projects.
Use-after-free bugs (such as the vulnerability in question, https://issuetracker.google.com/issues/440183164) usually can be exploited to result in remote code execution, but not always. It wouldn't be prudent to bet that this case is one of the exceptions, of course.
Maintainers rarely understand or agree with the severity of a bug until an exploit beats them over the head publicly in a way they are unable to sweep under the rug.
On the other hand, reporters giving a CVE a 10 for a bug in an obscure configuration option that is disabled by default in most deployments is a bit over the top. I've seen security issues reported as world-ending that had been there for years without anyone being able to produce an exploit PoC.
This bug can most likely lead to RCE, proving that it can’t is generally a very difficult problem.
There’s absolutely no reason to assume that it does not lead to RCE, and certainly no reason whatsoever to invest significant time to prove that one way or the other unless you make a living selling exploits.
That quote felt pretty disingenuous. OK, so the proof of concept was found in some minor asset of an old video game. But is it an exploitable vulnerability? If so, you will quickly find it on modern-day scummy advertising networks. Will it still be "medium severity"? Not clear either way to me, from the quote.
> But appearance is operative: a security issue is something that I (as the maintainer) would need to fix regardless of who reports it
I think this is the heart of the issue and it boils off all of the unimportant details.
If it's a real, serious issue, you want to know about it and you want to fix it. Regardless of who reports it.
If it's a real, but unimportant issue, you probably at least want to track it, but aren't worried about disclosure. Regardless of who reports it.
If it's invalid, or AI slop, you probably just want to close/ignore it. Regardless of who reports it.
It seems entirely irrelevant who is reporting these issues. As a software project, ultimately you make the judgment call about what bugs you fix and what ones you don't.
But if it's a real, serious issue without an easy resolution, who is the burden on? It's not that the maintainers wouldn't fix bugs if they easily could. FFmpeg is provided "as is"[0], so everyone should be responsible for their side of things. It's not like the maintainers dumped their software on every computer and forced people to use it. Google should be responsible for their own security. I'm not adamant that Google should share the patch with others, but it would hardly be an imposition to Google if they did. And yes, I really do intend that you could replace Google with any party, big or small, commercial or noncommercial. It's painful, but no one has any inherent obligations to provide others with software in most circumstances.
[0] More or less. It seems the actual license language is shied away from. Is there a meaningful difference?
But if no bug report is filed, then only google gets the ability to "be responsible for their own security", everyone else either has to independently discover and then patch the bug themselves, or wait until upstream discovers the bug.
In no reasonable reading of the situation can I see how anything Google has done here has made things worse:
1) Before hand, the bug existed, but was either known by no one, or known only by people exploiting it. The maintainers weren't actively looking at or for this particular bug and so it may have continue to go undiscovered for another 20 years.
2) Then Google was the only one that knew about it (modulo exploiters) and were the only people that could take any steps to protect themselves. The maintainers still don't know so everyone else would remain unprotected until they discover it independently.
3) Now everyone knows about the issue, and are now informed to take whatever actions they deem appropriate to protect themselves. The maintainers know and can choose (or not) to patch the issue, remove the codec or any number of other steps including deciding it's too low priority in their list of todos and advising concerned people to disable/compile it out if they are worried.
#3 is objectively the better situation for everyone except people who would exploit the issue. Would it be even better if Google made a patch and submitted that too? Sure it would. But that doesn't make what they have done worthless or harmful. And more than that, there's nothing that says they can't or won't do that. Submitting a bug report and submitting a fix don't need to happen at the same time.
It's hard enough convincing corporations to spend any resources at all on contributing to upstream. Dragging them through the mud for not submitting patches in addition to any bug reports they file is in my estimation less likely to get you more patches, and more likely to just get you less resources spent on looking for bugs in the first place.
I wasn't really thinking about the disclosure part, although I probably should have. I was focusing on the patching side of things. I think you're correct that disclosure is good, but in that case, I think it increases the burden of those with resources to collaborate to produce a patch.
Well, it's open source and built by volunteers, so nobody is obligated to fix it. If FFmpeg volunteers don't want to fix it or don't have the time/bandwidth to fix it, then they won't fix it. Like any other bug or CVE in any other open source project. The burden doesn't necessarily need to be on anyone.
They aren't obligated to fix CVEs until one is exploited, and then, suddenly, they very much are obligated to fix them, and their image as FLOSS maintainers and as a project is very much tarnished.
If they are unable to fix CVEs in a timely manner, then it is very reasonable for people to judge them (accurately!) as being unable to fix CVEs in a timely manner. Maybe some people might even decide to use other projects or chip in to help out! However, it is dishonest to hide reports and pretend like bugs are being fixed on time when they are not.
While this feels like it’s perhaps bordering on somewhat silly nitpicking, the trend of conflating vulnerabilities with CVEs is probably at least mildly harmful. It’s probably good to at least try not to let people get away with this all the time.
The way many (perhaps most) people think of CVEs is badly broken. The CVE system is deeply unreliable, resulting in CVEs being issued for things that are neither bugs nor vulnerabilities while at the same time most things that probably should have CVEs assigned do not have them. Not to even mention the ridiculous mess that is CVSS.
I’m just ranting though. You know all this, almost certainly much better than me.
So what is Google gonna do if security fixes don't happen in time and the project takes a "reputational hit"? Fork it and maintain it themselves? Why not send in patches instead?
Maintaining a reputation might be enough reward for you, but not everyone is happy to work for free with a billion-dollar corporation breathing down their necks. It's puzzling to me why people keep defending their free lunch.
There are people who use and depend on ffmpeg. Maintainers seem to go out of their way to solve issues these folks face. If you don't care, then ignore the bug reports and force them to solve their own problems by contributing.
These people are not customers though. The maintainers do their best, but overall the project seems to be understaffed, so customers (for example Google, as it seems they occasionally chip in) get priority.
This isn’t true at all in my experience: disclosures happen on a timeline (60 to 90 days is common), with extensions provided as a courtesy based on remediation complexity and other case-by-case considerations. I’ve been party to plenty of advisories that went public without a fix because the upstream wasn’t interested in providing one.
The norm is the same for both. Perhaps there’s an argument that it should be longer for OSS maintainers, but OSS maintainers also have different levers at their disposal: they can just say “no, I don’t care” because nobody’s paying them. A company can’t do that, at least not without a financial hit.
To my original comment, the underlying problem here IMO is wanting to have it both ways: you can adhere to common notions of security for reputational reasons, or you can exercise your right as a maintainer to say “I don’t care,” but you can’t do both.
True - if we're talking about actual security bugs, not the "CVE slop"
P.S. I'm an open source maintainer myself, and I used to think, "oh, OSS developers should just stop whining and fix stuff." Fast forward a few years, and now I'm buried under false-positive "reports" and overwhelmed by non-coding work (deleting issue spam, triage, etc.)
P.P.S. What's worse, when your library is a security component the pressure’s even higher - one misplaced loc could break thousands of apps (we literally have a million downloads at nuget [1] )
I feel this comment is far too shallow a take. I would expect that you know better than most of HN exactly what kind of reputation security has as a cost center. Google uses ffmpeg internally; how many millions would they have to spend if they were required not only to create but to maintain ffmpeg themselves? How significant would that cost be at Google's scale?
I don't agree that the following framing is accurate, but I can mention it because you've already said the important part (about how this issue exists, and merely knowing about it doesn't create required work). But here, by announcing it and registering a CVE, Google is starting the clock. By some metrics it was already running, but the reputational risk clearly was not. This does change priorities, and requires an urgent context switch; neither is a free action, especially not within FOSS.
As someone who believes everyone, individuals and groups alike, has a responsibility to contribute fairly, I would frame it this way: Google's behavior gives the appearance of weaponizing its cost center externally, given this is something Google could easily fix. Instead, they shirked that responsibility onto unfunded volunteers.
To be clear, I think Google (Apple, Microsoft, etc.) can and should fund more of the OSS they depend on. But this doesn’t change the fact that vulnerability reports don’t create work per se, they just reveal work that the project can choose to act on or not.
The problem isn't Google reporting vulnerabilities. It's Google using AI to find obscure bugs that affect 2 people on the planet, then making a CVE out of it, without putting any effort into fixing it themselves or funding the project. What are the ffmpeg maintainers supposed to do about this? It's a complete waste of everybody's time.
> The latest episode was sparked after a Google AI agent found an especially obscure bug in FFmpeg. How obscure? This “medium impact issue in ffmpeg,” which the FFmpeg developers did patch, is “an issue with decoding LucasArts Smush codec, specifically the first 10-20 frames of Rebel Assault 2, a game from 1995.”
I don't think that's an accurate description of the full scope of the problem. The codec itself is mostly unused, but the code path can be reached through ffmpeg's automatic file-format probing, so a maliciously crafted payload (e.g. any run of ffmpeg that touches user input without disabling this codec) could possibly be exploited.
Wrong. The original files only affect 2 people. A malicious file could be anywhere.
Do you remember when certain sequences of letters could crash iphones? The solution was not "only two people are likely to ever type that, minimum priority". Because people started spreading it on purpose.
Mark it low, estimate when it can be fixed (three or four months from now), and advise Google of that. Google does not need to disclose this bug that fast.
If I do something on the side or as a hobby and a big corp comes by to tell me to hurry up, I feel inclined to say no thanks.
The problem lies in the fact that these companies are generating work for volunteers on a different time-scale and binding them to it by giving them X days before disclosing vulnerabilities. No one wants their project to have security vulnerabilities that might affect a lot of users, which creates pressure in dealing with them.
The open source model is broken in this regard, licenses need to address revenue and impose fees on these companies, which can be used as bug bounties. Game engines do this and so should projects like FFMPEG, etc. The details are complex of course, but the current status quo is abusing people's good will.
1) dedicating compute resources to continuously fuzzing the entire project
2) dedicating engineering resources to validating the results and creating accurate and well-informed bug reports (in this case, a seriously underestimated security issue)
3) additionally for codecs that Google likely does not even internally use or compile, purely for the greater good of FFMPEG's user base
Needless to say, while I agree Google has a penny to spare to fund FFMPEG, and should (although they already contribute), I do not agree with funding this maintainer.
> - benefiting from the bugs getting fixed, but not contributing to them.
I would be very surprised if Google builds this codec when they build ffmpeg. If you run a/v codecs (like ffmpeg) in bulk, the first thing to do is sandbox the hell out of it. The second thing you do is strictly limit the containers and codecs you'll decode. Not very many people need to decode movies from old LucasArts games; for video codecs, you probably only want MPEG-1 through MPEG-4, H.26x, VP8, VP9, and AV1. And you'll want to have fuzzed those decoders as best you can too.
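As a concrete sketch of that compile-time trimming: ffmpeg's build system lets you disable everything and whitelist only what you need. The exact decoder/demuxer/parser names below are illustrative and should be checked against `./configure --help` and `./configure --list-decoders` for your ffmpeg version.

```shell
# Hypothetical minimal build: start from nothing, then whitelist only
# the mainstream formats you actually need. Anything not enabled here
# (including obscure game codecs like Smush) is simply not compiled in.
./configure \
  --disable-everything \
  --enable-decoder=h264,hevc,vp8,vp9,av1 \
  --enable-demuxer=mov,matroska \
  --enable-parser=h264,hevc,vp8,vp9,av1 \
  --enable-protocol=file
```

A build like this shrinks the attack surface to exactly the decoders you've (hopefully) fuzzed.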
Nobody should be surprised that there's a security problem in this ancient decoder. Many of the eclectic codecs were written to mimic how the decoders that shipped with content were written, and most of those codecs were written assuming they were decoding a known good file, because why wouldn't they be. There's no shame, that's just how it is... there's too much to proactively investigate, so someone doing fuzzing and writing excellent reports that include diagnosis, specific location of the errors, and a way to reproduce are providing a valuable contribution.
Could they contribute more? Sure. But even if they don't, they've contributed something of value. If the maintainers can't or don't want to address it, that'd be reasonable too.
"While reading about the 4xm demuxer vulnerability, we thought that we could help FFmpeg eliminate many potential low-hanging problems from the code by making use of the Google fleet and fuzzing infrastructure we already had in place"
"At no cost to Google" seems difficult to substantiate, given that multiple sources indicate that Google is sponsoring FFmpeg both with engineering resources (for codec development) and cold hard cash (delivered to the FFmpeg core team via their consulting outfit[1]).
This is excellent, to be clear. But it's not compatible with the yarn currently being spun of a purely extractive relationship.
Yes, according to the license selected by ffmpeg. And Google, according to that same license, paid them nothing, and then did some additional work beneficial to ffmpeg.
They could, but there is really no requirement on them to do so. The security flaw was discovered by Google, but it was not created by them.
Equally there is no requirement on ffmpeg to fix these CVEs nor any other.
And, of course, there is no requirement on end-users to run software from projects which do not consider untrusted-input-validation bugs to be high priority.
> They could, but there is really no requirement on them to do so.
I see this sort of sentiment daily. The sentiment that only what is strictly legal or required is what matters.
Sometimes, you know, you have to recognise that there are social norms and being a good person matters and has intrinsic value. A society only governed by what the written law of the land explicitly states is a dystopia worse than hell.
What's "strictly legal or required" of Google here is absolutely nothing. They didn't have to do any auditing or bug hunting. They certainly didn't have to validate or create a proper bug report, and there's no requirement whatsoever that they tell anyone about it at all. They could have found the bug, found it was being actively exploited, made their own internal patch and sat quietly by while other people remained vulnerable. All of that is well within what is "strictly legal or required".
Google did more than what is "strictly legal or required", and what they did was submit a good and valid bug report. But for some reason we're mad because they didn't do even more. Why? The world is objectively a better place for having this bug report, at least now people know there's something to address.
> Google did more than what is "strictly legal or required", and what they did was submit a good and valid bug report. But for some reason we're mad because they didn't do even more. Why?
"I noticed your window was broken, so I took the liberty of helping you, working for free, by posting a sign that says UNLOCKED WINDOW HERE with exact details on how it was broken. I did lots of gratis work for you which you do not need to do yourself now. The world is safer now. Why are you not grateful?"
I mean if we’re going to do sloppy analogies, a bug report for open source software as widely used as ffmpeg is more like “I noticed the trees in the back corner of your free apple orchard are producing apples with trace amounts of heavy metals. I did some literal digging and sent some soil off to the labs and it looks like your back corner field may be contaminated. Here’s a heads up about that, and also just FYI in 90 days, if you haven’t been able to get your soil remediated, I’m going to put up a sign so that people can know to avoid those apples and not get poisoned by your free orchard while it’s getting fixed.”
Yes, this is a good illustration of why the Copenhagen Interpretation of Ethics makes sense, when FFmpeg is allowed to criticise the manner of Google's actions.
You're correct, but it's the social norms -- or at least, the norms as I perceive them -- that I am talking about here.
If you find yourself with potentially serious security bugs in your repo, then the social norm should be for you to take ownership of that because, well, it's your repo.
The socially unacceptable activity here should be treating security issues as an irritation, or a problem outside your control. If you're a maintainer, and you find yourself overwhelmed by genuine CVE reports, then it might be worth reflecting on the root cause of that. What ffmpeg did here was to shoot the messenger, which is non-normative.
It seems to me that they are not treating the security issue as an irritation, but instead the manner at which it was presented to them that is the problem.
> And, of course, there is no requirement on end-users to run software from projects which do not consider untrusted-input-validation bugs to be high priority.
What's this even saying?
Then they're free to fork it and never use the upstream again.
It is my understanding that the commenters in FFMPEG's favor believe that Google is doing a disservice by finding these security vulnerabilities, as they require volunteer burden to patch, and that they should either:
1) allow the vulnerabilities to remain undiscovered & unpatched zero-days (stop submitting "slop" CVEs.)
2) supply the patches (though I'm sure the goalposts will then move to the maintainers being upset that they have to merge them.)
3) fund the project (including the maintainers who clearly misunderstand the severity of the vulnerabilities and describe them as "slop") (no thank you.)
What is the mission of Project Zero? Is it to build a vulnerability database, or is it to fix vulnerabilities?
If it's to fix vulnerabilities, it seems within reason to expect a patch. If the reason Google isn't sending a patch is because they truly think the maintainers can fix it better, then that seems fair. But if Google isn't sending a patch because fixing vulns "doesn't scale" then that's some pretty weak sauce.
Maybe part of the solution is creating a separate low priority queue for bug reports from groups that could fix it but chose not to.
> After finding a number of flaws in software used by many end-users while researching other problems, such as the critical "Heartbleed" vulnerability, Google decided to form a full-time team dedicated to finding such vulnerabilities, not only in Google software but any software used by its users.
It did that but it did not decide to form a team dedicated to fixing issues in software that it uses? That's the misallocation of funds that's at play here.
The ideal outcome is that Project Zero sends its discoveries off to a team who triage and develop patches for the significant vulnerabilities, and then the communication with the project is a much more helpful one.
The security and privacy org is much larger than just GPZ, but it does not have a general remit to fix all vulns everywhere. GPZ is also not the only part of the org that finds bugs in open source software without being generally obligated to fix them. Projects like OSS-Fuzz are similar.
Google could staff a team that is responsible for remediating vulns in open source software that doesn't actually affect any of Google's products. Lord knows they've got enough money. I'd prefer it if they did that. But I don't really see the reasoning behind why they must do this or scrap all vuln research on open source software.
> Our mission is to make the discovery and exploitation of security vulnerabilities more difficult, and to significantly improve the safety and security of the Internet for everyone.
> We perform vulnerability research on popular software like mobile operating systems, web browsers, and open source libraries. We use the results from this research to patch serious security vulnerabilities, to improve our understanding of how exploit-based attacks work, and to drive long-term structural improvements to security.
If you are deliberately shipping insecure software, you should stop doing that. In ffmpeg's case, that means either patching the bug, or disabling the codec. They refused to do the latter because they were proud of being able to support an obscure codec. That puts the onus on them to fix the bug in it.
I can tell you with 100% certainty that there are undiscovered vulnerabilities in the Linux kernel right now. Does that mean they should stop shipping?
I do think that contributing fuzzing and quality bug reports can be beneficial to a project, but it's just human nature that when someone says "you go ahead and do the work, I'll stand here and criticize", people get angry.
Rather than going off and digging up ten time bombs which all start counting down together, how about digging up one and defusing it? Or even just contributing a bit of funding towards the team of people working for free to defuse them?
If Google really wants to improve the software quality of the open source ecosystem, the best thing they could do is solve the funding problem. Not a lot of people set out to intentionally write insecure code. The only case that immediately comes to mind is the xz backdoor attempt, which again had a root cause of too few maintainers. I think figuring out a way to get constructive resources to these projects would be a much more impressive way to contribute.
This is a company that takes a lot of pride in being the absolute best of the best. Maybe what they're doing can be justified in some way, but I see why maintainers are feeling bullied. Is Google really being excellent here?
You will note the Linux kernel is not crying on Twitter when Google submits bugs to them. They did long ago, then realized that the bugs that Google reported often showed up exploited in the wild when they didn’t fix them, and mostly decided that the continuous fuzzing was actually a good thing. This is despite not all the bugs being fixed on time (there are always new OSSFuzz bugs in the queue for fixing).
There are other CVE numbering authorities you can report a vulnerability to and apply for a CVE, or appeal, but this does possibly have a chilling effect if the vendor's CNA refuses valid vulns. (Like with MS in https://news.ycombinator.com/item?id=44957454 )
> this does possibly have a chilling effect if the vendor's CNA refuses valid vulns
The Linux kernel went in the opposite direction: Every bugfix that looks like it could be relevant to security gets a CVE[1]. The number of CVEs has increased significantly since it became a CNA.
>If Google really wants to improve the software quality of the open source ecosystem, the best thing they could do is solve the funding problem.
Google is not a monolith. If you asked the board or the shareholders of Google what they thought of open source software quality, they would say they don't give a rat's ass about it. Someone within Google who does care has been given very limited resources to deal with the problem, and is approaching it in the most efficient way they can.
>it's just human nature that when someone says "you go ahead and do the work, I'll stand here and criticize", people get angry
Bug reports are not criticism, they are in fact contributions, and the human thing to do when someone contributes to your project is to thank them.
>This is a company that takes a lot of pride in being the absolute best of the best.
There was an era when people actually believed that Google was the best of the best, rather than saying it as a rhetorical trick, and during that era they never would have dreamed of making such self-centered demands of Google. This Project Zero business comes across as the last vestige of a dying culture within Google. Why do people feel the need to be so antagonistic towards it?
>I can tell you with 100% certainty that there are undiscovered vulnerabilities in the Linux kernel right now. Does that mean they should stop shipping?
The ffmpeg authors aren't "shipping" anything; they're giving away something they make as a hobby with an explicit disclaimer of any kind of fitness for purpose. If someone needs something else, they can pay an engineer to make it for them.
This has nothing to do with payment. Not deliberately infecting your users with vulnerabilities is simply the right thing to do. Giving something away for free doesn't absolve you of certain basic ethical responsibilities.
They're not deliberately infecting users with anything. They're effectively saying "here's example code showing how to deal with these video formats. NOTE THAT THESE ARE EXAMPLES THAT I WROTE FOR FUN. THEY ARE NOT MEANT FOR SERIOUS USE AND MAY NOT HANDLE ALL CORNER CASES SAFELY. THIS SHOULD BE OBVIOUS SINCE WE HAVE NO COMMERCIAL RELATIONSHIP AND YOU'RE DOWNLOADING RANDOM CODE FROM SOMEONE YOU DON'T KNOW ON THE INTERNET".
If someone goes on to use that code for serious purposes, that's on them. They were explicitly warned that this is not production commercial code. It's weekend hobby work. There's no ethical obligation to make your hobby code suitable for production use before you share it. People are allowed to write and share programs for fun.
Deliberate malware would be something like an inbuilt trojan that exfiltrates data (e.g. many commercial applications). Completely different.
They are not effectively saying that. The way they talk about the library everywhere else makes it clear that they do expect serious use. Disclaimers in the license don't override that, especially when 99% of software has a disclaimer like that. Those words are there for legal reasons only.
If they wanted to market ffmpeg as a toy project only, not to be trusted, they could do that, but they are not doing that.
Except the very idea that they owe you anything is so absurd that even if they had a contract document stating that they'd do work for you, they still wouldn't have an obligation to do so because society has decided that contracts without some consideration from both sides are not valid. Similarly, even if something you buy comes with a piece of paper saying they don't owe you anything if it breaks, the law generally says that's not true. Because you paid for it.
But they don't say they warrant their work. They have a notice reminding you that you are receiving something for free, and that thing comes with no support, and is not meant to be fit for any particular use you might be thinking of, and that if you want support/help fulfilling some purpose, you can pay someone (maybe even them if you'd like) for that service. Because the way the world works is that as a general principle, other people don't owe you something for nothing. This is not just some legal mumbo jumbo. This is how life works for everyone. It's clear that they're not being malicious (they're not distributing a virus or something), and that's the most you can expect from them.
Computer security is always contextual, but as a general rule, if you're going to be accepting random input from unknown parties, you should have an expert that knows how to do that safely. And as mentioned elsewhere in these comments, such an expert would already be compiling out codecs they don't need and running the processing in a sandboxed environment to mitigate any issues. These days even software written in-house is run in sandboxed environments with minimal permissions when competent professionals are making things. That's just standard practice.
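A minimal sketch of the "sandboxed environment with minimal permissions" idea mentioned above; the wrapper name and the specific limits are my own illustration, not a hardened production recipe:

```shell
# run_limited: run an untrusted decode step under hard resource caps,
# so a crashing or runaway decoder takes down only its own subshell.
run_limited() {
  (
    ulimit -t 30        # cap CPU time at 30 seconds
    ulimit -v 1048576   # cap virtual memory (KiB, here 1 GiB)
    ulimit -c 0         # no core dumps from crashing decoders
    exec "$@"
  )
}

# Example: transcode a user upload under those limits.
# run_limited ffmpeg -i upload.bin -f mp4 out.mp4
```

Real deployments go much further (seccomp filters, namespaces, dedicated sandbox processes), but even shell-level limits blunt the worst failure modes.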
So they should be proud that they support obscure codecs, and by default the onus is on no one to ensure it's free from bugs. If an engineer needs to make a processing pipeline, the onus is always on them to do that correctly. If they want to use a free, unsupported hobby tool as part of their serious engineering project, it's on them to know how to manage any risks involved with that decision. Making good decisions here is literally their job.
All I'm asking for right here is consistency about whether the library is mostly secure. The ethical requirement is to follow through on your claims and implications, while making claims and implications is completely optional.
> Computer security is always contextual, but as a general rule, if you're going to be accepting random input from unknown parties, you should have an expert that knows how to do that safely. And as mentioned elsewhere in these comments, such an expert would already be compiling out codecs they don't need and running the processing in a sandboxed environment to mitigate any issues.
Sandboxing is great defense in depth but most software should not require sandboxing. And expecting everyone to have an expert tweaking compilation is not realistic. Defaults matter, and security expectations need to be established between the site, the documentation, and the defaults, not left as a footgun for only experts to avoid.
The library probably is mostly secure, and it might even be the best library out there for what it does. That still leaves them with no ethical requirement at all.
People are allowed to make secure, robust software for fun. They can take pride in how good of a job they do at that. They can correctly point out that their software is the best. That still leaves them with no obligations at all for having shared their project for free.
If you are not an expert in hardening computers, don't run random untrusted inputs through it, or pay someone to deliver a turnkey hardened system to you. That someone might be Adobe selling their codecs/processing tools, or it might be an individual or a company like Redhat that just customizes ffmpeg for you. In any case, if you're not paying someone, you should be grateful for whatever goodwill you get, and if you don't like it, you can immediately get a full refund. You don't even have to ask.
The person doing serious things in a professional context is always the one with the obligation to do them correctly. When I was at IBM, we used exactly 1 external library (for very early processor initialization) and 1 firmware blob in the product I worked on, and they were paid deliverables from hardware vendors. We also paid for our compiler. Everything else (kernel, drivers, firmware, tools) was in-house. If companies want to use random free code they found on the Internet without any kind of contract in place, that's up to them.
It is if they fix bugs like this. Status quo everything is fine with their actions, they don't need to do anything they aren't already doing.
If they decide they don't want to fix bugs like this, I would say they have the ethical obligation to make it clear that the software is no longer mostly secure. This is quite easy to accomplish. It's not a significant burden in any way.
Basically, if they want to go the less-secure route, I want it to be true that they're "effectively saying" that all caps text you wrote earlier. That's all. A two minute edit to their front page would be enough. They could edit the text that currently says "A complete, cross-platform solution to record, convert and stream audio and video." I'll even personally commit $10 to pay for those two minutes of labor, if they decide to go that route.
> Providing a real CVE is a contribution, not a burden. The ffmpeg folks can ignore it, since by all indications it's pretty minor.
Re-read the article. There's CVEs and then there's CVEs. This is the former, and they're shoving tons of those down the throats of unpaid volunteers while contributing nothing back.
What Google's effectively doing is like a food safety inspection company going to the local food bank to get the food that they operate their corporate cafeteria on just to save a buck, then calling the health department on a monthly basis to report any and all health violations they think they might have seen, while contributing nothing of help back to the food bank.
I have read the article. The expectation for a tool like ffmpeg is that regardless of what kind of file you put into it, it safely handles it.
This is an actual bug in submitted code. It doesn't matter that it's for some obscure codec, it's technically maintained by the ffmpeg project and is fair game for vulnerability reports.
Given that Google is also a major contributor to open-source video, this is more like a food manufacturer making sure that grocery stores are following health code when they stock their food.
Mind you, the grocery store has no obligation to listen to them in this metaphor and is free to just let the report/CVE sit for a while.
This is why many have warned against things like MIT licence. Yes, it gives you source code and does easily get incorporated into a lot of projects but it comes at the cost of potential abuse.
Yes, GPL 3 is a lot ideologically but it was trying to limit excessive leeching.
Now that I have opened the flood gates of a 20 year old debate, time to walk away.
Google Project Zero just looks for security issues in popular open source packages, regardless of whether Google itself even uses those packages.
So I'm not sure what GPLv3 really has to do with it in this case; if it was under a "No billion dollar company allowed" non-free-but-source-available license, this same thing would have happened as long as the project was popular enough for Project Zero to have looked at it for security issues.
The difference is that Google does use it, though. They use it heavily. All of us in the video industry do - Google, Amazon, Disney, Sony, Viacom, or whoever. Companies you may have never heard of build it into their solutions that are used by big networks and other streaming services, too.
But opening security issues here is not related to that in any way. It's an obscure file format Google definitely doesn't use, the security issue is irrelevant to Google's usages of it.
The critique would make sense if Google was asking for ffmpeg to implement something that Google wanted, instead of sending a patch. But they don't actually care about this one, they aren't actually asking for them to fix it for their benefit, they are sending a notice of a security issue that only affects people who are not Google to ffmpeg.
Opening a security issue is not the problem. A public disclosure so soon when there are so many machine-assisted reports for such obscure issues is the problem.
If Google wants to force a faster turnaround on the fixes, they can send the reports with patches or they can pay for prioritization.
Three months is "soon"? What do you think is reasonable?
And like so many posters in this thread, you seem to be under the impression that Google needed this fixed at some specific timeline. In reality the fix timeline, or even a total lack of a fix, makes no impact to them. They almost certainly already disable these kinds of codecs in their build. They reported this for the good of the ecosystem and the millions of users who were vulnerable.
Google does not "want this fixed", this isn't a bug report from a team using ffmpeg, it's a security analysis from a whitehat security project.
Really, if there are all these machine-generated, meaningless security reports, wasting time on them sounds like a very sensible complaint, but then they should point at that junk as the problem.
But for the specific CVE discussed it really looks to me like they are doing everything right: it's a real, default-configuration exploitable issue, they reported it and ffmpeg didn't fix or ask for any extension then it gets publicly disclosed after 90 days per a standard security disclosure policy.
What in GPL3 or MIT mandates that Google fix this bug and submit a PR or simply sends a bug report and walks away? I don't see how this applies at all.
AGPL, with no CLA that lets the owners relicense. Then we'll see if the using corporation fully believes in open source.
There's a reason Google turned into year 2000 Microsoft "it's viral!" re. the AGPL. They're less able to ignore the intent of the license and lock away their changes.
As much as I hate to be on Google's side I think they're doing a reasonable thing here. Valid bug reports are valuable contributions on their own, and disclosing security issues is standard practice. Not disclosing them doesn't help anyone. Security through obscurity is not security. If neither FFMPEG nor Google dedicate resources to fixing the issue in 90 days, making it public at least ensures that it gets visibility and thus has a higher chance to get fixed by a third party.
I'm sure Google could (and probably should) do even more to help, but FFMPEG directing social media rage at a company for contributing to their project is a bone-headed move. There are countless non-Google companies relying on FFMPEG that do much less for the project, and a shit show like this is certainly not going to encourage them to get involved.
As someone who worked as a software engineer at Google on a service that heavily depended on FFmpeg, it's absurd that Google posts security bugs (which have the obvious potential outcome of driving more free work) vs. just paying an engineer to fix the bug.
I promise they are spending more on extra compute for resiliency and redundancy for FFMPEG issues than it would cost for a single SWE to just write a fix and then shepherd through the FFmpeg approval process.
As someone who was on a project that stalled for a year because our patchset wasn't accepted by a different open source project (not Linux either), I can tell you from experience that it's not as easy as folks here make it out to be. Some maintainers (and Googlers) really want you to study at their mountaintop monastery before your code is worthy, and scrutiny is even higher now due to AI, as we can see from the complaints about this bug report. Now, I've merged enough open source patches on my personal time to know that most projects aren't like that, but based on this interaction, I seriously wonder if Google's patch would've been accepted without incident.
Maybe AmaGoogSoft deserves this, but then what's the threshold? If I'm in charge of Zoom or Discord and one of my engineers finds a bug, should I let them report it and risk a public blow-up? Or does my company's revenue need to be below $1B? $100M? This just poisons the well for everyone.
Bonus comment: I was present for conversations about how Google should just write an internal version because of all the stability issues, but that work would never get prioritized or be considered valuable because it wouldn't get anyone promoted (to be fair, given how widely FFmpeg is used, it would have gotten an L4 or L5 promoted, but it would have been a near-Sisyphean task over years to get to the point where you could demonstrate the ridiculously high XXm-XXXm returns that would come from just helping to improve FFmpeg).
That was exactly my thought. To be fair, I probably end up thinking that because this article is written in this trashy dumbass style, which portrays it as "Google bug reports vs. ffmpeg bug fixes", which is simply unfair: as you've said, a bug report is a contribution, not some kind of demand. That being said, I kinda do understand if someone from ffmpeg said something snarky about that on Twitter, since surely if Google (of all things) as an organization sees it as valuable to contribute by sending bug reports, it surely isn't less feasible (logistically or economically) for them to also work on a patch than it is for random people within the ffmpeg mailing list itself.
I get the idea of publicly disclosing security issues to large well funded companies that need to be incentivized to fix them. But I think open source has a good argument that in terms of risk reward tradeoff, publicly disclosing these for small resource constrained open source project probably creates a lot more risk than reward.
In addition to your point, it seems obvious that the disclosure policy for FOSS should be "when patch available" and not a static X days. The security issue should certainly be disclosed - when it's responsible to do so.
Now, if Google or whoever really feels like fixing fast is so important, then they could very well contribute by submitting a patch along with their issue report.
> ...then they could very well contribute by submitting a patch along with their issue report.
I don't want to discourage anyone from submitting patches, but that does not necessarily remove all (or even the bulk of) the work from the maintainers. As someone who has received numerous patches to multimedia libraries from security researchers, they still need review, they often have to be rewritten, and most importantly, the issue must be understood by someone with the appropriate domain knowledge and context to know if the patch merely papers over the symptoms or resolves the underlying issue, whether the solution breaks anything else, and whether or not there might be more, similar issues lurking. It is hard for someone not deeply involved in the project to do all of those things.
> it seems obvious that disclosure policy for FOSS should be “when patch available” and not static X days
This is very far from obvious. If the maintainers don't feel like prioritising a critical issue, it remains irresponsible not to warn other users of the same library.
If that’s the case why give the OSS project any time to fix at all before public disclosure? They should just publish immediately, no? Warn other users asap.
Because it gives maintainers a chance to fix the issue, which they’ll do if they feel it is a priority. Google does not decide your priorities for you, they just give you an option to make their report a priority if you so choose.
Why do you think it has to be all or nothing? They are both reasonable concerns. That's why reasonable disclosure windows are usually short but not zero.
Full (immediate) disclosure, where no time is given to anyone to do anything before the vulnerability is publicly disclosed, was historically the default, yes. Coordinated vulnerability disclosure (or "responsible disclosure" as many call it) only exists because the security researchers that practice it believe it is a more effective way of minimizing how much the vulnerability might be exploited before it is fixed.
Unless the maintainers are incompetent or uncooperative, this does not feel like a good strategy. It is a good strategy on Google's side because it is easier for them to manage.
> In addition to your point, it seems obvious that disclosure policy for FOSS should be “when patch available” and not static X days.
So when the xz backdoor was discovered, you think it would have been better to sit on that quietly and try to both wrest control of upstream away from the upstream maintainers and wait until all the downstream projects had reverted the changes in their copies before making that public? Personally I'm glad that went public early. Yes there is a tradeoff between speed of public disclosure and publicity for a vulnerability, but ultimately a vulnerability is a vulnerability and people are better off knowing there's a problem than hoping that only the good guys know about it. If a Debian bug starts tee-ing all my network traffic to the CCP and the NSA, I'd rather know about it before a patch is available, at least that way I can decide to shut down my Debian boxes.
The XZ backdoor is not a bug but a malicious payload inserted by malicious actors. The security vulnerability would immediately have been used, as it was created by attackers.
This bug is almost certainly too obscure to be found and exploited in the time it takes FFmpeg to produce a fix. On the other hand, this vuln being public so soon means any attacker is now free to develop their exploit before a fix is available.
If Google's goal is security, this vulnerability should only be disclosed after it's fixed or after a reasonable time (which, according to an ffmpeg dev, is more than 90 days, because they receive too many reports from Google).
A bug is a bug, regardless of the intent of the insertion. You have no idea whether this bug was or wasn't intentionally inserted. It's of course very likely that it wasn't, but you don't and can't know that, especially given that malicious bug insertion is going to be designed to look innocent and have plausible deniability. Likewise, you don't know that the use of the XZ backdoor was imminent. For all you know, the intent was to let it sit for a release or two, maybe with an eye towards waiting for it to appear in a particular downstream target, or just to make it harder to identify the source. Yes, just like it is unlikely that the ffmpeg bug was intentional, it's also unlikely the xz backdoor was intended to be a sleeper vulnerability.
But ultimately that's my point. You as an individual do not know who else has access or information about the bug/vulnerability you have found, nor do you have any insight into how quickly they intend to exploit that if they do know about it. So the right thing to do when you find a vulnerability is to make it public so that people can begin mitigating it. Private disclosure periods exist because they recognize there is an inherent tradeoff and asymmetry in making the information public and having effective remediations. So the disclosure period attempts to strike a balance, taking the risk that the bug is known and being actively exploited for the benefit of closing the gap between public knowledge and remediation. But inherently it is a risk that the bug reporter and the project maintainers are forcing on other people, which is why the end goal must ALWAYS be public disclosure sooner rather than later.
A 25-year-old bug in software is not the same as a backdoor (not a bug, a full-on backdoor). The bug is so old that if someone put it there intentionally, well, congrats on the 25-year-old 0day.
Meanwhile, the XZ backdoor was 100% meant to be used. I didn't say when, and that doesn't matter; there is a malicious actor with the knowledge to exploit it. We can't say the same about a bug in a 1998 codec that was found by extensive fuzzing and has no obvious exploitation path.
Now, should it be patched? Absolutely. But should the patch be done ASAP at the cost of other, maybe more important, security patches? Maybe, maybe not. Not all bugs are security vulns, and not all security vulns are exploitable.
> publicly disclosing these for a small, resource-constrained open source project probably creates a lot more risk than reward.
You can never be sure that you're the only one in the world that has discovered or will discover a vulnerability, especially if the vulnerability can be found by an LLM. If you keep a vulnerability a secret, then you're leaving open a known opportunity for criminals and spying governments to find a zero day, maybe even a decade from now.
For this one in particular: AFAIK, since the codec is enabled by default, anyone who processes a maliciously crafted .mp4 file with ffmpeg is vulnerable. Being an open-source project, ffmpeg has no obligation to provide me secure software or to patch known vulnerabilities. But publicly disclosing those vulnerabilities means that I can take steps to protect myself (such as disabling this obscure niche codec that I'm literally never going to use), without any pressure on ffmpeg to do any work at all. The fact that ffmpeg commits themselves to fixing known vulnerabilities is commendable, and I appreciate them for that, but they're the ones volunteering to do that -- they don't owe it to anyone. Open-source maintainers always have the right to ignore a bug report; it's not an obligation to do work unless they make it one.
Vulnerability research is itself a form of contribution to open source -- a highly specialized and much more expensive form of contribution than contributing code. FFmpeg has a point that companies should be better about funding and contributing to open-source projects that they rely on, but telling security researchers that their highly valuable contribution is not welcome because it's not enough is absurd, and is itself an example of making ridiculous demands for free work from a volunteer in the open-source community. It sends the message that white-hat security research is not welcome, which is a deterrent to future researchers from ethically finding and disclosing vulnerabilities in the future.
As an FFmpeg user, I am better off in a world where Google disclosed this vulnerability -- regardless of whether they, FFmpeg, or anyone else wrote a patch -- because a vulnerability I know about is less dangerous than one I don't know about.
But if open source is reliant on public contributors to fix things, then the bug should be open so anyone can take a stab at fixing it, rather than relying on the closed group of maintainers
> publicly disclosing these for a small, resource-constrained open source project probably creates a lot more risk than reward.
Not publicly disclosing it also carries risk. Library users get the wrong impression that the library has no vulnerabilities, while numerous reported bugs stay invisible because of the FOSS disclosure policy.
You are missing the tiny little fact that apparently a large portion of infosec people are of the opinion that insecure software must not exist. At any cost. No shades of gray.
A bunch of people who make era-defining software for free. A labor of love.
Another bunch of people who make era-defining software where they extract everything they can. From customers, transactionally. From the first bunch, pure extraction (slavery, anyone?).
Not how the terms slavery and taxation are usually defined, no.
If you reduce them to such a level that you ignore all their differences and focus on some carefully chosen similarities, you could make the case that they're the same under that specific definition, I suppose.
Exactly. The call-out is not "please stop doing security research". It is, "if you have a lot of money to spend on security research, please spend some of it on discovering the bugs, and some on fixing them (or paying us to fix them), instead of all of it on discovering bugs too fast for us to fix them in time".
Look, I know you're being snarky, but YES. All of the viable open-source video codecs of the past 10 years would not have happened without Google. Not just for technical reasons, but for expensive patent-related legal reasons too.
Given that ffmpeg is an open-source video transcoding tool, I don't think you can easily just dismiss this as "big company abuses open source."
The ffmpeg devs are volunteers or paid to work on specific parts of the tool. That's why they're unimpressed. What Google is doing here is pretty reasonable.
You get lower chances of getting hacked by a random file on the internet. At Project Zero's level they're also not CVE-seeking; it doesn't even matter at that scale, since it's not an independent researcher trying to become known.
I have yet to see one on any project I’ve been attached to that was actually exploitable under real circumstances. But the CVE hunting teams treat them all as if they were.
TFA is about Project Zero getting uppity about an unexploitable non-issue in ffmpeg.
Project Zero hasn't reported any vulnerabilities in any software I maintain. Lots of other security groups have, some well respected as well, but to my knowledge none of these "outside" reports were actual vulnerabilities when analyzed in context.
You are welcome to view the report however you like, but a world where an easily reproducible OOB read and UAF in the default configuration is an "unexploitable non-issue" is not reality.
Is this sarcasm? While it may be true that my mother does not know what ffmpeg is I'm almost positive she interacts with stuff that uses it literally every single day.
To help those without access to X, the PR thread linked appears to give 2014 as the last time they reported solid contributions to helping fix security issues in ffmpeg.
Unfortunately, nobody not on Twitter can see anything except the first message, which just appears to be some corporate-deflection-speak.
It’s a trillion-dollar company. I’m sure they could rifle through their couch cushions and find more than enough money and under-utilised devs to contribute funding or patches.
Nice to see coming from Google. Some other current and former Google employees were unfortunately not as professional in their interactions on Twitter regarding this issue.
If you rely on it, pay for it. So easy. Especially if you are a BIIG company. Money gives the maintainers joy, free time and appreciation for their work.
Wakey wakey people, FOSS is here to F you. It can be free while those who rely on it and use it also pay for it.
"They could shut down three product lines with an email"
If you (Amazon, in this case) can put it that way, it seems like throwing them 10 or 20 thousand a year would simply be a good insurance policy! Any benefits you might get in goodwill and influence are a bonus.
How do you think Jeff got a 500-million-dollar yacht? Not by writing checks.
But on a more serious note, it is crazy that between Google and Amazon they cannot fund them with 50k each per year so that they can pay people to work on this.
Especially Google: with YouTube, they could very easily pay them more. 100k-200k easily.
I'm just imagining them having 4 or 5 $250K-a-year security workers pumping out endless bug reports for one guy in Norway who works on ffmpeg nights and weekends for free because he loves open source, and demanding he meet their program deadlines lol
Double funny considering new-grads who may polish up some UI features or rewrite components for the 10th time will get paid 200-400K TC at these same companies. Evidently these companies value something other than straight labor.
If I had built up 500 million in a savings account to buy a yacht, giving 50k of that to FFmpeg devs would put off my ability to buy a yacht by nearly a whole day.
Boltzmann brain-wise it clearly doesn't make sense to wait that long.
> How do you think Jeff got a 500-million-dollar yacht? Not by writing checks.
A rising tide lifts all yachts. If he had written the check, my instinct tells me he would have enough for two yachts. Goodwill is an actual line item on 10-Qs and 10-Ks. I don't know why companies think it's worth ignoring.
Should Google be doing more to support ffmpeg? Yes.
Should Google stop devoting resources to identifying and reporting security vulnerabilities in ffmpeg?
I cannot bring myself to a mindset where my answer to this question is also "yes".
It would be one thing if Google were pressuring the ffmpeg maintainers in their prioritization decisions, but as far as I can tell, Google is essentially just disclosing that this vulnerability exists?
Maybe the CVE process carries prioritization implications I don't fully understand. Eager to be educated if that is the case.
I'm not being dismissive. I understand the imperative of identifying and fixing vulnerabilities. I also understand the detrimental impact that these problems can potentially have on Google.
What I don't understand is the choice to have a public facing project about this. Can anyone shine a light on this?
Honestly it seems kind of weird that HN comments are becoming so hostile toward a company hiring security researchers and having them do free security research on popular projects, then giving away the results for free.
There are many groups and companies that do security research on common software and then sell the resulting vulnerabilities to people who don’t have your best interests in mind. Having a major company get ahead of this and share the results is a win for all of us.
A lot of people in this comment section don’t understand the broader security ecosystem. There are many vendors who will even provide patched versions of software to work around security issues that aren’t yet solved upstream. Sometimes these patches disable features or break functionality, or simply aren’t at a level where upstream is interested yet. But patching known issues is valuable.
Getting patches accepted upstream in big open source projects isn’t always easy. They tend to want things done a certain way or have a high bar to clear for anyone submitting work.
And pushing forward the idea that "responsible disclosure" doesn't mean the software creator can just sit on a bug for as long as they want and act superior and indignant when the researcher gives up and publishes anyway because the creator is dragging their ass.
Project Zero's public existence came out of the post-Snowden period where Google was publicly pissed at the NSA/etc for spying on them (e.g. by tapping their fiber links).
A lot of their research involves stuff they personally benefit from if they were secure. ffmpeg, libxml2, various kinds of mobile device firmware, Linux kernels and userspace components, you name it.
Their security team gaining experience on other projects can teach them some more diversity in terms of (malware) approaches and vulnerability classes, which can in turn be used to secure their own software better.
For other projects there's some vanity/reputation to be gained. Having some big names with impressive resumes publicly talk about their work can help attract talent.
Lastly, Google got real upset that the NSA spied on them (without their knowledge, they can't help against warrants of course).
Then again, there's probably also some Silicon Valley bullshit money being thrown around. Makes you wonder why they don't invest a little bit more to pay someone to submit a fix.
I would imagine it's mostly a PR/marketing thing. That way the researchers can point to being part of something other people know about, and Google gets positive PR (though maybe not in this case) for spending resources on making software in general more secure.
They should set up a foundation/LLC if they don't have one already and require a support contract for fixing any bugs in niche codecs. Target the 95%-98% use cases for "free" work. If someone gives them a CVE on something ancient, just note it with an issue tracker and that the problem is currently unsponsored. Have default build flags to omit all the ancient/buggy stuff. If nobody is willing to pay to fix all the ancient crap, then nobody should be using it. But if someone is willing to pay, then it gets fixed.
I understand ffmpeg being angry at the workload but this is how it is with large open source projects. Ffmpeg has no obligation to fix any of this. Open source is a gift and is provided as is. If Google demanded a fix I could see this being an issue. As it is right now it just seems like a bad look. If they wanted compensation then they should change the model, there's nothing wrong with that. Google found a bug, they reported it. If it's a valid bug then it's a valid bug end of story. Software owes it to its users to be secure, but again it's up to the maintainers if they also believe that. Maybe this pushes Google to make an alternative, which I'd be excited for.
I disagree; as software engineers we owe it to the craft to create correct software, especially when we intend to distribute it. Anything less is poor taste.
You bring up licensing. I’m not talking about legally I’m talking about a social contract.
The choice of license is also a partial descriptor of the social contract. If I wanted to work on it for “customers”, I would sell it. I don’t owe you anything otherwise.
The social contract is “here is something I’ve worked on for free, and it is a gift. Take it or leave it.”
For GP's sake, even before you make it to FYPM levels of angry, you will be in over your head. It's too much work. I remember being very early in my career and feeling like GP does. This is very easily more than a full-time job. The demands people will make of you and the attitudes they will use to do it will make you crazy.
> ffmpeg owes me nothing. I haven't paid them a dime.
That is true. At the same time Google also does not owe the ffmpeg devs anything either. It applies both ways. The whole "pay us or we won't fix this" makes no sense.
> Google also does not owe the ffmpeg devs anything either.
Then they can stop reporting bugs with their asinine one-size-fits-all "policy." It's unwelcome and unnecessary.
> It applies both ways.
The difference is I do not presume things upon the ffmpeg developers. I just use their software.
> The whole "pay us or we won't fix this" makes no sense.
Pay us or stop reporting obscure bugs in unused codecs found using "AI" scanning, or at least, if you do, then change your disclosure policy for those "bugs." That's the actual argument and is far more reasonable.
It doesn’t matter if it affects their business or not. They found an issue and they reported it. Ffmpeg could request that they report it privately perhaps. Google has a moral duty to report the bug.
Software should be correct and secure. Of course this can’t always be the case but it’s what we should strive for. I think that’s baseline
> That does not impact their business or their operations in any way whatsoever.
I don't know what tools and backends they use exactly, but working purely by statistics, I'm sure some place in Google's massive cloud compute empire is relying on ffmpeg to process data from the internet.
It's unlikely the specific codec that is the issue but the bug report suggests that the code path could be hit by a maliciously crafted payload since ffmpeg does file fuzzing. They almost certainly have ffmpeg stuff that touches user submitted videos.
They're probably not manually selecting which codecs and codec parameters to accept and sticking to the default ones instead.
Plus, this bug was reported by AI, so it was as much a proof of concept/experiment/demonstration of their AI security scanner as it was an attempt to help secure ffmpeg
I read this as: nobody wants CVEs open on their product, so you might feel forced to fix them. I find it more understandable if we talk about web frameworks: WordPress doesn't want security CVEs open for months or years, or users would be upset that they introduce new features while neglecting safety.
I am a nobody, and whenever I find a bug I work extra to attach a fix to the same issue. Google should do the same.
I think the glaring issue underlying this is that the big companies are not investing enough in the tools they rely on.
I agree with some of the arguments that patching vulnerabilities is important, but it's crazy to put that expectation on unpaid volunteers when you flood them with CVEs, some completely irrelevant.
Also, the solution is fairly simple: either you submit a PR instead of an issue, or you send a generous donation with the issue to reward and motivate the people who do the work.
The amount of revenue they generate using these packages will easily offset the cost of funding the projects, so I really think it's a fair expectation for companies to contribute either by delivering work or funds.
The solution is even simpler. The project puts the bug report in its triage backlog. It works through it in its own time, and decides on severity and priority. That's the time-honored method.
The compounding factor here is the automated reporting and disclosure process of Google's Project Zero. GPZ automatically discloses bugs after 90 days. Even if Google does not expect bugs to be fixed within this period, the FFmpeg devs clearly feel pressure.
But it is an open source project, basically a hobby for most devs. Why accept pressure at all? Continue to proceed in the time-honored method. If and when Youtube explodes because of a FFmpeg bug, Google has only itself to blame. They could have done something but decided to freeload.
I don't understand the rationale for announcing that a vulnerability in project X was discovered before the patch is released. I read the Project Zero blogspot announcement but it doesn't make much sense to me. Google claims this helps downstream users, but that feels like a largely non-issue to me.
If you announce that a vulnerability (unspecified) was found in a project before the patch is released, doesn't that just incentivize bad actors to direct their efforts at finding a vulnerability in that project?
The reason for this policy is that if you don’t keep a deadline upstream can just sit on the report forever while bad actors can find and exploit the vulnerabilities, which harms downstream users because they are left entirely unaware that the vulnerability even exists. The idea behind public disclosure is that downstream is now made aware of the bug and can take appropriate action on their side (for example, by avoiding the software, sponsoring a fix, etc.)
"Don't announce an unpatched vulnerability ever" used to be the norm. It caused a massive problem: most companies and organizations would never patch security vulnerabilities, so vulnerabilities would last years or sometimes decades being actively exploited with nobody knowing about it.
Changing the norm to "We don't announce unpatched vulnerabilities but there is a deadline" was a massive improvement.
Maybe for a small project? I think the difference here is rather minimal. Everybody "knows" code often has security bugs so this announcement wouldn't technically be new information. For a large project such as ffmpeg, I doubt there is a lack of effort in finding exploits in ffmpeg given how widely it is used.
I don't see why actors would suddenly reallocate large amounts of effort especially since a patch is now known to be coming for the issue that was found and thus the usefulness of the bug (even if found) is rather limited.
There are some rhetorical sleights of hand here. If Google does the work to find and report vulnerabilities, great. Nice contribution, regardless of who provides it. The OSS developer can ignore it and accept whatever consequences follow. Nobody forces them to fix it but themselves.
The many large corporations should be funding these tools they depend on, to increase time allocations and thus the ability to be responsive, but this isn't an either/or. This type of thinking erodes the communities of such projects and the minds of the contributors.
FWIW I've totally been that developer trapped in that perspective so I empathize, there are simply better mental stances available.
The vulnerability in question is a Use After Free. Google used AI to find this bug, it would've taken them 3 seconds to fix it.
Burning cash to generate spam bug reports to burden volunteer projects when you have the extra cash to burn to just fix the damn issue leaves a very sour taste in my mouth.
Use After Free takes 3 seconds to fix if you defer free until the end of the program. If you have to do something else, or you don't want to leak memory, then it probably takes longer than 3 seconds.
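A minimal sketch of that bug class and the usual shape of the fix (all names here are hypothetical illustrations, not ffmpeg's actual structures or code): freeing a buffer without clearing the owning pointer leaves a dangling reference that a later code path can read.

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

/* Hypothetical decoder context owning a scratch buffer. */
typedef struct {
    uint8_t *scratch; /* per-frame scratch buffer */
    size_t   size;
} DecoderCtx;

/* Buggy shape: the buffer is freed on reconfiguration, but the stale
 * pointer is kept, so any later read through it is a use-after-free. */
static void reconfigure_buggy(DecoderCtx *ctx) {
    free(ctx->scratch); /* ctx->scratch now dangles */
}

/* Common fix: free, then clear the pointer and size, so later code
 * either reallocates or fails loudly instead of reading freed memory. */
static void reconfigure_fixed(DecoderCtx *ctx) {
    free(ctx->scratch);
    ctx->scratch = NULL;
    ctx->size = 0;
}
```

Even this "easy" shape of the fix still needs someone with context to check that downstream code handles the now-NULL pointer, which is part of why review takes longer than the patch itself.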
Probably the right solution is to disable this codec. You should have to make a choice to compile with it; although if you're running ffmpeg in a context where security matters, you really should be hand picking the enabled codecs anyway.
If it takes 3 seconds to fix it, then how is this some massive burden on the maintainers? The bug report pointed to the relevant lines, the maintainers are the most familiar with the code and it probably would have taken them 1.5 seconds to not only fix it, but validate the fix. It probably took more time to write up the complaint about the bugs than to implement the fix.
Maybe if it was an actual engineer from Google doing this they would have gotten a better response. Don’t expect people to treat AIs the same way we treat people.
But if you send me an automated report and then tell me to jump I’m telling you to f*ck off.
It takes more time to read and understand the bug report than to fix it. Instead of using Google's time, they used ffmpeg's volunteer time.
If this happens another 1000 times (easily possible with AI), Google just got free labour and free publicity for "discovering 1000 critical bugs (but not fixing them, even though they were easy to fix)".
It takes even more time to read and understand a patch. Not only do you have to do all of the work of reading and understanding the bug report for which the patch is relevant, but you now also have to read and understand the submitted code, which until just this moment you didn't even know was necessary and have no specific context for. Then you have to validate whether or not the code in the patch is sufficient to fix the issue or whether those changes have any additional knock-on effects. Yes, you could hope that the Google coders did this, but since you already have such a low opinion of Google's efforts and behavior on this front, I would argue that trusting their submission without validation would be insane.
Then, if there are any changes or additional work to be done, you now have to spend time communicating with the patch submitter, either getting them to make the requested changes or rejecting their patch outright and writing it on your own.
And after all that we'd be right back here, only instead of the complaint being "we don't have enough time to review all your bug reports" it would be "we don't have enough time to review all your PRs".
Notably, the vulnerability is also in a part which isn't included by default and nobody uses. I'm not sure that even warrants a CVE? A simple bug report would have probably been fine. If they think this is really a CVE, a bug fix commit would have been warranted.
One problem here is that CVE scoring is basically entirely bugged, something scored 8.7 could be an RCE exploit or a "may be able to waste CPU" issue.
That's the difference between "it may or may not be that there's someone who cares" versus "no one should be running this software anywhere in the general vicinity of untrusted inputs".
> One problem here is that CVE scoring is basically entirely bugged, something scored 8.7 could be an RCE exploit or a "may be able to waste CPU" issue.
+100000
My favorite 8.x or higher CVEs are the ones where you would have to take untrusted user input, bypass all the standard ways to ingest and handle that type of data, and pass it into some internal function of a library. And then the end result is that a regex call becomes more expensive.
If you think that's bad, you should look at Linux kernel CVEs. They've basically gone rogue when it comes to CVEs. Every minor bug gets flagged as a CVE, regardless of impact. Often, exploitation requires root access. If you have root, you've already won and can do whatever the hell you want. No need to exploit a bug to cause problems.
You’re right about scoring, at least largely. Let’s not conflate the CVE system and the CVSS system, though. They are related but distinct. CVE is just an identifier system.
It is included in most builds of ffmpeg, for example in most Linux packages or in Windows build linked to on ffmpeg.org that I use. But yeah, it's a very niche format that nobody uses.
AIUI there's no such thing as "really a CVE". A CVE is merely a standardized identifier for a bug so you can call it "CVE-2025-XXXXX" rather than "that use-after-free Google found in ffmpeg with AI." It doesn't imply anything else about the bug, except that it may impact security. The Linux kernel assigns one to every bugfix that may impact security (which is most kernel bugs) to avoid controversy about whether they should be assigned.
Yes - more than a sour taste. This is hideous behavior. It is the polar opposite of everything intelligent engineers have understood regarding free-and-open software for decades.
This take sounds great until you realize this is literally Google using their resources to help an open source project (by reporting issues in it that ALREADY EXIST and NEED to be fixed OR users made aware if not fixed) AND they also help them by upstreaming patches (just not for this specific issue) regularly AND with monetary support.
So...your entire premise is patently false and wrong.
google is a customer of fflabs and has enrolled them in summer of code. They also provide free fuzzing. ffmpeg is a foss, gpl-licensed project. nobody has any obligation to contribute, thus it isn't exploitation.
It’s a reproducible use-after-free in a codec that ships by default with most desktop and server distributions.
The recent iOS zero-day (CVE-2025-43300) targeted the rarely used DNG image format. How long before this FFMPEG vulnerability is exploited to compromise legacy devices in the wild, I wonder?
I’m not a fan of this grandstanding for arguably questionable funding. (I surely would not fund those who believe these issues are slop.) I’d like to think most contributors already understand the severity and genuinely care about keeping FFMPEG secure.
Bugs in little-used corners of the project are a massive red flag, that's how some of the most serious OpenSSL bugs have emerged. If the code is in there, and someone can trigger it with a crafted input, then it is as bad as any other bug.
So what if ffmpeg has open CVEs? What is Google going to do? Swap it out? Leave them open, let Google ship products with CVE'd dependencies, and then they'll be forced to act.
Why would Google act if they got smart guys working for them for free? Stop fixing Google-reported bugs.
Part of the issue is that FFmpeg is almost a meta-project. It contains so many possible optional dependencies. Which is great for features, not so great if you quickly want to know if you're exposed to the latest CVE.
Honestly, I kind of think that ffmpeg should just document that it's not secure and that you're expected to run it in a sandbox if you plan to use it on possibly-malicious input. All the big cloud users and browsers are doing this already, so it would hardly even change anything.
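For what it's worth, that sandboxing doesn't have to be elaborate. Here's a minimal sketch (Unix-only; the limits and the stand-in child command are illustrative assumptions, and a real caller would substitute an ffmpeg invocation): run the decoder in a child process with CPU and memory rlimits plus a wall-clock timeout.

```python
import resource
import subprocess
import sys

def run_sandboxed(argv, cpu_seconds=10, mem_bytes=2 * 1024**3, timeout=30):
    """Run argv in a child process with CPU/address-space limits and a timeout."""
    def set_limits():
        # Applied in the child just before exec, so a runaway decoder gets
        # killed by the kernel instead of taking down the host process.
        resource.setrlimit(resource.RLIMIT_CPU, (cpu_seconds, cpu_seconds))
        resource.setrlimit(resource.RLIMIT_AS, (mem_bytes, mem_bytes))
    return subprocess.run(argv, preexec_fn=set_limits, timeout=timeout,
                          capture_output=True, text=True)

# Stand-in child command for illustration; a real caller might pass
# something like ["ffmpeg", "-i", untrusted_file, "out.mp4"] instead.
result = run_sandboxed([sys.executable, "-c", "print('ok')"])
```

Note that rlimits only contain resource exhaustion; containing memory-corruption exploits like the use-after-free discussed here needs a real sandbox boundary (seccomp, a container, or a separate unprivileged user), which is what the big cloud users already do.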
ffmpeg is complaining that security bugs are such a drag that it's driving people away from their hobby/passion projects. Well, if fixing security bugs isn't your passion, why not just say that? Say it's not your priority, and if someone else wants it to be a priority, they can write the patches. Problem solved?
As in another comment, adding code/fixes only adds more work to the existing ffmpeg team as they need to review and maintain it forever. It's not good enough. Even security fixes in the style of "drive-by patching" are derided in security-oriented open source projects.
Adding code/fixes is a tiny fraction of work compared to reviewing and maintaining.
The only reasonable way is for Google and other big corps to either sponsor members of the existing team or donate money to the project. And making it long term not one-shotting for publicity.
It's frustrating to me how many people are siding with FFmpeg here considering how unprofessional and generally asshole-ish they are being.
I feel that this is mostly a kneejerk reaction to AI and Google in general, with people coming up with arguments to support their reaction after already forming an opinion.
It's a volunteer project, they have no requirement to be 'professional'. That's basically the root of the whole issue. A hobby project is not a product, and its developers are not vendors. Free software is not a supply chain.
To be clear, what does relicensing to AGPL do here? Does the AGPL include licensing terms that forbid filing bug reports without also including code patches? Or does it just make ffmpeg that much less appealing to projects and cut off the steady stream of contributions that it has gotten from google since 2009? https://git.ffmpeg.org/gitweb/ffmpeg.git/search/HEAD?pg=3;s=...
Right, but how is that a benefit here? The bug report was a valid report, ffmpeg is objectively better for it having been filed. Google contributes to ffmpeg on a regular basis according to the git history. They also buy consulting services from the ffmpeg maintainers according to the maintainer's own website. If ffmpeg was banned from Google, all of that would probably stop.
So not only would ffmpeg have multiple uncovered vulnerabilities, they would have less contributions and patches and less money for funding the maintainers. And for what? To satisfy the unfocused and mistaken rage of the peanut gallery online?
> To satisfy the unfocused and mistaken rage of the peanut gallery online?
That's only one possible benefit.
Another could be to gain leverage on big tech companies via dual licensing. If Google, Amazon, etc want to continue using FFmpeg without forking, they could do so by paying for the old LGPL license. It would likely be cheaper than maintaining a fork. They'd also have to release their changes anyways due to LGPL if they ever ship it client side.
So the incentive to contribute rather than fork would remain, and the only difference is that they have to pay a licensing fee.
Ofc this is probably just a fantasy. Relicensing FFmpeg like this probably isn't easy or possible.
Watch places like Amazon and Google suddenly stop updating and trying to find alternatives.
Like how Apple stopped updating the GNU tools in 2008 because of GPLv3. That move showed me then that Apple did not want you to use your computer as your computer.
Well, to continue that timeline. "Big Tech" freezes their version to the last gpl'ed version, and each commences their own (non-trivial effort) to make their own version (assuming the last gpl'ed version was not feature-complete for all their future uses).
And of course, they won't share with each other. So another driver would be fear of a slight competitive disadvantage vs other-big-tech-monstrosity having a better version.
Now, in this scenario, some tech CEO, somewhere has this brilliant bright spark.
"Hey, instead of dumping all these manhours & resources into DIYing it, with no guarantee that we still won't be left behind - why don't we just throw 100k at the original oss project. We'll milk the publicity, and ... we won't have to do the work, and ... my competitors won't be able to use it"
I think the main issue is the 90 day disclosure. Maybe Google can allow the maintainers to extend that deadline for low / medium impact issues.
I see no reason to follow the 90 day timeline if the maintainer is working on it. Especially considering it is very possible for Google to overwhelm a project with thousands of vulnerability reports.
Otherwise, I don't think Google should issue a patch. Just like in this case, only FFmpeg people know it is an obscure codec that nobody really uses, and maybe the reasonable approach would be to simply remove it. Google doesn't know that, unless they somehow take over the project.
And AFAIK Google is one of the biggest sponsors of SPI, which is the fiscal sponsor for ffmpeg. So not sure where the not paying thing came from.
Sometimes it's hard: for many kinds of projects, I don't think anyone would use them if they were not open source (or at least source-available). Just like I wouldn't use a proprietary password manager, and I wouldn't use WhatsApp if I had a choice. Rather I use Signal because it's open source.
How do you get people to use your app if it's not open source, and therefore not free?
For some projects, it feels better to have some people use it even if you did it for free than to just not do it at all (or do it and keep it in a drawer), right?
I am wondering, I haven't found a solution. Until now I've been open sourcing stuff, and overall I think it has maybe brought more frustration, but on the other hand maybe it has some value as my "portfolio" (though that's not clear).
But not-so-big corps can still profit from it, so I'm still working for free.
Also I have never received requests from TooBigTech, but I've received a lot of requests from small companies/startups. Sometimes it went as far as asking for a permissive licence, because they did not want my copyleft licence. Never offered to pay for anything though.
Corporations extract a ton of value from projects like ffmpeg. They can either pay an employee to fix the issues or setup some sort of contract with members of the community to fix bugs or make feature enhancements.
Nearly everyone here probably knows someone who has done free labor and "worked for exposure", and most people acknowledge that this is a scam, and we don't have a huge issue condemning the people running the scam. I've known people who have done free art commissions because of this stuff, and this "exposure" never translated to money.
Are the people who got scammed into "working for exposure" required to work for those people?
No, of course not, no one held a gun to their head, but it's still kind of crappy. The influencers that are "paying in exposure" are taking advantage of power dynamics and giving vague false promises of success in order to avoid paying for shit that they really should be paying for.
I've grown a bit disillusioned with contributing to Github.
I've said this on here before, but a few months ago I wrote a simple patch for LMAX Disruptor, which was merged in. I like Disruptor, it's a very neat library, and at first I thought it was super cool to have my code merged.
But after a few minutes, I started thinking: I just donated my time to help a for-profit company make more money. LMAX isn't a charity, they're a trading company, and I donated my time to improve their software. They wouldn't have merged my code in if they didn't think it had some amount of value, and if they think it has value then they should pay me.
I'm not very upset over this particular example since my change was extremely simple and didn't take much time at all to implement (just adding annotations to interfaces), so I didn't donate a lot of labor in the end, but it still made me think that maybe I shouldn't be contributing to every open source project I use.
I understand the feeling. There is a huge asymmetry between individual contributors and huge profitable companies.
But I think a frame shift that might help is that you're not actually donating your time to LMAX (or whoever). You're instead contributing to make software that you've already benefited from become better. Any open source library represents many multiple developer-years that you've benefited from and are using for free. When you contribute back, you're participating in an exchange that started when you first used their library, not making a one-way donation.
> They wouldn't have merged my code in if they didn't think it had some amount of value, and if they think it has value then they should pay me.
This can easily be flipped: you wouldn't have contributed if their software didn't add value to your life first and so you should pay them to use Disruptor.
Neither framing quite captures what's happening. You're not in an exchange with LMAX but maintaining a commons you're already part of. You wouldn't feel taken advantage of when you reshelve a book properly at a public library so why feel bad about this?
Now count how many libraries you use in your day to day paid work that are opensource and you didn't have to pay anything for them. If you want to think selfishly about how awful it is to contribute to that body of work, maybe also purge them all from your codebase and contact companies that sell them?
That was not similar. The Microsoft dev was demanding things and rightfully shamed over it. Everyone giving Google the same shame over reporting an exploitable bug with no expectations is being ridiculous.
The message sounds like a human warning: someone is fed up with feeling taken advantage of. They see the profit being made, and they're not even getting drops of it.
If this isn't addressed, it makes this repo a target for actors that don't care about the welfare of Amazon, Google etc.
It seems quite predictable that someone will see this as a human weakness and try to exploit it, my question is whether we'll get them the support they need before the consequence hits, or whether we'll claim this was a surprise after the fact.
So to me it seems notifications of bugs is good, you want to create visibility of problems.
The problem is the pressure to fix the bugs in x amount of time otherwise they will be made public. Additionally flooding the team with so many bugs that they can never keep up.
Perhaps the solution is not in do or don't submit, but in how the reports are submitted. One option would be a separate channel for these big companies to submit "flood" bug reports generated by AI. If those reports won't be disclosed in x amount of time, that takes the pressure off the maintainers: they can prioritize the most pressing issues and keep a backlog of smaller ones that may require attention in the future (or be picked up by small/new contributors).
I used to be paid for a decade to work on open source software - by my employer.
We always upstreamed fixes. This is the only way.
Filing bugs, etc., also has some value, but if a big company uses a piece of open source software and makes money with it (even indirectly), they can contribute engineering time (or money).
How about a hybrid open source license, where the software is free for anyone to use unless they're a commercial entity with $1B or greater in annual revenue?
Google might even prefer this deal, if it means more maintainer activity and fewer vulnerabilities.
Only accepting bugs with a fix is not a solution. Because who is going to vet the patches? Are you going to accept a Chinese patch for some obscure security issue? This is how real security problems are introduced.
Why not? The three letters are not going to send their backdoored patches under a pseudonym people like you would find suspicious. They would send it (and very likely are doing that already) under the name of "James Smith".
You really should check out how much code in e.g. the Linux kernel is written outside of "the West". It's not the 90s anymore.
This is not the first article I've seen where developers say they're getting overwhelmed by AI-generated bug reports. Maybe this is a new way people can volunteer to help open source.
If anyone is struggling to triage bug reports in a Rust open source project, please contact me and I will see if this is something I can donate some recurring time to.
> “The position of the FFmpeg X account is that somehow disclosing vulnerabilities is a bad thing. Google provides more assistance to open source software projects than almost any other organization, and these debates are more likely to drive away potential sponsors than to attract them.”
This position is likely to drive away maintainers. Generally the maintainers need these projects less than the big companies that use them. I'm not sure what Google's endgame is.
I don't consider a security issue to be a "standard bug." I need to look at it, and [maybe] fix it, regardless of who reported it.
But in my projects, I have gotten requests (sometimes, demands) that I change things like the published API (a general-purpose API), to optimize some niche functionality for one user.
I'll usually politely decline these, and respond with an explanation as to why, along with suggestions for them to add it, after the fact.
It’s a security issue for a stream type almost nobody uses. It’s a little like saying your graphics program in 2025 is exploitable by a malformed PCX file, or your music player has a security bug only when playing an Impulse Tracker module.
Sure, triage it. It shouldn’t be publicly disclosed within a week of the report though, because the fix is still a relatively low priority.
Security is adversarial. It doesn't matter whether the users intentionally use the vulnerable codec. What matters is whether an adversary can make users use it. Given the codec is compiled in by default on Ubuntu, and given that IIUC the bug would already be triggered by ffmpeg's file format probing, it seems pretty likely that the answer to that is yes.
Yes, security is by definition adversarial. Thanks for the most basic lesson.
How are you getting ffmpeg to process a stream or file type different from the one you’re expecting? Most use cases of ffmpeg are against known input and known output types. If you’re just stuffing user-supplied files through your tools, then yes you have a different threat model.
> How are you getting ffmpeg to process a stream or file type different from the one you’re expecting?
... That is how ffmpeg works? With default settings it auto-detects the input codec from the bitstream, and the output codec from the extension. You have to go out of your way to force the input codec and disable the auto-detection, and I don't think most software using ffmpeg as a backend would force the user to manually do it, because users can't be trusted to know those details.
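To make that concrete, here's a minimal sketch (the helper is hypothetical; only the `-f` and `-i` flags are real ffmpeg options) of the difference between letting ffmpeg probe the input and pinning the demuxer:

```python
# Hypothetical helper sketching how an app might shell out to ffmpeg.
# -f is a real ffmpeg option: placed before -i, it forces the input
# format and skips the bitstream probing described above.
def build_ffmpeg_cmd(infile, outfile, force_demuxer=None):
    cmd = ["ffmpeg"]
    if force_demuxer:
        # Pin the demuxer (e.g. "mpegts") so an obscure compiled-in
        # format can't be auto-selected from a crafted bitstream.
        cmd += ["-f", force_demuxer]
    cmd += ["-i", infile, outfile]
    return cmd

probing = build_ffmpeg_cmd("upload.bin", "out.mp4")            # ffmpeg probes the input
pinned = build_ffmpeg_cmd("upload.bin", "out.mp4", "mpegts")   # demuxer forced
```

The point is that the safe variant is the opt-in one: software wrapping ffmpeg almost always ships the probing behavior, because users can't be expected to know container internals.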
In the industry I think folks generally know what they’re feeding into it and what they’re wanting out of it. When there’s a handoff between companies the stream encoding, bitrate, and resolution are generally part of the project spec. Within a company, your teams should know what they’re feeding into a tool and it’s probably not some obscure LucasArts game codec.
If it’s a potential problem for home users, yeah, that’s an issue but it’s not every use of the tool.
If no one uses the stream type, then not fixing the bug won't hurt.
The people who do use the stream type are at risk, and have been at risk all along. They need to stop using the stream type, or get the bug fixed, or triage the bug as not exploitable.
It'd be a silly license or condition, but a license that says employees of companies in the S&P 500 can't file bugs without an $X contribution, and can't expect a response in under Y days without a larger one, would be a funny way to combat it. Companies have no problem making software non-free or AGPL when it becomes inconvenient, so maybe they can put up or shut up.
Where in this bug report was there any expectation for a response? They filed a private bug report and have a policy of making private reports public in 90 days whether or not they get a response. How did the OSS world go from "with enough eyes all bugs are shallow" to "filing a bug report is demanding I respond to you"?
> can't file bugs without an $X contribution, and can't expect a response in under Y days
A license to define that nobody can expect a response? Or file bugs?
None of this has anything to do with the issue. They can just turn off Google’s access to the bug tracker. No license needed.
However Google is free to publish security research they find.
It would be most concerning if projects started including “Nobody is allowed to do security research on this project” licenses. Who would benefit from that?
Given the cost of discovering these issues, and the massive risk of exploitation, it’s likely that Google/Amazon/etc have them fixed in their private forks.
Fixing a private fork takes 1/5-1/10 the time of shepherding a PR to meet the maintainers' expectations. And why spend 5x dev time to contribute fixes to your competitor?
If ffmpeg doesn't want to receive bug reports, that's their right. They'd better not complain when Google goes full disclosure on them. They are essentially asking for it at this point.
I remember Rebel Assault 2... how did an AI bot even discover a specific vulnerability in ASM that surfaces when playing the first 10-20 frames of some video from that?
Not too fond of maintainers getting too uppity about this stuff. I get that it can be frustrating to receive bug report after bug report from people who are unwilling or unable to contribute to the code base, or at the very least to donate to the team.
But the way I see it, a bug report is a bug report, no matter how small or big the bug or the team, it should be addressed.
I don’t know, I’m not exactly a pillar of the FOSS community with weight behind my words.
When you already work 40+ hours a week and big companies suddenly start an AI snowblower that shoots a dozen extra hours of work every week at you without doing anything to balance that (like, for instance, also opening PRs with patches that fix the bugs), the relationship starts feeling like being an unpaid employee of their project.
What's the point of just showering these things with bug reports when the same tool (or a similar one) can also apparently fix the problem too?
The problem with security reports in general is security people are rampant self-promoters. (Linus once called them something worse.)
Imagine you're a humble volunteer OSS developer. If a security researcher finds a bug in your code they're going to make up a cute name for it, start a website with a logo, Google is going to give them a million dollar bounty, they're going to go to Defcon and get a prize and I assume go to some kind of secret security people orgy where everyone is dressed like they're in The Matrix.
Nobody is going to do any of this for you when you fix it.
Except that the only people publicizing this bug were the people running the ffmpeg Twitter account. Without them it would have been one of thousands of vulnerabilities reported with no fanfare, no logos, and no conference talks.
Doesn't really fit with your narrative of security researchers as shameless glory hounds, does it?
How do they know that next week it's not going to be one of those 10 page Project Zero blog posts? (Which like all Google engineer blog posts, usually end up mostly being about how smart the person who wrote the blog post is.)
Note FFmpeg and cURL have already had maintainers quit from burnout from too much attention from security researchers.
> it can be frustrating to receive bug report after bug report from people
As the article states, these are AI-generated bug reports. So it's a trillion-dollar company throwing AI slop over the wall and demanding a 90-day turnaround from unpaid volunteers.
That is completely irrelevant, the gross part is that (if true) they are demanding them to be fixed in a given time. Sounds like the epitome of entitlement to me, to say the least.
No one is demanding anything, the report itself is a 90 day grace period before being publicly published. If the issues are slop then what exactly is your complaint?
It’s a reproducible use-after-free in a codec that ships by default with most desktop and server distributions. It can be leveraged in an exploit chain to compromise a system.
I'm not a Google fan, but if the maintainers are unable to understand that, I welcome a fork.
There is a convergence of very annoying trends happening: more and more reports are garbage, found and written using AI, with an impact that is questionable at best; the way CVEs are published and classified is idiotic; and the platforms funding vulnerability research, like Google, are more and more hostile to projects, leaving very little time to actually work on fixes before publishing.
This is leading to more and more open source developers throwing in the towel.
You could argue that, but I think that a bug is the software failing to do what it was specified, or what it promised to do. If security wasn't promised, it's not a bug.
Which is exactly the case here. This CVE is for a hobby codec written to support digital preservation of some obscure video files from the 90s that are used nowhere else. No security was promised.
They are not published in project bug trackers and are managed completely differently, so no, personally, I don't view CVEs as bug reports. Also, please don't distort what I say and omit part of my comment, thank you.
Some of them are not even bugs in the traditional sense of the word but expected behaviours which can lead to insecure side effects.
It seems like you might misunderstand what CVEs are? They're just identifiers.
This was a bug, which caused an exploitable security vulnerability. The bug was reported to ffmpeg, over their preferred method for being notified about vulnerabilities in the software they maintain. Once ffmpeg fixed the bug, a CVE number was issued for the purpose of tracking (e.g. which versions are vulnerable, which were never vulnerable, which have a fix).
Having a CVE identifier is important because we can't just talk about "the ffmpeg vulnerability" when there have been a dozen this year, each with different attack surfaces. But it really is just an arbitrary number, while the bug is the actual problem.
I'm not misunderstanding anything. CVE involves a third party and it's not just a number. It's a number and an evaluation of severity.
Things which are usually managed inside a project now have a visibility outside of it. You might justify it as you want like the need to have an identifier. It doesn't fundamentally change how that impacts the dynamic.
Also, the discussion is not about a specific bug. It's a general discussion regarding how Google handles disclosure in the general case.
Is it unreasonable to ask that if a massive company funds someone to find a CVE in an open source project, they should also submit a patch? Google is a search company. Seems kind of... evil... to pay your devs to find holes in something with nothing to do with searching, then refuse to pay them to fix the problem they noticed.
No it's not "unreasonable" to ask for patches along with bug fixes, but it is unreasonable to be mad if they don't. They could just not file the bug reports at all, and that is an objectively worse outcome.
Note that most open source contributions by Googlers are, as recommended by policy, done under their own personal accounts. There's a required registry internally mapping from their personal account to their @google.com identity.
The nice thing is that the open source contributions done by a Googler aren't necessarily tied to their Google identity.
No, my stance is that it is reasonable for ffmpeg to ask for patches along with bug fixes and that is it simultaneously reasonable for Google to submit bug reports without those patches. Just like it would be reasonable for Google to ask for a feature in ffmpeg and it's equally reasonable for the ffmpeg maintainers to decline to implement the feature. Reasonableness is not a binary thing.
Are you on the autistic spectrum and/or not a native speaker of English? If we are discussing whether FFMPEG's stance is reasonable, then it follows that we are discussing whether Google's actions are unreasonable.
Google is absolutely being unreasonable here -- they should instruct their engineers to submit a patch when submitting CVEs, and FFMPEG is perfectly valid to engage in a little activism to nudge them along.
>it's not "unreasonable" to ask for patches along with bug fixes, but it is unreasonable to be mad if they don't
So the ask (make a patch for your CVEs) is reasonable. It follows that to fail to do so is unreasonable. Whether the poster agrees Google is unreasonable or not is up for debate, but if they choose to espouse that the request is reasonable and that Google is reasonable, they're putting forth an irrational belief not rooted in their own logic.
But hey, lots of folks on HN are biased towards Google for financial reasons, so I totally get it.
But either their stance is how I said, or if their stance differs they are a hypocrite, there really is no middle ground here.
I would suggest that FFmpeg spin up a commercial arm that gets support contracts with Google, Amazon, etc., but with a tight leash so that it does not undermine the open source project. It would need clear guidance as to what the commercial arm does and does not do.
According to https://xcancel.com/argvee/status/1986194861855478213#m, google is a customer of fflabs.eu (https://fflabs.eu) which is just such a "support contracts" arm. Certainly the fflabs.eu site claims at the bottom that Netflix, Google and Meta are all customers of theirs and the site also claims that their team is comprised of a number of ffmpeg contributors including the "lead maintainer" (https://fflabs.eu/about/).
Additionally, a search of the git commits shows a regular stream of commits from `google.com` addresses. So as near as I can tell, Google does regularly contribute code to the project, they are a customer of the project maintainer's commercial support company (and that fact is on the support company's website) and they submit high quality bug reports for vulnerabilities. What are we mad about again?
In general it is not good when companies get too much power. See how Shopify is eating away at the Ruby infrastructure right now, after RubyCentral panicked when Shopify blackmailed the community by taking away funding. Of course it is their money, but the whole ecosystem suddenly submitting to one huge company really sucks to no end. Hobbyists are at the weakest here.
While I don't think FFmpeg's response is a great one ("fund us or stop submitting bugs"); I think Google is being pretty shitty here. For a company that prides itself in its engineering prowess and contributions to the OSS community (as they like to remind me all the time) to behave this way is just all around shitty.
Submit the bug AND the patch and be done with it; don't make it someone else's problem when it's an OSS library/tool. A for-profit vendor? Absolutely. But this? Hell naw.
If it just were that simple.
The reality is that this is a very slippery slope and you won’t get a support contract just like that with a “tight leash”
FFmpeg should just dual license at this point. If you're wanting shit fixed. You pay for it (based on usage) or GTFO. Should solve all of the current issues around this.
You mean, Google reports a bug, and ffmpeg devs say "GTFO"? Let's assume this is a real bug: is that what you would want the ffmpeg developers to say to Google?
I absolutely understand the issue that a filthy-rich company tries to leech off of real unpaid humans. I don't understand how that issue leads to "GTFO, we won't fix these bugs". That makes no sense to me.
And maybe it's fine to have AI-generated articles that summarize Twitter threads for HN, but this is not a good summarization of the discussion that unfolded in the wake of this complaint. For one, it doesn't mention a reply from Google security, which you would think should be pretty relevant here.
It's of excellent quality. They've made things about as easy for the FFmpeg devs as possible without actually fixing the bug, which might entail legal complications they want to avoid.
> it is unreasonable for a trillion-dollar corporation like Google, which heavily relies on FFmpeg in its products, to shift the workload of fixing vulnerabilities to unpaid volunteers. They believe Google should either provide patches with vulnerability reports or directly support the project’s maintenance.
This is so basic it shouldn't even have to be said.
The first part is technically true but doesn't apply to this situation. "Shift the workload"? This isn't google's bug, and google doesn't need it fixed. It was never their workload, and has not been shifted.
The last part is just wrong. Google does directly support the project's maintenance.
This is dumb. Obscurity doesn’t create security. It’s unfortunate if ffmpeg doesn’t have the money to fix reported bugs but that doesn’t mean they should be ignorant of them. I don’t see any entitlement out of Google either - I expected this article would have a GH issue thread with a whiny YouTube engineer yelling at maintainers.
The first thing you can do is actually read the article. The question is not about the security reports but Google's policy on disclosing the vulnerability after x days. It works for crazy lazy corps. But not for OSS projects.
In practice, it doesn't matter all that much whether the software project containing the vulnerability has the resources to fix it: if a vulnerability is left in the software, undisclosed to the public, the impact to the users is all the same.
I, and I think most security researchers do too, believe that it would be incredibly negligent for someone who has discovered a security vulnerability to allow it to go unfixed indefinitely without even disclosing its existence. Certainly, ffmpeg developers do not owe security to their users, but security researchers consider that they have a duty to disclose them, even if they go unfixed (and I think most people would prefer to know an unfixed vulnerability exists than to get hit by a 0-day attack). There's gotta be a point where you disclose a vulnerability, the deadline can never be indefinite, otherwise you're just very likely allowing 0-day attacks to occur (in fact, I would think that if this whole thing never happened and we instead got headlines in a year saying "GOOGLE SAT ON CRITICAL VULNERABILITY INVOLVED IN MASSIVE HACK" people would consider what Google did to be far worse).
To be clear, I do in fact think it would be very much best if Google were to use a few millionths of a percent of their revenue to fund ffmpeg, or at least make patches for vulnerabilities. But regardless of how much you criticize the lack of patches accompanying vulnerability reports, I would find it much worse if Google were to instead not report or disclose the vulnerability at all, even if they did so at the request of developers saying they lacked resources to fix vulnerabilities.
> I, and I think most security researchers do too, believe that it would be incredibly negligent for someone who has discovered a security vulnerability to allow it to go unfixed indefinitely without even disclosing its existence.
Because security researchers want to move on from one thing to another. And nobody said indefinitely. It's about a disclosure path that works for OSS projects.
It's also not about security through obscurity. You are LITERALLY telling the world to check this vuln in this software. Oooh, too bad the devs didn't fix it. Anybody in the sec biz would be following Google's security research.
Putting someone in a spotlight and claiming it makes no difference is silly.
Agreed that obscurity is not security. However we don't want to make it easy for hackers to get a catalog of vulnerabilities to pick and choose from. I think the issue is public disclosure of vulnerabilities after a deadline. The hobbyists can't just keep up.
Open source projects want people to use their work. Google wants bugs -- especially security ones -- found and fixed fast. Both goals make sense. The tension starts when open source developers expect payment for fixes, and corporations like Google expect fixes for free.
Paying for bug fixes sounds fair, but it risks turning incentives upside down. If security reports start paying the bills, some maintainers might quietly hope for more vulnerabilities to patch. That's a dangerous feedback loop.
On the other hand, Google funding open source directly isn't automatically better. Money always comes with strings. Funding lets Google nudge project priorities, intentionally or not -- and suddenly the "open" ecosystem starts bending toward corporate interests.
There's no neat solution. Software wants to be used. Bugs want to be found and fixed. But good faith and shared responsibility are supposed to be the glue that holds the open source world together.
Maybe the simplest fix right now is cultural, not technical: fewer arguments on Twitter, more collaboration, and more gratitude. If you rely on open source, donate to the maintainers who make your life easier. The ecosystem stays healthy when we feed it, not when we fight over it.
Amusing. I suppose the intended optimal behavior is for Google to fix internally and then open the public PR. Less optimal for us normal users, since the security issue will be publicly visible in the PR until merging, though it won't affect Google (who will carry the fixed code before disclosure).
Unfortunately, and now even more with GenAI, a lot of Open Source boils down to doing free labor so rich capitalists get even richer, while killing off a lot of the small businesses our professional class could have started.
is this a fundamental problem with the open source model, though? if we work within a model of continuous refinement and improvement but also accept the constraint that there's a fundamental limit to the resources someone is willing to give up in exchange for nothing (whether the resource in question is dev effort for no money or money for no ownership stake in the final product) then you see where something infinite is running up against something finite and there's just no way to square that.
Forking puts you in another hell if you're Google. Now you have to pay someone to maintain your fork! Maybe for a project that's well and fully complete that's OK. But something like FFmpeg is gonna get updated all the time, as the specs for video codecs are tweaked or released.
Their choice becomes to:
- maintain a complex fork, constantly integrating from upstream.
- Or pin to some old version and maybe go through a Herculean effort to rebase when something they truly must have merges upstream.
- Or genuinely fork it and employ an expert in this highly specific domain to write what will often end up being parallel features and security patches to mainline FFmpeg.
Or, of course, pay someone doing OSS to fix it in mainline. Which is the beauty of open source: that's genuinely the least painful option, and it also happens to be the one that benefits the community the most.
Any time I have tried to fix a bug in an open source project I was immediately struck down with abusive attitudes about how I didn't do something exactly the way they wanted it that isn't really documented.
If that's what I have to expect, I'd rather not even interact with them at all.
I don't think this is what typically happens. Many of my bug reports were handled.
For instance, I reported to the xorg-bug tracker that one app behaved oddly when I did --version on it. I was batch-reporting all xorg-applications via a ruby script.
Alan Coopersmith, the elderly hero that he is, fixed this not long after my report. (It was a real bug; granted, a super-small one, but still.)
I could give many more examples here. (I don't remember the exact date but I think I reported this within the last 3 years or so. Unfortunately reporting bugs in xorg-apps is ... a bit archaic. I also stopped reporting bugs to KDE because I hate bugzilla. Github issues spoiled me, they are so easy and convenient to use.)
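The batch `--version` check described above can be sketched in a few lines of Ruby. This is a hypothetical reconstruction, not the original script: the tool names and the "suspicious" heuristic are assumptions for illustration.

```ruby
require "open3"

# Hypothetical sketch: run `--version` on a list of CLI tools and
# flag any that fail or print nothing, as candidates for a bug report.
def check_version(cmd)
  out, err, status = Open3.capture3(cmd, "--version")
  { cmd: cmd, ok: status.success?, output: out.empty? ? err : out }
rescue Errno::ENOENT
  { cmd: cmd, ok: false, output: "not found" }
end

# Tool list is an assumption; in practice you'd glob the installed
# xorg-* binaries instead.
tools = %w[ruby no-such-tool]
suspicious = tools.map { |c| check_version(c) }
                  .reject { |r| r[:ok] && !r[:output].strip.empty? }
suspicious.each { |r| puts "#{r[:cmd]}: #{r[:output]}" }
```

Anything this flags still needs a human look before filing, which is exactly the kind of natural rate limit discussed further down the thread.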
I feel you, and that's a different issue than the one in this thread, which is that, in general, if maintainers ignore bug reports, their projects will get forked and the issue fixed anyway, just not in the main project.
Is there really slop here, though? It sounds like the specific case identified was a real use-after-free in an obscure file format that is nonetheless enabled by default.
If it was slop they could complain that it was wasting their time on false or unimportant reports, instead they seem to be complaining that the program reported a legitimate security issue?
If a maintainer complains about slop bug reports, instead of assuming the worst of the maintainer, it'll often be more productive to put yourself in their shoes and consider the context. An individual case may simply be the nth case in a larger picture (say, the straw that broke the camel's back). Whenever this nth case is observed, if you only consider that single case, a response also informed by detailed personal consideration of the preceding (n-1) cases may appear grossly and irrationally disproportionate, especially when the observer isn't personally that involved.
For a human, generating bug reports requires a little labor with a human in the loop, which imposes a natural rate limit on how many reports are submitted, which also imposes a natural triaging of whether it's personally worth it to report the bug. It could be worth it if you're prosocially interested in the project or if your operations depend on it enough that you are willing to pay a little to help it along.
For a large company which is using LLMs to automatically generate bug reports, the cost is much lower (indeed it may be longer-term profitable from a standpoint like marketing, finding product niches, refining models, etc.) This can be asymmetric with the maintainer's perspective, where the quality and volume of reports matter in affecting maintainer throughput and quality of life.
People volunteer to make billionaires even more profit - crazy world. Who even has the time and resources to volunteer so much??? I don't get all this at all.
The mixture of motivations includes a desire to contribute something to humanity, a desire to make a mark or achieve recognition, or a desire to gain experience that might enhance your ability to get hired.
It can be a hobby like model trains, and it can be a social context like joining a club or going to church.
But it's safe to say that nobody is volunteering "to make billionaires even more profit."
Google leveraging AI to spam ffmpeg devs with bugs that range from real to obscure to wrongly reported may be annoying. But even then I still don't think Google is to be held accountable for reporting bugs nor is it required to fix bugs. Note: I do think Google should help pay for costs and what not. If they were a good company they would not only report bugs but also have had developers fix the bugs, but they are selfish and greedy, everyone knows that. Even then they are not responsible for bugs in ffmpeg. And IF the bug report is valid, then I also see no problem.
The article also confuses things. For instance:
"Many in the FFmpeg community argue, with reason, that it is unreasonable for a trillion-dollar corporation like Google, which heavily relies on FFmpeg in its products, to shift the workload of fixing vulnerabilities to unpaid volunteers"
How could Google do that? It is not Google's decision. That is up to the volunteers. If they refuse to fix bugs reported by Google, then this is fine. But it is THEIR decision, not Google's.
"With this policy change, GPZ announces that it has reported an issue on a specific project within a week of discovery, and the security standard 90-day disclosure clock then starts, regardless of whether a patch is available or not."
Well, many opinions here. I think ALL bugs and exploits should be made fully transparent and public INSTANTLY AND WITHOUT ANY DELAY. I understand the other side of the coin too, bla bla we need time to fix it bla bla. I totally understand it. Even then I believe the only truthful, honest way to deal with this is 100% transparency at all times. This includes when there are negative side effects too, such as open holes. I believe in transparency, not in secrecy. There can not be any compromise here IMO.
"Many volunteer open source program maintainers and developers feel this is massively unfair to put them under such pressure when Google has billions to address the problem."
So what? Google reports issues. You can either fix that or not. Either way is a strategy. It is not Google's fault when software can be exploited, unless they wrote the code. Conversely, the bug or flaw would still exist in the code EVEN IF GOOGLE WOULD NOT REPORT IT. So I don't understand this part. I totally understand the issue of Google being greedy, but this here is not solely about Google's greed. This is also how a project deals with (real) issues (if they are not real then you can ask Google why they send out so much spam).
That Google abuses AI to spam real human beings is evil and shabby. I am all for ending Google on this planet - it does so much evil. But either it is a bug, or it is not. I don't understand the position of the ffmpeg devs that "because it is Google, we want zero bug reports". That just makes no sense.
"The fundamental problem remains that the FFmpeg team lacks the financial and developer resources to address a flood of AI-created CVEs."
Well, that is more an issue of how to handle Google spamming people. Sue them in court so that they stop spamming. But if it is a legit bug report, why is that a problem? Are ffmpeg devs concerned about the code quality being bad? If it is about money, then even though I think all of Google's assets should be seized and the CEOs who have done so much evil in the last 20 years put on trial, it really is not their responsibility to fix anything written by others. That's just not how software engineering works; it makes no sense. It seems people confuse ethics with responsibilities here. The GPL doesn't mandate that code fixing be done; it mandates that if you publish a derivative etc. of the code, that code has to be published under the same licence and made available to people. That's about it, give or take. It doesn't say corporations or anyone else HAS to fix something.
"On the other hand, security experts are certainly right in thinking that FFmpeg is a critical part of the Internet’s technology framework and that security issues do need to be made public responsibly and addressed."
I am all for that too, but even stricter - all security issues are to be made public instantly, without delay, fully and completely. I went to open source because I got tired of Microsoft. Why would I want to go back to evil? Not being transparent here is no valid excuse IMO.
"The reality is, however, that without more support from the trillion-dollar companies that profit from open source, many woefully underfunded, volunteer-driven critical open-source projects will no longer be maintained at all."
Wait - so it is Google's fault if projects die due to lack of funding? How does that explanation work?
You can choose another licence model. Many choose BSD/MIT. Others choose GPL. And so forth.
"For example, Wellnhofer has said he will no longer maintain libxml2 in December. Libxml2 is a critical library in all web browsers, web servers, LibreOffice and numerous Linux packages. We don’t need any more arguments; we need real support for critical open source programs before we have another major security breach."
Yes, that is a problem - the funding part. I completely agree. I still don't understand the "logic" of trying to force corporations to do so when they are not obliged to. If you don't want corporations to use your code, specify that in the licence. The GPL does not do that. I am confused about this "debate" because it makes no real sense to me from an objective point of view. The only part that I can understand pisses off real humans is when Google uses AI as a pester-spam attack orgy. Hopefully a court agrees and splits up any company with more than 100 developers into smaller entities the moment they use AI to spam real human beings.
> If they were a good company they would not only report bugs but also have had developers fix the bugs, but they are selfish and greedy, everyone knows that.
Google has a pretty regular stream of commits in the ffmpeg git history, and is proudly declared to be a customer of `fflabs.eu`, which appears to be the ffmpeg lead maintainer's private consulting company. Two of the maintainers on ffmpeg's "hire a dev" page[1] are also listed as employees of fflabs[2]. Honestly, Google seems like they're being a model for how corporations can give back to OSS better and benefit from that, but instead everyone is up in arms because they don't give patches in all of their bug reports.
I saw that tweet and thought it looked crazy. To me, it sounds like a death threat. Especially given that it's posted on X.com, after a really heated argument, with no clarification afterwards.
If you didn't actually post that tweet, that's great! I'm happy to be corrected
If you hear a rumor that sounds too crazy to be true on social media, maybe don't repeat it as fact. Imagine how you would feel reading something like that.
It's a special kind of irony to post AI slop complaining about someone's AI slop that isn't actually AI slop, just devs whining about being expected to maintain their code instead of being able to extort the messengers into doing the work for them.
So many of these threads recently. The abstract pattern is: people are surprised that many other people who one might reasonably expect should give you money or resources, or otherwise behave in a cooperative reasonable way, are psychopaths and don't do that.
“How dare ffmpeg be so arrogant!
Don’t they know who we are?
Fork ffmpeg and kill the project! I grant a budget of 30 million to crush this dissent! Open source projects must know who’s boss! I’ll stomp em like a union!”
…. overheard at a meeting of CEO and CTO at generic evil mega tech corp recently.
They probably want to drown you in CVEs to force deprecation on the world and everybody into their system like they do with everything else they touch.
2025 is the year where I finally have to ask myself: "What is Google actually doing for me?" and the answer is "Nothing". I am no longer able to search with Lynx. I had to move to startpage.com for that. I never used, and never will, gmail or any other of their services, except for YouTube (which they bought). YouTube search is finally totally unusable, pushing short form content onto me which I really really don't care about. I have to click the "Show me less shorts" button on my iPhone every few weeks, because the stuff keeps coming back. I don't care about their AI summaries, I was always happy (when I was still able to use Google in my daily workflow) with the excerpt they were giving me since what, 2002? I don't have an Android phone, because their Accessibility was lagging behind for almost two decades. They dropped the "Don't be evil" motto. ... Where do I even stop? Google used to be great. Now, all they do for me is to maintain storage for my TV-replacement, YouTube. I say maintain storage, because that is all they do. I can't even meaningfully search that content. I have given up on them. It is over.
They obviously need to be reminded that the only reason Google has to care about FLOSS projects is when they can effectively use them to create an advertising panopticon under the company's complete control.
Just mark CVEs as bugs and get to them when you can. In this case, if Google doesn't like it, then so be it. It'll get fixed eventually. Don't like how long it takes? Pay someone to contribute back. Until then, hurry up and wait.
Please bro, please, fix our bugs bro, just this one bug bro, last one I swear, you and I will make big money, you are the best bro, I love you bro.
-- big tech companies
FFmpeg should stop fixing security bugs reported by Google, MS, Amazon, Meta etc. and instead wait for security patches from them. If FFmpeg maintainers leave it exposed, those companies will rush to fix it, because they'd be screwed otherwise. Every single one of them is dependent on FFmpeg exactly as shown in https://xkcd.com/2347/
Because they are making more money in profit than some mid-sized American cities' economies do in a year while contributing nothing back. If they don't want massive security vulnerabilities in their services using FFmpeg, maybe they need to pony up about .1 seconds' worth of their quarterly earnings to the project either in cash or in manpower.
It's not FFmpeg's problem if someone uses a vulnerability to pwn YouTube, it's Google's problem.
Also, in the article, they mention that Google's using AI to look for bugs and report them, and one of them that it found was a problem in the code that handles the rendering of a few frames of a game from 1995. That sort of slop isn't helping anyone. It's throwing the signal-to-noise ratio of the bug filings way the hell off.
If they contribute nothing back, what are all the `google.com` email addresses in the git history doing? If they contribute nothing back, why are they listed as a customer of `fflabs.eu` which is apparently a private consulting company for ffmpeg run by some of the ffmpeg lead maintainers?
What do we think the lesson corporations are going to take from this is?
1) "You should file patches with your bug reports"
2) "Even when you submit patches and you hire the developers of OSS projects as consultants, you will still get dragged through the mud if you don't contribute a patch with every bug report you make, so you might as well not contribute anything back at all"
The text and context of the complaint can be used to steelman it, adopting the principle of charity.
From that perspective, the most likely problem is not that bugs are being reported, nor even that patches are not being included with bug reports. The problem is that a shift from human-initiated bug reports to large-scale LLM generation of bug reports by large corporate entities generates a lot more work and changes the value proposition of bug reports for maintainers.
Even if you use LLMs to generate bug reports, you should have a human vet and repro them as real and significant and ensure they are written up for humans accurately and concisely, including all information that would be pertinent to a human. A human can make fairly educated decisions on how to combine and prioritize bug reports, including some degree of triage based on the overall volume of submissions relative to their value. A human can be "trained" to conform to whatever the internal policies or requirements are for reports.
Go ahead and pay someone to do it. If you don't want to pay, then why are you dumping that work on others?
Even after this, managing the new backlog entries and indeed dealing with a significantly larger archive of old bug reports going forward is a significant drag on human labor - bug reports themselves entail labor. Again, the old value proposition was that this was outweighed by the value of the highest-value human-made reports and intangibles of human involvement.
Bug reports are, either implicitly or explicitly, requests to do work. Patches may be part of a solution, but are not necessary. A large corporate entity which is operationally dependent on an open source project and uses automation to file unusually large volumes of bug reports is not filing them to be ignored. It isn't unreasonable to ask them to pay for that work which they are, one way or another, asking to have done.
> Even if you use LLMs to generate bug reports, you should have a human vet and repro them as real and significant and ensure they are written up for humans accurately and concisely, including all information that would be pertinent to a human.
Look at the report that's the center of this controversy. It's detailed, has a clear explanation of the issue at hand, has references and links to the relevant code locations where the submitter believes the issue is, and has a minimal reproduction to validate both the issue and the fix. We can assume the issue is indeed valid and significant, as ffmpeg patched it before the 90-day disclosure window. There is certainly nothing about it that screams low-effort machine-generated report without human review, and at least one commenter in this discussion claims to have inside knowledge that all these reports are vetted and written by humans before submission to the projects.
I won't pretend that it's a perfect bug report, but I will say if every bug report I got for the rest of my career was of this caliber, I'd be quite happy with that.
> It isn't unreasonable to ask them to pay for that work which they are, one way or another, asking to have done.
Google quite literally hires some of the ffmpeg maintainers as consultants as attested to by those same maintainer's own website (fflabs.eu). They are very plainly putting cold hard cash directly into the funds of the maintainers for the express purpose of them maintaining and developing ffmpeg. And that's on top of the code their own employees submit with some regularity. As near as I can tell, Google does everything people complaining about this are saying they don't do, and it's still not enough. One has to wonder then what would be enough?
Google might be aiming to replace ffmpeg as the world's best media processor. Remember how Jia Tan (under different names) flooded xz with work before stepping up as a maintainer.
Google, through YouTube and YouTube TV, already runs one of the most significant video processing lines of business in the world. If they had any interest in supplanting FFmpeg with their own software stack, they wouldn't need to muck around with CVEs to do so.
Open source is not only about being able to read the code: the open source definition includes "Free Redistribution" (anyone who has the software can give away copies, and get paid if they want) and "No Discrimination Against Fields of Endeavor", among other requirements.
These two requirements combined make it impossible to distribute open source software with the provision that it is only free for individuals.
> Many in the FFmpeg community argue, with reason, that it is unreasonable for a trillion-dollar corporation like Google, which heavily relies on FFmpeg in its products, to shift the workload of fixing vulnerabilities to unpaid volunteers.
That's capitalism, they need to quit their whining or move to North Korea. /s The whole point is to maximize value to the shareholders, and the more work they can shove onto unpaid volunteers, the more money they can shove into stock buybacks or dividends.
The system is broken. IMHO, there outta be a law mandating reasonable payments from multi-billion dollar companies to open source software maintainers.
From TFA this was telling:
Thus, as Mark Atwood, an open source policy expert, pointed out on Twitter, he had to keep telling Amazon to not do things that would mess up FFmpeg because, he had to keep explaining to his bosses that “They are not a vendor, there is no NDA, we have no leverage, your VP has refused to help fund them, and they could kill three major product lines tomorrow with an email. So, stop, and listen to me … ”
I agree with the headline here. If Google can pay someone to find bugs, they can pay someone to fix them. How many times have managers said, "Don't come to me with problems, come with solutions"?
I've been a proponent of upstreaming fixes for open source software.
Why?
- It makes continued downstream consumption easier, you don't have to rely on fragile secret patches.
- It gives back to projects that helped you to begin with, it's a simple form of paying it forward.
- It all around seems like the "ethical" and "correct" thing to do.
Unfortunately, in my experience, there's often a lot of barriers within companies to upstream. Reasons can be everything from compliance, processes, you name it... It's unfortunate.
I have a very distinct recollection of talks about hardware aspirations and upstreaming software fixes at a large company. The cultural response was jarring.
As yet, Valve is the only company I know of doing this, and it's paying off in dividends both for Linux and for Valve. In just 5ish years of Valve investing people and money into Linux - specifically Mesa and WINE - Linux has gone from a platform with shaky Windows compatibility to "I can throw a Windows program or game at it and it usually works". Imagine how much further the OSS ecosystem would be if Open Source hadn't existed, only FOSS, and companies were legally obligated to either publish source code or otherwise invest in the ecosystem.
WINE, CodeWeavers, Mesa, Red Hat, and plenty of others have been pumping money into the Linux graphics subsystems for a very long time. It's cool that Valve was able to use its considerable wealth to build a business off of it. But they came in at a pretty opportune time.
Windows support had gotten a boost from .NET going open source as well as other stuff MS began to relax about. It also helped that OpenGL was put to rest and there was a new graphics API that could reasonably emulate DirectX. I don't know much about the backstory of Mesa, but it's pretty cool tech that has been developing for a long time.
Credit to WINE and CrossOver for years and years of work as well.
Valve is so successful because it is a private company, and the CEO is the CTO and he is essentially the corporate equivalent of a religious monk. How else can you get 20+ years to slowly build a software business?
As a side note YC and tech startups themselves have become reality TV. Your goal should be Valve! You should be Gabe Newell! You don’t need to be famous! Just build something valuable and be patient
Ironically, Gabe is more famous than the rest of whoever you're talking about, not because he seeks fame but just because he generally does right by his customers and makes himself accessible. Telling gamers to email him with questions, concerns, comments, anything, and then actually responding. Even though he's apparently spending most of his time hanging out on yachts, people love him because he makes an effort to be tuned in to what his customers want. If you do that, you'll be famous in a better way than what you can get from reality TV.
Steam is the most dominant game tool on the planet and landed when there was not yet a market for it. Very few other projects will get to the level of success it has in any sector, anywhere.
GabeN was also a MS developer back in the day and likely would have been well off regardless, but he didn't need to play the YC A-B-let's-shoehorn-AI bullshit games that are 100% de rigueur for all startups in 2025.
From what I understand, Gabe/Valve almost went bust during Half Life's development. His gamble paid off when that turned into a runaway success, but he still could have lost it when he bet again on HL2 and Steam; at the time it was extremely controversial to make those a package deal. If Half Life 2 had been not quite as good as it turned out to be, it could have turned out to be a studio with a one hit wonder that burned their goodwill with some sketchy DRM sort of scheme on their second game.
> How else can you get 20+ years to slowly build a software business?
It used to be normal to build a business slowly over 20 years. Now everyone grabs for the venture capital, grows so fast they almost burst, and the venture capital inevitably ends in enshittification as companies are forced by shareholders to go against their business model and shit over their customers in order to generate exponential profit margins.
WINE was a thing for years and generally worked okay for a lot of things.
I was playing Fallout 3 on WINE well before Valve got involved with minimal tweaks or DIY effort.
Proton with Steam works flawlessly for most things including AAA games like RDR2 and it's great, but don't forget that WINE was out there making it work for a while
> WINE was a thing for years and generally worked okay for a lot of things.
Yes, but Valve's involvement handled "the last 10% takes the 90% of the time" part of WINE, and that's a great impact, IMHO.
Trivia: I remember the WINE guys laughing at the WMF cursor exploit, then finding the exploit worked on WINE too and fixing it in a panic, and then bragging about bug-for-bug compatibility with Windows. It was hilarious.
Also, WINE allowed Linux systems to be carriers for Windows USB flash drive virii for many years without being affected by them.
But this is a perfect example of one of those "90/10"-esque ideas.
Even if Wine was 90% there technologically, the most important part is really that last 10%.
I'm glad you threw in "I know of", because that part is true.
Feel free to read lore.kernel.org, and sort out where the people contributing many patches actually work.
Can't you just give the information you are hinting at? Other people than OP read this. You're basically telling me to go read thousands of messages on a mailing list just to solve your rhetorical question. (Answer: Intel, Red Hat, Meta, Google, SUSE, Arm and Oracle. There are much more efficient ways to find this.) Yes, they are the main kernel contributors and have been for many years. I'm still not sure I understand the comment.
I think GP answered as they did because there are so many examples it's hard to know where to start.
It's not entirely unlike if someone said "the only person I know writing books successfully is Brandon Sanderson." I do think "you ought to go check out your local book store" would be a valid response.
https://lwn.net/Articles/1038358/
No, "just" having to debunk BS from a BSer who lazily threw out misinformation is not the way to go. It's the BSer that needs to do more work.
I'd say as a counterpoint that just because someone works at, say, Meta or Oracle, and also contributes to OSS projects, that doesn't equate to the company they work at funding upstream projects (at least not by itself).
I don't even have to link the xkcd comic because everyone already knows which one goes here.
Everyone I know who contributes to Linux upstream is paid to do it. It's not really worth the hassle to bother trying if you aren't getting paid. It's also very easy to find companies that will pay you to work on Linux and upstream.
People don't use their company email addresses for private work.
At GOOG you’re required to, by policy.
Linus does...
Well, if they use their work email, doesn't that mean their kernel work is endorsed by their employer?
They totally broke CSGO Legacy's code to push its sequel CS2 and won't accept fixes for it because it's 'not being supported'.
To be clear, both of those are closed source, proprietary games owned by Valve. It makes sense for them to want to consolidate their player base in one game.
> Valve is the only company I know of [upstreaming fixes for open source software]
Sorry, that's ridiculous. Basically every major free software dependency of every major platform or application is maintained by people on the payroll of one or another tech giant (edit: or an entity like LF or Linaro funded by the giants, or in a smaller handful of cases a foundation like the FSF with reasonably deep industry funding). Some are better than others, sure. Most should probably be doing more. FFmpeg in particular is a project that hasn't had a lot of love from platform vendors (most of whom really don't care about software codecs or legacy formats anymore), and that's surely a sore point.
But to pretend that SteamOS is the only project working with upstreams is just laughable.
From my time working at a Fortune 100 company, if I ever mentioned pushing even small patches to libraries we effing used, I'd just be met with "try to focus on your tickets". Their OSS library and release policies were also super byzantine, seemingly needing review of everything you'd release, but the few times I tried to do it the official way, I just never heard anything back from the black-hole mailing list you were supposed to contact.
Yes, I've also worked on OpenStack components at a university, and there I saw Red Hat and IBM employees pushing up loads of changes. I don't know if I've ever seen a Walmart, UnitedHealth, Chase Bank, or Exxon Mobil (to pick some of the largest companies) email address push changes.
I don't know about ExxonMobil but Walmart, UnitedHealth Group, and JPMorganChase employees do actively contribute to open source projects. Maybe just not the ones you used. They have also published some of their own.
https://github.com/walmartlabs
https://github.com/Optum
https://github.com/jpmorganchase
To steelman this: I've never worked at any of the companies you listed, but most likely Red Hat and IBM employees (is there still a difference?) are being paid specifically to work on OpenStack, as they get money from support contracts. When Walmart or Chase use OpenStack, there is a rather small team implementing OpenStack to be used as a platform. They are then paying IBM/Red Hat for that support. There probably isn't the expertise on the OpenStack team at Walmart to be adding patches. Some companies spend a different amount of money on in-house technology than others, and then open source it.
Those aren’t tech giants. They're just shit companies. I agree they greatly outnumber Big Tech, in employees if not talent.
Check again. The Optum unit of UnitedHealth Group has huge revenue from software and technical services. If just that part of the business was spun out it would be one of the top 20 US tech companies.
too bad UnitedHealth Group is capital-E Evil and is literally running the "death panels" that insane right-wing propaganda tried to scare us about
What a lot of people don't realize is that it's mostly employer HR departments running the "death panels". UHG and its competitors would be happy to sell insurance policies that cover absolutely everything with no questions asked: this would be easier for them to administer without the hassles of utilization management and claim edits. But customers — mainly large employers — demand that insurers (or third-party administrators) impose more restrictive coverage rules in order to hold down medical costs.
Ultimately there will always be some healthcare rationing. This happens in every country. For example, the UK NHS has death panels which decide that certain treatments won't be covered at all because they're not cost effective. Resources are limited and demand is effectively infinite. So the only real question is how we do the rationing.
> UHG and its competitors would be happy to sell insurance policies that cover absolutely everything with no questions asked...But customers — mainly large employers — demand that insurers (or third-party administrators) impose more restrictive coverage rules in order to hold down medical costs.
UHG has been caught denying claims for things that employers already paid them to cover for their employees. You can't blame HR departments for that. You also can't blame HR for UHG upcoding/overbilling which eats into the limited resources of hospitals and the limited resource of taxpayer money ultimately resulting in fewer people able to get the healthcare they need just so that UHG can line their own pockets.
While HR departments do have their own issues, they're nowhere near the level of pure evil that UHG is.
Walmart is a tech giant.
FWIW, when working at a major Silicon Valley tech company in the mid 2010s, my team made significant contributions to OSS projects including OpenStack and the Linux kernel as a core part of our work for Walmart.
The work to upstream our changes was included in the Statements of Work which Walmart signed off on, and our time spent on those efforts was billed to them.
The stats for those projects will have recorded my former employer as the direct source of those contributions - but they wouldn't have existed had it not been for Walmart.
Sure, but the parent’s comment hits on something perhaps. All the tech giants contribute more haphazardly and for their own internal uses.
Valve does seem to be in a somewhat rare position of making a proper Linux distro work well with games. Google's Chromebooks don't contribute to the Linux ecosystem in the same holistic fashion, it seems.
> Unfortunately, in my experience, there's often a lot of barriers within companies to upstream. Reasons can be everything from compliance, processes, you name it... It's unfortunate.
I’ve been at several companies where upstreaming was encouraged for everything. The fewer internal forks we could maintain, the better.
What surprised me was how many obstacles we’d run into in some of the upstream projects. The amount of time we lost to trying to appease a few maintainers who were never happy with code unless they wrote it themselves was mind boggling.
For some projects you can basically forget about upstreaming anything other than an obvious and urgent bug fix because the barriers can be so high.
While there are sometimes maintainer prima-donna egos to contend with, there's also this:
Any patch sent in also needs to be maintained into the future, and most of the time it's the maintainers that need to do that, not the people contributing the patch. Therefore any feature-patches (as opposed to simple bugfixes) are quite often refused, even if they add useful functionality, because the maintainers conclude they will not be able to maintain the functionality into the future (because no one on the maintaining team has experience in a certain field, for example).
The quality bar for a 'drive by patch' which is contributed without the promise of future support is ridiculously high and it has to be. Other peoples' code is always harder to maintain than your own so it has to make up for that in quality.
Not any patch. Sometimes there are patches that are not explicitly fixing defects, but for example they surface a boolean setting that some upstream library started to expose. That setting is exactly like a dozen other settings already there. It's made using the same coding style and has all requisite things other settings have.
Will you be still making a fuss over it?
Maybe, it depends!
Maybe the developer intends to some day change the internal implementation, such that that particular boolean flag wouldn't make sense any more. Or they're considering taking out the option entirely, and thus simplifying the codebase by making it so it only works one way.
Maybe the developer just doesn't care about your use case. If I have a project that works fine for what I do with it, why should I also care about some other use case you have for my work? I'm not your employee. Your product doesn't put a roof over my head.
I don't want a job where I do free work, for a bunch of companies who all make money off my work. That's a bad deal. Its a bad deal even if my code gets better as a result. I have 150 projects on github. I don't want to be punished if any of those projects become popular.
We can't go around punishing projects like ffmpeg or ruby on rails for the crime of being useful.
> Maybe the developer just doesn't care about your use case. If I have a project that works fine for what I do with it, why should I also care about some other use case you have for my work?
Then say you don't expect contributions at all. That's fair game; I'm ok with it. I will then exercise my rights granted by your license in another way (forking and making my own fix, most likely). My gripe is with projects that write prominently "PRs welcome", and then put up enough red tape to signal that nah, not really.
I don't know.
The pattern I have seen is that if you want to contribute a fix into a project, you are expected to "engage with the community", wear their badge, invest into the whole thing. I don't want to be in your community, I want to fix a bug in a thing I'm using and go on with my life.
Given the usual dynamics of online communities which are getting somehow increasingly more prone to dramas, toxicity, tribalism, and polarization, I just as increasingly want to have no part in them most of the time.
I think many projects would be better for having a lane for drive-by contributors who could work on fixing bugs that prevent their day-to-day from working without expectations of becoming full-time engaged. The project could set an expectation that "we will rewrite your patch as we see fit so we could integrate and maintain it, if we need/want to". I wouldn't care as long as the problem is taken care of in some way.
In my experience simple bugfixes are nearly always accepted without fuss (in active projects, that is. Some project in maintenance mode where the last commit was 3 months ago is a different story, because then probably just no-one can be arsed to look at the patch).
Exposing a simple setting like you describe can sometimes go in without a fuss, or it can stall; that depends on a lot of factors. Like the other reply states: it could go against future plans. Or it could be difficult for the maintainer to see the ramifications of a simple-looking change. It sucks that it is that way (I have sent in a few patches for obscure CUPS bugs which have stayed in limbo, so I know the feeling ;-) ) but it is hardly surprising. From a project's point of view, drive-by patches very often cost more than they add, so to get something included you often need to do a very thorough writeup as to why something is a good idea.
> I just as increasingly want to have no part in them most of the time.
If all the people you meet are assholes... ;-P Not to say you are an asshole, or at least no more than most people, but I have been in this situation myself more than once, and it really pays to stay (overly) polite and not let your annoyance about being brushed off slip through the mask. The text-only nature of these kinds of communications makes them very sensitive to misinterpretations and annoyances.
It would be nice if all you'd need for a patch to be included somewhere was for it to be useful. But alas, there's a certain amount of social engineering needed as well. And imho this has always been the case. If you feel it gets increasingly hostile, that's probably your own developer burnout speaking (boy do I know that one :-P )
Being allowed to contribute to open source is a privilege, not a right.
You could also just pay for it.
“Pay my way or take the highway” is as close to the closed-source ethos as you can possibly get. Collaboration is not feasible if the barrier of entry is too high and those involved make no effort to foster a collaborative environment.
Thanks, I prefer that job where I am paid for writing code, not having to pay to write code.
> The amount of time we lost to trying to appease a few maintainers who were never happy with code unless they wrote it themselves was mind boggling.
That brings us full circle to the topic because one important thing that gets people motivated into accepting other people's changes to their code is being paid.
If you work in FOSS side projects as well as a proprietary day job, you know it: you accept changes at work that you wouldn't in those side projects.
In the first place, you write the code in ways you wouldn't, due to conventions you disagree with, in some crap language you wouldn't use voluntarily, and so it goes.
People working on their own FOSS project want everything their way, because that's one of the benefits of working on your own FOSS project.
I've literally had my employer's attorneys tell me I can't upstream patches because it would put my employer's name on the project, and they don't want the liability.
No, it didn't help giving them copies of licenses that have the usual liability clauses.
It seems a lot of corporate lawyers fundamentally misunderstand open source.
Corporate counsel will usually say no to anything unusual because there's no personal upside for them to say yes. If you escalate over their heads with a clear business case then you can often get a senior executive to overrule the attorneys and maybe even change the company policy going forward. But this is a huge amount of extra unpaid work, and potentially politically risky if you don't have a solid management chain.
I don't know if it would work, but sometimes I consider a "moochers" rule wrt opensource code.
Like, here's the deal: The work is proper, legit opensource. You can use it for free, with no obligations.
But if your company makes a profit from it, you're expected to either donate money to the project or contribute code back in kind. (Eg security patches, bug fixes, or contribute your own opensource projects to the ecosystem, etc).
If you don't, all the issues and PRs you raise get tagged with a special "moocher" status. They're automatically - by default - ignored or put in a low-priority bin. If your employees attend any events, or join a community Discord or anything like that, you get a "moocher" badge, so everyone can see that you're a parasite or you work for parasites. That's ok; opensource licenses explicitly allow parasites. I'm sure you're a nice person. But we don't really welcome parasites in our social spaces, or allow parasites to take up extra time from the developers.
I've spent the last 32 years pushing every employer I've had to contribute back to open source. Because of the sector I work in, more often than not I'm constrained by incredibly tight NDAs.
I can usually stop short of providing code and file a bug that explains the replication case and how to fix it. I've taken patches and upstreamed them pseudonymously on my own time when the employer believed the GPL meant they couldn't own the modifications.
If after all that you still want to label me a moocher at cons, that's your choice.
You can wear your secret cape with pride, don't worry about the moocher badge.
It goes even further sometimes, I've seen someone in the Go community slack announce they are going to dial back their activity because of Very Serious Clauses in their Apple contract.
That seems to imply that Apple employees are prohibited from being good internet citizens and e.g. helping people out with any kind of software issue. This presumably includes contributing to open source, although I'm sure they can get approval for that. But the fact they have to get approval for it is already a chilling effect.
Apple? Not interested in being a good internet citizen? Say it ain't so!
Why would they invest resources - scarce, expensive time of attorneys - in researching and solving this problem? The attorneys' job is to help the company profit, to maximize ROI for legal work. Where is the ROI here? And remember, just positive ROI is unacceptable; they want maximum ROI per hour worked. When the CEO asks them how this project maximized ROI, what do they say?
I believe in FOSS and can make an argument that lots of people on HN will accept, but many outside this context will not understand it or care.
If you fixed something in an open source library you use, and you don't push that upstream, you are bound to re-apply that patch with every library update you do. And today's compliance rules require you to essentially keep all libraries up to date all the time, or your CVE scanners will light up. So fixing this upstream in the original project has a measurable impact on your "time spent on compliance and updates KPI".
This touches on what I ended up telling them: maintaining a local patchset is expensive and fragile. Running customized versions of things is a self-inflicted compliance problem.
I still had to upstream anonymously, though.
That is a real benefit, I agree.
Sounds like your employer's attorneys need to be brought to heel by management. Like most things, this is a problem of management not understanding that details matter.
I upstreamed a 1-line fix, plus tests, at my previous company. I had to go through a multi-month process of red tape and legal reviews to make it happen. That was a discouraging experience to say the least.
My favorite is when while you were working through all that, the upstream decided they need a CLA. And then you have to go through another round of checking to see if your company thinks it's ok for you to agree to sign that for a 1 line change.
Certainly easier to give a good bug report and let upstream write the change, if they will.
I found a tiny bug in a library. A single, trivial, “the docs say this utility function does X, but it actually does Y”. I’m not even allowed to file a bug report. It took me some time to figure out how to even ask for permission, and they referred it to some committee where it’s in limbo.
In this scenario does your employer have strong controls around whether you can write hobby code on your own time?
One of my past employers in the UK added a policy that all software the employee writes during the employment (e.g. on weekends, on personal hardware) is owned by the company.
Several software engineers left, several didn't sign it.
Yes, the company was very toxic apart from that. Yeah, I should name and shame but I won't be doxxing myself.
Many years ago an employer tried to do that and everyone... just refused to sign the new contracts. The whole thing sat in standoff limbo for months until the dotcom crash happened and the issue became moot when we were all made redundant.
Generally yes. Or yes, you could just do it yourself in your free time.
This is what I've done in those rare cases I've had to fix a bug in a tool or a library I've used professionally. I've also made sure to do that using online identities with no connection to my employer so that any small positive publicity for the contribution lands on my own CV instead of the bureaucratic company getting the bragging rights.
Even at places that are permissive about hobby code, a company ought to want to put its name on open source contributions. These build awareness in the programming community of the company and can possibly serve as a channel for recruitment leads. But the (usually false) perception of legal risk and misguided ideas about what constitutes productivity usually sink any attempts.
It is amazing how companies want this "marketing" but don't want to put the actual effort to make it possible.
A tech company I worked at once had a "sponsorship fund" to "sponsor causes" that employees wanted; it was actually good money, though a drop in the bucket for the company. A lot of employees voted for sponsoring Vue.js, which is what we used. Eventually, after months of silence, legal/finance decided it was too much work.
But hey, it wasn't an exception. The local animal shelter was the second most voted, and legal/finance also couldn't figure out how to donate.
In the end the money went to nowhere.
The only "developer marketing" they were doing was sending me in my free time to do panels with other developers in local universities and conferences. Of course it was unpaid, but in return I used it to get another job.
My team lead once approved me upstreaming some changes to an open source project, so long as I did it using my private account.
Basically I got to do the work on company time&dime, but I couldn't give my employer credit, due to this kind of legal red tape.
I liked that teamlead
Maybe I've just gotten lucky, but at companies I've worked for I've usually gotten the go-ahead to contribute upstream on open source projects so as long as it's something important for what we are working on. The only reason I didn't do a whole lot as part of my work at Google is because most of the open source projects I contributed to at Google were Google projects that I could contribute to from the google3 side, and that doesn't count.
Interesting. Checking the git history of FFmpeg, Google has approximately 643 contributions.
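A count like that can be reproduced from a checkout by dumping author emails with `git log --format='%ae'` and filtering by domain. A minimal sketch over sample output (the addresses below are fabricated for illustration; a real count needs an FFmpeg clone):

```python
# Count commits per author email domain, given the output of
# `git log --format='%ae'`. These sample lines are fabricated.
log_emails = """\
dev1@google.com
dev2@ffmpeg.org
dev1@google.com
dev3@nvidia.com
"""

def commits_from_domain(emails, domain):
    """Number of commits whose author email ends in @domain."""
    return sum(
        1
        for line in emails.splitlines()
        if line.strip().endswith("@" + domain)
    )

print(commits_from_domain(log_emails, "google.com"))  # → 2
```

Note this undercounts if contributors commit from personal addresses, so it's a floor rather than an exact measure of corporate involvement.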
> Unfortunately, in my experience, there's often a lot of barriers within companies to upstream. Reasons can be everything from compliance, processes, you name it...
True. In my case I literally had to fight for it. Our lawyers were worried about a weakened patent portfolio and whatnot. In my case at least I won and now we have a culture of upstreaming changes. So don't give up the fight, you might win.
It would probably be easier for these companies to pay Collabora or Igalia.
> Unfortunately, in my experience, there's often a lot of barriers within companies to upstream. Reasons can be everything from compliance, processes, you name it... It's unfortunate.
I sympathize and understand those issues for small companies, but after a certain size those excuses stop being convincing.
Especially for a software company like Google who runs dozens of open source projects, employs an army of lawyers to monitor compliance, and surely has to deal with those issues on a daily basis anyway.
At some point there needs to be pushback. Companies get a huge amount of value from hobbyist open source projects, and eventually they need to start helping out or be told to go away.
This is why all open source software should be copyleft. No discussion to be had: either you upstream changes, or that open source developer's going to get funded via the courts.
Nothing says solid industry like having 3 major product lines from a trillion-dollar company depending on unpaid volunteer labor.
I think if you look a bit deeper, all product lines from said trillion-dollar company rely on open source to some degree. They should be spending hundreds of millions in sponsorship of OSS projects. They should put the maintainers on their payroll. Not even reporting to a manager, just pay them a salary for their OSS work.
I have a feeling that if they do this, the economy would be hurt (somehow).
None of us want the economy to be hurt, right?
That's what's going to happen when these corporations extract the last value from OSS and all the maintainers give up, so..
How could ffmpeg maintainers kill three major AWS product lines with an email?
In a follow-up tweet, Mark Atwood elaborates: "Amazon was very carefully complying with the licenses on FFmpeg. One of my jobs there was to make sure the company was doing so. Continuing to make sure the company was doing so was often the reason I was having a meeting like that inside the company."
I interpret this as meaning there was an implied "if you screw this up" at the end of "they could kill three major product lines with an email."
Are you interpreting that as "if we violate the license, they can revoke our right to use the software" ?? And they use it in 3 products so that would be really bad. That would make sense to have a compliance person.
Possibly Twitch, Amazon Prime Video, and another one that escapes my mind (AWS-related?).
AWS for sure (Elemental maybe?), but could also be Ring.
Yeah - Amazon Elastic Transcoder which they just shut down and replaced with Elemental MediaConvert is almost certainly just managed "ffmpeg as a Service" under the hood.
And Blink. I used to contract with them a few years back, they all rely heavily on open source.
Twitch definitely. This whole brouhaha has been brewing for a while, and can be traced back to a spat between Theo and ffmpeg.
In the now-deleted tweet Theo trashed VLC codecs, to which ffmpeg replied basically "send patches, but you wouldn't be able to". The reply to which was
--- start quote ---
https://x.com/theo/status/1952441894023389357
You clearly have no idea how much of my history was in ffmpeg. I built a ton of early twitch infra on top of yall.
--- end quote ---
This culminated in Theo offering a 20k bounty to ffmpeg if they remove the people running ffmpeg twitter account. Which prompted a lot of heated discussion.
So when Google Project Zero posted their bug... ffmpeg went understandably ballistic
Still doesn't make any sense.
The company that I work at makes sure anything that uses a third-party library, whether in internal tools, shipped products, or hosted products, goes through legal review. And you'd better comply with whatever the legal team asks you to do. Unless you and everyone around you are as dumb as a potato, you are not going to do things that blatantly violate licenses, like shipping a binary with modified but undisclosed GPL source code. And you can be sure that (1) it's hard to use anything GPL or LGPL in the first place and (2) even if you are allowed to, someone will tell you to be extra careful and do exactly what you are told to (or not to).
And as long as Amazon is complying with ffmpeg's LGPL license, ffmpeg can't just stop licensing existing code via an email. Of course, unless there is some secret deal, but again, in that case, someone in the giant corporation will make sure you follow what's in the contract.
Basically, at a company like Amazon where there are functional legal teams, the chance of someone "screwing up" is very small.
Easy: ffmpeg discontinues or relicenses some functionality that AWS depends on for those product lines and AWS is screwed. I've seen that happen in other open source projects.
But if it gets relicensed, they would still be able to use the current version. Amazon definitely would be able to fund an independent fork.
And then the argument for refusing to just pay ffmpeg developers gets even more flimsy.
The entire point here is to pay for the fixes/features you keep demanding, else the project is just going to do as it desires and ignore you.
More and more OSS projects are getting to this point as large enterprises (especially in the SaaS/PaaS spheres) continue to take advantage of those projects and treat them like unpaid workers.
Heard of OpenSearch?
From a company's perspective, there are many reasons, often good ones, to fund your own projects instead of paying money to an open source project.
Not really. Their whole reason for not funding open source is it essentially funds their competitors who use the same projects. That's why they'd rather build a closed fork in-house than just hand money to ffmpeg.
It's a dumb reason, especially when there are CVE bugs like this one, but that's how executives think.
> Their whole reason for not funding open source is it essentially funds their competitors who use the same projects. That's why they'd rather build a closed fork in-house than just hand money to ffmpeg.
So the premise here is that AWS should waste their own money maintaining an internal fork in order to try to make their competitors do the same thing? But then Google or Intel or someone just fixes it a bit later and wisely upstreams it so they can pay less than you by not maintaining an internal fork. Meanwhile you're still paying the money even though the public version has the fix because now you either need to keep maintaining your incompatible fork or pay again to switch back off of it. So what you've done is buy yourself a competitive disadvantage.
> that's how executives think.
That's how cargo cult executives think.
Just because you've seen someone else doing something doesn't mean you should do it. They might not be smarter than you.
It's the tragedy of the commons all over again. You can see it in action everywhere people or communities should cooperate for the common good but don't, because many either fear being taken advantage of or quietly try to exploit the situation for their own gain.
The tragedy of the commons is actually something else. The problem there comes from one of two things.
The first is that you have a shared finite resource, the classic example being a field for grazing which can only support so many cattle. Everyone then has the incentive to graze their cattle there and over-graze the field until it's a barren cloud of dust because you might as well get what you can before it's gone. But that doesn't apply to software because it's not a finite resource. "He who lights his taper at mine, receives light without darkening me."
The second is that you're trying to produce an infinite resource, and then everybody wants somebody else to do it. This is the one that nominally applies to software, but only if you weren't already doing it for yourself! If you can justify the effort based only on your own usage then you don't lose anything by letting everyone else use it, and moreover you have something to gain, both because it builds goodwill and encourages reciprocity, and because most software has a network effect so you're better off if other people are using the same version you are. It also makes it so the effort you have to justify is only making some incremental improvement(s) to existing code instead of having to start from scratch or perpetually pay the ongoing maintenance costs of a private fork.
This is especially true if your company's business involves interacting with anything that even vaguely resembles a consolidated market, e.g. if your business is selling or leasing any kind of hardware. Because then you're in "Commoditize Your Complement" territory where you want the software to be a zero-margin fungible commodity instead of a consolidated market and you'd otherwise have a proprietary software company like Microsoft or Oracle extracting fees from you or competing with your hardware offering for the customer's finite total spend.
But their competitors also fund them, which makes it a net positive sum.
Google, AWS, Vimeo, etc can demand all they want. But they’re just another voice without any incentives that aid the project. If they find having an in-house ffmpeg focused on their needs to be preferable, go for it; that’s OSS.
But given its license, they’re going to have to reveal those changes anyways (since many of the most common codecs trigger the GPL over LGPL clause of the license) or rewrite a significant chunk of the library.
ffmpeg is LGPL, so they can't make a proprietary fork anyways
Sounds like it would be a lot of churn for nothing; if they can fund a fork, then they could fund the original project, no?
They COULD, but history has shown they would rather start and maintain their own fork.
It might not make sense morally, but it makes total sense from a business perspective… if they are going to pay for the development, they are going to want to maintain control.
If they want that level of control, reimburse for all the prior development too. - ie: buy that business.
As it stands, they're just abusing someone's gift.
Like jerks.
I always like to point out that "Open Source" was a deliberate watering-down of the moralizing messaging of Free Software to try and sell businesses on the benefits of developing software in the open.
> We realized it was time to dump the confrontational attitude that has been associated with "free software" in the past and sell the idea strictly on the same pragmatic, business-case grounds that motivated Netscape.
https://web.archive.org/web/20021001164015/http://www.openso...
I like FS, but it's always had kind of nebulous morality, though. It lumps in humans with companies, which cannot have morals, under the blanket term "users".
This is the same tortured logic as Citizens United and Santa Clara Co vs Southern Pacific Railroad, but applied to FS freedoms instead of corporate personhood and the 1st Amendment.
I like the FS freedoms, but I favor economic justice more, and existing FS licenses don't support that well in the 21st century. This is why we get articles like this every month about deep-pocketed corporate free riders.
Agree in some ways. Still, discussing the nitty gritty is superfluous, the important underlying message you are making is more existential.
Open source software is critical infrastructure at this point. Maintainers should be helped out, at least by their largest users. If free riding continues, and maintainers' burden becomes too large, supply chain attacks are bound to happen.
> Agree in some ways. Still, discussing the nitty gritty is superfluous, the important underlying message you are making is more existential.
It's an important conversation to have.
I remember a particular developer... I'll be honest, I remember his name, but he was a pretty controversial figure here, so I'll pretend not to know him to avoid reflexive downvotes... but this developer made a particular argument that I always found compelling.
> If you do open source, you’re my hero and I support you. If you’re a corporation, let’s talk business.
The developer meant this in the context of preferring the GPL as a license, but the problem with the GPL is that it still treats all comers equally. It's very possible for a corporation to fork a GPL project and simply crush the original by throwing warm bodies at the fork.
Such a fork no longer represents the interests of the free software community as a whole, but those of its maintainers specifically. I also think this can apply to projects that are alternatives to popular GPL projects, except for the license being permissive.
We need to revisit the four freedoms, because I no longer think they are fit for purpose.
There should be a "if you use this product in a for-profit environment, and you have a yearly revenue of $500,000,000,000+ ... you can afford to pay X * 100,000/yr" license.
That's the Llama license and yeah, a lot of people prefer this approach, but many don't consider it open source. I don't either.
In fact, we are probably just really lucky that some early programmers were kooky believers in the free software philosophy. Thank God for them. So much of what I do owes to the resulting ecosystem that was built back then.
I reckon this is an impedance mismatch between "Open Source Advocacy" and Open Source as a programming hobby/lifestyle/itch-to-scratch that drives people to write and release code as Open Source (of whatever flavour they choose, even if the FSF and/or OSI don't consider that license to qualify as "Open Source").
I think Stallman's ideological "allowing users to run, modify, and share the software without restrictions" stance is good, but I think for me at least that should apply to "users" as human persons, and doesn't necessarily apply to "corporate personhood" and other non-human "users". I don't see a good way to make that distinction work in practice, but I think it's something that is going to become more and more problematic as time goes on, and LLM slop contributions and bug reports somehow feed into this too.
I watched MongoDB's and Redis Labs's experiments with non-OSI-approved licences clearly targeted at AWS "abusing" those projects, but sadly neither of those cases seemed to work out in the long term. Also sadly, I do not have any suggestions for how to help...
There is also the AGPL.
Do they want control or do they really want something that works that they don't have to worry about?
The only reason for needing control would be if it was part of their secret sauce and at that point they can fork it and fuck off.
These companies should be heavily shamed for leeching off the goodwill of the OSS community.
If they can fund a fork, they can continue business as usual until the need arises
A fork is more expensive to maintain than funding/contributing to the original project. You have to duplicate all future work yourselves, third party code starts expecting their version instead of your version, etc.
Nobody said the fork cannot diverge from the original project.
Funding ffmpeg also essentially funds their competitors, but a closed fork in-house doesn't. Submitting bugs costs less than both, hence why they still use ffmpeg in the first place.
They can't - it's LGPL 2.1. So the fork would be public essentially.
With a bit of needless work the fixes could be copied and they would still end up funding them.
Oh the irony - we don't want to pay for ffmpeg's development, but sure can finance a fork if we have to.
It still takes expensive humans to do this so they are incentivized to use the free labor.
Yes, definitely. I was just saying that if the license ever did change, they would move to an in-house library. In fact, they would probably release the library for consumer use as an AWS product.
Something more dangerous would be "Amazon is already breaking the license, but the maintainers haven't yet put in the work to stop the infringement".
ffmpeg cannot relicense anything because it doesn't own anything. The contributors own the license to their code.
Relicensing isn't necessary. If you violate the GPL with respect to a work you automatically lose your license to that work.
It's enough if one or two main contributors assert their copyrights. Their contributions are so tangled with everything else after years of development that it can't meaningfully be separated away.
In addition, there is the potential for software users to sue for GPL compliance. At least that is the theory behind the lawsuit against Vizio:
https://sfconservancy.org/copyleft-compliance/vizio.html
But that's only relevant if AWS (in this example) violates the GPL license, and it doesn't really seem like they have?
They can switch from LGPLv2.1 to GPLv2 or GPLv3 for future development because the license has an explicit provision for that.
I don’t know about ffmpeg, but plenty of OSS projects have outlined rules for who/when a project-wide/administrative decision can be made. It’s usually outlined in a CONTRIB or similar file.
Doubtful that's enough for a copyright grant. You'd need a signed CLA.
No one said make it proprietary; there are other OSS licenses that would make ffmpeg non-viable for commercial usage.
You need a copyright grant to change the license in any way.
(Except for the part in the LGPL that lets you relicense it to later versions.)
Wouldn't that only affect new versions, while current versions are still licensed under the old license?
If you breach the LGPLv2/GPLv2 licence then you lose all rights to use the software.
There's no penalty clause, there's no recovery clause. If you don't comply with the licence conditions then you don't have a licence. If you don't have a licence then you can't use the program, any version of the program. And if your products depend on that program then you lose your products.
The theoretical email would be a notification that they had breached the licence and could no longer use the software. The obvious implication being that AWS was wanting to do something that went contrary to the restrictions in the GPL, and he was trying to convince them not to.
Open up an Amazon media app and navigate around enough, and you'll encounter a page with all their "Third Party Software Licenses."
For instance, here's one for the Amazon Music apps, which includes an FFMpeg license: https://www.amazon.com/gp/help/customer/display.html?nodeId=...
Is the idea that ffmpeg could change its license and wreak havoc?
And? How does that give the ffmpeg authors a power over Amazon? (Hint: it doesn’t and the guy we’re discussing is spewing nonsense for maximum retweets)
I'd guess Prime Video heavily relies on ffmpeg, then you've got Elastic Transcoder and the Elemental video services. Probably CloudFront also has special things for streaming that rely on ffmpeg.
The "kill it with an email" probably means that whoever said this is afraid that some use case there wouldn't stand up to an audit by the usual patent troll mothercluckers. The patents surrounding video are so complex, old, and plentiful that I'd assume full compliance is outright impossible.
AWS MediaConvert as well, which is a huge API (in the surface it covers); it sits under Elemental but is kinda its own thing. Willing to bet (though I don't know) that it's ffmpeg somewhere underneath.
The API manual for it is nearly 4000 pages and it can do insane stuff[1].
I had to use it at last job(TM); it's not terrible API-wise.
[1] https://docs.aws.amazon.com/pdfs/mediaconvert/latest/apirefe... CAUTION: big PDF.
" ... and it can do insane stuff"
That's a pretty good indicator it's likely just ffmpeg in an AWS Hoodie/Trenchcoat.
If Google can pay someone to find bugs, they can pay someone to fix them.
Sounds like they'll just throw their employees to work on it rather than monetarily fund it, that way they can aura farm.
As a Googler, I wish I was as optimistic as you. There is an internal sentiment that valuable roles are being removed that aren't aligned with strategic initiatives, even roles that are widely believed to improve developer productivity. See the entire python maintainers team being laid off: https://www.reddit.com/r/AskProgramming/comments/1cem1wk/goo...
Roles fixing FFmpeg bugs would be a hard sell in this environment, imho.
Finding the bug is 95% of the effort. The idea that reporting obscure security bugs is worthless is BS.
> "Don't come to me with problems, come with solutions"
The problem is, the issue in the article is explicitly named as "CVE slop", so if the patch is of the same quality, it might require quite some work anyway.
The linked report seems to me to be the furthest thing from "slop". It is an S-tier bug report that includes a complete narrative, crash artifacts, and detailed repro instructions. I can't believe anyone is complaining about what is tied for the best bug report I have ever seen. https://issuetracker.google.com/issues/440183164?pli=1
It's a good quality bug report.
But it's also a bug report about the decoder for "SANM ANIM v0" - a format so obscure almost all the search results are the bug report itself. Possibly a format exclusive to mid-1990s LucasArts games [1]
Pretty crazy that ffmpeg supports the codec in the first place, IMHO.
I can understand volunteers not wanting to sink time into maintaining a codec to play a video format that hasn't been used since the Clinton administration. gstreamer divides their plugins into 'good', 'bad' and 'ugly' to give them somewhere to stash unmaintained codecs.
[1] https://web.archive.org/web/20250419105551/https://wiki.mult...
It's a codec that is enabled by default at least on major Linux distributions, and that will be processed by ffmpeg without any extra flags. Anyone playing an untrusted video file without explicitly overriding the codec autodetection is vulnerable.
The format being obscure and having no real usage doesn't help when it's the attackers creating the files. The obscure formats are exposing just as much attack surface as the common ones.
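One mitigation on the consuming side is to allowlist a few known container signatures before ever handing a file to ffmpeg, instead of letting its probing choose among hundreds of demuxers. A minimal sketch, assuming a service that only needs to accept MP4, Matroska/WebM, and AVI (those magic bytes are the standard signatures; the function names are mine). Note this filters containers, not the codecs inside them, so it narrows the attack surface rather than eliminating it:

```python
# Sketch: reject files whose container magic isn't on a short allowlist,
# instead of letting ffmpeg's format probing pick from hundreds of demuxers.
# Extend the allowlist to match what your service actually needs to accept.

def looks_like_allowed_container(header: bytes) -> bool:
    """Return True if the first bytes match an allowlisted container."""
    if len(header) >= 12 and header[4:8] == b"ftyp":            # ISO BMFF / MP4
        return True
    if header.startswith(b"\x1a\x45\xdf\xa3"):                  # EBML: Matroska / WebM
        return True
    if header.startswith(b"RIFF") and header[8:12] == b"AVI ":  # AVI
        return True
    return False

def check_file(path: str) -> bool:
    """Read just enough of the file to test its magic bytes."""
    with open(path, "rb") as f:
        return looks_like_allowed_container(f.read(12))
```

An attacker can still put a hostile stream inside an allowed container, so this belongs in front of, not instead of, sandboxing and codec restrictions.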
> Pretty crazy that ffmpeg supports the codec in the first place, IMHO.
Yes.
Sure, it's a valid bug report. But I don't understand why there has been so much drama over this when all the ffmpeg folks have to do is say "sorry, this isn't a priority for us so we'll get to it as soon as we can" and put the issue in the backlog as a low priority. If Google wants the issue fixed faster, they can submit a fix. If they don't care enough to do that, they can wait. No big deal either way. Instead, ffmpeg is getting into a public tiff with them over what seems to be a very easily handled issue.
Yes, you're very right. They could simply have killed a codec that no one uses anymore. Or put it behind a compile flag, so if you really want, you can still enable it
But no. Intentionally or not, there was a whole drama created around it [1], with folks being criticized [2] for saying exactly what you said above, because of their past (!) employers.
Instead of using the situation to highlight the need for more corporate funding for opensource projects in general, it became a public s**storm, with developers questioning their future contributions to projects. Shameful.
[1] https://news.ycombinator.com/item?id=45806269
[2] https://x.com/FFmpeg/status/1985334445357051931
FFMPEG is upset because Google made the exploit public. They preferred that it remained a zero-day until they decided it was a priority.
I don't understand how anyone believes that behavior is acceptable.
That behaviour is indeed totally unacceptable. At your job. Where they're paying you, and especially if they're paying you at FAANG type pay scales.
If you're an unpaid volunteer? Yeah - nah. They can tell you "Sorry, I'm playing with my cat for the next 3 months, maybe I'll get to it after that?", or just "Fuck off, I don't care."
(I'm now playing out a revenge fantasy in my head where the ffmpeg team does nothing, and Facebook or Palantir or someone similar gets _deeply_ hacked via the exploit Google published, and that starts the planet's biggest ever pointless lawyers-chasing-the-deepest-pockets fight.)
Or perhaps you’re a FAANG security researcher and your time will be better spent serving the OSS community as a whole by submitting as many useful bug reports as possible, instead of slightly fewer reports with patches included.
In this particular case it’s hardly obvious which patch you should submit. You could fix this particular bug (and leave in place the horrible clunky codec that nobody ever uses) OR you could just submit a patch that puts it behind a compile flag. This is really a decision for the maintainers, and submitting the latter (much better!) patch would not save the maintainers any meaningful amount of time anyway.
I don’t understand how it helps the community to publicly release instructions for attacking people, unless you’re trying to incentivize a company to fix their crap. In this case, there is no company to incentivize, so just report it privately.
You can say publicly that “there is an ABC class vulnerability in XYZ component” so that users are aware of the risk.
It’s OSS so somebody who cares will fix it, and if nobody cares then it doesn’t really matter.
This also informs users that it’s not safe to use ffmpeg or software derived from it to open untrusted files, and perhaps most importantly releasing this tells the distro package maintainers to disable the particular codec when packaging.
Right, I just don’t see why they need to publish the actual exploit.
They have not, neither have they indicated that they’re planning to do so.
I thought that was how the 90 day disclosure timeline worked?
After 90 days they just disclose the vulnerability. From there, developing an exploit is still a fairly complex task.
There's no exploit on the bug report at least, unless you consider the crash reproducer one.
UAF bugs lead to RCE exploit chains.
They can if someone manages to develop an exploit. Let's not confuse vulnerabilities and exploits.
This bug might lead to a vulnerability, and that's enough. It makes no sense to waste a lot of time researching whether it is exploitable or not - it is faster to remove the buggy codec nobody needs, or to fix it.
Ffmpeg makes it trivial to enable and disable individual codecs at compile time. Perhaps it's the Linux distros that need to make a change here?
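For reference, ffmpeg's configure script already supports this granularity. A sketch of what a distro (or anyone building from source) could do; I believe the SMUSH/ANIM decoder is registered as `sanm`, but verify against `--list-decoders` before relying on that:

```shell
# Confirm the decoder's registered name, then build with it compiled out.
./configure --list-decoders | grep -i sanm
./configure --disable-decoder=sanm

# Or invert the default entirely: start from nothing and enable only
# the handful of formats your use case actually needs.
./configure --disable-everything \
            --enable-decoder=h264 --enable-decoder=aac \
            --enable-demuxer=mov --enable-protocol=file
```

The deny-by-default variant is the stronger posture for services that only ever handle a few known formats.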
I get that the ffmpeg people have limited time and resources, I get that it would be nice if Google (or literally anyone else) patched this themselves and submitted that upstream. But "everyone else downstream of us should compile out our security hole" is a terrible way to go about things. If this is so obscure of a bug that there's no real risk, then there's no need for anyone to worry that the bug has been reported and will be publicized. On the other hand, if it's so dangerous that everyone should be rebuilding ffmpeg from source and compiling it out, then it really needs to be fixed upstream.
Edit: And also, how is anyone supposed to know they should compile the codec out unless someone makes a bug report and makes it public in the first place?
> But "everyone else downstream of us should compile out our security hole" is a terrible way to go about things.
Is that somehow _less_ of a terrible way to think than "someone who's contributed their time as a volunteer to an open source software project that we have come to rely on, now has some sort of an obligation to drop everything and do more unpaid work for a trillion dollar company"?
> it really needs to be fixed upstream
Lots of people love using "Free Software" that they didn't have to write as essential parts of their business.
Way too many of them seem to blink right when they get to this bit of the licence they got it with:
SHOULD THE LIBRARY PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
(That's directly from Section 15, "NO WARRANTY", of https://code.ffmpeg.org/FFmpeg/FFmpeg/src/branch/release/4.0... )
> someone who's contributed their time as a volunteer to an open source software project that we have come to rely on, now has some sort of an obligation to drop everything and do more unpaid work for a trillion dollar company
If you could highlight the relevant part of the bug report that demanded the developers "drop everything" and do "unpaid work for a trillion dollar company", that would be great because I'm having trouble finding it. I see "hey, we found this bug, we found where we think the issue is in the code and here's a minimal reproduction. Also FYI we've sent you this bug report privately, but we will also be filing a public bug report after 90 days." And no, I don't think having a policy of doing a private bug report followed by a public report some time later qualifies as a demand. They could have just made a public report from the get go. They could also have made a private report and then surprised them with a public bug report some arbitrary amount of time later. Giving someone a private heads up before filing a public bug report is a courtesy, not a demand.
And it's really funny to complain about Google expecting "unpaid work for a trillion dollar company", when the maintainers proudly proclaim that the likes of no less than Google ARE paying them for consulting work on ffmpeg[1][2][3]
[1]: https://fflabs.eu [2]: https://fflabs.eu/about/ [3]: https://ffmpeg.org/consulting.html
Publicly posting an exploitable bug IS asking for someone to drop everything and come fix the issue NOW.
So when someone finds a bug in software, in your mind the only acceptable options are:
1) Fix it yourself
2) Sit on it silently until the maintainers finally get some time to fix it
That seems crazy to me. For one, not everyone who discovers a bug can fix it themselves. But also a patch doesn't fix it until it's merged. If filing a public bug report is expecting the maintainers to "drop everything and do free labor", then certainly dropping an unexpected PR with new code that makes heretofore unseen changes to a claimed security vulnerability must surely be a much stronger demand that the maintainers "drop everything" and do the "free labor" of validating the bug, validating the patch, merging the patch, etc. So if the maintainers don't have time to patch a bug from a highly detailed bug report, they probably don't have time to review an unexpected patch for the same. So then what? Do people sit on that bug silently until someone finally gets around to having the time to review the PR? Or are they allowed to go public with the PR, even though that's far more clearly a "demand to drop everything and come fix the issue NOW"?
I for one am quite happy the guy who found the XZ backdoor went public before a fix was in place. And if tomorrow someone discovers that all Debian 13 releases have a vulnerable SSH installation that allows root logins with the password `12345`, I frankly don't give a damn how overworked the SSH or Debian maintainers are, I want them to go public with that information too so the rest of us can shut off our Debian servers.
xz was a fundamentally different problem, it was code that had been maliciously introduced to a widespread library and the corrupted version was in the process of being deployed to multiple distributions. The clock was very much ticking.
The clock is always ticking. You have no idea when you find a vulnerability who knows about it or how or whether it is currently being actively exploited. A choice to delay disclosure is a choice to take on the risk that the bug is being actively exploited in order to reduce the gap (and risk in that gap) between public disclosure and remediations being available. But critically, it is a risk that is being forced on the users of the software. They are unable to make an informed decision about accepting the risk because they don't know there is a risk. Public disclosure, sooner rather than later MUST be the goal of all bug reports, no matter how serious and no matter how overworked the maintainers.
Responsible disclosure policies for contributor-driven projects can differ from commercial projects. Also, if Google has the funds to pay for bug finding, they also have the funds for bug fixing the community projects they depend on.
> Responsible disclosure policies for contributor-driven projects can differ from commercial projects.
They can, but there's not an obvious reason why they should. If anything, public disclosure timelines for commercial closed source projects should be much much longer than for contributor-driven projects, because once a bug is public ANYONE can fix it in the contributor-driven project, whereas for a commercial project you're entirely at the mercy of the commercial entity's timelines.
> Also, if Google has the funds to pay for bug finding, they also have the funds for bug fixing the community projects they depend on.
They do. And they do. They literally hire the ffmpeg maintainers via the maintainer's consulting business (fflabs.eu) and they routinely contribute code to the ffmpeg project.
> They can, but there's not an obvious reason why they should.
Of course there are obvious reasons: corporations have the resources and incentives to fix them promptly once threatened with disclosure. Corporations don't respond well otherwise. None of these apply to volunteer projects.
> They literally hire the ffmpeg maintainers via the maintainer's consulting business (fflabs.eu) and they routinely contribute code to the ffmpeg project.
Great, then they should loop in the people they're paying on any notification of a vulnerability.
Of course, if this has truly been the case then nobody would have heard of this debacle.
> None of these apply to volunteer projects.
How so? Volunteer projects have maintainers assigned to the project writing code. The "resources" to fix a bug promptly are simply choosing to allocate your developer resources to fixing the bug. Of course, volunteers might not want to do that, but then again, a company might not want to allocate their developers to fixing a bug either. But in either case the solution is to prioritize spending developer hours on the bug instead of on some other aspect of your project. In fact, volunteer driven projects have one huge resource that corporations don't, a theoretically infinite supply of developers to work on the project. Anyone with an interest can pick up the task of fixing the bug. That's the promise of open source right? Many eyes making all bugs shallow.
As for incentives, apparently both corporations and volunteer projects are "incentivized" to preserve their reputation. If volunteer projects weren't, we wouldn't be having this insane discussion where some people are claiming filing a bug report is tantamount to blackmail.
The only difference between the volunteer project and the corporation is even the head of a volunteer project can't literally force someone to work on an issue under the threat of being fired. I guess technically they could threaten to expel them from the project and I'm sure some bigger projects could also deny funding from their donation pool to a developer that refuses to play ball, but obviously that's not quite the same as being fired from your day job.
> Great, then they should loop in the people they're paying on any notification of a vulnerability.
If only there was some generally agreed upon and standardized way of looping the right people in on notifications of a bug. Some sort of "bug report" that you could give a team. It could include things like what issue you think you've found, places in the code that you believe are the cause of the issue, possibly suggested remediations, maybe even a minimum test case so that you can easily reproduce and validate the bug. Even better if there were some sort email address[1] that you could send these sorts of reports to if you didn't necessarily want to make them public right away. Or maybe there could be a big public database you could submit the reports to where anyone could see things that need work and could pick up the work[2] even if the maintainers themselves didn't. That would be swell, I'm sure some smart person will figure out a system like that one day.
[1]: https://ffmpeg.org/security.html [2]: https://ffmpeg.org/bugreports.html
Here’s where I’m coming from: it would really suck if the outcome of all this was for ffmpeg to drop support for niche codecs.
It may be the case that ffmpeg cannot reasonably support every format while maintaining the same level of security. In that case, it makes sense for distros to disable some formats by default. I still think it’s great that they’re supported by the ffmpeg project.
I agree there would probably need to be some unified guidance about which formats to enable.
I agree, it would suck if ffmpeg dropped support for niche codecs altogether. But that's orthogonal to whether or not the bug reports should be made public. And realistically, the only way distros (or anyone) can know if they should or need to disable some formats by default is if the issues with those formats are public knowledge, so people can make informed decisions. Otherwise you're just arbitrarily picking some formats to enable and some not to, based on age or some other less useful criterion.
Every change breaks somebody's workflow.
There are dozens if not hundreds of issues just like this one in ffmpeg, except for codecs that are infinitely more common. Google has been running all sorts of fuzzers against ffmpeg for over a decade at this point and it just never ends. It's a 20 year old C project maintained by poorly funded volunteers that mostly gives every media file ever the be-liberal-in-what-you-accept treatment, because people complain if it doesn't decode some bizarrely non-standard MPEG4 variant recorded with some Chinese plastic toy from 2008. Of course it has all of the out-of-bounds bugs. I poked around on the issue tracker for like 5 minutes and found several "high impact" issues similar to the one in TFA just from the last two or three months, including at least one that hasn't passed the 90 day disclosure window yet.
Nobody who takes security even remotely seriously should decode untrusted media files outside of a sandboxed environment. Modern media formats are in themselves so complex one starts wondering if they're actually Turing complete, and in ffmpeg the attack surface is effectively infinitely large.
The issue is CVE slop because it just doesn't matter if you consider the big picture.
Some example issues to illustrate my point:
https://issuetracker.google.com/issues/436511754 https://issuetracker.google.com/issues/445394503 https://issuetracker.google.com/issues/436510316 https://issuetracker.google.com/issues/433502298
I don't get why you think linking to multiple legitimate and high quality bug reports with detailed analysis and precise reproduction instructions demonstrates "slop". It is the opposite.
This is software that is directly or indirectly run by millions of people on untrusted media files without sandboxing. It's not even that they don't care about security, it's that they're unaware that they should care. It should go without saying that they don't deserve to be hacked just because of that. Big companies doing tons of engineering work to add defense in depth for use cases on their own infrastructure (via sandboxing or disabling obsolete codecs) doesn't help those users. Finding and fixing the vulnerabilities does.
All of these reports are effectively autogenerated by Big Sleep from fuzzing.
Again, Google has been doing this sort of thing for over a decade and has found untold thousands of vulnerabilities like this one. It is not at all clear to me that their doing so has been all that valuable.
Google fuzzing open source projects has eliminated a lot of low hanging fruit from being exploited. I am surprised you think that finding these vulnerabilities so they can be fixed has not been valuable.
AI found the bug, but the analysis and bug report were entirely written by a human without AI assistance. Source: I work with the author.
Anyone running this code with untrusted input needs to sandbox it (which Google has been doing all along).
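Sandboxing doesn't have to start with anything exotic. A rough sketch of one cheap layer, assuming a POSIX system: run the decode in a child process under hard resource limits, so a decoder bug is more likely to kill the child than the host. The `ffmpeg` command line and the specific limit values are illustrative; real isolation would add seccomp, namespaces, or a jail on top:

```python
import resource
import subprocess

def run_limited(cmd, timeout_s=30):
    """Run cmd in a child process with hard resource limits applied.

    Defense in depth, not a real sandbox: it bounds CPU time, address
    space, output size, and open files. Pair it with seccomp,
    namespaces, or jails for actual isolation.
    """
    def set_limits():
        gib = 1 << 30
        resource.setrlimit(resource.RLIMIT_CPU, (timeout_s, timeout_s))
        resource.setrlimit(resource.RLIMIT_AS, (2 * gib, 2 * gib))
        resource.setrlimit(resource.RLIMIT_FSIZE, (8 * gib, 8 * gib))
        resource.setrlimit(resource.RLIMIT_NOFILE, (32, 32))

    # preexec_fn runs in the forked child just before exec (POSIX only,
    # and not safe in heavily threaded parent processes).
    proc = subprocess.run(cmd, preexec_fn=set_limits,
                          timeout=timeout_s + 5,  # wall-clock backstop
                          capture_output=True)
    return proc.returncode

def decode_untrusted(src, dst, timeout_s=30):
    # Illustrative invocation; combine with a build that compiles out
    # (or a config that disallows) the codecs you don't need.
    return run_limited(["ffmpeg", "-hide_banner", "-nostdin",
                        "-i", src, dst], timeout_s)
```

The limits turn an infinite loop into an `RLIMIT_CPU` kill and a runaway allocation into a failed `malloc`, which is a much better failure mode than a wedged or compromised host process.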
> Google has been running all sorts of fuzzers against ffmpeg for over a decade at this point
Yeah. It's called YouTube... Why run fuzzers if you can get people to upload a few million random videos every day? ;-)
(I wonder if the BigSleep AI was trained on or prompted with YouTube error logs?)
I'm sure that a hacker wouldn't think of trying to use an obscure format...
https://googleprojectzero.blogspot.com/2021/12/a-deep-dive-i...
Your point is still taken, but just to clarify, these are different situations. JBIG2 is included for legacy reasons. The LucasArts codec is included for... completion's sake(?)
The problem is that if you have a process using ffmpeg and an attacker feeds it a video with this codec, ffmpeg will proceed to auto-detect the codec, attempt to decode it, and then break everything.
If the format is old and obscure, and the implementation is broken, it shouldn't be on by default.
Sorry, I probably wasn't clear enough in my comment. I was trying to say that being old gives a format some legitimacy for existing; just because it is old doesn't mean it isn't used. Though yes, this should be checked carefully to make sure you aren't breaking workflows you don't know about.
But old AND obscure? Well, it's nice that it's supported, but enabled by default? Fully with you there.
Yeah, but as you can see from the bug report, ffmpeg automatically selects the codec based on file magic, so if you run some kind of network service, or anything that handles hostile data, an attacker could trigger the bug.
It feels like maybe people do not realize that Google is not the only company that can run fuzzers against ffmpeg? Attackers are also highly incentivized to do so and they will not do you the courtesy of filing bug reports.
Best response would be to drop this codec entirely, or have it off by default. At least distros should do that.
The actual best response would be to run any "unsupported" codecs in a WASM sandbox. That way you are not throwing away work, Google can stop running fuzzers against random formats from 1995, and you can legitimately say that the worst that can happen with these formats is a process crash. Everybody wins.
Hmmmm. There's probably just one guy who wrote the ffmpeg code for that format. _Maybe_ one or two more who contributed fixes or enhancements?
The ffmpeg project needs to get in touch and get them to assign copyright to the ffmpeg project, then delete that format/decoder from ffmpeg. Then go back to Google with an offer to license them a commercial version of ffmpeg with the fixed SANM ANIM v0 decoder, for the low low price of only 0.0001% of YouTube's revenue every year. That'd likely make them the best funded open source project ever, if they pulled it off.
Google is not paying anyone to find bugs. They are running AIs indiscriminately.
https://en.wikipedia.org/wiki/Project_Zero
Someone is making the tools to find these bugs. It's not like they're telling ChatGPT "go find bugs lol"
And running those models on large codebases like these isn't anywhere close to free either.
A human at Google investigates all of the bugs fuzzers and AI find manually and manually writes bug reports for upstream with more analysis. They are certainly paid to do that. They are also paid to develop tooling to find bugs.
I'm not sure what you think you mean when you say "running AIs indiscriminately". It's quite expensive to run AI this way, so it needs to be done with very careful consideration.
Still, they are paying for the computing resources needed to run the AI/agents etc.
[dead]
Someone started it running, they are responsible for the results.
Does it matter? Either it's a valid bug or it's not. Either it's of high importance or it's not.
They certainly paid someone to run the so-called AIs.
I’m an open source maintainer, so I empathize with the sentiment that large companies appear to produce labor for unpaid maintainers by disclosing security issues. But appearance is operative: a security issue is something that I (as the maintainer) would need to fix regardless of who reports it, or would otherwise need to accept the reputational hit that comes with not triaging security reports. That’s sometimes perfectly fine (it’s okay for projects to decide that security isn’t a priority!), but you can’t have it both ways.
If Google bears no role in fixing the issues it finds, and nobody else is being paid to do it either, it functionally is just providing free security vulnerability research for malicious actors, because almost nobody can take over or switch off of ffmpeg.
I don’t think vulnerability researchers are having trouble finding exploitable bugs in FFmpeg, so I don’t know how much this actually holds. Much of the cost center of vulnerability research is weaponization and making an exploit reliable against a specific set of targets.
(The argument also seems backwards to me: Google appears to use a lot of not-inexpensive human talent to produce high quality reports to projects, instead of dumping an ASan log and calling it a day. If all they cared about was shoveling labor onto OSS maintainers, they could make things a lot easier for themselves than they currently do!)
Internally, Google maintains their own completely separate FFMpeg fork as well as a hardened sandbox for running that fork. Since they keep pace with releases to receive security fixes, there’s potentially lots of upstreamable work (with some effort on both sides…)
My understanding from adjacent threads in this discussion is that Google does in fact make significant upstream contributions to FFmpeg. Per policy those are often made with personal emails, but multiple people have said that Google’s investment in FFmpeg’s security and codec support have been significant.
(But also, while this is great, it doesn’t make an expectation of a patch with a security report reasonable! Most security reports don’t come with patches.)
So your claim is that buggy software is better than documented buggy software?
I think so, yes. Certainly it's more effort to both find and exploit a bug than to simply exploit an existing one someone else found for you.
Yeah it's more effort, but I'd argue that security through obscurity is a super naive approach. I'm not on Google's side here, but so much infrastructure is "secured" by gatekeeping knowledge.
I don't think you should try to invoke the idea of naivete when you fail to address the unhappy but perfectly simple reality that the ideal option doesn't exist, is a fantasy that isn't actually available, and among the available options, even though none are good, one is worse than another.
"obscurity isn't security" is true enough, as far as it goes, but is just not that far.
And "put the bugs that won't be fixed soon on a billboard" is worse.
The super naive approach is ignoring that and thinking that "fix the bugs" is a thing that exists.
If I know it's a bug and I use ffmpeg, I can avoid it by disabling the affected codec. That's pretty valuable.
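For ffmpeg specifically, the mitigation can happen at build time or at invocation time. A hedged sketch (the decoder name `sanm` and the whitelists are assumptions; verify against `ffmpeg -decoders` for your build):

```shell
# Build-time: compile ffmpeg without the suspect decoder entirely.
./configure --disable-decoder=sanm
make

# Run-time: only allow the demuxers/decoders you actually expect, so
# magic-byte autodetection can't route input to an obscure codec.
ffmpeg -format_whitelist mov,mp4,matroska \
       -codec_whitelist h264,aac \
       -i untrusted_input.mp4 -f null -
```

The whitelist approach is the more general defense: instead of chasing individual broken decoders, you enumerate the handful of formats your pipeline is supposed to accept.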
More fantasy. Presumes the bug only exists in some part of ffmpeg that can be disabled at all, and that you don't need, and that you are even in control over your use of ffmpeg in the first place.
Sure, in maybe 1 special lucky case you might be empowered. And in 99 other cases you are subject to a bug without being in the remotest control over it, since it's buried away within something you use; you don't even have the option not to use the surface service or app, let alone control its subcomponents.
It's a heck of a lot better than being unaware of it.
(To put this in context: I assume that on average a published security vulnerability is known about to at least some malicious actors before it's published. If it's published, it's me finding out about it, not the bad actors suddenly getting a new tool)
it's only better if you can act on it as quickly as the bad guys. If the bad guys get to act on it before you, or before some other good guys do on your behalf, then no, it's not better
remember we're not talking about keeping a bug secret, we're talking about using a power tool to generate a fire hose of bugs and only doing that, not fixing them
The bug in question revolves around support for a codec that has never been in wide use, and was only in obscure use over 25 years ago.
There is no "the bug". The discussion is about what to do with the power of bug-finding tools.
"The bug" in question refers to the one found by the bug-finding tool the article claims triggered the latest episode of debate. Nobody is claiming it's the only bug, just that this triggering bug highlighted was a clear example of where there is actually such a clear cut line.
Google does contribute some patches for codecs they actually consume e.g. https://github.com/FFmpeg/FFmpeg/commit/b1febda061955c6f4bfb..., the bug in question was just an example of one the bug finding tool found that they didn't consume - which leads to this conversation.
Which codec is it?
I believe it's: sanm LucasArts SANM/SMUSH video
The bug exists whether it's reported to the maintainers or not, so yeah, it's pretty naive.
You observe that it is better to be informed than ignorant.
This is true. Congratulations. Man we are all so smart for getting that right. How could anyone get something so obvious and simple wrong?
What you leave out is "in a vacuum" and "all else being equal".
We are not in a vacuum and all else is not equal, and there are more than those 2 factors alone that interact.
Given that Google is both the company generating the bug reports and one of the companies using the buggy library, while most of the ffmpeg maintainers presumably aren't using their libraries to run companies with a $3.52 trillion market cap, would you argue that going public with vulnerabilities that affect your own product before you've fixed them is also a naive approach?
Sorry, but this states a lot of assumptions as fact to ask a question which only makes sense if it's all true. I feel Google should assist the project more financially given how much they use it, but I don't think Google shipping products using every codec they find bugs for with their open source fuzzer project is a reasonable guess. I certainly doubt YouTube lets you upload, or Chrome compiles ffmpeg with, this LucasArts format, as an example. For security issues relevant to their usage via Chrome CVEs etc., they seem to contribute on fixes as needed. E.g. here is one via fuzzing for a codec they use and work on internally https://github.com/FFmpeg/FFmpeg/commit/b1febda061955c6f4bfb...
As regards whether it's a bad idea to publicly document security concerns you find, regardless of whether you plan on fixing them: it often depends whether you ask the product manager what they want for their product, or the security-concerned folks in general what they want for every product :).
> I think so, yes. Certainly it's more effort to both find and exploit a bug than to simply exploit an existing one someone else found for you.
That just means the script kiddies will have more trouble, while scarier actors like foreign intelligence agencies will have free rein.
Foreign intelligence has free rein either way. The script kiddies are the only ones that can be stopped by technological solutions.
it’s not a claim, it’s common sense; that’s why we have notice periods
> it functionally is just providing free security vulnerability research for malicious actors because almost nobody can take over or switch off of ffmpeg
At least, if this information is public, someone can act on it and sandbox ffmpeg for their use case, if they think it's worth it.
I personally prefer to have this information be accessible to all users.
This is a weird argument. Basically condoning security through obscurity: If nobody reports the bug then we just pretend it doesn’t exist, right?
There are many groups searching for security vulnerabilities in popular open source software who deliberately do not disclose them. They do this to save them for their own use or even to sell them to bad actors.
It’s starting to feel silly to demonize Google for doing security research at this point.
> It’s starting to feel silly to demonize Google for doing security research at this point.
Aren't most people here demonizing Google for dedicating the resources to find bugs, but not to fix them?
And not giving the maintainers a reasonable amount of time to fix it. This was triggered by a recent change of policy on Google's side.
The timeline is industry standard at this point. The point is to make sure folks take security more seriously. If you start deviating from the script, others will expect the same exceptions and it would lose that ability. Sometimes it's good to let something fail loudly to show this is a problem. If ffmpeg doesn't have enough maintainers, then they should fail and let downstream customers know, so they have more pressure to contribute resources. Playing superman and trying to prevent them from seeing the problem will just lead to burnout.
Is it industry standard to run automatic AI tools and spam the upstream with bug reports? To then expect the bugs to be fixed within 90 days is a bit much.
It's not some lone report of an important bug, it's AI spam that puts forth security issues at a speed greater than they have resources to fix them.
"AI tools" and "spam" are knee jerk reactions, not an accurate picture of the bug filed: https://issuetracker.google.com/issues/440183164?utm_source=...
whether or not AI found it, clearly a human refined it and produced a very high quality bug report. There's no AI slop here. No spam.
I guess the question for a person at Google who discovers a bug they don't personally have time to fix is: should they report the bug at all? They don't necessarily know if someone else will be able to pick it up. So the current "always report" rule makes sense, since you don't have to figure out whether someone can fix it.
The same question applies if they have time to fix it in six months, since that presumably still gives attackers a large window of time.
In this case the bug was so obscure it’s kind of silly.
It doesn't matter how obscure it is if it's a vulnerability that's enabled in default builds.
This was not a case of stumbling across a bug. This was dedicated security research taking days if not weeks of high paid employees to find.
And after all that, they just drop an issue, instead of spending a little extra time on producing a patch.
It’s possible that this is a more efficient use of their time when it comes to open source security as a whole, most projects do not have a problem with reports like this.
If not pumping out patches allows them to get more security issues fixed, that’s fine!
From the perspective of Google maybe, but from the perspective of open source projects, how much does this drain them?
Making open source code more secure and at the same time less prevalent seems like a net loss for society. And if those researchers could spare some time to write patches for open source projects, that might benefit society more than dropping disclosure deadlines on volunteers.
I’m specifically talking from the perspective of everybody but Google.
High quality bug reports like this are very good for open source projects.
Security by obscurity. In 2025. On HN.
My takeaway from the article was not that the report was a problem, but the change in approach from Google: that they'd disclose publicly after X days, regardless of whether the project had a chance to fix it.
To me it's okay to "demand" that a for-profit company (e.g. Google) fix an issue fast, because they have resources. But to "demand" that an OSS project fix something within a certain (possibly tight) timeframe... well, I'm sure you know better than me that that's not how volunteering works.
On the other hand as an ffmpeg user do you care? Are you okay not being told a tool you're using has a vulnerability in it because the devs don't have time to fix it? I mean someone could already be using the vulnerability regardless of what Google does.
>Are you okay not being told a tool you're using has a vulnerability in it because the devs don't have time to fix it?
Yes? It's in the license
>NO WARRANTY
>15. BECAUSE THE LIBRARY IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY FOR THE LIBRARY, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES PROVIDE THE LIBRARY "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE.
If I really care, I can submit a patch or pay someone to. The ffmpeg devs don't owe me anything.
Not being told of the existence of bugs is different from having a warranty on the software. How would you submit a patch for a bug you were not aware of?
Google should provide a fix but it's been standard to disclose a bug after a fixed time because the lack of disclosure doesn't remove the existence of the bug. This might have to be rethought in the context of OSS bugs but an MIT license shouldn't mean other people can't disclose bugs in my project.
Google publicly disclosing the bug doesn't only let affected users know. It also lets attackers know how they can exploit the software.
Holding public disclosure over the heads of maintainers if they don't act fast enough is damaging not only to the project, but to end users themselves also. There was no pressing need to publicly disclose this 25 year old bug.
How is having a disclosure policy so that you balance the tradeoffs between informing people and leaving a bug unreported "holding" anything over the heads of the maintainers? They could just file public bug reports from the beginning. There's no requirement that they file non-public reports first, and certainly not everyone who does file a bug report is going to do so privately. If this is such a minuscule bug, then whether it's public or not doesn't matter. And if it's not a minuscule bug, then certainly giving some private period, but then also making a public disclosure is the only responsible thing to do.
Come on, we let this argument die a decade ago. Disclosure timelines that match what the software author wants is a courtesy, not a requirement.
That license also doesn't give the ffmpeg devs the right to dictate which bugs you're allowed to find, disclose privately, or disclose publicly. The software is provided as-is, without warranty, and I can do what I want with it, including reporting bugs. The ffmpeg devs can simply not read the bug reports, if they hate bug reports so much.
All the license means is that I can’t sue them. It doesn’t mean I have to like it.
Just because software makes no guarantees about being safe doesn’t mean I want it to be unsafe.
Sorry to put it this bluntly, but you are not going to get what you want unless you do it yourself or you can convince, pay, browbeat, or threaten somebody to provide it for you.
If the software makes no guarantees about being safe, then you should assume it is unsafe.
Have you ever used a piece of software that DID make guarantees about being safe?
Every software I've ever used had a "NO WARRANTY" clause of some kind in the license. Whether an open-source license or a EULA. Every single one. Except, perhaps, for public-domain software that explicitly had no license, but even "licenses" like CC0 explicitly include "Affirmer offers the Work as-is and makes no representations or warranties of any kind concerning the Work ..."
I don't know what our contract terms were for security issues, but I've certainly worked on a product where we had 5 figure penalties for any processing errors or any failures of our system to perform its actions by certain times of day. You can absolutely have these things in a contract if you pay for it, and mass market software that you pay for likely also has some implied merchantability depending on jurisdiction.
But yes, things you get for free have no guarantees, and there should be no expectations put on the gift giver beyond not being actively, intentionally malicious.
Point. As part of a negotiated contract, some companies might indeed put in guarantees of software quality; I've never worked in the nuclear industry or any other industries where that would be required, so my perspective was a little skewed. But all mass-distributed software I've ever seen or heard of, free or not, has that "no warranty" clause, and only individual contracts are exceptions.
Also, "depending on jurisdiction" is a good point as well. I'd forgotten how often I've seen things like "Offer not valid in the state of Delaware/California/wherever" or "If you live in Tennessee, this part of the contract is preempted by state law". (All states here are pulled out of a hat and used for examples only, I'm not thinking of any real laws).
OK, then you can't decode videos.
Anyone who has seen how the software is sausaged knows that. Security flaws will happen, no matter what the lawyers put in the license.
And still, we live in a society. We have to use software, bugs or not.
not possible to guarantee safety
This is a fantastic argument for the universe where Google does not disclose vulnerability until the maintainers had had reasonable time to fix it.
In this world the user is left vulnerable because attackers can use published vulnerabilities that the maintainers are too overwhelmed to fix
This program discloses security issues to the projects and only discloses them after they have had a "reasonable" chance to fix it though, and projects can request extensions before disclosure if projects plan to fix it but need more time.
Google runs this security program even on libraries they do not use at all, where it's not a demand, it's just whitehat security auditing. I don't see the meaningful difference between Google doing it and some guy with a blog doing it here.
Google is a multi-billion dollar company, which is paying people to find these bugs in the first place.
That's a pretty core difference.
Great, so Google is actively spending money on making open source projects better and more secure. And for some reason everyone is now mad at them for it because they didn't also spend additional money making patches themselves. We can absolutely wish and ask that they spend some money and resources on making those patches, but this whole thing feels like the message most corporations are going to take is "don't do anything to contribute to open source projects at all, because if you don't do it just right, they're going to drag you through the mud for it" rather than "submit more patches"
Why should Google not be expected to also contribute fixes to a core dependency of their browser, or to help funding the developers? Just publishing bug reports by themselves does not make open source projects secure!
Google does do that.
This bit of ffmpeg is not a Chrome dependency, and likely isn’t used in internal Google tools either.
> Just publishing bug reports by themselves does not make open source projects secure!
It does, especially when you first privately report them to the maintainers and give them a plenty of time to fix the bug.
It doesn't if you report lots of "security" issues (like this 25-year-old bug) and give too little time to fix them.
Nobody is against Google reporting bugs, but they use automatic AI to spam them and then expect a prompt fix. If you can't expect the maintainers to fix the bug before disclosure, then it is a balancing act: Is the bug serious enough that users must be warned and avoid using the software? Will disclosing the bug now allow attackers to exploit it because no fix has been made?
In this case, this bug (imo) is not serious enough to warrant a short disclosure time, especially if you consider *other* security notices that may have a bigger impact. The chances of an attacker finding this on their own and exploiting it are low, but now everybody is aware and you have to rush to update.
The timeline here is pretty long, and Google will provide an extension if you ask.
What do you believe would be an appropriate timeline?
>especially if you consider other security notices that may have a bigger impact.
This is a bug in the default config that is likely to result in RCE, it doesn’t get that much worse than this.
They're actively making open source projects less secure by publishing bugs that the projects don't have the volunteers to fix
I saw another poster say something about "buggy software". All software is buggy.
The bug exists whether or not google publishes a public bug report. They are no more making the project less secure than if some retro-game enthusiast had found the same bug and made a blog post about it.
Publishing bugs that the project has so that they can be fixed is actively making the project more secure. How is someone going to do anything about it if Google didn’t do the research?
Did you see how the FFMPEG project patched a bug for a 1995 console? That's not a good use for the limited amount of volunteers on the project. It actively makes it less secure by taking away from more pertinent bugs.
The codec can be triggered to run automatically by adversarial input. The irrelevance of the format is itself irrelevant when ffmpeg has it on by default.
Then they should mark it as low priority and put it in their backlog. I trust that the maintainers are good judges of what deserves their time.
Publicizing vulnerabilities is the problem though. Google is ensuring obscure or unknown vulnerabilities will now be very well known and very public.
This is significant when they represent one of the few entities on the planet likely able to find bugs at that scale due to their wealth.
So funding a swarm of bug reports, for software they benefit from, using a scale of resources not commonly available, while not contributing fixes and instead demanding timelines for disclosure, seems a lot more like they'd just like to drive people out of open source.
I think most people learned about this bug from FFmpeg's actions, not Google's. Also, you are underestimating adversaries: Google spends quite a bit of money on this, but not a lot given their revenue, because their primary purpose is not finding security bugs. There are entities that are smaller than Google but derive almost all their money from finding exploits. Their results are broadly comparable but they are only publicized when they mess up.
If it was a rendering bug it would be a waste of time. But they also wouldn't have any pressure to fix it.
An exploit is different. It can affect anyone and is quite pertinent.
> so Google is actively spending money on making open source projects better and more secure
It looks like they are now starting to flood OSS with issues because "our AI tools are great", but don't want to spend a dime helping to fix those issues.
xkcd 2347
According to the ffmpeg maintainer's own website (fflabs.eu) Google is spending plenty of dimes helping to fix issues in ffmpeg. Certainly they're spending enough dimes for the maintainers to proudly display Google's logo on their site as a customer of theirs.
Here's ffmpeg's site: https://www.ffmpeg.org
I fail to see a single Google logo. I also didn't know that Google somehow had a contract with ffmpeg to be their customer.
Corporate Social Responsibility? The assumption is that the work is good for end users. I don't know if that's the case for the maintainers though.
The user is vulnerable while the problem is unfixed. Google publishing a vulnerability doesn't change the existence of the vulnerability. If Google can find it, so can others.
Making the vulnerability public makes it easy to find to exploit, but it also makes it easy to find to fix.
If it is so easy to fix, then why doesn't Google fix it? So far they've spent more effort in spreading knowledge about the vulnerability than fixing it, so I don't agree with your assessment that Google is not actively making the world worse here.
I didn't say it was easy to fix. I said a publication made it easy to find it, if someone wanted to fix something.
If you want to fix up old codecs in ffmpeg for fun, would you rather have a list of known broken codecs and what they're doing wrong; or would you rather have to find a broken codec first.
>If Google can find it, so can others.
What a strange sentence. Google can do a lot of things that nobody can do. The list of things that only Google, a handful of nation states, and a handful of Google-peers can do is probably even longer.
Sure, but running a fuzzer on ancient codecs isn't that special. I can't do it, but if I wanted to learn how, codecs would be a great place to start. (in fact, Google did some of their early fuzzing work in 2012-2014 on ffmpeg [1]) Media decoders have been the vector for how many zero interaction, high profile attacks lately? Media decoders were how many of the Macromedia Flash vulnerabilities? Codecs that haven't gotten any new media in decades but are enabled in default builds are a very good place to go looking for issues.
Google does have immense scale that makes some things easier. They can test and develop congestion control algorithms with world wide (ex-China) coverage. Only a handful of companies can do that; nation states probably can't. Google isn't all powerful either, they can't make Android updates really work even though it might be useful for them.
[1] https://security.googleblog.com/2014/01/ffmpeg-and-thousand-...
Nation-states are a very relevant part of the threat model.
> If Google can find it, so can others.
While true, only Google has Google's infrastructure; this presupposes that 100% of all published exploits would be findable.
you'd assume that a bad actor would have found the exploit and kept it hidden for their own use. To assume otherwise is fundamentally flawed security practice.
> If Google can find it, so can others.
Not really. It requires time, ergo money.
which bad actors would have more of, as they'd have a financial incentive to make use of the found vulnerabilities. White hats don't get anything in return (financially) - it's essentially charity work.
In this world and the alternate universe both, attackers can also use _un_published vulnerabilities because they have high incentive to do research. Keeping a bug secret does not prevent it from existing or from being exploited.
As clearly stated, most users of ffmpeg are unaware they are using it. Even if they knew about a vulnerability in ffmpeg, they wouldn't know they are affected.
Really, the burden is on those shipping products that depend on ffmpeg: they are the ones who have to fix the security issues for their customers. If Google is one of those companies, they should provide the fix in the given time.
But how are those companies supposed to know they need to do anything unless someone finds and publicly reports the issue in the first place? Surely we're not advocating for a world where every vendor downstream of the ffmpeg project independently discovers and patches security vulnerabilities without ever reporting the issues upstream right?
If they both funded vulnerability scanning and vulnerability fixing (if they don't want to do it in-house, they can sponsor the upstream team), which is to me the obvious "how", I am not sure why you believe there is only one way to do it.
It's about accountability! Who really gets to do it once those who ship it to customers care, is on them to figure out (though note that maintainers will have some burden to review, integrate and maintain the change anyway).
They regularly submit code and they buy consulting from the ffmpeg maintainers according to the maintainer's own website. It seems to me like they're already funding fixes in ffmpeg, and really everyone is just mad that this particular issue didn't come with a fix. Which is honestly not a great look for convincing corporations to invest resources into contributing to upstream. If regular patches and buying dev time from the maintainers isn't enough to avoid getting grief for "not contributing" then why bother spending that time and money in the first place?
I have about 100x as much sympathy for an open source project getting time to fix a security bug than I do a multibillion dollar company with nearly infinite resources essentially blackmailing a small team of developers like this. They could -easily- pay a dev to fix the bug and send the fix to ffmpeg.
Since when are bug reports blackmail? If some retro game enthusiast discovered this bug and made a blog post about it that went to the front page of HN, is that blackmail? If someone running a fuzzer found this bug and dumped a public bug report into github is that blackmail? What if google made this report privately, but didn't say anything about when they would make it public and then just went public at some arbitrary time in the future? How is "heads up, here's a bug we found, here's the reproduction steps for it, we'll file a public bug report on it soon" blackmail?
They could be, and the chances of that increase immensely once Google publishes it.
In my case, yes, but my pipeline is closed. Processes run on isolated instances that are terminated without delay as soon as the workflow ends. Even if uncaught fatal errors occur, janitor scripts run to ensure instances are terminated on a fast schedule. This isn't something running on my personal device with random content provided by some unknown someone on the interwebs.
So while this might be a high security risk because it possibly could allow RCE, the real-world risk is very low.
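The isolation described above can be approximated even at the process level. A minimal sketch, assuming a generic `cmd` for the processing step (real deployments would add container/VM isolation, seccomp, rlimits, and the janitor cleanup mentioned):

```python
import subprocess

def run_isolated(cmd, timeout_s=30):
    """Run an untrusted processing step in a child process with a hard
    timeout; on timeout the child is killed rather than left hanging.
    A simplified stand-in for instance-level isolation: a compromised or
    crashed decode step takes down only the worker, not the pipeline."""
    try:
        proc = subprocess.run(cmd, capture_output=True, timeout=timeout_s)
        return proc.returncode == 0
    except subprocess.TimeoutExpired:
        # Janitor behavior: subprocess.run kills the child on timeout.
        return False
```

The design point is that a memory-safety bug in a decoder then degrades to "one job failed" instead of "attacker runs code in the long-lived service."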
Sure but how.
Let's say that FFMPEG has a severity-10 CVE where a very easy stream can cause it to RCE. So what?
We are talking about software commonly deployed for end users to encode their own media. Something that rarely comes in untrusted forms. For an exploit to happen, you need a situation where an attacker gets an exploited media file out to people who commonly transcode it via FFMPEG. Not an easy task.
This sure does matter to the likes of google assuming they are using ffmpeg for their backend processing. It doesn't matter at all for just about anyone else.
You might as well tell me that `tar` has a CVE. That's great, but I don't generally go around tarring or untarring files I don't trust.
AIUI, (lib)ffmpeg is used by practically everything that does anything with video, including such definitely-security-sensitive things as Chrome, which people use to play untrusted content all the time.
Then maybe the Google chrome devs should submit a PR to ffmpeg.
Chrome devs frequently do just that, Chrome just doesn’t enable this codec.
Sure. And fund them.
hmm, didn't realize chrome was using ffmpeg in the background. That definitely makes it more dangerous than I supposed.
Looks like firefox does the same.
Firefox has moved some parsers to Rust: https://github.com/mozilla/mp4parse-rust
Firefox also does a lot of media decoding in a separate process.
Pretty much anything that has any video uses the library (incl. youtube)
Ffmpeg is a versatile toolkit used in a lot of different places.
I would be shocked if any company working with user generated video, from the likes of Zoom, TikTok, or YouTube down to small apps everywhere, did not have it in their pipeline somewhere.
There are alternatives such as gstreamer and proprietary options. I can’t give names, but can confirm at least two moderately sized startups that use gstreamer in their media pipeline instead of ffmpeg (and no, they don’t use gst-libav).
One because they are a rust shop and gstreamer is slightly better supported in that realm (due to an official binding), the other because they do complex transformations with the source streams at a basal level vs high-level batch transformations/transcoding.
There are certainly features and use cases where gstreamer is better fit than ffmpeg.
My point was it would be hard to imagine eschewing ffmpeg completely, not that there is no value for other tools and ffmpeg is better at everything. It is so versatile and ubiquitous it is hard to not use it somewhere.
In my experience there are almost always scenarios in the stack where throwing in ffmpeg for some non-core step is simpler and easier, even if there is no proper language binding.
From a security context that wouldn't matter; as long as it touches untrusted data, security vulnerabilities are a concern.
It would be surprising, not impossible, to forgo ffmpeg completely. It would be just like this site being written in Lisp: not something you would typically expect, but not impossible.
I wasn’t countering your point, I just wanted to add that there are alternatives (well, an alternative in the OSS sphere) that are viable and well used outside of ffmpeg despite its ubiquity.
Upload a video to YouTube or Vimeo. They almost certainly run it through ffmpeg.
ffmpeg is also megabytes of parsing code, whereas tar is barely a parser.
It would be surprising to find memory corruption in tar in 2025, but not in ffmpeg.
> On the other hand as an ffmpeg user do you care? Are you okay not being told a tool you're using has a vulnerability in it because the devs don't have time to fix it?
Yes, because publicly disclosing the vulnerability means someone will have enough information to exploit it. Without public disclosure, the chance of that is much lower.
If you use a trillion dollar AI to probe open source code in ways that no hacker could, you're kind of unearthing the vulnerabilities yourself if you disclose them.
This particular bug would be easy to find without any fancy expensive tools.
That is standard practice. It is considered irresponsible to not publicly disclose any vulnerability.
The X days is a concession to the developers that the public disclosure will be delayed to give them an opportunity to address the issue.
> That is standard practice.
It's standard practice for commercially-sponsored software, and it doesn't necessarily fit volunteer maintained software. You can't have the same expectations.
Vulnerabilities should be publicly disclosed. Both closed and open source software are scrutinized by the good and the bad people; sitting on vulnerabilities isn't good.
Consumers of closed source software have a pretty reasonable expectation that the creator will fix it in a timely manner. They paid money, and (generally) the creator shouldn't put the customer in a nasty place because of errors.
Consumers of open source software should have zero expectation that someone else will fix security issues. Individuals should understand this; it's part of the deal for us using software for free. Organizations that are making money off of the work of others should have the opposite of an expectation that any vulns are fixed. If they have or should have any concern about vulnerabilities in open source software, then they need to contribute to fixing the issue somehow. Could be submitting patches, paying a contractor or vendor to submit patches, paying a maintainer to submit patches, or contributing in some other way that betters the project. The contribution they pick needs to work well with the volunteers, because some of the ones I listed would absolutely be rejected by some projects -- but not by others.
The issue is that an org like Google, with its absolute mass of technical and financial resources, went looking for security vulnerabilities in open source software with the pretense of helping. But if Google (or whoever) doesn't finish the job, then they're being a piece of shit to volunteers. The rest of the job is reviewing the vulns by hand and figuring out patches that can be accepted with absolutely minimal friction.
To your point, the beginning of the expectation should be that vulns are disclosed, since otherwise we have known insecure software. The rest of the expectation is that you don't get to pretend to do a nice thing while _knowing_ that you're dumping more work on volunteers that you profit from.
In general, wasting the time of volunteers that you're benefiting from is rude.
Specifically, organizations profiting off of volunteer work and wasting their time makes them an extractive piece of shit.
Stop being a piece of shit, Google.
why are the standards and expectations different for Google vs an independent researcher? Just because they are richer doesn't mean they should be held to a standard that an independent researcher isn't.
The OSS maintainer has the responsibility to either fix, or declare they won't fix - both are appropriate actions, and they are free to make this choice. The consumer of OSS should have the right to know what vulns/issues exist in the package, so that they make as informed a decision as they can (such as adding defense in depth for vulns that the maintainers chooses not to fix).
They are different because the independent researchers don't make money off the projects that they investigate.
Google makes money off ffmpeg in general but not this part of the code. They're not getting someone else to write a patch that helps them make money, because google will just disable this codec if it wasn't already disabled in their builds.
Also in general Google does investigate software they don't make money off.
> Also in general Google does investigate software they don't make money off.
An organization of this size might actually have trouble making sure they really don't use code from that project. Or won't do so in the future.
> independent researchers don't make money off the projects that they investigate
but they make money off the reputational increase they earn for having their name attached to the investigation. Unless the investigation and report are anonymous and their name not attached (which could be true for some researchers), I'd say they're not doing charity.
That's a one-time bonus they get for discovering a bug, not from using the project on production. Google also gets this reward by the way. Therefore it's still imbalanced.
You disclose so that users can decide what mitigations to take. If there's a way to mitigate the issue without a fix from the developers the users deserve to know. Whether the developers have any obligation to fix the problem is up to the license of the software, the 90 day concession is to allow those developers who are obligated or just want to issue fixes to do so before details are released.
So no one should file public bug reports for open source software?
This is standard practice for Linux as well.
The entire conflict here is that norms about what's considered responsible were developed in a different context, where vulnerability reports were generated at a much lower rate and dedicated CVE-searching teams were much less common. FFmpeg says this was "AI generated bug reports on an obscure 1990s hobby codec"; if that's accurate (I have no reason to doubt it, just no time to go check), I tend to agree that it doesn't make sense to apply the standards that were developed for vulnerabilities like "malicious PNG file crashes the computer when loaded".
The codec is compiled in, enabled by default, and auto detected through file magic, so the fact that it is an obscure 1990s hobby codec does not in any way make the vulnerability less exploitable. At this point I think FFmpeg is being intentionally deceptive by constantly mentioning only the ancient obscure hobby status and not the fact that it’s on by default and autodetected. They have also rejected suggestions to turn obscure hobby codecs off by default, giving more priority to their goal of playing every media format ever than to security.
I think the discussion on what standard practice should be does need to be had. This seems to be throwing blame at people following the current standard.
If the obscure codec is not included by default, or cannot be triggered by any means other than being explicitly asked for, then it would be reasonable to tag it Won't Fix. If it can be triggered by other means, such as auto file-type detection on a renamed file, then it doesn't matter how obscure the feature is; the exploit would affect everyone.
What is the alternative to a time-limited embargo? I don't particularly like the idea of groups of people having exploits they have known about for ages that haven't been publicly disclosed. That is the kind of information that finds its way into the wrong hands.
Of course companies should financially support the developers of the software they depend upon. Many do this for OSS in the form of having a paid employee that works on the project.
Specifically, FFMPEG seems to have a problem in that much of its resource shortage comes from alienating contributors. This isn't isolated to just this bug report.
FFMPEG does autodetection of what is inside a file, the extension doesn't really matter. So it's trivial to construct a video file that's labelled .mp4 but is really using the vulnerable codec and triggers its payload upon playing it. (Given ffmpeg is also used to generate thumbnails in Windows if installed, IIRC, just having a trapped video file in a directory could be dangerous.)
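The principle of content-based detection can be illustrated with a small sketch. This is not ffmpeg's actual probing code, and the signature table below is a simplified, hypothetical subset; real probing scores dozens of formats and looks much deeper into the stream:

```python
# Sketch of content-based format detection: the file's leading bytes,
# not its extension, decide which demuxer/decoder gets invoked.
# Signature list is illustrative only, not ffmpeg's real probe tables.

MAGIC_SIGNATURES = [
    (b"\x1a\x45\xdf\xa3", "matroska/webm"),  # EBML header
    (b"RIFF", "riff (avi/wav)"),
    (b"OggS", "ogg"),
    (b"\x89PNG\r\n\x1a\n", "png"),
]

def sniff_format(data: bytes) -> str:
    """Return a format guess from leading bytes, ignoring any filename."""
    for magic, name in MAGIC_SIGNATURES:
        if data.startswith(magic):
            return name
    # ISO BMFF (mp4/mov) puts its marker at offset 4: box size + 'ftyp'
    if len(data) >= 8 and data[4:8] == b"ftyp":
        return "mp4/mov"
    return "unknown"

# A file named video.mp4 that actually holds Ogg data is detected as Ogg:
print(sniff_format(b"OggS" + b"\x00" * 64))  # → ogg
```

The takeaway: renaming a malicious file to `.mp4` does nothing to stop it from being routed to whatever decoder matches its actual contents.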
> CVE-searching teams
Silly nitpick, but you search for vulnerabilities not CVEs. CVE is something that may or may not be assigned to track a vulnerability after it has been discovered.
Most security issues probably get patched without a CVE ever being issued.
It is accurate. This is a codec that was added for archival and digital preservation purposes. It’s like adding a Unicode block for some obscure 4000 year old dead language that we have a scant half dozen examples of writing.
Here's the question:
Why is Google deliberately running an AI process to find these bugs if they're just going to dump them all on the FFmpeg team to fix?
They have the option to pay someone to fix them.
They also have the option to not spend resources finding the bugs in the first place.
If they think these are so damn important to find that it's worth devoting those resources to, then they can damn well pay for fixing them too.
Or they can shut the hell up and let FFmpeg do its thing in the way that has kept it one of the https://xkcd.com/2347/ pieces of everyone's infrastructure for over 2 decades.
Google is a significant contributor to ffmpeg by way of VP9/AV1/AV2. It's not like it's a gaping maw of open-source abuse, the company generally provides real value to the OSS ecosystem at an even lower level than ffmpeg (which is saying a lot, ffmpeg is pretty in-the-weeds already).
As to why they bother finding these bugs... it's because that's how Google does things. You don't wait for something to break or be exploited; you load your compiler up with sanitizers and go hunting for bugs.
Yeah this one is kind of trivial, but if the bug-finding infrastructure is already set up it would be even more stupid if Google just sat on it.
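The "go hunting" workflow is essentially fuzzing: mutate valid inputs, throw them at the parser, and treat anything other than a clean rejection as a bug. A toy sketch, where `parse_header` is a hypothetical stand-in for a real decoder (real fuzzers like libFuzzer/OSS-Fuzz are coverage-guided rather than purely random):

```python
import random

def parse_header(data: bytes) -> int:
    """Toy parser with a planted bug: it trusts a length byte from the input."""
    if len(data) < 4 or data[:2] != b"HX":
        raise ValueError("bad magic")
    declared_len = data[2]
    # Bug: no bounds check -- reads past the buffer when declared_len
    # exceeds the real payload size (IndexError here models a wild read).
    return data[3 + declared_len]

def fuzz(seed: bytes, rounds: int = 2000) -> list:
    """Randomly mutate a valid seed and collect inputs that crash the parser."""
    rng = random.Random(0)  # fixed seed for reproducibility
    crashers = []
    for _ in range(rounds):
        mutated = bytearray(seed)
        mutated[rng.randrange(len(mutated))] = rng.randrange(256)
        try:
            parse_header(bytes(mutated))
        except ValueError:
            pass  # graceful rejection of bad input is fine
        except Exception:
            crashers.append(bytes(mutated))  # any other exception is a bug
    return crashers

# Seed is a valid input: magic "HX", length 4, one spare byte, 4-byte payload.
found = fuzz(b"HX\x04\x00ABCD")
print(f"found {len(found)} crashing inputs")
```

Sanitizers (ASan, UBSan) play the role of the IndexError here: they turn silent out-of-bounds reads in C code into loud, immediate crashes the fuzzer can detect.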
Then they can damn well pay for fixing them too.
So to be clear, if Google doesn't include patches, you would rather they don't make bugs they find in software public so other people can fix them?
That is, you'd rather a world where Google either does know about a vulnerability and refuses to tell anyone, or just doesn't look for them at all, over a world where google looks for them and lets people know they exist, but doesn't submit their own fix for it.
Why do you want that world? Why do you want corporations to reduce the already meager amounts of work and resources they put into open source software even further?
That's not a choice. You can decide if Google files bugs like this or not, you can't force them to fix them.
I can't force them. Doesn't mean I can't criticize them for not doing it.
Many people are already developing and fixing FFmpeg.
How many people are actively looking for bugs? Google, and then the other guys who don't share their findings but perhaps sell them to the highest bidder. Seems like Google is doing some good work by just picking big, popular open source projects and seeing if they have bugs, even if they don't intend to fix them. And I doubt Google was actually using the LucasArts video format their latest findings were about.
However, in my mind the discussion whether Google should be developing FFmpeg (beyond the codec support mentioned elsewhere in the thread) or other OSS projects is completely separate from whether they should be finding bugs in them. I believe most everyone would agree they should. They are helping OSS in other ways though, e.g. https://itsfoss.gitlab.io/post/google-sponsors-1-million-to-... .
I would love to see Google contribute here, but I think that's a different issue.
Are the bug reports accurate? If so, then they are contributing just as if I found them and sent a bug report, I'd be contributing. Of course a PR that fixes the bug is much better than just a report, but reports have value, too.
The alternative is to leave it unfound, which is not a better alternative in my opinion. It's still there and potentially exploitable even when unreported.
The actual real alternative is that the ffmpeg maintainers quit, just like the libxml2 maintainer did.
A lot of these core pieces of infrastructure are maintained by one to three middle-aged engineers in their free time, for nothing. Meanwhile, billion dollar companies use the software everywhere, and often give nothing back except bug reports and occasional license violations.
I mean, I love "responsible disclosure." But the only result of billion dollar corporations drowning a couple of unpaid engineers in bug reports is that the engineers will walk away and leave the code 100% unmaintained.
And yeah, part of the problem here is that C-based data parsers and codecs are almost always horrendously insecure. We could rewrite it all in Rust (and I have in fact rewritten one obscure codec in Rust) or WUFFS. But again, who's going to pay for that?
The other alternative if the ffmpeg developers change the text on their "about" screen from "Security is a high priority and code review is always done with security in mind. Though due to the very large amounts of code touching untrusted data security issues are unavoidable and thus we provide as quick as possible updates to our last stable releases when new security issues are found." to something like "Security is a best-effort priority. Code review is always done with security in mind. Due to the very large amounts of code touching untrusted data security issues are unavoidable. We attempt to provide updates to our last stable releases when new security issues are found, but make no guarantees as to how long this may take. Priority will be given to reports including a proof-of-concept exploit and a patch that fixes the security bug."
Then point to the "PoC + Patch or GTFO" sign when reports come in. If you use a library with a "NO WARRANTY" license clause in an application where you're responsible for failures, it's on you to fix or mitigate the issues, not on the library authors.
But FFmpeg does not have the resources to fix these at the speed Google is finding them.
It's just not possible.
So Google is dedicating resources to finding these bugs
and feeding them to bad actors.
Bad actors who might, hypothetically have had the information before, but definitely do once Google publicizes them.
You are talking about an ideal situation; we are talking about a real situation that is happening in the real world right now, wherein the option of Google reports bug > FFmpeg fixes bug simply does not exist at the scale Google is doing it at.
A solution definitely ought to be found. Google putting up a few millionths of a percent of their revenue or so towards fixing the bugs they find in ffmpeg would be the ideal solution here, certainly. Yet it seems unlikely to actually occur.
I think the far more likely result of all the complaints is that Google simply disengages from ffmpeg completely and stops doing any security work on it. That would be quite bad for the security of the project. If Google can trivially find bugs fast enough to overwhelm the ffmpeg developers, bad actors can presumably find those same vulnerabilities too. And if they know those vulnerabilities exist, but that Google has stopped searching for them at the ffmpeg project's request, they would have extremely high motivation to go looking in a place where they can be almost certain of finding unreported/unknown vulnerabilities. The likely result is a lot more 0-day attacks involving ffmpeg, which I do not think anyone regards as a good outcome. (Personally, I would consider "Google publishes a bunch of vulnerabilities ffmpeg hasn't fixed so that everyone knows about them" a much preferable outcome.)
Now, you might consider that possibility fine - after all, the ffmpeg developers have no obligation to work on the project, and thus to e.g. fix any vulnerabilities in it. But if that's fine, then simply ignoring the reports Google currently makes is presumably also fine, no ?
I really don't understand this whole us-vs-them discourse. Why should it be only Google fixing the bugs? If the volunteers aren't enough, maybe more volunteers can step up and help FFMpeg, via direct patches or by directly lobbying companies to fund the project.
In my opinion, if the problem is money and they cannot raise enough, then somebody should help them with that.
If widely deployed infrastructure software is so full of vulnerabilities that its maintainers can't fix them as fast as they're found, maybe it shouldn't be widely deployed, or they shouldn't be its maintainers. Disabling codecs in the default build that haven't been used in 30 years might be a good move, for example.
Either way, users need to know about the vulnerabilities. That way, they can make an informed tradeoff between, for example, disabling the LucasArts Smush codec in their copy of ffmpeg, and being vulnerable to this hole (and probably many others like it).
> they shouldn't be its maintainers.
I mean, yes, the ffmpeg maintainers are very likely to decide this on their own, abandoning the project entirely. This is already happening for quite a few core open source projects that are used by multiple billion-dollar companies and deployed to billions of users.
A lot of the projects probably should be retired and rewritten in safer system languages. But rewriting all of the widely-used projects suffering from these issues would likely cost hundreds of millions of dollars.
The alternative is that maybe some of the billion-dollar companies start making lists of all the software they ship to billions of users, and hire some paid maintainers through the Linux or Apache Foundations.
> abandoning the project entirely
that is a good outcome, because then the people dependent on such a project would plausibly pay a new set of maintainers.
We'll see. Video codec experts won't materialize out of thin air just because there's money.
> But FFmpeg does not have the resources to fix these at the speed Google is finding them.
Google submitting a patch does not address this issue. The main work for maintainers here is making the decision whether or not they want to disable this codec, whether or not Google submits a patch to do that is completely immaterial.
> Why is Google deliberately running an AI process to find these bugs if they're just going to dump them all on the FFmpeg team to fix?
This is called fuzzing and it has been standard practice for over a decade. Nobody has had any problem with it until FFmpeg decided they didn’t like that AI filed a report against them and applied the (again, mostly standard at this point) disclosure deadline. FWIW, nobody would have likely cared except they went on their Twitter to complain, so now everyone has an opinion on it.
> They also have the option to not spend resources finding the bugs in the first place.
The Copenhagen interpretation of security bugs: if you don’t look for it, it doesn’t exist and is not a problem.
> My takeaway from the article was not that the report was a problem, but a change in approach from Google that they’d disclose publicly after X days, regardless of if the project had a chance to fix it.
That is not an accurate description? Project Zero was using a 90 day disclosure policy from the start, so for over a decade.
What changed[0] in 2025 is that they disclose earlier than 90 days that there is an issue, but not what the issue is. And actually, from [1] it does not look like that trial policy was applied to ffmpeg.
> To me its okay to “demand” from a for profit company (eg google) to fix an issue fast. Because they have ressources. But to “demand” that an oss project fix something with a certain (possibly tight) timeframe.. well I’m sure you better than me, that that’s not who volunteering works
You clearly know that no actual demands or even requests for a fix were made, hence the scare quotes. But given you know it, why call it a "demand"?
[0] https://googleprojectzero.blogspot.com/2025/07/reporting-tra..., discussed at https://news.ycombinator.com/item?id=44724287
[1] https://googleprojectzero.blogspot.com/p/reporting-transpare...
When you publicize a vulnerability you know someone doesn't have the capacity to fix according to the requested timeline, you are simultaneously increasing the visibility of the vulnerability and name-calling the maintainers. All of this increases the pressure on the maintainers, and it's fair to call that a "demand" (quotes-included). Note that we are talking about humans who will only have their motivation dwindle: it's easy to say that they should be thick-skinned and ignore issues they can't objectively fix in a timely manner, but it's demoralizing to be called out like that when everyone knows you can't do it, and you are generally doing your best.
It's similar to someone cooking a meal for you, and you go on and complain about every little thing that could have been better instead of at least saying "thank you"!
Here, Google is doing the responsible work of reporting vulnerabilities. But any company productizing ffmpeg usage (Google included) should sponsor a security team to resolve issues in high profile projects like these too.
Sure, the problem is that Google is a behemoth and their internal org structure does not cater to this scenario, but this is what the complaint is about: make your internal teams do the right thing by both reporting, but also helping fix the issue with hands-on work. Who'd argue against halving their vulnerability finding budget and using the other half to fund a security team that fixes highest priority vulnerabilities instead?
> When you publicize a vulnerability you know someone doesn't have the capacity to fix according to the requested timeline
My understanding is that the bug in question was fixed about 100 times faster than Project Zero's standard disclosure timeline. I don't know what vulnerability report your scenario is referring to, but it certainly is not this one.
> and name-calling the maintainers
Except Google did not "name-call the maintainers" or anything even remotely resembling that. You just made it up, just like GP made up the "demands". It's pretty telling that all these supposed misdeeds are just total fabrications.
"When you publicize... you are ... name-calling": you are taking partial quotes out of context, where I claimed that publicizing is effectively doing something else.
> When you publicize a vulnerability you know someone doesn't have the capacity to fix according to the requested timeline, you are simultaneously increasing the visibility of the vulnerability and name-calling the maintainers.
So how long should all bug reporters wait before filing public bugs against open source projects? What about closed source projects? Anyone who works in software knows to ship software is to always have way more things to do than time to do it in. By this logic, we should never make bug reports public until the software maintainers (whether OSS, Apple or Microsoft) has a fix ready. Instead of "with enough eyeballs, all bugs are shallow" the new policy going forward I guess will be "with enough blindfolds, all bugs are low priority".
It's funny you come up with that suggestion when I clearly offer a different solution: "make your internal teams do the right thing by both reporting, but also helping fix the issue with hands-on work".
It's a call not to stop reporting, but to equally invest in fixing these.
Hands on work like filing a detailed bug report with suspected line numbers, reproduction code and likely causes? Look, I get it. It would be nice if Google had filed a patch with the bug. But also not every bug report is going to get a patch with it, nor should that be the sort of expectation we have. It's hard enough getting corporations to contribute time and resources to open source projects as it is, to set an expectation that the only acceptable corporate contribution to open source is full patches for any bug reports is just going to make it that much harder to get anything out of them.
In the end, Google does submit patches and code to ffmpeg, they also buy consulting from the ffmpeg maintainers. And here they did some security testing and filed a detailed and useful bug report. But because they didn't file a patch with the bug report, we're dragging them through the mud. And for what? When another corporation looks at what Google does do, and what the response this bug report has gotten them, which do you think is the most likely lesson learned?
1) "We should invest equally in reporting and patching bugs in our open source dependencies"
2) "We should shut the hell up and shouldn't tell anyone else about bugs and vulnerabilities we discover, because even if you regularly contribute patches and money to the project, that won't be good enough. Our name and reputation will get dragged for having the audacity to file a detailed bug report without also filing a patch."
And if they choose 2), would they stop to consider what happens if everybody does that? What's their fallback plan once maintainers dump the project?
All I am saying is that you should be as mindful to open source maintainers as you are to the people at companies.
> would they stop to consider what happens if everybody does that?
It's almost like bitching about the "free labor" open source projects are getting from their users, especially when that labor is of good quality and comes from a user that is actively contributing both code and money to the project, is a losing strategy for open source fans and maintainers.
> All I am saying is that you should be as mindful to open source maintainers as you are to the people at companies.
And all I'm saying is there is nothing "un-mindful" about reporting real bugs to an open source project, whether that report is public or not. And especially when that report is well crafted and actionable. If this report were for something that wasn't a bug, if it were a low quality "foo is broke, plz to fix" report with no actionable information, or if the report actually came with demands for responses and commitment timelines, then it would be a different matter. But ffmpeg runs a public bug tracker. To say then that making public bug reports is somehow disrespectful of the maintainers is ridiculous.
The fact that details of the issue _will_ be disclosed publicly is an implicit threat. Sure it's not an explicit threat, but it's definitely an implicit threat. So the demand, too, is implicit: fix this before we disclose publicly, or else your vulnerability will be public knowledge.
You should not be threatened by the fact that your software has security holes in it being made public knowledge. If you are, then your goals are fundamentally misaligned with making secure software.
I don't think that you understand the point of the delayed public disclosure. If it wasn't a threat, then there'd be no need to delay -- it would be publicly disclosed immediately.
Publishing the vulnerability is a demand to fix it. It threatens to cause harm to the reputation of the maintainer if left unfixed.
No, publishing the vulnerability is the right thing to do for a secure world because anyone can find this stuff including nation states that weaponize it. This is a public service. Giving the dev a 90 day pre warn is a courtesy.
Expecting a reporter to fix your security vulnerabilities for you is entitlement.
If your reputation is harmed by your vulnerable software, then fix the bugs. They didn't create the hazard, they discovered it. You created it, and acting like you're entitled to the free labor of those that gave you the heads up is insane, and trying to extort them for their labor is even worse.
This is all true (maybe not the extortion being worse, hard to say), but it doesn't change the fact that publishing the CVE is a demand to fix it.
No, it is a request to fix it. How the maintainer feels about it is up to them.
A request to fix it would be privately telling the maintainers about the issue. Publicly releasing it is a demand.
This is not how filing issues against open source software works.
You don't get to decide that lmao. Telling everyone this project doesn't care about security if they ignore my CVE is obviously a demand, and your traditions cannot change that.
> Telling everyone this project doesnt care about security
Google did nothing like this.
If people infer that a hypothetical project doesn't care about security because they didn't fix anything, then they're right. It's not google's fault they're factually bad at security. Making someone look bad is not always a bad action.
Drawing attention to that decision by publicly reporting a bug is not a demand for what the decision will be. I could imagine malicious attention-getting but a bug report isn't it.
Bullshit. That is exactly what google is doing. Demands aren’t necessarily malicious, but they’re certainly annoying for the person being demanded.
If merely publishing a bug they found, and doing nothing else, would qualify by your definition as "telling everyone this project doesn't care about security", then there is absolutely nothing wrong with doing that "telling".
If the FFmpeg team does not want people to file bug reports, then they should close their public issue tracker. This is not something that I decided but a choice that they made.
CVE != vulnerability.
These two terms are not interchangeable.
Most vulnerabilities never have CVEs issued.
No, it is a notice to others that your software as-is is insecure in some way. The advance notice is, again, a courtesy in case you want to fix it.
What you do with the notice as a dev is up to you, but responsible ones would fix it without throwing a tantrum.
Devs need to stop thinking of themselves as the main character and things get a lot more reasonable.
Nobody is demanding anything. Google is just disclosing issues.
This opens up transparency of ffmpeg’s security posture, giving others the chance to fix it themselves, isolate where it’s run or build on entirely new foundations.
All this assuming the reports are in fact pointing to true security issues. Not talking about AI-slop reports.
From TFA:
> The latest episode was sparked after a Google AI agent found an especially obscure bug in FFmpeg. How obscure? This “medium impact issue in ffmpeg,” which the FFmpeg developers did patch, is “an issue with decoding LucasArts Smush codec, specifically the first 10-20 frames of Rebel Assault 2, a game from 1995.”
This doesn't feel like a medium-severity bug, and I think "Perhaps reconsider the severity" is a polite reading. I get that it's a bug either way, but this leaves me with a vague feeling of the ffmpeg maintainer's time being abused.
On the other hand, if the bug doesn't get filed, it doesn't get fixed. Sure, Google could spend some resources on fixing it themselves, but even if they did, would we not likely see a complaint about Google flooding the maintainers with pull requests for obscure 30-year-old codec bugs? And isn't a PR even more of a demand on the maintainer's time, because now there's actual code that needs to be reviewed, tests that need to be run, and another person waiting for a response on the other end?
"Given enough eyeballs, every bug is shallow," right? Well, Google just contributed some eyeballs, and now a bug has been made shallow. So what's the actual problem here? If some retro game enthusiast had filed the same bug report, would that be "abusing" the maintainer's time? I would think not, but then we're saying that a bug report can be "abusive" simply by virtue of who submits it. And I'm really not sure "don't assign employees to research bugs in your open source dependencies, and if you do, certainly don't submit bug reports on what you find, because that's abusive" is the message we want to be sending to corporations that are using these projects.
The vulnerability in question is being severely underestimated. There are many other comments in this thread going into detail. UAF = RCE.
Use-after-free bugs (such as the vulnerability in question, https://issuetracker.google.com/issues/440183164) usually can be exploited to result in remote code execution, but not always. It wouldn't be prudent to bet that this case is one of the exceptions, of course.
I think it’s exceedingly reasonable for a maintainer to dispute the severity of a vulnerability, and to ultimately decide the severity.
Maintainers rarely understand or agree with the severity of a bug until an exploit beats them over the head publicly in a way they are unable to sweep under the rug.
On the other hand, reporters giving a CVE a 10 for a bug in an obscure configuration option that is disabled by default in most deployments is a bit over the top. I've seen security issues reported as world-ending that had been there for years without anyone managing to produce an exploit PoC.
Yes, I think a defining aspect of vulnerability disclosure is how perverted the incentives structure is for all parties, including maintainers.
If it causes a crash, that's denial of service, so medium would be appropriate. But it's true that medium CVEs aren't that bad in most situations.
This bug can most likely lead to RCE, proving that it can’t is generally a very difficult problem.
There’s absolutely no reason to assume that it does not lead to RCE, and certainly no reason whatsoever to invest significant time to prove that one way or the other unless you make a living selling exploits.
If you need this kind of security, build ffmpeg with only decoders you find acceptable
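For anyone who hasn't done this: ffmpeg's configure script supports exactly this kind of whittling down. A rough sketch, where the component names shown are illustrative choices rather than a recommendation; check `./configure --list-decoders` and friends in your source tree for the real names:

```shell
# From the ffmpeg source tree: disable every component, then re-enable
# only the protocols, demuxers, parsers, and decoders you actually trust.
./configure \
  --disable-everything \
  --enable-protocol=file \
  --enable-demuxer=mov \
  --enable-parser=h264 \
  --enable-decoder=h264 \
  --enable-decoder=aac
make -j"$(nproc)"
```

The resulting binary simply contains no code for the LucasArts Smush decoder (or anything else you didn't enable), so this entire class of report stops applying to your build.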
That quote felt pretty disingenuous. OK, so the proof of concept was found in some minor asset of an old video game. But is it an exploitable vulnerability? If so, you will quickly find it on modern-day scummy advertising networks. Will it still be "medium severity"? Not clear either way to me, from the quote.
> But appearance is operative: a security issue is something that I (as the maintainer) would need to fix regardless of who reports it
I think this is the heart of the issue and it boils off all of the unimportant details.
If it's a real, serious issue, you want to know about it and you want to fix it. Regardless of who reports it.
If it's a real, but unimportant issue, you probably at least want to track it, but aren't worried about disclosure. Regardless of who reports it.
If it's invalid, or AI slop, you probably just want to close/ignore it. Regardless of who reports it.
It seems entirely irrelevant who is reporting these issues. As a software project, ultimately you make the judgment call about what bugs you fix and what ones you don't.
But if it's a real, serious issue without an easy resolution, who is the burden on? It's not that the maintainers wouldn't fix bugs if they easily could. FFmpeg is provided "as is"[0], so everyone should be responsible for their side of things. It's not like the maintainers dumped their software on every computer and forced people to use it. Google should be responsible for their own security. I'm not adamant that Google should share the patch with others, but it would hardly be an imposition to Google if they did. And yes, I really do intend that you could replace Google with any party, big or small, commercial or noncommercial. It's painful, but no one has any inherent obligations to provide others with software in most circumstances.
[0] More or less. It seems the actual license language is shied away from. Is there a meaningful difference?
But if no bug report is filed, then only google gets the ability to "be responsible for their own security", everyone else either has to independently discover and then patch the bug themselves, or wait until upstream discovers the bug.
In no reasonable reading of the situation can I see how anything Google has done here has made things worse:
1) Beforehand, the bug existed, but was either known by no one, or known only by people exploiting it. The maintainers weren't actively looking at or for this particular bug, and so it may have continued to go undiscovered for another 20 years.
2) Then Google was the only one that knew about it (modulo exploiters) and were the only people that could take any steps to protect themselves. The maintainers still don't know so everyone else would remain unprotected until they discover it independently.
3) Now everyone knows about the issue, and are now informed to take whatever actions they deem appropriate to protect themselves. The maintainers know and can choose (or not) to patch the issue, remove the codec or any number of other steps including deciding it's too low priority in their list of todos and advising concerned people to disable/compile it out if they are worried.
#3 is objectively the better situation for everyone except people who would exploit the issue. Would it be even better if Google made a patch and submitted that too? Sure it would. But that doesn't make what they have done worthless or harmful. And more than that, there's nothing that says they can't or won't do that. Submitting a bug report and submitting a fix don't need to happen at the same time.
It's hard enough convincing corporations to spend any resources at all on contributing to upstream. Dragging them through the mud for not submitting patches in addition to any bug reports they file is, in my estimation, less likely to get you more patches, and more likely to just get you fewer resources spent on looking for bugs in the first place.
I wasn't really thinking about the disclosure part, although I probably should have. I was focusing on the patching side of things. I think you're correct that disclosure is good, but in that case, I think it increases the burden of those with resources to collaborate to produce a patch.
Well, it's open source and built by volunteers, so nobody is obligated to fix it. If FFmpeg volunteers don't want to fix it or don't have the time/bandwidth to fix it, then they won't fix it. Like any other bug or CVE in any other open source project. The burden doesn't necessarily need to be on anyone.
They aren't obligated to fix CVEs until one is exploited; then, suddenly, they very much are obligated to fix them, and their image as FLOSS maintainers and as a project is very much tarnished.
If they are unable to fix CVEs in a timely manner, then it is very reasonable for people to judge them (accurately!) as being unable to fix CVEs in a timely manner. Maybe some people might even decide to use other projects or chip in to help out! However, it is dishonest to hide reports and pretend like bugs are being fixed on time when they are not.
I would like them to publicly state that there are not enough hours in their day to fix this, therefore it will have to wait until they get to it.
Please don’t use “CVE” as a stand-in for “vulnerability”, you know much better than this :)
Most vulnerabilities never get CVEs even when they’re patched.
I'm only using it because the comment I replied to did; of course I agree that most vulnerabilities are patched without one.
While this feels like it’s perhaps bordering on somewhat silly nitpicking, the trend of conflating vulnerabilities with CVEs is probably at least mildly harmful. It’s probably good to at least try not to let people get away with this all the time.
The way many (perhaps most) people think of CVEs is badly broken. The CVE system is deeply unreliable, resulting in CVEs being issued for things that are neither bugs nor vulnerabilities while at the same time most things that probably should have CVEs assigned do not have them. Not to even mention the ridiculous mess that is CVSS.
I’m just ranting though. You know all this, almost certainly much better than me.
I don't think anyone can force them to fix a CVE. Software is provided as-is. Can't be more straightforward than that.
This is technically true, but not plausible in the social sense.
Why not? I can't force you to do your volunteer work.
So what is Google gonna do if security fixes don't happen in time and the project takes a "reputational hit"? Fork it and maintain it themselves? Why not send in patches instead?
Maintaining a reputation might be enough reward for you, but not everyone is happy to work for free for a billion-dollar corporation breathing down their necks. It's puzzling to me why people keep defending their free lunch.
If ffmpeg maintainers cannot keep up, downstream customers should know so they can help.
FFmpeg is developed almost entirely by volunteers. We have no "customers".
There are people who use and depend on ffmpeg. Maintainers seem to go out of their way to solve issues these folks face. If you don't care, then ignore the bug reports and force them to solve their own problems by contributing.
These people are not customers though. The maintainers do their best, but overall the project seems to be understaffed, so customers (for example Google, as it seems they occasionally chip in) get priority.
Then you and your security friends will create lots of FUD about FFmpeg being "insecure" with lots of red and the word "critical" everywhere.
Why complain about pressure from customers then?
if you've ever read about codependency, "need" is a relative term.
codependency is when someone accepts too much responsibility, in particular responsibility for someone else or other things out of their control.
the answer is to have a "healthy neutrality".
The issue at hand is that Google has a policy of making the security issue public regardless of whether a fix has been produced.
Typically disclosures happen after a fix exists.
This isn’t true at all in my experience: disclosures happen on a timeline (60 to 90 days is common), with extensions provided as a courtesy based on remediation complexity and other case-by-case considerations. I’ve been party to plenty of advisories that went public without a fix because the upstream wasn’t interested in providing one.
For OSS projects or commercial ones? I feel it's not the same when one has a trillion in market cap and the other has a few unpaid maintainers.
The norm is the same for both. Perhaps there’s an argument that it should be longer for OSS maintainers, but OSS maintainers also have different levers at their disposal: they can just say “no, I don’t care” because nobody’s paying them. A company can’t do that, at least not without a financial hit.
To my original comment, the underlying problem here IMO is wanting to have it both ways: you can adhere to common notions of security for reputational reasons, or you can exercise your right as a maintainer to say “I don’t care,” but you can’t do both.
OTOH they could disclose security issues AND send patches to close them.
I wonder if language plays a large role in the burden imposed on the maintainers.
Sure, but this is about Google funding FFmpeg not providing bug fixes.
True - if we're talking about actual security bugs, not the "CVE slop"
P.S. I'm an open source maintainer myself, and I used to think, "oh, OSS developers should just stop whining and fix stuff." Fast forward a few years, and now I'm buried under false-positive "reports" and overwhelmed by non-coding work (deleting issue spam, triage, etc.)
P.P.S. What's worse, when your library is a security component the pressure's even higher - one misplaced LOC could break thousands of apps (we literally have a million downloads on NuGet [1])
[1]: https://www.nuget.org/packages/AspNetSaml
Please speak openly about that on your dev page. Manage expectations.
I feel this comment is far too shallow a take. I would expect that you know better than most of HN exactly how much of a reputation security has as a cost center. Google uses ffmpeg internally; how many millions would they have to spend if they were required to not only create but also maintain ffmpeg themselves? How significant would that cost be at Google's scale?
I don't agree that the following framing is accurate, but I can mention it because you've already said the important part (about how this issue exists, and merely knowing about it doesn't create required work). But here, by announcing it and registering a CVE, Google is starting the clock. By some metrics it was already running, but the reputational risk clearly was not. This does change priorities, and requires an urgent context switch. Neither is a free action, especially not within FOSS.
As someone who believes that everyone, individuals and groups alike, has a responsibility to contribute fairly, I would frame it as Google's behavior giving the appearance of weaponizing their cost center externally, given that this is something Google could easily fix; instead they shirked that responsibility onto unfunded volunteers.
To be clear, I think Google (Apple, Microsoft, etc.) can and should fund more of the OSS they depend on. But this doesn’t change the fact that vulnerability reports don’t create work per se, they just reveal work that the project can choose to act on or not.
Hopefully, until that changes, more people with influence will keep saying it, and keep saying it until it stops being true.
So thank you for saying the important thing too! :)
I see you didn't read the article.
The problem isn't Google reporting vulnerabilities. It's Google using AI to find obscure bugs that affect 2 people on the planet, then making a CVE out of it, without putting any effort into fixing it themselves or funding the project. What are the ffmpeg maintainers supposed to do about this? It's a complete waste of everybody's time.
> The latest episode was sparked after a Google AI agent found an especially obscure bug in FFmpeg. How obscure? This “medium impact issue in ffmpeg,” which the FFmpeg developers did patch, is “an issue with decoding LucasArts Smush codec, specifically the first 10-20 frames of Rebel Assault 2, a game from 1995.”
I don't think that's an accurate description of the full scope of the problem. The codec itself is mostly unused, but the code path can be reached through the format probing that ffmpeg performs, so a maliciously crafted payload (e.g. in any run of ffmpeg that touches user input without disabling this codec) could possibly be exploited.
Why doesn't Google simply build their own ffmpeg from source without the codec?
They almost certainly do. But it's also in the public interest to responsibly disclose vulnerabilities in components that don't directly affect you.
> that affect 2 people on the planet
Wrong. The original files only affect 2 people. A malicious file could be anywhere.
Do you remember when certain sequences of letters could crash iphones? The solution was not "only two people are likely to ever type that, minimum priority". Because people started spreading it on purpose.
Mark it low, and estimate when it can be fixed (3 or 4 months from now). Advise Google of it. Google does not need to disclose this bug that fast. If I do something on the side or as a hobby and a big corp comes by to tell me to hurry up, I feel inclined to say no thanks.
Close the bug, if they don’t care. They don’t want to do that because then people will yell at them for not caring.
The problem lies in the fact that these companies are generating work for volunteers on a different time-scale and binding them to it by giving them X days before disclosing vulnerabilities. No one wants their project to have security vulnerabilities that might affect a lot of users, which creates pressure in dealing with them.
The open source model is broken in this regard, licenses need to address revenue and impose fees on these companies, which can be used as bug bounties. Game engines do this and so should projects like FFMPEG, etc. The details are complex of course, but the current status quo is abusing people's good will.
Fully on the FFmpeg team's side; many companies' approach to FOSS is to engage only when it sounds good for their marketing karma, and to leech otherwise.
Most of them would just pirate in the old days, and most FOSS licences give them a clear conscience to behave as always.
Google is, at no cost to FFMPEG:
1) dedicating compute resources to continuously fuzzing the entire project
2) dedicating engineering resources to validating the results and creating accurate and well-informed bug reports (in this case, a seriously underestimated security issue)
3) additionally for codecs that Google likely does not even internally use or compile, purely for the greater good of FFMPEG's user base
Needless to say, while I agree Google has a penny to spare to fund FFMPEG, and should (although they already contribute), I do not agree with funding this maintainer.
Google is:
- choosing to do this of their own volition
- effectively just using their resources to throw bug reports over the wall unprompted.
- benefiting from the bugs getting fixed, but not contributing to them.
> - benefiting from the bugs getting fixed, but not contributing to them.
I would be very surprised if Google builds this codec when they build ffmpeg. If you run A/V codecs (like ffmpeg) in bulk, the first thing to do is sandbox the hell out of it. The second thing you do is strictly limit the containers and codecs you'll decode. Not very many people need to decode movies from old LucasArts games; for video codecs, you probably only want MPEG-1 through MPEG-4, H.26x, VP8, VP9, and AV1. And you'll want to have fuzzed those decoders as best you can too.
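That second step can also be enforced at runtime with a stock build: libavformat has a `codec_whitelist` input option (alongside `format_whitelist` and `protocol_whitelist`), so an invocation along these lines should refuse to decode any stream outside the allowlist. This is a sketch from memory with a hypothetical `input.mp4`; verify the option names against your ffmpeg version's documentation:

```shell
# Input options must come before -i; they control how the file is opened.
# Streams whose decoder is not on the allowlist are rejected, not decoded.
ffmpeg \
  -codec_whitelist h264,aac \
  -i input.mp4 \
  -f null -
```

This doesn't remove the vulnerable code from the binary the way a restricted build does, but it keeps hostile inputs from steering you into decoders you never intended to run.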
Nobody should be surprised that there's a security problem in this ancient decoder. Many of the eclectic codecs were written to mimic how the decoders that shipped with content were written, and most of those codecs were written assuming they were decoding a known good file, because why wouldn't they be. There's no shame, that's just how it is... there's too much to proactively investigate, so someone doing fuzzing and writing excellent reports that include diagnosis, specific location of the errors, and a way to reproduce are providing a valuable contribution.
Could they contribute more? Sure. But even if they don't, they've contributed something of value. If the maintainers can't or don't want to address it, that'd be reasonable too.
FFmpeg and a thousand fixes:
https://j00ru.vexillium.org/2014/01/ffmpeg-and-the-tale-of-a...
"While reading about the 4xm demuxer vulnerability, we thought that we could help FFmpeg eliminate many potential low-hanging problems from the code by making use of the Google fleet and fuzzing infrastructure we already had in place"
Google is a contributor to FFMPEG.
FFMPEG, at no cost to Google, provided a core piece of their infrastructure for multiple multi-billion dollar product lines.
"At no cost to Google" seems difficult to substantiate, given that multiple sources indicate that Google is sponsoring FFmpeg both with engineering resources (for codec development) and cold hard cash (delivered to the FFmpeg core team via their consulting outfit[1]).
This is excellent, to be clear. But it's not compatible with the yarn currently being spun of a purely extractive relationship.
[1]: https://fflabs.eu
Yes, according to the license selected by ffmpeg. And Google, according to this license selected by ffmpeg, paid them nothing. And then did some additional work, beneficial to ffmpeg.
And this is why Google contributes back.
Then they can surely also provide a pull request for said CVE.
They could, but there is really no requirement on them to do so. The security flaw was discovered by Google, but it was not created by them.
Equally there is no requirement on ffmpeg to fix these CVEs nor any other.
And, of course, there is no requirement on end-users to run software from projects which do not consider untrusted-input-validation bugs to be high priority.
> They could, but there is really no requirement on them to do so.
I see this sort of sentiment daily. The sentiment that only what is strictly legal or required is what matters.
Sometimes, you know, you have to recognise that there are social norms and being a good person matters and has intrinsic value. A society only governed by what the written law of the land explicitly states is a dystopia worse than hell.
What's "strictly legal or required" of Google here is absolutely nothing. They didn't have to do any auditing or bug hunting. They certainly didn't have to validate or create a proper bug report, and there's no requirement whatsoever that they tell anyone about it at all. They could have found the bug, found it was being actively exploited, made their own internal patch and sat quietly by while other people remained vulnerable. All of that is well within what is "strictly legal or required".
Google did more than what is "strictly legal or required", and what they did was submit a good and valid bug report. But for some reason we're mad because they didn't do even more. Why? The world is objectively a better place for having this bug report, at least now people know there's something to address.
> Google did more than what is "strictly legal or required", and what they did was submit a good and valid bug report. But for some reason we're mad because they didn't do even more. Why?

The Copenhagen Interpretation of Ethics is annoyingly prevalent (https://forum.effectivealtruism.org/posts/QXpxioWSQcNuNnNTy/...)
"I noticed your window was broken, so I took the liberty of helping you, working for free, by posting a sign that says UNLOCKED WINDOW HERE with exact details on how it was broken. I did lots of gratis work for you which you do not need to do yourself now. The world is safer now. Why are you not grateful?"
I mean if we’re going to do sloppy analogies, a bug report for open source software as widely used as ffmpeg is more like “I noticed the trees in the back corner of your free apple orchard are producing apples with trace amounts of heavy metals. I did some literal digging and sent some soil off to the labs and it looks like your back corner field may be contaminated. Here’s a heads up about that, and also just FYI in 90 days, if you haven’t been able to get your soil remediated, I’m going to put up a sign so that people can know to avoid those apples and not get poisoned by your free orchard while it’s getting fixed.”
Yes, this is a good illustration of why the Copenhagen Interpretation of Ethics makes sense, when FFmpeg is allowed to criticise the manner of Google's actions.
You're correct, but it's the social norms -- or at least, the norms as I perceive them -- that I am talking about here.
If you find yourself with potentially serious security bugs in your repo, then the social norm should be for you to take ownership of that because, well, it's your repo.
The socially unacceptable activity here should be treating security issues as an irritation, or a problem outside your control. If you're a maintainer, and you find yourself overwhelmed by genuine CVE reports, then it might be worth reflecting on the root cause of that. What ffmpeg did here was to shoot the messenger, which is non-normative.
It seems to me that they are not treating the security issue as an irritation, but instead the manner at which it was presented to them that is the problem.
What about the presentation was wrong? What is the correct presentation for a pure bug report?
Justice is more than just following laws.
> And, of course, there is no requirement on end-users to run software from projects which do not consider untrusted-input-validation bugs to be high priority.
What's this even saying?
Then they're free to fork it and never use the upstream again.
Where do you draw the line? Do you want Google to just not inspect any projects that it can't fully commit to maintaining?
Providing a real CVE is a contribution, not a burden. The ffmpeg folks can ignore it, since by all indications it's pretty minor.
Personally, I want the $3.5 Trillion company to do more. So the line should be somewhere else.
So you don't have a line, you just want to move the goalposts and keep moving them?
It is my understanding that the commenters in FFMPEG's favor believe that Google is doing a disservice by finding these security vulnerabilities, as they require volunteer burden to patch, and that they should either:
1) allow the vulnerabilities to remain undiscovered & unpatched zero-days (stop submitting "slop" CVEs.)
2) supply the patches (though I'm sure the goalposts will move to the maintainers being upset that they have to merge them.)
3) fund the project (including the maintainers who clearly misunderstand the severity of the vulnerabilities and describe them as "slop") (no thank you.)
This entire thread defies logic.
location of goalposts scales with market cap
What is the mission of Project Zero? Is it to build a vulnerability database, or is it to fix vulnerabilities?
If it's to fix vulnerabilities, it seems within reason to expect a patch. If the reason Google isn't sending a patch is because they truly think the maintainers can fix it better, then that seems fair. But if Google isn't sending a patch because fixing vulns "doesn't scale" then that's some pretty weak sauce.
Maybe part of the solution is creating a separate low priority queue for bug reports from groups that could fix it but chose not to.
It's neither. WP says:
> After finding a number of flaws in software used by many end-users while researching other problems, such as the critical "Heartbleed" vulnerability, Google decided to form a full-time team dedicated to finding such vulnerabilities, not only in Google software but any software used by its users.
It did that but it did not decide to form a team dedicated to fixing issues in software that it uses? That's the misallocation of funds that's at play here.
The ideal outcome is that Project Zero sends its discoveries off to a team who triage and develop patches for the significant vulnerabilities, and then the communication with the project is a much more helpful one.
The security and privacy org is much larger than just GPZ, but the security and privacy org does not have a general remit to fix all vulns everywhere. GPZ is also not the only part of the org that finds bugs in open source software without being generally obligated to fix them. Projects like OSS-Fuzz are similar.
Google could staff a team that is responsible for remediating vulns in open source software that doesn't actually affect any of Google's products. Lord knows they've got enough money. I'd prefer it if they did that. But I don't really see the reasoning behind why they must do this or scrap all vuln research on open source software.
Project Zero is an offensive security team. Its job is to find vulnerabilities.
In their own words:
> Our mission is to make the discovery and exploitation of security vulnerabilities more difficult, and to significantly improve the safety and security of the Internet for everyone.
> We perform vulnerability research on popular software like mobile operating systems, web browsers, and open source libraries. We use the results from this research to patch serious security vulnerabilities, to improve our understanding of how exploit-based attacks work, and to drive long-term structural improvements to security.
Correct.
To build on that - if it "doesn't scale" for one of the wealthiest companies in the world, it certainly doesn't scale for a volunteer project...
If you are deliberately shipping insecure software, you should stop doing that. In ffmpeg's case, that means either patching the bug, or disabling the codec. They refused to do the latter because they were proud of being able to support an obscure codec. That puts the onus on them to fix the bug in it.
I can tell you with 100% certainty that there are undiscovered vulnerabilities in the Linux kernel right now. Does that mean they should stop shipping?
I do think that contributing fuzzing and quality bug reports can be beneficial to a project, but it's just human nature that when someone says "you go ahead and do the work, I'll stand here and criticize", people get angry.
Rather than going off and digging up ten time bombs which all start counting down together, how about digging up one and defusing it? Or even just contributing a bit of funding towards the team of people working for free to defuse them?
If Google really wants to improve the software quality of the open source ecosystem, the best thing they could do is solve the funding problem. Not a lot of people set out to intentionally write insecure code. The only case that immediately comes to mind is the xz backdoor attempt, which again had a root cause of too few maintainers. I think figuring out a way to get constructive resources to these projects would be a much more impressive way to contribute.
This is a company that takes a lot of pride in being the absolute best of the best. Maybe what they're doing can be justified in some way, but I see why maintainers are feeling bullied. Is Google really being excellent here?
You will note the Linux kernel is not crying on Twitter when Google submits bugs to them. They did long ago, then realized that the bugs Google reported often showed up exploited in the wild when they didn't fix them, and mostly decided that the continuous fuzzing was actually a good thing. This is despite not all the bugs being fixed on time (there are always new OSS-Fuzz bugs in the queue for fixing).
The Linux kernel instead decided to become a CVE authority, so that they have control over what is officially reported as a CVE.
There are other CVE numbering authorities you can report a vulnerability to and apply for a CVE, or appeal, but this does possibly have a chilling effect if the vendor's CNA refuses valid vulns. (Like with MS in https://news.ycombinator.com/item?id=44957454 )
There's an appeals process: https://www.cve.org/Resources/General/Policies/CVE-Record-Di...
And of course CVE is not the only numbering system, there's OSV DB, GHSA, notcve.org etc.
> this does possibly have a chilling effect if the vendor's CNA refuses valid vulns
The Linux kernel went in the opposite direction: Every bugfix that looks like it could be relevant to security gets a CVE[1]. The number of CVEs has increased significantly since it became a CNA.
[1]: https://lwn.net/Articles/978711/
Thanks. They seem to be pretty proactive indeed if you look at the feed: https://lore.kernel.org/linux-cve-announce/
>If Google really wants to improve the software quality of the open source ecosystem, the best thing they could do is solve the funding problem.
Google is not a monolith. If you asked the board, or the shareholders of Google, what they thought of open source software quality, they would say they don't give a rat's ass about it. Someone within Google who does care has been given very limited resources to deal with the problem, and is approaching it in the most efficient way they can.
>it's just human nature that when someone says "you go ahead and do the work, I'll stand here and criticize", people get angry
Bug reports are not criticism, they are in fact contributions, and the human thing to do when someone contributes to your project is to thank them.
>This is a company that takes a lot of pride in being the absolute best of the best.
There was an era when people actually believed that Google was the best of the best, rather than saying it as a rhetorical trick, and during that era they never would have dreamed of making such self-centered demands of Google. This Project Zero business comes across as the last vestige of a dying culture within Google. Why do people feel the need to be so antagonistic towards it?
>I can tell you with 100% certainty that there are undiscovered vulnerabilities in the Linux kernel right now. Does that mean they should stop shipping?
Hence why I qualified "deliberately".
Nah, ffmpeg volunteers don't owe you or Google anything. They are giving you free access to their open project.
The ffmpeg authors aren't "shipping" anything; they're giving away something they make as a hobby with an explicit disclaimer of any kind of fitness for purpose. If someone needs something else, they can pay an engineer to make it for them.
This has nothing to do with payment. Not deliberately infecting your users with vulnerabilities is simply the right thing to do. Giving something away for free doesn't absolve you of certain basic ethical responsibilities.
They're not deliberately infecting users with anything. They're effectively saying "here's example code showing how to deal with these video formats. NOTE THAT THESE ARE EXAMPLES THAT I WROTE FOR FUN. THEY ARE NOT MEANT FOR SERIOUS USE AND MAY NOT HANDLE ALL CORNER CASES SAFELY. THIS SHOULD BE OBVIOUS SINCE WE HAVE NO COMMERCIAL RELATIONSHIP AND YOU'RE DOWNLOADING RANDOM CODE FROM SOMEONE YOU DON'T KNOW ON THE INTERNET".
If someone goes on to use that code for serious purposes, that's on them. They were explicitly warned that this is not production commercial code. It's weekend hobby work. There's no ethical obligation to make your hobby code suitable for production use before you share it. People are allowed to write and share programs for fun.
Deliberate malware would be something like an inbuilt trojan that exfiltrates data (e.g. many commercial applications). Completely different.
They are not effectively saying that. The way they talk about the library everywhere else makes it clear that they do expect serious use. Disclaimers in the license don't override that, especially when 99% of software has a disclaimer like that. Those words are there for legal reasons only.
If they wanted to market ffmpeg as a toy project only, not to be trusted, they could do that, but they are not doing that.
Except the very idea that they owe you anything is so absurd that even if they had a contract document stating that they'd do work for you, they still wouldn't have an obligation to do so because society has decided that contracts without some consideration from both sides are not valid. Similarly, even if something you buy comes with a piece of paper saying they don't owe you anything if it breaks, the law generally says that's not true. Because you paid for it.
But they don't say they warrant their work. They have a notice reminding you that you are receiving something for free, and that thing comes with no support, and is not meant to be fit for any particular use you might be thinking of, and that if you want support/help fulfilling some purpose, you can pay someone (maybe even them if you'd like) for that service. Because the way the world works is that as a general principle, other people don't owe you something for nothing. This is not just some legal mumbo jumbo. This is how life works for everyone. It's clear that they're not being malicious (they're not distributing a virus or something), and that's the most you can expect from them.
Computer security is always contextual, but as a general rule, if you're going to be accepting random input from unknown parties, you should have an expert that knows how to do that safely. And as mentioned elsewhere in these comments, such an expert would already be compiling out codecs they don't need and running the processing in a sandboxed environment to mitigate any issues. These days even software written in-house is run in sandboxed environments with minimal permissions when competent professionals are making things. That's just standard practice.
So they should be proud that they support obscure codecs, and by default the onus is on no one to ensure it's free from bugs. If an engineer needs to make a processing pipeline, the onus is always on them to do that correctly. If they want to use a free, unsupported hobby tool as part of their serious engineering project, it's on them to know how to manage any risks involved with that decision. Making good decisions here is literally their job.
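The containment practice described above can be sketched in a few lines. This is only an illustration of the defense-in-depth idea, not a complete sandbox: real deployments layer OS-level isolation (seccomp, namespaces, containers) on top of these basic limits, and the `transcode_untrusted` helper and its arguments are made up for this example:

```python
import subprocess
import sys
import tempfile

def transcode_untrusted(cmd, timeout_s=30):
    """Run a media tool on untrusted input with basic containment.

    cmd is a full argv list, e.g. ["ffmpeg", "-i", "in.bin", "out.mp4"].
    A real pipeline would add OS-level sandboxing on top of this.
    """
    with tempfile.TemporaryDirectory() as workdir:
        return subprocess.run(
            cmd,
            cwd=workdir,                     # confine scratch files to a throwaway dir
            env={"PATH": "/usr/bin:/bin"},   # minimal environment, no secrets leaked
            stdin=subprocess.DEVNULL,        # nothing extra to feed the decoder
            capture_output=True,             # don't let output hit our terminal raw
            timeout=timeout_s,               # kill runaway or wedged decoders
        )

# Demonstration with a harmless command standing in for ffmpeg:
result = transcode_untrusted([sys.executable, "-c", "print('decoded')"])
print(result.returncode)
```

Combined with compiling out unneeded codecs, this is the sort of baseline a competent engineer would already have in place when feeding random files from the internet into any parser.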
> the very idea that they owe you anything
All I'm asking for right here is consistency about whether the library is mostly secure. The ethical requirement is to follow through on your claims and implications, while making claims and implications is completely optional.
> Computer security is always contextual, but as a general rule, if you're going to be accepting random input from unknown parties, you should have an expert that knows how to do that safely. And as mentioned elsewhere in these comments, such an expert would already be compiling out codecs they don't need and running the processing in a sandboxed environment to mitigate any issues.
Sandboxing is great defense in depth but most software should not require sandboxing. And expecting everyone to have an expert tweaking compilation is not realistic. Defaults matter, and security expectations need to be established between the site, the documentation, and the defaults, not left as a footgun for only experts to avoid.
The library probably is mostly secure, and it might even be the best library out there for what it does. That still leaves them with no ethical requirement at all.
People are allowed to make secure, robust software for fun. They can take pride in how good of a job they do at that. They can correctly point out that their software is the best. That still leaves them with no obligations at all for having shared their project for free.
If you are not an expert in hardening computers, don't run random untrusted inputs through it, or pay someone to deliver a turnkey hardened system to you. That someone might be Adobe selling their codecs/processing tools, or it might be an individual or a company like Redhat that just customizes ffmpeg for you. In any case, if you're not paying someone, you should be grateful for whatever goodwill you get, and if you don't like it, you can immediately get a full refund. You don't even have to ask.
The person doing serious things in a professional context is always the one with the obligation to do them correctly. When I was at IBM, we used exactly 1 external library (for very early processor initialization) and 1 firmware blob in the product I worked on, and they were paid deliverables from hardware vendors. We also paid for our compiler. Everything else (kernel, drivers, firmware, tools) was in-house. If companies want to use random free code they found on the Internet without any kind of contract in place, that's up to them.
> The library probably is mostly secure
It is, if they keep fixing bugs like this. Under the status quo, everything is fine with their actions; they don't need to do anything they aren't already doing.
If they decide they don't want to fix bugs like this, I would say they have the ethical obligation to make it clear that the software is no longer mostly secure. This is quite easy to accomplish. It's not a significant burden in any way.
Basically, if they want to go the less-secure route, I want it to be true that they're "effectively saying" that all caps text you wrote earlier. That's all. A two minute edit to their front page would be enough. They could edit the text that currently says "A complete, cross-platform solution to record, convert and stream audio and video." I'll even personally commit $10 to pay for those two minutes of labor, if they decide to go that route.
The idea that things can be right and wrong seems to be lost on you.
> Providing a real CVE is a contribution, not a burden.
Isn't a real CVE (like any bug report) both a contribution and a burden?
Only if you subscribe to the "if we stop testing, the number of cases will drop!" philosophy.
It's not fair to classify it as a burden; it existed anyways, but now you have the work/issue identified.
> Providing a real CVE is a contribution, not a burden. The ffmpeg folks can ignore it, since by all indications it's pretty minor.
Re-read the article. There's CVEs and then there's CVEs. This is the former, and they're shoving tons of those down the throats of unpaid volunteers while contributing nothing back.
What Google's effectively doing is like a food safety inspection company going to the local food bank to get the food that they operate their corporate cafeteria on just to save a buck, then calling the health department on a monthly basis to report any and all health violations they think they might have seen, while contributing nothing of help back to the food bank.
I have read the article. The expectation for a tool like ffmpeg is that regardless of what kind of file you put into it, it safely handles it.
This is an actual bug in submitted code. It doesn't matter that it's for some obscure codec, it's technically maintained by the ffmpeg project and is fair game for vulnerability reports.
Given that Google is also a major contributor to open-source video, this is more like a food manufacturer making sure that grocery stores are following health code when they stock their food.
Mind you, the grocery store has no obligation to listen to them in this metaphor and is free to just let the report/CVE sit for a while.
I am unsure why this untruth is being continuously parroted. It is false.
This is exploitable on a majority of systems as the codec is enabled by default. This is a CVE that is being severely underestimated.
This is why many have warned against things like MIT licence. Yes, it gives you source code and does easily get incorporated into a lot of projects but it comes at the cost of potential abuse.
Yes, GPL 3 is a lot ideologically but it was trying to limit excessive leeching.
Now that I have opened the flood gates of a 20 year old debate, time to walk away.
Google Project Zero just looks for security issues in popular open source packages, regardless of whether Google itself even uses those packages.
So I'm not sure what GPLv3 really has to do with it in this case. If it was under a "No billion dollar company allowed" non-free-but-source-available license, this same thing would have happened if the project was popular enough for Project Zero to have looked at it for security issues.
The difference is that Google does use it, though. They use it heavily. All of us in the video industry do - Google, Amazon, Disney, Sony, Viacom, or whoever. Companies you may have never heard of build it into their solutions that are used by big networks and other streaming services, too.
Right, Google absolutely should fund ffmpeg.
But opening security issues here is not related to that in any way. It's an obscure file format Google definitely doesn't use, the security issue is irrelevant to Google's usages of it.
The critique would make sense if Google was asking ffmpeg to implement something that Google wanted without sending a patch. But they don't actually care about this one; they aren't actually asking for it to be fixed for their benefit. They are sending ffmpeg notice of a security issue that only affects people who are not Google.
Google absolutely does fund ffmpeg via SPI.
"and Google provided substantial donations to SPI's general fund".
The amounts don't appear to be public (and what is enough!?)
Opening a security issue is not the problem. A public disclosure so soon when there are so many machine-assisted reports for such obscure issues is the problem.
If Google wants to force a faster turnaround on the fixes, they can send the reports with patches or they can pay for prioritization.
Three months is "soon"? What do you think is reasonable?
And like so many posters in this thread, you seem to be under the impression that Google needed this fixed on some specific timeline. In reality the fix timeline, or even a total lack of a fix, makes no difference to them. They almost certainly already disable these kinds of codecs in their build. They reported this for the good of the ecosystem and the millions of users who were vulnerable.
Google does not "want this fixed", this isn't a bug report from a team using ffmpeg, it's a security analysis from a whitehat security project.
I think really if there's all these machine generated meaningless security reports, wasting time with that sounds like a very sensible complaint, but then they should point at that junk as the problem.
But for the specific CVE discussed it really looks to me like they are doing everything right: it's a real, default-configuration exploitable issue, they reported it and ffmpeg didn't fix or ask for any extension then it gets publicly disclosed after 90 days per a standard security disclosure policy.
What in GPL3 or MIT mandates that Google fix this bug and submit a PR or simply sends a bug report and walks away? I don't see how this applies at all.
AGPL, with no CLA that lets the owners relicense. Then we'll see if the using corporation fully believes in open source.
There's a reason Google turned into year 2000 Microsoft "it's viral!" re. the AGPL. They're less able to ignore the intent of the license and lock away their changes.
There is nothing whatsoever that the GPL would do to change this situation. Bringing up the permissive license debate is a non-sequitur here.
ffmpeg is already LGPL / GPLv2. How does the license choice factor into this at all?
As much as I hate to be on Google's side I think they're doing a reasonable thing here. Valid bug reports are valuable contributions on their own, and disclosing security issues is standard practice. Not disclosing them doesn't help anyone. Security through obscurity is not security. If neither FFMPEG nor Google dedicate resources to fixing the issue in 90 days, making it public at least ensures that it gets visibility and thus has a higher chance to get fixed by a third party.
I'm sure Google could (and probably should) do even more to help, but FFMPEG directing social media rage at a company for contributing to their project is a bone-headed move. There are countless non-Google companies relying on FFMPEG that do much less for the project, and a shit show like this is certainly not going to encourage them to get involved.
As someone who worked as a software engineer at Google on a service that heavily depended on FFmpeg, it's absurd that Google posts security bugs (which have the obvious potential outcome of driving more free work) vs just paying an engineer to fix the bug.
I promise they are spending more on extra compute for resiliency and redundancy for FFmpeg issues than it would cost for a single SWE to just write a fix and then shepherd it through the FFmpeg approval process.
As someone who was on a project that stalled for a year because our patchset wasn't accepted by a different open source project (not Linux either), I can tell you from experience that it's not as easy as folks here make it out to be. Some maintainers (and Googlers) really want you to study at their mountaintop monastery before your code is worthy, and scrutiny is even higher now due to AI, as we can see from the complaints about this bug report. Now, I've merged enough open source patches on my personal time to know that most projects aren't like that, but based on this interaction, I seriously wonder if Google's patch would've been accepted without incident.
Maybe AmaGoogSoft deserves this, but then what's the threshold? If I'm in charge of Zoom or Discord and one of my engineers finds a bug, should I let them report it and risk a public blow-up? Or does my company's revenue need to be below $1B? $100M? This just poisons the well for everyone.
Bonus comment: I was present for conversations about how Google should just write an internal version because of all the stability issues, but that work would never get prioritized or be considered valuable because it wouldn't get anyone promoted (to be fair, given how widely FFmpeg is used, it would have gotten an L4 or L5 promoted, but it would have been a near sisyphean task over years to get to the point where you could demonstrate the ridiculously high XXm-XXXm returns that would come from just helping to improve FFmpeg).
That was exactly my thought. To be fair, I probably end up thinking that because this article is written in this trashy dumbass style, which portrays it as "Google bug reports vs. ffmpeg bug fixes", which is simply unfair: as you've said, a bug report is a contribution, not some kind of a demand. That being said, I kinda do understand if someone from ffmpeg said something snarky about that on Twitter, since if Google (of all things) as an organization sees it as valuable to contribute by sending bug reports, it surely isn't less feasible (logistically or economically) for them to also work on a patch than it is for random people within the ffmpeg mailing list itself.
I get the idea of publicly disclosing security issues to large well funded companies that need to be incentivized to fix them. But I think open source has a good argument that in terms of risk reward tradeoff, publicly disclosing these for small resource constrained open source project probably creates a lot more risk than reward.
In addition to your point, it seems obvious that disclosure policy for FOSS should be “when patch available” and not static X days. The security issue should certainly be disclosed - when its responsible to do so.
Now, if Google or whoever really feels like fixing fast is so important, then they could very well contribute by submitting a patch along with their issue report.
Then everybody wins.
> ...then they could very well contribute by submitting a patch along with their issue report.
I don't want to discourage anyone from submitting patches, but that does not necessarily remove all (or even the bulk of) the work from the maintainers. As someone who has received numerous patches to multimedia libraries from security researchers, they still need review, they often have to be rewritten, and most importantly, the issue must be understood by someone with the appropriate domain knowledge and context to know if the patch merely papers over the symptoms or resolves the underlying issue, whether the solution breaks anything else, and whether or not there might be more, similar issues lurking. It is hard for someone not deeply involved in the project to do all of those things.
> it seems obvious that disclosure policy for FOSS should be “when patch available” and not static X days
This is very far from obvious. If Google doesn't feel like prioritising a critical issue, it remains irresponsible not to warn other users of the same library.
If that’s the case why give the OSS project any time to fix at all before public disclosure? They should just publish immediately, no? Warn other users asap.
Because it gives maintainers a chance to fix the issue, which they’ll do if they feel it is a priority. Google does not decide your priorities for you, they just give you an option to make their report a priority if you so choose.
Why do you think it has to be all or nothing? They are both reasonable concerns. That's why reasonable disclosure windows are usually short but not zero.
Full (immediate) disclosure, where no time is given to anyone to do anything before the vulnerability is publicly disclosed, was historically the default, yes. Coordinated vulnerability disclosure (or "responsible disclosure" as many call it) only exists because the security researchers that practice it believe it is a more effective way of minimizing how much the vulnerability might be exploited before it is fixed.
Part of the problem is that many of the issues are not really critical, no?
Unless the maintainers are incompetent or uncooperative, this does not feel like a good strategy. It is a good strategy on Google's side because it is easier for them to manage.
> In addition to your point, it seems obvious that disclosure policy for FOSS should be “when patch available” and not static X days.
So when the xz backdoor was discovered, you think it would have been better to sit on that quietly and try to both wrest control of upstream away from the upstream maintainers and wait until all the downstream projects had reverted the changes in their copies before making that public? Personally I'm glad that went public early. Yes there is a tradeoff between speed of public disclosure and publicity for a vulnerability, but ultimately a vulnerability is a vulnerability and people are better off knowing there's a problem than hoping that only the good guys know about it. If a Debian bug starts tee-ing all my network traffic to the CCP and the NSA, I'd rather know about it before a patch is available, at least that way I can decide to shut down my Debian boxes.
The XZ backdoor is not a bug but a malicious payload inserted by malicious actors. The security vulnerability would immediately have been used, as it was created by attackers.
This bug is almost certainly too obscure to be found and exploited in the time it takes ffmpeg to produce a fix. On the other hand, this vuln being public so soon means any attacker is now free to develop their exploit before a fix is available.
If Google's goal is security, this vulnerability should only be disclosed after it's fixed or after a reasonable time (which, according to an ffmpeg dev, 90 days is not, because they receive too many reports from Google).
A bug is a bug, regardless of the intent of the insertion. You have no idea if this bug was or wasn't intentionally inserted. It's of course very likely that it wasn't, but you don't and can't know that, especially given that malicious bug insertion is going to be designed to look innocent and have plausible deniability. Likewise, you don't know that the use of the XZ backdoor was imminent. For all you know the intent was to let it sit for a release or two, maybe with an eye towards waiting for it to appear in a particular down stream target, or just to make it harder to identify the source. Yes, just like it is unlikely that the ffmpeg bug was intentional, it's also unlikely the xz backdoor was intended to be a sleeper vulnerability.
But ultimately that's my point. You as an individual do not know who else has access or information about the bug/vulnerability you have found, nor do you have any insight into how quickly they intend to exploit that if they do know about it. So the right thing to do when you find a vulnerability is to make it public so that people can begin mitigating it. Private disclosure periods exist because they recognize there is an inherent tradeoff and asymmetry in making the information public and having effective remediations. So the disclosure period attempts to strike a balance, taking the risk that the bug is known and being actively exploited for the benefit of closing the gap between public knowledge and remediation. But inherently it is a risk that the bug reporter and the project maintainers are forcing on other people, which is why the end goal must ALWAYS be public disclosure sooner rather than later.
A 25-year-old bug in software is not the same as a backdoor (not a bug, a full-on backdoor). The bug is so old that if someone put it there intentionally, well, congrats on the 25-year-old 0day.
Meanwhile the XZ backdoor was 100% meant to be used. I didn't say when, and that doesn't matter: there is a malicious actor with the knowledge to exploit it. We can't say the same regarding a bug in a 1998 codec that was found by extensive fuzzing, and without an obvious exploitation path.
Now, should it be patched? Absolutely. But should the patch be done asap at the cost of other, maybe more important, security patches? Maybe, maybe not. Not all bugs are security vulns, and not all security vulns are exploitable.
> publicly disclosing these for small resource constrained open source project probably creates a lot more risk than reward.
You can never be sure that you're the only one in the world that has discovered or will discover a vulnerability, especially if the vulnerability can be found by an LLM. If you keep a vulnerability a secret, then you're leaving open a known opportunity for criminals and spying governments to find a zero day, maybe even a decade from now.
For this one in particular: AFAIK, since the codec is enabled by default, anyone who processes a maliciously crafted .mp4 file with ffmpeg is vulnerable. Being an open-source project, ffmpeg has no obligation to provide me secure software or to patch known vulnerabilities. But publicly disclosing those vulnerabilities means that I can take steps to protect myself (such as disabling this obscure niche codec that I'm literally never going to use), without any pressure on ffmpeg to do any work at all. The fact that ffmpeg commits themselves to fixing known vulnerabilities is commendable, and I appreciate them for that, but they're the ones volunteering to do that -- they don't owe it to anyone. Open-source maintainers always have the right to ignore a bug report; it's not an obligation to do work unless they make it one.
Vulnerability research is itself a form of contribution to open source -- a highly specialized and much more expensive form of contribution than contributing code. FFmpeg has a point that companies should be better about funding and contributing to open-source projects that they rely on, but telling security researchers that their highly valuable contribution is not welcome because it's not enough is absurd, and is itself an example of making ridiculous demands for free work from a volunteer in the open-source community. It sends the message that white-hat security research is not welcome, which is a deterrent to future researchers from ethically finding and disclosing vulnerabilities in the future.
As an FFmpeg user, I am better off in a world where Google disclosed this vulnerability -- regardless of whether they, FFmpeg, or anyone else wrote a patch -- because a vulnerability I know about is less dangerous than one I don't know about.
But if open source is reliant on public contributors to fix things, then the bug should be open so anyone can take a stab at fixing it, rather than relying on the closed group of maintainers
> publicly disclosing these for small resource constrained open source project probably creates a lot more risk than reward.
Not publicly disclosing it also carries risk. Library users get the wrong impression that the library has no vulnerabilities, while numerous reported bugs never appear publicly due to that FOSS disclosure policy.
You are missing the tiny little fact that apparently a large portion of infosec people are of the opinion that insecure software must not exist. At any cost. No shades of gray.
A bunch of people who make era-defining software for free. A labor of love.
Another bunch of people who make era-defining software where they extract everything they can. From customers, transactionally. From the first bunch, pure extraction (slavery, anyone?).
> (slavery, anyone?)
It’s hard to take any comment seriously that tries to use “slavery” for situations where nobody is forced to do anything for anyone.
[flagged]
Not how the terms slavery and taxation are usually defined, no.
If you choose to reduce them to such a level, ignoring all their differences and focusing on some carefully termed similarities, you could make the case that they're the same for that specific definition, I suppose.
Anyone comparing normal adulthood stuff to slavery needs to spend some time reading some history books.
Anyone who actually read some history books would know slavery was considered "normal stuff" until it wasn't.
So, no answer?
No we can't use such strong words here, theft is more appropriate
Irrespective of what Google does, security research is still useful for all of us.
They could adopt a more flexible policy for FOSS though.
Or they could contribute solutions to said bugs? Its not like they would distract that much from their bottom line
Exactly. The call-out is not "please stop doing security research". It is, "if you have a lot of money to spend on security research, please spend some of it on discovering the bugs, and some on fixing them (or paying us to fix them), instead of all of it on discovering bugs too fast for us to fix them in time".
Google is a major contributor to open-source video, to the point where it would not be viable without them.
[flagged]
Look, I know you're being snarky, but YES. All of the viable open-source video codecs of the past 10 years would not have happened without Google. Not just for technical reasons, but for expensive patent-related legal reasons too.
Given that ffmpeg is an open-source video transcoding tool, I don't think you can easily just dismiss this as "big company abuses open source."
The ffmpeg devs are volunteers or paid to work on specific parts of the tool. That's why they're unimpressed. What Google is doing here is pretty reasonable.
I don't think ffmpeg is terribly affected by whether a codec is patent-encumbered or not
It would certainly be a less useful tool if all the videos it produced got you legal threats every time you tried to share them :)
Is it? I’ve gotten nothing but headaches from these automated CVE-seeking teams.
You got lower chances of getting hacked by a random file on the internet. At Project Zero level they're also not CVE seeking - it doesn't even matter at that scale, it's not an independent trying to become known.
I have yet to see one on any project I’ve been attached to that was actually exploitable under real circumstances. But the CVE hunting teams treat them all as if they were.
You should honestly consider not responding if you are unaware of Project Zero.
TFA is about Project Zero getting uppity about an unexploitable non-issue in ffmpeg.
Project Zero hasn't reported any vulnerabilities in any software I maintain. Lots of other security groups have, some well respected as well, but to my knowledge none of these "outside" reports were actual vulnerabilities when analyzed in context.
You are welcome to view the report however you like, but a world where an easily reproducible OOB read and UAF in the default configuration is an "unexploitable non-issue" is not reality.
For a codec that isn't configured by default, and only used and maintained by a hobbyist video game content preservation group. Yeah it's a non-issue.
> a codec that isn't configured by default
Where did you get that idea?
It's used by exploit authors, too.
It's as useful as brute forcing one of your neighbor's 100 online passwords every day and writing it on the door of a random supermarket.
It's hard to find an easier good vs evil distinction than between Google and literally anybody else.
What is happening to hacker news? I'm not a fan of Google but this discourse is so tribal and reductive.
Microsoft ♥ Linux?
Vim vs. Emacs
Facebook?
Probably a tie!
> era-defining software for free
chill, nobody knows what ffmpeg is
Is this sarcasm? While it may be true that my mother does not know what ffmpeg is I'm almost positive she interacts with stuff that uses it literally every single day.
...every media post on IG/FB/X/YT/news sites/AI.
So does a Siemens transformer, but it's era defining.
Why is there such a weird toxic empathy around ffmpeg?
Why do you shit on it? This software is literally holding the near entirety of video transcoding *worldwide* on its shoulders.
Good on them, but can they and their groupies be a bit more chill when facing valid criticism.
If this wasn't Google but a lone developer, by now he'd be doxxed, fired, and receiving death threats. It's not the first time ffmpeg has lashed out like this.
Here's a thread by Google's head of security that notes the ways they've contributed to FFmpeg over the years:
https://x.com/argvee/status/1986194852669964528
To help those without access to X, the PR thread linked appears to give 2014 as the last time they reported solid contributions to helping fix security issues in ffmpeg.
https://security.googleblog.com/2014/01/ffmpeg-and-thousand-...
While technically accurate, this is selective quoting, leaving out other contributions.
Here's an archive link:
https://xcancel.com/argvee/status/1986194852669964528
Unfortunately, nobody not on Twitter can see anything except the first message, which just appears to be some corporate-deflection-speak.
It’s a trillion-dollar company. I’m sure they could rifle through their couch cushions and find more than enough money and under-utilised devs to contribute funding or patches.
Try xcancel instead: https://xcancel.com/argvee/status/1986194852669964528
Google is one of the largest contributors to OSS in the world, so they're already throwing literal millions at OSS?
They have thousands of highly paid employees working on open source; they are spending at least 1 billion per year.
I’m sure they can take a moment of their _busy schedules_ to send a patch then.
Especially if they’ve just spent all this cash on some ai tool.
I'm sure you can as well. I had no issues contributing to ffmpeg under my company payroll before, why not you?
Nice to see this coming from Google. Some other current and former Google employees were unfortunately not as professional in their interactions on Twitter in regards to this issue.
If you rely on it, pay for it. So easy. Especially if you are a BIIG company. Money gives the maintainers joy, free time and appreciation for their work.
Wakey wakey, people: FOSS is here to F you. It can be free while those who rely on it and use it also pay for it.
"They could shut down three product lines with an email"
If you (Amazon, in this case) can put it that way, it seems like throwing them 10 or 20 thousand a year would simply be a good insurance policy! Any benefits you might get in goodwill and influence are a bonus.
How do you think Jeff got a $500 million yacht? Not by writing checks.
But on a more serious note, it is crazy that between Google and Amazon they cannot fund them with 50k each per year, so that they can pay people to work on this.
Especially Google, with YouTube; they could very easily pay them more. 100k-200k easily.
What's wild is the importance and impact of the work/tool. And for Google and Amazon, $50k-$100k/yr isn't even a single engineer's salary to them ...
And they get the tool + community good will, all for a rounding error on any part of their budgets...
Exactly.
That is why I said easily 100~200k. It will be a rounding error for them.
It is actually crazy that Google is not already hiring the main dev to work on ffmpeg with all the use they give it on Youtube.
I also wonder if it is maybe used by Netflix.
I'm just imagining them having 4 or 5 $250K-a-year security workers pumping out endless bug reports for one guy in Norway, who works on ffmpeg at night and on weekends for free because he loves open source, and demanding he meet their program deadlines lol
> I also wonder if it is maybe used by Netflix.
They do and it is.
https://netflixtechblog.com/the-making-of-ves-the-cosmos-mic...
https://netflixtechblog.com/for-your-eyes-only-improving-net...
It is really sad that neither Netflix nor Google has hired a couple of devs to work full time on ffmpeg.
I'd be amazed if any of the FAANGs isn't putting real money through ffmpeg somewhere.
So yeah, it'd be nice if they put some real money into ffmpeg; it wouldn't even be their coffee allowance for any of them.
Almost everything that touches video at some point uses it.
Double funny considering new-grads who may polish up some UI features or rewrite components for the 10th time will get paid 200-400K TC at these same companies. Evidently these companies value something other than straight labor.
Yeah, sadly crazy.
If I had built up 500 million in a savings account to buy a yacht, giving 50k of that to FFmpeg devs would put off my ability to buy a yacht by nearly a whole day.
Boltzmann brain-wise it clearly doesn't make sense to wait that long.
Cheating on your wife would put off your ability to buy that yacht by 20 years, and yet here we are.
> How do you think Jeff got a $500 million yacht? Not by writing checks.
A rising tide lifts all yachts. If he had written the check, my instinct tells me, he would have enough for two yachts. Goodwill is an actual line item on 10-Qs and 10-Ks. I don't know why companies think it's worth ignoring.
Should Google be doing more to support ffmpeg? Yes.
Should Google stop devoting resources to identifying and reporting security vulnerabilities in ffmpeg?
I cannot bring myself to a mindset where my answer to this question is also "yes".
It would be one thing if Google were pressuring the ffmpeg maintainers in their prioritization decisions, but as far as I can tell, Google is essentially just disclosing that this vulnerability exists?
Maybe the CVE process carries prioritization implications I don't fully understand. Eager to be educated if that is the case.
So clearly there's only one correct answer here? Which is what the ffmpeg folks are driving at
What is the point of Google's Project Zero?
I'm not being dismissive. I understand the imperative of identifying and fixing vulnerabilities. I also understand the detrimental impact that these problems can potentially have on Google.
What I don't understand is the choice to have a public facing project about this. Can anyone shine a light on this?
Honestly it seems kind of weird that HN comments are becoming so hostile toward a company hiring security researchers and having them do free security research on popular projects, then giving away the results for free.
There are many groups and companies that do security research on common software and then sell the resulting vulnerabilities to people who don’t have your best interests in mind. Having a major company get ahead of this and share the results is a win for all of us.
A lot of people in this comment section don’t understand the broader security ecosystem. There are many vendors who will even provide patched versions of software to work around security issues that aren’t yet solved upstream. Sometimes these patches disable features or break functionality, or simply aren’t at a level where upstream is interested yet. But patching known issues is valuable.
Getting patches accepted upstream in big open source projects isn’t always easy. They tend to want things done a certain way or have a high bar to clear for anyone submitting work.
PR.
And pushing forward the idea that "responsible disclosure" doesn't mean the software creator can just sit on a bug for as long as they want and act superior and indignant when the researcher gives up and publishes anyway because the creator is dragging their ass.
Project Zero's public existence came out of the post-Snowden period where Google was publicly pissed at the NSA/etc for spying on them (e.g. by tapping their fiber links).
A lot of their research involves stuff they personally benefit from if they were secure. ffmpeg, libxml2, various kinds of mobile device firmware, Linux kernels and userspace components, you name it.
Their security team gaining experience on other projects can teach them some more diversity in terms of (malware) approaches and vulnerability classes, which can in turn be used to secure their own software better.
For other projects there's some vanity/reputation to be gained. Having some big names with impressive resumes publicly talk about their work can help attract talent.
Lastly, Google got real upset that the NSA spied on them (without their knowledge, they can't help against warrants of course).
Then again, there's probably also some Silicon Valley bullshit money being thrown around. Makes you wonder why they don't invest a little bit more to pay someone to submit a fix.
I would imagine it's mostly a PR/marketing thing. That way the researchers can point to being part of something other people know about, and Google gets positive PR (though maybe not in this case) for spending resources on making software in general more secure.
You could skip the imagining and just read sources like https://en.wikipedia.org/wiki/Project_Zero
They should set up a foundation/LLC if they don't have one already and require a support contract for fixing any bugs in niche codecs. Target the 95%-98% use cases for "free" work. If someone gives them a CVE on something ancient, just note it with an issue tracker and that the problem is currently unsponsored. Have default build flags to omit all the ancient/buggy stuff. If nobody is willing to pay to fix all the ancient crap, then nobody should be using it. But if someone is willing to pay, then it gets fixed.
https://fflabs.eu
Edit: Notably, Google is a paying client of FFmpeg's consulting entity.
I understand ffmpeg being angry at the workload but this is how it is with large open source projects. Ffmpeg has no obligation to fix any of this. Open source is a gift and is provided as is. If Google demanded a fix I could see this being an issue. As it is right now it just seems like a bad look. If they wanted compensation then they should change the model, there's nothing wrong with that. Google found a bug, they reported it. If it's a valid bug then it's a valid bug end of story. Software owes it to its users to be secure, but again it's up to the maintainers if they also believe that. Maybe this pushes Google to make an alternative, which I'd be excited for.
> Software owes it to its users to be secure.
There is no such obligation.
There is no warranty and software is provided AS-IS explicitly by the license.
I disagree, as software engineers we owe it to the craft to create correct software especially when we intend to distribute. Anything less is poor taste.
You bring up licensing. I’m not talking about legally I’m talking about a social contract.
The choice of license is also a partial descriptor of the social contract. If I wanted to work on it for “customers” I would sell it. I don’t owe you anything otherwise.
The social contract is “here is something I’ve worked on for free, and it is a gift. Take it or leave it.”
You want me to work on something for it? FYPM
For GP's sake, even before you make it to FYPM levels of angry, you will be in over your head. It's too much work. I remember being very early in my career and feeling like GP does. This is very easily more than a full-time job. The demands people will make of you and the attitudes they will use to do it will make you crazy.
> Google found a bug
That does not impact their business or their operations in any way whatsoever.
> If it's a valid bug then it's a valid bug end of story.
This isn't a binary. It's why CVEs have a whole sordid scoring system to go along with them.
> Software owes it to its users to be secure
ffmpeg owes me nothing. I haven't paid them a dime.
> ffmpeg owes me nothing. I haven't paid them a dime.
That is true. At the same time Google also does not owe the ffmpeg devs anything either. It applies both ways. The whole "pay us or we won't fix this" makes no sense.
> Google also does not owe the ffmpeg devs anything either.
Then they can stop reporting bugs with their asinine one-size-fits-all "policy." It's unwelcome and unnecessary.
> It applies both ways.
The difference is I do not presume things upon the ffmpeg developers. I just use their software.
> The whole "pay us or we won't fix this" makes no sense.
Pay us or stop reporting obscure bugs in unused codecs found using "AI" scanning, or at least, if you do, then change your disclosure policy for those "bugs." That's the actual argument and is far more reasonable.
> Then they can stop reporting bugs with their asinine one size fits all "policy." It's unwelcome and unnecessary.
Right, they should just post the 0days on their blog.
I for one welcome it. I want to know if there are some vulnerabilities in the software I use.
It doesn’t matter if it affects their business or not. They found an issue and they reported it. Ffmpeg could request that they report it privately perhaps. Google has a moral duty to report the bug.
Software should be correct and secure. Of course this can’t always be the case but it’s what we should strive for. I think that’s baseline
> That does not impact their business or their operations in any way whatsoever.
I don't know what tools and backends they use exactly, but working purely by statistics, I'm sure some place in Google's massive cloud compute empire is relying on ffmpeg to process data from the internet.
And they're processing old LucasArts codec videos with it? Which is the specific bug report in question.
It's unlikely the specific codec that is the issue but the bug report suggests that the code path could be hit by a maliciously crafted payload since ffmpeg does file fuzzing. They almost certainly have ffmpeg stuff that touches user submitted videos.
They're probably not manually selecting which codecs and codec parameters to accept and sticking to the default ones instead.
Plus, this bug was reported by AI, so it was as much a proof of concept/experiment/demonstration of their AI security scanner as it was an attempt to help secure ffmpeg
>Ffmpeg has no obligation to fix any of this
I read this as nobody wants open CVEs on their product, so you might feel forced to fix them. I find it more understandable if we talk about web frameworks: WordPress doesn't want security CVEs open for months or years, or users would be upset that they introduce new features while neglecting safety.
I am a nobody, and whenever I find a bug I work extra to attach a fix in the same issue. Google should do the same.
Why is there an onus on Google to fix this? Bug bounty hunters aren’t required to submit a patch even when the target is open source.
Now should Google? Probably, it would be nice but no one has to. The gift from Google is the discovery of the bug.
[dead]
I think the glaring issue underlying this is that the big companies are not investing enough in the tools they rely on.
I agree with some of the arguments that patching up vulnerabilities is important, but it's crazy to put that expectation on unpaid volunteers when you flood them with CVEs, some of them completely irrelevant.
Also, the solution is fairly simple: either you submit a PR instead of an issue, or you send a generous donation with the issue to reward and motivate the people who do the work.
The amount of revenue they generate using these packages will easily offset the cost of funding the projects, so I really think it's a fair expectation for companies to contribute either by delivering work or funds.
The solution is even simpler. The project puts the bug report in its triage backlog. It works through it in its own time, and decides on severity and priority. That's the time-honored method.
The compounding factor here is the automated reporting and disclosure process of Google's Project Zero. GPZ automatically discloses bugs after 90 days. Even if Google does not expect bugs to be fixed within this period, the FFmpeg devs clearly feel pressure.
But it is an open source project, basically a hobby for most devs. Why accept pressure at all? Continue to proceed in the time-honored method. If and when Youtube explodes because of a FFmpeg bug, Google has only itself to blame. They could have done something but decided to freeload.
I really don't see the issue.
Is it legal to disclose security vulnerabilities within such a short period?
It certainly does not seem ethically correct.
I don't understand the rationale for announcing that a vulnerability in project X was discovered before the patch is released. I read the Project Zero blogspot announcement but it doesn't make much sense to me. Google claims this helps downstream users, but that feels like a largely non-issue to me.
If you announce a vulnerability (unspecified) is found in a project before the patch is released doesn't that just incentivize bad actors to now direct their efforts at finding a vulnerability in that project?
The reason for this policy is that if you don’t keep a deadline upstream can just sit on the report forever while bad actors can find and exploit the vulnerabilities, which harms downstream users because they are left entirely unaware that the vulnerability even exists. The idea behind public disclosure is that downstream is now made aware of the bug and can take appropriate action on their side (for example, by avoiding the software, sponsoring a fix, etc.)
"Don't announce an unpatched vulnerability ever" used to be the norm. It caused a massive problem: most companies and organizations would never patch security vulnerabilities, so vulnerabilities would last years or sometimes decades being actively exploited with nobody knowing about it.
Changing the norm to "We don't announce unpatched vulnerabilities but there is a deadline" was a massive improvement.
Maybe for a small project? I think the difference here is rather minimal. Everybody "knows" code often has security bugs so this announcement wouldn't technically be new information. For a large project such as ffmpeg, I doubt there is a lack of effort in finding exploits in ffmpeg given how widely it is used.
I don't see why actors would suddenly reallocate large amounts of effort especially since a patch is now known to be coming for the issue that was found and thus the usefulness of the bug (even if found) is rather limited.
There are some rhetorical sleights of hand here. If Google does the work to find and report vulnerabilities, great. Nice contribution, regardless of who provides it. The OSS developer can ignore it and accept whatever consequences follow. They are not forced to fix it except by themselves.
The many large corporations should be funding the tools they depend on, to increase time allocations and thus the ability to be responsive, but this isn't an either/or. This type of thinking erodes the communities of such projects and the minds of the contributors.
FWIW I've totally been that developer trapped in that perspective so I empathize, there are simply better mental stances available.
The vulnerability in question is a use-after-free. Google used AI to find this bug; it would've taken them 3 seconds to fix it.
Burning cash to generate spam bug reports to burden volunteer projects when you have the extra cash to burn to just fix the damn issue leaves a very sour taste in my mouth.
Use After Free takes 3 seconds to fix if you defer free until the end of the program. If you have to do something else, or you don't want to leak memory, then it probably takes longer than 3 seconds.
Probably the right solution is to disable this codec. You should have to make a choice to compile with it; although if you're running ffmpeg in a context where security matters, you really should be hand picking the enabled codecs anyway.
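For what it's worth, ffmpeg's build system already supports this kind of allow-listing. A sketch, assuming an H.264/AAC-in-MP4 decode pipeline (the exact component names should be checked against `./configure --help` in your source tree):

```shell
# Hypothetical allow-list build: start from nothing, then enable only
# the components you actually feed untrusted input through.
./configure \
    --disable-everything \
    --enable-protocol=file \
    --enable-demuxer=mov \
    --enable-parser=h264 \
    --enable-decoder=h264 \
    --enable-decoder=aac \
    --enable-muxer=null
make -j"$(nproc)"
```

The resulting binary simply has no code path for obscure codecs like the one in this CVE, so crafted files targeting them fail at demux/decode selection instead of reaching vulnerable parsers.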
If it takes 3 seconds to fix it, then how is this some massive burden on the maintainers? The bug report pointed to the relevant lines, the maintainers are the most familiar with the code and it probably would have taken them 1.5 seconds to not only fix it, but validate the fix. It probably took more time to write up the complaint about the bugs than to implement the fix.
I’m not on the beck and call of Google’s robot.
Maybe if it was an actual engineer from Google doing this they would have gotten a better response. Don’t expect people to treat AIs the same way we treat people.
But if you send me an automated report and then tell me to jump I’m telling you to f*ck off.
The report had a bunch of human effort in it, and didn't tell ffmpeg to do anything at all.
What happens if I send you an automated report that tells you of a meaningful problem you didn't know about, and don't tell you to jump?
It takes more time to read and understand the bug report than to fix it. Instead of using Google's time, they used ffmpeg's volunteer time.
If this happens another 1000 times (easily possible with AI), Google just got free labour and free publicity for "discovering 1000 critical bugs (but not fixing them even though they were easy to fix)"
It takes even more time to read and understand a patch. Not only do you have to do all of the work of reading and understanding the bug report for which the patch is relevant, but you now also have to read and understand the submitted code, which until just this moment you didn't even know was necessary and have no specific context for. Then you have to validate whether the code in the patch is sufficient to fix the issue, and whether those changes have any additional knock-on effects. Yes, you could hope that the Google coders did this, but since you already have such a low opinion of Google's efforts and behavior on this front, I would argue that trusting their submission without validation would be insane.
Then if there's any changes or additional work to be done, you now have to spend time communicating with the patch submitter, either getting them to make the requested changes, or rejecting their patch outright and writing it on your own.
And after all that we'd be right back here, only instead of the complaint being "we don't have enough time to review all your bug reports" it would be "we don't have enough time to review all your PRs"
Notably, the vulnerability is also in a part which isn't included by default and nobody uses. I'm not sure that even warrants a CVE? A simple bug report would have probably been fine. If they think this is really a CVE, a bug fix commit would have been warranted.
One problem here is that CVE scoring is basically entirely bugged, something scored 8.7 could be an RCE exploit or a "may be able to waste CPU" issue.
That's the difference between "it may or may not be that there's someone who cares" versus "no one should be running this software anywhere in the general vicinity of untrusted inputs".
> One problem here is that CVE scoring is basically entirely bugged, something scored 8.7 could be an RCE exploit or a "may be able to waste CPU" issue.
+100000
My favorite 8.x or higher CVEs are the ones where you would have to take untrusted user input, bypass all the standard ways to ingest and handle that type of data, and pass it into some internal function of a library. And then the end result is that a regex call becomes more expensive.
If you think that's bad, you should look at Linux kernel CVEs. They've basically gone rogue when it comes to CVEs. Every minor bug gets flagged as a CVE, regardless of impact. Often, exploitation requires root access. If you have root, you've already won and can do whatever the hell you want. No need to exploit a bug to cause problems.
You’re right about scoring, at least largely. Let’s not conflate the CVE system and the CVSS system, though. They are related but distinct. CVE is just an identifier system.
It is included in most builds of ffmpeg, for example in most Linux packages or in the Windows build linked on ffmpeg.org that I use. But yeah, it's a very niche format that nobody uses.
It is included by default
AIUI there's no such thing as "really a CVE". A CVE is merely a standardized identifier for a bug so you can call it "CVE-2025-XXXXX" rather than "that use-after-free Google found in ffmpeg with AI." It doesn't imply anything else about the bug, except that it may impact security. The Linux kernel assigns one to every bugfix that may impact security (which is most kernel bugs) to avoid controversy about whether they should be assigned.
Not only is it included by default, but you can trigger this with a file that looks like a mp4 to the user.
Yes - more than a sour taste. This is hideous behavior. It is the polar opposite of everything intelligent engineers have understood regarding free-and-open software for decades.
FFmpeg is a great project but their twitter is embarrassing and should not be news
Why? Their Twitter seems reasonable, and combative towards companies who want to exploit their work without contributing.
This take sounds great until you realize this is literally Google using their resources to help an open source project (by reporting issues in it that ALREADY EXIST and NEED to be fixed OR users made aware if not fixed) AND they also help them by upstreaming patches (just not for this specific issue) regularly AND with monetary support.
So...your entire premise is patently false and wrong.
Google is a customer of FFlabs and has enrolled them in Summer of Code. They also provide free fuzzing. ffmpeg is a FOSS, GPL-licensed project. Nobody has any obligation to contribute, thus it isn't exploitation.
It’s a reproducible use-after-free in a codec that ships by default with most desktop and server distributions.
The recent iOS zero-day (CVE-2025-43300) targeted the rarely used DNG image format. How long before this FFMPEG vulnerability is exploited to compromise legacy devices in the wild, I wonder?
I’m not a fan of this grandstanding for arguably questionable funding. (I surely would not fund those who believe these issues are slop.) I’d like to think most contributors already understand the severity and genuinely care about keeping FFMPEG secure.
Bugs in little-used corners of the project are a massive red flag, that's how some of the most serious OpenSSL bugs have emerged. If the code is in there, and someone can trigger it with a crafted input, then it is as bad as any other bug.
So what if ffmpeg has open CVEs? What is Google going to do? Swap it? Leave them open, let Google ship products with CVE'd dependencies, and then they'll be forced to act.
Why would Google act if they got smart guys working for them for free? Stop fixing Google-reported bugs.
> So what if ffmpeg has open CVEs?
Part of the issue is that FFmpeg is almost a meta-project. It contains so many possible optional dependencies. Which is great for features, not so great if you quickly want to know if you're exposed to the latest CVE.
Honestly, I kind of think that ffmpeg should just document that it's not secure and that you're expected to run it in a sandbox if you plan to use it on possibly-malicious input. All the big cloud users and browsers are doing this already, so it would hardly even change anything.
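Even short of a full seccomp or container jail, basic resource fencing around a "decode this untrusted file" step is cheap. A minimal sketch (the limits are placeholder values, and a real sandbox such as bubblewrap is strictly better):

```shell
# Run an untrusted-input job with a CPU ceiling, an address-space cap,
# and a wall-clock timeout. The subshell keeps the limits from leaking
# into the parent shell.
run_untrusted() {
    (
        ulimit -t 60          # at most 60s of CPU time
        ulimit -v 1048576     # at most ~1 GiB of virtual memory (KiB units)
        exec timeout 90s "$@" </dev/null
    )
}

# e.g. run_untrusted ffmpeg -i suspicious.mp4 -f null -
run_untrusted echo "decoded ok"
```

This only bounds resource abuse; it does nothing against an actual memory-corruption exploit, which is why the browsers and cloud users mentioned above isolate the decoder in a separate, deprivileged process as well.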
ffmpeg is complaining that security bugs are such a drag that it's driving people away from their hobby/passion projects. Well, if fixing security bugs isn't your passion, why not just say that? Say it's not your priority, and if someone else wants it to be a priority, they can write the patches. Problem solved?
Why be reasonable when you can just grandstand (about people that do actually provide you with funding) on Twitter? Surely that's more fun, right?
I'm amazed open-source volunteers aren't upfront with "lol yeah I'll get to it, maybe, someday" more often.
God bless you guys.
Does Google seriously not have a whole team of people who help maintain ffmpeg?
https://github.com/search?q=repo%3AFFmpeg%2FFFmpeg+google.co...
As in another comment, adding code/fixes only adds more work to the existing ffmpeg team as they need to review and maintain it forever. It's not good enough. Even security fixes in the style of "drive-by patching" are derided in security-oriented open source projects.
Adding code/fixes is a tiny fraction of work compared to reviewing and maintaining.
The only reasonable way is for Google and other big corps to either sponsor members of the existing team or donate money to the project. And making it long term not one-shotting for publicity.
Yes. But they don’t upstream. Why would they?
Don't they?
https://git.ffmpeg.org/gitweb/ffmpeg.git?a=search&h=HEAD&st=...
Great, they can fix the bugs being filed by another part of their company
So would you rather Google have a secure ffmpeg while us plebian individual users continue to have an insecure ffmpeg?
It's frustrating to me how many people are siding with FFmpeg here considering how unprofessional and generally asshole-ish they are being.
I feel that this is mostly a kneejerk reaction to AI and Google in general, with people coming up with arguments to support their reaction after already forming an opinion.
Because we've worked as open-source maintainers and experienced the same frustrating asshole-ish behavior from companies before.
It's a volunteer project, they have no requirement to be 'professional'. That's basically the root of the whole issue. A hobby project is not a product, and its developers are not vendors. Free software is not a supply chain.
> A hobby project is not a product, and its developers are not vendors
But its developers do offer paid consulting as ffmpeg maintainers, which Google does pay for.
The word "unprofessional" here is muddying the comment more than it helps.
Let's just say they're being asshole-ish, which is a problem for volunteer projects just as much as non-volunteer ones.
The ffmpeg twitter sucks.
They probably fixed in the internal version.
Is it time for FFmpeg to relicense as AGPL? That'd be fun to witness.
To be clear, what does relicensing to AGPL do here? Does the AGPL include licensing terms that forbid filing bug reports without also including code patches? Or does it just make ffmpeg that much less appealing to projects and cut off the steady stream of contributions that it has gotten from google since 2009? https://git.ffmpeg.org/gitweb/ffmpeg.git/search/HEAD?pg=3;s=...
AGPL is banned from many BigCorps IIRC.
Right, but how is that a benefit here? The bug report was a valid report, ffmpeg is objectively better for it having been filed. Google contributes to ffmpeg on a regular basis according to the git history. They also buy consulting services from the ffmpeg maintainers according to the maintainer's own website. If ffmpeg was banned from Google, all of that would probably stop.
So not only would ffmpeg have multiple uncovered vulnerabilities, they would have less contributions and patches and less money for funding the maintainers. And for what? To satisfy the unfocused and mistaken rage of the peanut gallery online?
> To satisfy the unfocused and mistaken rage of the peanut gallery online?
That's only one possible benefit.
Another could be to gain leverage on big tech companies via dual licensing. If Google, Amazon, etc want to continue using FFmpeg without forking, they could do so by paying for the old LGPL license. It would likely be cheaper than maintaining a fork. They'd also have to release their changes anyways due to LGPL if they ever ship it client side.
So the incentive to contribute rather than fork would remain, and the only difference is that they have to pay a licensing fee.
Ofc this is probably just a fantasy. Relicensing FFmpeg like this probably isn't easy or possible.
Spite I guess.
Watch places like Amazon and Google suddenly stop updating and trying to find alternatives.
Like how Apple stopped shipping up-to-date GNU tools in 2008 because of GPLv3. That move showed me then that Apple did not want you to use your computer as your computer.
Well, to continue that timeline: "Big Tech" freezes their version at the last GPL'ed version, and each commences its own (non-trivial) effort to make its own version (assuming the last GPL'ed version was not feature-complete for all their future uses).
And of course, they won't share with each other. So another driver would be fear of a slight competitive disadvantage vs other-big-tech-monstrosity having a better version.
Now, in this scenario, some tech CEO, somewhere has this brilliant bright spark.
"Hey, instead of dumping all these manhours & resources into DIYing it, with no guarantee that we still won't be left behind - why don't we just throw 100k at the original oss project. We'll milk the publicity, and ... we won't have to do the work, and ... my competitors won't be able to use it"
I quite like this scenario.
Imagine Google paying only 100k to ffmpeg, though. That's like three months' pay for a single top SWE at Google.
I think this would be hard. It also makes not a whole lot of sense IMO.
People need to think about what licence they want to use for a project.
It'll just get forked (see terraform)
Terraform didn't get licensed as AGPL, just some weird proprietary license.
It'll still cause Google and many others to panic, but weird and custom licenses are even worse for attracting business than open source ones.
I think the main issue is the 90 day disclosure. Maybe Google can allow the maintainers to extend that deadline for low / medium impact issues.
I see no reason to follow the 90 day timeline if the maintainer is working on it. Especially considering it is very possible for Google to overwhelm a project with thousands of vulnerability reports.
Otherwise, I don't think Google should issue a patch. Just like in this case, only the FFmpeg people know it is an obscure codec that nobody really uses, and maybe the reasonable approach would be to simply remove it. Google doesn't know that, unless they somehow take over the project.
And AFAIK Google is one of the biggest sponsors of SPI, which is the fiscal sponsor for ffmpeg. So not sure where the not-paying thing came from.
Never work for free. It's a complete market distortion and leads to bad actors taking advantage of you and your work.
Sometimes it's hard: for many kinds of projects, I don't think anyone would use them if they were not open source (or at least source-available). Just like I wouldn't use a proprietary password manager, and I wouldn't use WhatsApp if I had a choice. Rather I use Signal because it's open source.
How do you get people to use your app if it's not open source, and therefore not free?
For some projects, it feels better to have some people use it even if you did it for free than to just not do it at all (or do it and keep it in a drawer), right?
I am wondering, I haven't found a solution. Until now I've been open sourcing stuff, and overall I think it has maybe brought more frustration, but on the other hand maybe it has some value as my "portfolio" (though that's not clear).
You can just use the GPL, then it's free, but your labour cannot be so easily profited from by big corps
But it can be profited from by not-so-big corps, so I'm still working for free.
Also I have never received requests from TooBigTech, but I've received a lot of requests from small companies/startups. Sometimes it went as far as asking for a permissive licence, because they did not want my copyleft licence. Never offered to pay for anything though.
I love the spirit of working for free on a project of passion. But yes it only takes a few bad actors to totally exploit it.
use GPL
AGPL at a minimum. Some kind of copyfarleft license if a corporation exploiting your code is a serious concern.
That's fine. Are they required to work for Google? I mean, they are independent and can decide on their own.
Corporations extract a ton of value from projects like ffmpeg. They can either pay an employee to fix the issues or setup some sort of contract with members of the community to fix bugs or make feature enhancements.
There is precedent for this: https://sqlite.org/consortium.html
Nearly everyone here probably knows someone who has done free labor and "worked for exposure", and most people acknowledge that this is a scam, and we don't have a huge issue condemning the people running the scam. I've known people who have done free art commissions because of this stuff, and this "exposure" never translated to money.
Are the people who got scammed into "working for exposure" required to work for those people?
No, of course not, no one held a gun to their head, but it's still kind of crappy. The influencers that are "paying in exposure" are taking advantage of power dynamics and giving vague false promises of success in order to avoid paying for shit that they really should be paying for.
Yep.
[dead]
I've grown a bit disillusioned with contributing on GitHub.
I've said this on here before, but a few months ago I wrote a simple patch for LMAX Disruptor, which was merged in. I like Disruptor, it's a very neat library, and at first I thought it was super cool to have my code merged.
But after a few minutes, I started thinking: I just donated my time to help a for-profit company make more money. LMAX isn't a charity, they're a trading company, and I donated my time to improve their software. They wouldn't have merged my code in if they didn't think it had some amount of value, and if they think it has value then they should pay me.
I'm not very upset over this particular example since my change was extremely simple and didn't take much time at all to implement (just adding annotations to interfaces), so I didn't donate a lot of labor in the end, but it still made me think that maybe I shouldn't be contributing to every open source project I use.
I understand the feeling. There is a huge asymmetry between individual contributors and huge profitable companies.
But I think a frame shift that might help is that you're not actually donating your time to LMAX (or whoever). You're instead contributing to make software that you've already benefited from become better. Any open source library represents many multiple developer-years that you've benefited from and are using for free. When you contribute back, you're participating in an exchange that started when you first used their library, not making a one-way donation.
> They wouldn't have merged my code in if they didn't think it had some amount of value, and if they think it has value then they should pay me.
This can easily be flipped: you wouldn't have contributed if their software didn't add value to your life first and so you should pay them to use Disruptor.
Neither framing quite captures what's happening. You're not in an exchange with LMAX but maintaining a commons you're already part of. You wouldn't feel taken advantage of when you reshelve a book properly at a public library so why feel bad about this?
Now count how many libraries you use in your day to day paid work that are opensource and you didn't have to pay anything for them. If you want to think selfishly about how awful it is to contribute to that body of work, maybe also purge them all from your codebase and contact companies that sell them?
Maybe those people shouldn’t be doing free labor to give me free libraries either.
Maybe such sociopath ideas should be shunned in any healthy society.
This feels like getting upset when someone tells you your shoe is untied.
A similar event happened two years ago, but with Microsoft.
https://news.ycombinator.com/item?id=39912916
That was not similar. The Microsoft dev was demanding things and rightfully shamed over it. Everyone giving Google the same shame over reporting an exploitable bug with no expectations is being ridiculous.
The message sounds like a warning from someone who is fed up with feeling taken advantage of. They see the profit being made, and they're not even getting drops of it.
If this isn't addressed, it makes this repo a target for actors that don't care about the welfare of Amazon, Google etc.
It seems quite predictable that someone will see this as a human weakness and try to exploit it, my question is whether we'll get them the support they need before the consequence hits, or whether we'll claim this was a surprise after the fact.
> They see the profit being made, and they're not even getting drops of it.
Doesn't google routinely contribute to ffmpeg? Certainly there's a lot of commits from `@google.com` email addresses here: https://git.ffmpeg.org/gitweb/ffmpeg.git?a=search&h=HEAD&st=...
So to me it seems notifications of bugs are good; you want to create visibility of problems.
The problem is the pressure to fix the bugs in x amount of time otherwise they will be made public. Additionally flooding the team with so many bugs that they can never keep up.
Perhaps the solution is not in whether to submit but in how the reports are submitted. A solution would be to have a separate channel for these big companies to submit "flood" bug reports generated by AI. If those reports won't be disclosed in x amount of time, that would take the pressure off the maintainers, who can then set priorities for the most pressing issues and keep a backlog of smaller issues that may require attention in the future (or be picked up by small/new contributors).
I used to be paid for a decade to work on open source software - by my employer. We always upstreamed fixes. This is the only way.
Filing bugs, etc., also has some value, but if a big company uses a piece of open source software and makes money with it (even indirectly), they can contribute engineering time (or money).
How about a hybrid open source license, where the software is free for anyone to use unless they're a commercial entity with $1B or greater in annual revenue?
Google might even prefer this deal, if it means more maintainer activity and fewer vulnerabilities.
Only accepting bugs with a fix is not a solution. Because who is going to vet the patches? Are you going to accept a Chinese patch for some obscure security issue? This is how real security problems are introduced.
Why not? The three letters are not going to send their backdoored patches under a pseudonym people like you would find suspicious. They would send it (and very likely are doing that already) under the name of "James Smith".
You really should check out how much code in e.g. the Linux kernel is written outside of "the West". It's not the 90s anymore.
This is not the first article I've seen where developers say they're getting overwhelmed by AI-generated bug reports. Maybe this is a new way people can volunteer to help open source.
If anyone is struggling to triage bug reports in a Rust open source project, please contact me and I will see if this is something I can donate some recurring time to.
> “The position of the FFmpeg X account is that somehow disclosing vulnerabilities is a bad thing. Google provides more assistance to open source software projects than almost any other organization, and these debates are more likely to drive away potential sponsors than to attract them.”
This position is likely to drive away maintainers. Generally the maintainers need these projects less than the big companies that use them. I'm not sure what Google's endgame is.
I doubt there's an endgame in mind. It's probably small teams trying to optimize their quarterly KPIs
> FFmpeg X account is that somehow disclosing vulnerabilities is a bad thing
I mean, I follow that account and never got this impression from them at all.
Looks like this was a security issue.
I don't consider a security issue to be a "standard bug." I need to look at it, and [maybe] fix it, regardless of who reported it.
But in my projects, I have gotten requests (sometimes, demands) that I change things like the published API (a general-purpose API), to optimize some niche functionality for one user.
I'll usually politely decline these, and respond with an explanation as to why, along with suggestions for them to add it, after the fact.
It’s a security issue for a stream type almost nobody uses. It’s a little like saying your graphics program in 2025 is exploitable by a malformed PCX file, or your music player has a security bug only when playing an Impulse Tracker module.
Sure, triage it. It shouldn’t be publicly disclosed within a week of the report though, because the fix is still a relatively low priority.
Security is adversarial. It doesn't matter whether the users intentionally use the vulnerable codec. What matters is whether an adversary can make the users use it. Given the codec is compiled in by default on Ubuntu, and given that IIUC the bug would already be triggered by ffmpeg's file format probing, it seems pretty likely that the answer to that is yes.
Yes, security is by definition adversarial. Thanks for the most basic lesson.
How are you getting ffmpeg to process a stream or file type different from the one you’re expecting? Most use cases of ffmpeg are against known input and known output types. If you’re just stuffing user-supplied files through your tools, then yes you have a different threat model.
> How are you getting ffmpeg to process a stream or file type different from the one you’re expecting?
... That is how ffmpeg works? With default settings it auto-detects the input codec from the bitstream, and the output codec from the extension. You have to go out of your way to force the input codec and disable the auto-detection, and I don't think most software using ffmpeg as a backend would force the user to manually do it, because users can't be trusted to know those details.
In the industry I think folks generally know what they’re feeding into it and what they’re wanting out of it. When there’s a handoff between companies the stream encoding, bitrate, and resolution are generally part of the project spec. Within a company, your teams should know what they’re feeding into a tool and it’s probably not some obscure LucasArts game codec.
If it’s a potential problem for home users, yeah, that’s an issue but it’s not every use of the tool.
If no one uses the stream type, then not fixing the bug won't hurt.
The people who do use the stream type are at risk, and have been at risk all along. They need to stop using the stream type, or get the bug fixed, or triage the bug as not exploitable.
It'd be a silly license or condition, but a license that says employees of companies in the S&P 500 can't file bugs without an $X contribution, and can't expect a response in under Y days without a larger one, would be a funny way to combat it. Companies have no problem making software non-free or AGPL when it becomes inconvenient, so maybe they can put up or shut up.
Where in this bug report was there any expectation for a response? They filed a private bug report and have a policy of making private reports public in 90 days whether or not they get a response. How did the OSS world go from "with enough eyes all bugs are shallow" to "filing a bug report is demanding I respond to you"?
> can't file bugs without an $X contribution, and can't expect a response in under Y days
A license to define that nobody can expect a response? Or file bugs?
None of this has anything to do with the issue. They can just turn off Google’s access to the bug tracker. No license needed.
However Google is free to publish security research they find.
It would be most concerning if projects started including “Nobody is allowed to do security research on this project” licenses. Who would benefit from that?
Given the cost of discovering these issues, and the massive risk of exploitation, it’s likely that Google/Amazon/etc have them fixed in their private forks.
Fixing a private fork takes 1/5-1/10 the time of shepherding a PR to meet the maintainers expectations. And why spend 5x dev time to contribute fixes to your competitor?
If ffmpeg doesn't want to receive bug reports, that's their right. They'd better not complain when Google goes full disclosure on them. They are essentially asking for it at this point.
Related https://news.ycombinator.com/item?id=45785291
Sidenote from the article, but TIL Mark Atwood is no longer at Amazon.
I remember Rebel Assault 2... how did an AI bot even discover a specific vulnerability in ASM that surfaces when playing the first 10-20 frames of some video from that?
We're well past the point that any serious security team should be able to submit a fix along with a bug report.
Is it just me, or does the linked article seem to be heavily co-authored by AI? The cadence is very monotonic and dull.
Not too fond of maintainers getting too uppity about this stuff. I get that it can be frustrating to receive bug report after bug report from people who are unwilling or unable to contribute to the code base, or at the very least to donate to the team.
But the way I see it, a bug report is a bug report, no matter how small or big the bug or the team, it should be addressed.
I don’t know, I’m not exactly a pillar of the FOSS community with weight behind my words.
When you already work 40+ hours a week and big companies suddenly start an AI snowblower that shoots a dozen extra hours of work every week at you without doing anything to balance that (like, for instance, also opening PRs with patches that fix the bugs), the relationship starts feeling like being an unpaid employee of their project.
What's the point of just showering these things with bug reports when the same tool (or a similar one) can also apparently fix the problem too?
The problem with security reports in general is security people are rampant self-promoters. (Linus once called them something worse.)
Imagine you're a humble volunteer OSS developer. If a security researcher finds a bug in your code they're going to make up a cute name for it, start a website with a logo, Google is going to give them a million dollar bounty, they're going to go to Defcon and get a prize and I assume go to some kind of secret security people orgy where everyone is dressed like they're in The Matrix.
Nobody is going to do any of this for you when you fix it.
Except that the only people publicizing this bug were the people running the ffmpeg Twitter account. Without them it would have been one of thousands of vulnerabilities reported with no fanfare, no logos, and no conference talks.
Doesn't really fit with your narrative of security researchers as shameless glory hounds, does it?
How do they know that next week it's not going to be one of those 10 page Project Zero blog posts? (Which like all Google engineer blog posts, usually end up mostly being about how smart the person who wrote the blog post is.)
Note FFmpeg and cURL have already had maintainers quit from burnout from too much attention from security researchers.
> Not too fond of maintainers getting too uppity about this stuff.
I suppose you'd prefer they abandon their projects entirely? Because that's the real alternative at this point.
> it can be frustrating to receive bug report after bug report from people
As the article states, these are AI-generated bug reports. So it's a trillion-dollar company throwing AI slop over the wall and demanding a 90-day turn around from unpaid volunteers.
Do you have evidence of ai slop, or are you just spreading fud? The linked bug was acknowledged as real.
That is completely irrelevant, the gross part is that (if true) they are demanding them to be fixed in a given time. Sounds like the epitome of entitlement to me, to say the least.
No one is demanding anything, the report itself is a 90 day grace period before being publicly published. If the issues are slop then what exactly is your complaint?
google literally tells them it's an ai generated report
That is not the definition of slop.
if it's unwanted then it is
and the ffmpeg maintainers say it's not wanted
so it's slop
It’s a reproducible use-after-free in a codec that ships by default with most desktop and server distributions. It can be leveraged in an exploit chain to compromise a system.
I'm not a Google fan, but if the maintainers are unable to understand that, I welcome a fork.
It’s not bug reports. It’s CVE.
There is a convergence of very annoying trends happening: more and more reports are garbage, found and written using AI, with an impact that is questionable at best; the way CVEs are published and classified is idiotic; and platforms funding vulnerability research, like Google, are more and more hostile to projects, leaving very little time to actually work on fixes before publishing.
This is leading to more and more open source developers throwing in the towel.
CVEs aren't caused by bugs?
You could argue that, but I think that a bug is the software failing to do what it was specified, or what it promised to do. If security wasn't promised, it's not a bug.
Which is exactly the case here. This CVE is for a hobby codec written to support digital preservation of some obscure video files from the 90s that are used nowhere else. No security was promised.
Not always, there have been a plenty of CVEs issued for completely absurd reasons.
They are not published in project bug trackers and are managed completely differently, so no, personally, I don't view CVEs as bug reports. Also, please don't distort what I say and omit part of my comment, thank you.
Some of them are not even bugs in the traditional sense of the word but expected behaviours which can lead to insecure side effects.
It seems like you might misunderstand what CVEs are? They're just identifiers.
This was a bug, which caused an exploitable security vulnerability. The bug was reported to ffmpeg, over their preferred method for being notified about vulnerabilities in the software they maintain. Once ffmpeg fixed the bug, a CVE number was issued for the purpose of tracking (e.g. which versions are vulnerable, which were never vulnerable, which have a fix).
Having a CVE identifier is important because we can't just talk about "the ffmpeg vulnerability" when there have been a dozen this year, each with different attack surfaces. But it really is just an arbitrary number, while the bug is the actual problem.
I'm not misunderstanding anything. CVE involves a third party and it's not just a number. It's a number and an evaluation of severity.
Things which are usually managed inside a project now have a visibility outside of it. You might justify it as you want like the need to have an identifier. It doesn't fundamentally change how that impacts the dynamic.
Also, the discussion is not about a specific bug. It's a general discussion regarding how Google handles disclosure in the general case.
The lowered lead times are because devs have an entitled attitude that others should fix their code when they discover bugs in it.
The 90 day period is the grace period for the dev, not a demand. If they don't want to fix it then it goes public.
> The lowered lead times are because devs have an entitled attitude that others should fix their code when they discover bugs in it.
That’s how open source works.
It is super strange to say that people who devote their time and effort and then give away their work for free are somehow entitled.
If this keeps up, there won't be anyone willing to maintain the software due to burn out.
In today's situation, free software is keeping many companies honest. Losing that kind of leverage would be a loss to the society overall.
And the public disclosure is going to hurt the users which could include defense, banks and other critical institutions.
The bug that ignited this was discovered by an AI security analyst agent.
I think it’s more than reasonable to demand that if the AI finds bugs, then the AI should spend a couple cents and output patches.
Is it unreasonable to ask that if a massive company funds someone to find a CVE in an open source project, they should also submit a patch? Google is a search company. Seems kind of... evil... to pay your devs to find holes in something with nothing to do with searching, then refuse to pay them to fix the problem they noticed.
Google contributes to ffmpeg on a fairly regular basis https://git.ffmpeg.org/gitweb/ffmpeg.git/search/HEAD?s=@goog...
No it's not "unreasonable" to ask for patches along with bug fixes, but it is unreasonable to be mad if they don't. They could just not file the bug reports at all, and that is an objectively worse outcome.
Note that most open source contributions by Googlers are, as recommended by policy, done under their own personal accounts. There's a required registry internally mapping from their personal account to their @google.com identity.
The nice thing is that the open source contributions done by a Googler aren't necessarily tied to their Google identity.
>No it's not "unreasonable" to ask for patches along with bug fixes, but it is unreasonable to be mad if they don't
Your stance seems to be that it is unreasonable to be annoyed by someone who is being unreasonable.
When I searched for synonyms for "unreasonable" in a major English-language thesaurus, the following synonyms were listed:
indefensible, mindless, reasonless, senseless, unjustified, untenable, unwarranted
So yes, it absolutely is valid for the FFMPEG crew to feel trolled by Project Zero.
No, my stance is that it is reasonable for ffmpeg to ask for patches along with bug fixes and that is it simultaneously reasonable for Google to submit bug reports without those patches. Just like it would be reasonable for Google to ask for a feature in ffmpeg and it's equally reasonable for the ffmpeg maintainers to decline to implement the feature. Reasonableness is not a binary thing.
> Your stance seems to be is that it is unreasonable to be annoyed by someone who is being unreasonable.
No, that's not their stance.
You talked about whether ffmpeg was reasonable. They talked about whether ffmpeg was unreasonable.
You never accused google of being unreasonable, and they never mentioned it either.
So this idea of "responding to google being unreasonable" is a brand new premise. And I'm pretty sure they would disagree with that premise.
Are you on the autistic spectrum and/or not a native speaker of English? If we are discussing whether FFMPEG's stance is reasonable, then it follows we are discussing whether Google's actions are unreasonable.
Google is absolutely being unreasonable here -- they should instruct their engineers to submit a patch when submitting CVEs, and FFMPEG is perfectly valid to engage in a little activism to nudge them along.
> [...]then it follows we are discussing[...]
It's all connected but... Here, I'll phrase it more simply:
They didn't agree that google is being unreasonable. You are not interpreting them right.
I don't care how confident you are that google is being unreasonable. The "your stance seems to be" statement in your previous comment is wrong.
Let's pull back up the core line here:
>it's not "unreasonable" to ask for patches along with bug fixes, but it is unreasonable to be mad if they don't
So the ask (make a patch for your CVEs) is reasonable. It follows that to fail to do so is unreasonable. Whether the poster agrees Google is unreasonable or not is up for debate, but if they choose to espouse that the request is reasonable and that Google is reasonable, they're putting forth an irrational belief not rooted in their own logic.
But hey, lots of folks on HN are biased towards Google for financial reasons, so I totally get it.
But either their stance is how I said, or if their stance differs they are a hypocrite, there really is no middle ground here.
> So the ask (make a patch for your CVEs) is reasonable. It follows that to fail to do so is unreasonable.
Ah, that's where the confusion happens. It's your stance that it follows, but tpmoney was directly disagreeing with that logic.
tpmoney's stance, and my stance, is that it's reasonable to ask and it's also reasonable to say no.
It's not irrational to say that you can reasonably decline a reasonable request. Jeez.
(Also even if it was irrational, that wouldn't make tpmoney a hypocrite. That claim is just weird.)
I would suggest that FFmpeg spin up a commercial arm that gets support contracts with Google, Amazon, etc., but with a tight leash so that it does not undermine the open source project. It would need clear guidance as to what the commercial arm does and does not do.
Probably could pull in millions per year.
According to https://xcancel.com/argvee/status/1986194861855478213#m, google is a customer of fflabs.eu (https://fflabs.eu) which is just such a "support contracts" arm. Certainly the fflabs.eu site claims at the bottom that Netflix, Google and Meta are all customers of theirs and the site also claims that their team is comprised of a number of ffmpeg contributors including the "lead maintainer" (https://fflabs.eu/about/).
Additionally, a search of the git commits shows a regular stream of commits from `google.com` addresses. So as near as I can tell, Google does regularly contribute code to the project, they are a customer of the project maintainer's commercial support company (and that fact is on the support company's website) and they submit high quality bug reports for vulnerabilities. What are we mad about again?
In general it is not good when companies get too much power. See how Shopify is eating away at the Ruby infrastructure right now, after RubyCentral panicked when Shopify blackmailed the community by taking away funding. Of course it is their money, but the whole ecosystem suddenly submitting to one huge company really sucks to no end. Hobbyists are at their weakest here.
If FFmpeg had a commercial operation it would constantly be getting sued for IP issues.
While I don't think FFmpeg's response is a great one ("fund us or stop submitting bugs"); I think Google is being pretty shitty here. For a company that prides itself in its engineering prowess and contributions to the OSS community (as they like to remind me all the time) to behave this way is just all around shitty.
Submit the bug AND the patch and be done with it; don't make it someone else's problem when it's an OSS library/tool. A for-profit vendor? Absolutely. But this? Hell naw.
If only it were that simple. The reality is that this is a very slippery slope, and you won't get a support contract with a "tight leash" just like that.
Why don't they just ignore Google and work at their own pace?
What happens if ffmpeg ignores (after triage) the minor CVEs?
Life goes on.
FFmpeg should just dual license at this point. If you're wanting shit fixed. You pay for it (based on usage) or GTFO. Should solve all of the current issues around this.
You mean, Google reports a bug, and ffmpeg devs say "GTFO"? Let's assume this is a real bug: is that what you would want the ffmpeg developers to say to Google?
I absolutely understand the issue that a filthy-rich company tries to leech off of real unpaid humans. I don't understand how that issue leads to "GTFO, we won't fix these bugs". That makes no sense to me.
People would rather spitefully stub their toe after being warned of the table's location by someone they don't like rather than take heed.
I am fairly confident that this article is largely AI-generated. More generally, the whole site appears to be heavy on AI slop, e.g.: https://thenewstack.io/how-ai-is-pushing-kubernetes-storage-...
And maybe it's fine to have AI-generated articles that summarize Twitter threads for HN, but this is not a good summarization of the discussion that unfolded in the wake of this complaint. For one, it doesn't mention a reply from Google security, which you would think should be pretty relevant here.
Like the bug report in question... poetic.
The bug report in question is obviously written by a human:
https://issuetracker.google.com/issues/440183164?pli=1
It's of excellent quality. They've made things about as easy for the FFmpeg devs as possible without actually fixing the bug, which might entail legal complications they want to avoid.
AI was only used to find the bug.
"obviously". You have 0 evidence for your comment. Big sleep is credited as reporter.
Thinking about it, "obviously written by a human" is not actually true. It's more accurate to say "no obvious signs of being written by AI".
bug reports are still good... but yeah they really should contribute to development too
> it is unreasonable for a trillion-dollar corporation like Google, which heavily relies on FFmpeg in its products, to shift the workload of fixing vulnerabilities to unpaid volunteers. They believe Google should either provide patches with vulnerability reports or directly support the project’s maintenance.
This is so basic it shouldn't even have to be said.
The first part is technically true but doesn't apply to this situation. "Shift the workload"? This isn't google's bug, and google doesn't need it fixed. It was never their workload, and has not been shifted.
The last part is just wrong. Google does directly support the project's maintenance.
It wasn't paying the people who it was expecting to fix the bug.
Google should just fork and leave ffmpeg alone
The Tragedy of the Bazaar.
This is dumb. Obscurity doesn’t create security. It’s unfortunate if ffmpeg doesn’t have the money to fix reported bugs but that doesn’t mean they should be ignorant of them. I don’t see any entitlement out of Google either - I expected this article would have a GH issue thread with a whiny YouTube engineer yelling at maintainers.
The first thing you can do is actually read the article. The question is not about the security reports but Google's policy on disclosing the vulnerability after x days. It works for crazy lazy corps. But not for OSS projects.
In practice, it doesn't matter all that much whether the software project containing the vulnerability has the resources to fix it: if a vulnerability is left in the software, undisclosed to the public, the impact to the users is all the same.
I, and I think most security researchers do too, believe that it would be incredibly negligent for someone who has discovered a security vulnerability to allow it to go unfixed indefinitely without even disclosing its existence. Certainly, ffmpeg developers do not owe security to their users, but security researchers consider that they have a duty to disclose them, even if they go unfixed (and I think most people would prefer to know an unfixed vulnerability exists than to get hit by a 0-day attack). There's gotta be a point where you disclose a vulnerability, the deadline can never be indefinite, otherwise you're just very likely allowing 0-day attacks to occur (in fact, I would think that if this whole thing never happened and we instead got headlines in a year saying "GOOGLE SAT ON CRITICAL VULNERABILITY INVOLVED IN MASSIVE HACK" people would consider what Google did to be far worse).
To be clear, I do in fact think it would be very much best if Google were to use a few millionths of a percent of their revenue to fund ffmpeg, or at least make patches for vulnerabilities. But regardless of how much you criticize the lack of patches accompanying vulnerability reports, I would find it much worse if Google were to instead not report or disclose the vulnerability at all, even if they did so at the request of developers saying they lacked resources to fix vulnerabilities.
> I, and I think most security researchers do too, believe that it would be incredibly negligent for someone who has discovered a security vulnerability to allow it to go unfixed indefinitely without even disclosing its existence.
Because security researchers want to move on from one thing to another. And nobody said indefinitely. It's about a path that works for OSS projects.
It's also not about security through obscurity. You are LITERALLY telling the world: check this vuln in this software. Oooh, too bad the devs didn't fix it. Anybody in the sec biz would be following Google's security research.
Putting someone in a spotlight and then claiming it makes no difference is silly.
Agreed that obscurity is not security. However, we don't want to make it easy for hackers to get a catalog of vulnerabilities to pick and choose from. I think the issue is public disclosure of vulnerabilities after a deadline. Hobbyists just can't keep up.
And this is why Stallman invented copyleft.
Stallman was right.
ffmpeg is copyleft by Stallman's design.
The death of ffmpeg has been foretold.
They should consider switching to Affero GPL. Now _that_ would fuck the industry up…
I’m only half joking by the way.
This whole thing is sure a tangle.
Open source projects want people to use their work. Google wants bugs -- especially security ones -- found and fixed fast. Both goals make sense. The tension starts when open source developers expect payment for fixes, and corporations like Google expect fixes for free.
Paying for bug fixes sounds fair, but it risks turning incentives upside down. If security reports start paying the bills, some maintainers might quietly hope for more vulnerabilities to patch. That's a dangerous feedback loop.
On the other hand, Google funding open source directly isn't automatically better. Money always comes with strings. Funding lets Google nudge project priorities, intentionally or not -- and suddenly the "open" ecosystem starts bending toward corporate interests.
There's no neat solution. Software wants to be used. Bugs want to be found and fixed. But good faith and shared responsibility are supposed to be the glue that holds the open source world together.
Maybe the simplest fix right now is cultural, not technical: fewer arguments on Twitter, more collaboration, and more gratitude. If you rely on open source, donate to the maintainers who make your life easier. The ecosystem stays healthy when we feed it, not when we fight over it.
Amusing. I suppose the intended optimal behavior is for Google to fix it internally, then open the public PR. Less optimal for us normal users, since the security issue will be publicly visible in the PR until merging, though it won't affect Google (who will carry the fixed code before disclosure).
Unfortunately, and now even more so with GenAI, a lot of open source boils down to doing free labor so that rich capitalists get even richer, while killing a lot of the small businesses our professional class could have started.
is this a fundamental problem with the open source model, though? if we work within a model of continuous refinement and improvement but also accept the constraint that there's a fundamental limit to the resources someone is willing to give up in exchange for nothing (whether the resource in question is dev effort for no money or money for no ownership stake in the final product), then you see where something infinite is running up against something finite and there's just no way to square that.
Wouldn't they just fork it, fix their own bugs and stop contributing at all?
Forking puts you in another hell as Google. Now you have to pay someone to maintain your fork! Maybe for a project that’s well and fully complete that’s OK. But something like FFmpeg is gonna get updated all the time, as the specs for video codecs are tweaked or released.
Their choice becomes to:
- maintain a complex fork, constantly integrating from upstream.
- Or pin to some old version and maybe go through a Herculean effort to rebase when something they truly must have merges upstream.
- Or genuinely fork it and employ an expert in this highly specific domain to write what will often end up being parallel features and security patches to mainline FFmpeg.
Or, of course, pay someone doing OSS to fix it in mainline. Which is the beauty of open source; that's genuinely the least painful option, and also happens to be the one that benefits the community the most.
If you're going to fix the bug, why not in the main project?
Any time I have tried to fix a bug in an open source project I was immediately struck down with abusive attitudes about how I didn't do something exactly the way they wanted it that isn't really documented.
If that's what I have to expect, I'd rather not even interact with them at all.
I don't think this is what typically happens. Many of my bug reports were handled.
For instance, I reported to the xorg-bug tracker that one app behaved oddly when I did --version on it. I was batch-reporting all xorg-applications via a ruby script.
Alan Coopersmith, the elderly hero that he is, fixed this not long after my report. (It was a real bug; granted, a super-small one, but still.)
I could give many more examples here. (I don't remember the exact date but I think I reported this within the last 3 years or so. Unfortunately reporting bugs in xorg-apps is ... a bit archaic. I also stopped reporting bugs to KDE because I hate bugzilla. Github issues spoiled me, they are so easy and convenient to use.)
I feel you, and that's a different issue than the one in this thread, which is that, in general, if maintainers ignore bug reports, their projects will be forked and the issue fixed anyway, just not in the main project.
If you really care, I would suggest helping with documenting how the process should work for others to reference going forward.
I would if people were not abusive to me in the first place, but that attitude just turns me off to the entire project.
That costs cash and the big tech companies are a little short at the moment.
They undoubtedly already maintain a fork of the project, considering that they have a private compression accelerator that nobody else has access to.
Probably what they want to do once the original project burns out
Google internally maintaining a fork that attempts to track upstream has an ongoing cost that increases over time
vs. spamming OSS maintainers with slop reports costs Google nothing
Is there really slop here though? It sounds like the specific case identified was a real use after free in an obscure file format but which is enabled by default.
If it was slop they could complain that it was wasting their time on false or unimportant reports, instead they seem to be complaining that the program reported a legitimate security issue?
If a maintainer complains about slop bug reports, instead of assuming the worst of the maintainer, it'll often be more productive to put yourself in their shoes and consider the context. An individual case may simply be the nth case in a larger picture (say, the straw that broke the camel's back). Whenever this nth case is observed, if you only consider that single case, a response also informed by detailed personal consideration of the preceding (n-1) cases may appear grossly and irrationally disproportionate, especially when the observer isn't personally that involved.
For a human, generating bug reports requires a little labor with a human in the loop, which imposes a natural rate limit on how many reports are submitted, which also imposes a natural triaging of whether it's personally worth it to report the bug. It could be worth it if you're prosocially interested in the project or if your operations depend on it enough that you are willing to pay a little to help it along.
For a large company which is using LLMs to automatically generate bug reports, the cost is much lower (indeed it may be longer-term profitable from a standpoint like marketing, finding product niches, refining models, etc.) This can be asymmetric with the maintainer's perspective, where the quality and volume of reports matter in affecting maintainer throughput and quality of life.
People volunteer to make billionaires even more profit - crazy world. Who even has the time and resources to volunteer so much??? I don't get all this at all.
The mixture of motivations includes a desire to contribute something to humanity, a desire to make a mark or achieve recognition, or a desire to gain experience that might enhance your ability to get hired.
It can be a hobby like model trains, and it can be a social context like joining a club or going to church.
But it's safe to say that nobody is volunteering "to make billionaires even more profit."
Hmmmmmm. I can understand both points but ...
Google is under no obligation to work on FFmpeg.
Google leveraging AI to spam ffmpeg devs with bugs that range from real to obscure to wrongly reported may be annoying. But even then I still don't think Google is to be held accountable for reporting bugs, nor is it required to fix bugs. Note: I do think Google should help pay for costs and whatnot. If they were a good company, they would not only report bugs but also have developers fix them, but they are selfish and greedy; everyone knows that. Even then they are not responsible for bugs in ffmpeg. And IF the bug report is valid, then I also see no problem.
The article also confuses things. For instance:
"Many in the FFmpeg community argue, with reason, that it is unreasonable for a trillion-dollar corporation like Google, which heavily relies on FFmpeg in its products, to shift the workload of fixing vulnerabilities to unpaid volunteers"
How could Google do that? It is not Google's decision. That is up to the volunteers. If they refuse to fix bugs reported by Google, then this is fine. But it is THEIR decision, not Google's.
"With this policy change, GPZ announces that it has reported an issue on a specific project within a week of discovery, and the security standard 90-day disclosure clock then starts, regardless of whether a patch is available or not."
Well, many opinions here. I think ALL bugs and exploits should, INSTANTLY AND WITHOUT ANY DELAY, be made fully transparent and public. I understand the other side of the coin too, bla bla we need time to fix it bla bla. I totally understand it. Even then I believe the only truthful, honest way to deal with this is 100% transparency at all times. This includes when there are negative side effects too, such as open holes. I believe in transparency, not in secrecy. There can not be any compromise here IMO.
"Many volunteer open source program maintainers and developers feel this is massively unfair to put them under such pressure when Google has billions to address the problem."
So what? Google reports issues. You can either fix them or not. Either way is a strategy. It is not Google's fault when software can be exploited, unless they wrote the code. Besides, the bug or flaw would still exist in the code EVEN IF GOOGLE WOULD NOT REPORT IT. So I don't understand this part. I totally understand the issue of Google being greedy, but this here is not solely about Google's greed. This is also about how a project deals with (real) issues (if they are not real, then you can ask Google why they send out so much spam).
That Google abuses AI to spam down real human beings is evil and shabby. I am all for ending Google on this planet - it does so much evil. But either it is a bug, or not. I don't understand the opinion of ffmpeg devs "because it is Google, we want zero bug reports". That just makes no sense.
"The fundamental problem remains that the FFmpeg team lacks the financial and developer resources to address a flood of AI-created CVEs."
Well, that is more an issue of how to handle Google spamming down people. Sue them in court so that they stop spamming. But if it is a legit bug report, why is that a problem? Are ffmpeg devs concerned about the code quality being bad? If it is about money, then even though I think all of Google's assets should be seized and the CEOs that have done so much evil in the last 20 years put on trial, it really is not their responsibility to fix anything written by others. That's just not how software engineering works; it makes no sense. It seems people confuse ethics with responsibilities here. The GPL doesn't mandate that code fixing be done; it mandates that if you publish a derivative etc. of the code, that code has to be published under the same licence and made available to people. That's about it, give or take. It doesn't say corporations or anyone else HAS to fix something.
"On the other hand, security experts are certainly right in thinking that FFmpeg is a critical part of the Internet’s technology framework and that security issues do need to be made public responsibly and addressed."
I am all for that too, but even stricter - all security issues are to be made public instantly, without delay, fully and completely. I went to open source because I got tired of Microsoft. Why would I want to go back to evil? Not being transparent here is no valid excuse IMO.
"The reality is, however, that without more support from the trillion-dollar companies that profit from open source, many woefully underfunded, volunteer-driven critical open-source projects will no longer be maintained at all."
Wait - so it is Google's fault if projects die due to lack of funding? How does that explanation work?
You can choose another licence model. Many choose BSD/MIT. Others choose GPL. And so forth.
"For example, Wellnhofer has said he will no longer maintain libxml2 in December. Libxml2 is a critical library in all web browsers, web servers, LibreOffice and numerous Linux packages. We don’t need any more arguments; we need real support for critical open source programs before we have another major security breach."
Yes, that is a problem - the funding part. I completely agree. I still don't understand the "logic" of trying to force corporations to do so when they are not obliged. If you don't want corporations to use your code, specify that in the licence. The GPL does not do that. I am confused about this "debate" because it makes no real sense to me from an objective point of view. The only part that I can understand pisses off real humans is when Google uses AI as a pester-spam attack orgy. Hopefully a court agrees and splits up any company with more than 100 developers into smaller entities the moment they use AI to spam real human beings.
> If they were a good company they would not only report bugs but also have had developers fix the bugs, but they are selfish and greedy, everyone knows that.
Google has a pretty regular stream of commits in the ffmpeg git history, and is proudly declared to be a customer of `fflabs.eu`, which appears to be the ffmpeg lead maintainer's private consulting company. Two of the maintainers on ffmpeg's "hire a dev" page[1] are also listed as employees of fflabs[2]. Honestly, Google seems like they're being a model for how corporations can give back to OSS better and benefit from that, but instead everyone is up in arms because they don't give patches in all of their bug reports.
[1]: https://ffmpeg.org/consulting.html [2]: https://fflabs.eu/about/
[dead]
So Chromium is going to drop video support because ffmpeg is full of bugs, just like libxslt?
/s
[flagged]
Popped in that context means hacked. Like in the same usage of pop in "popping a shell".
I do not think he meant it as any kind of physical/death threat.
why would you post such a patently absurd accusation.
I saw that tweet and thought it looked crazy. To me, it sounds like a death threat. Especially given that it's posted on X.com, after a really heated argument, with no clarification afterwards.
If you didn't actually post that tweet, that's great! I'm happy to be corrected
If you hear a rumor that sounds too crazy to be true on social media, maybe don't repeat it as fact. Imagine how you would feel reading something like that.
It's not a rumor, you literally wrote that on X. The average person would understand it as a threat.
You felt the need to make a new account for that bad take? Jeez.
It's a special kind of irony to post AI slop complaining about someone's AI slop that isn't actually AI slop, just devs whining about being expected to maintain their code instead of being able to extort the messengers into doing the work for them.
If they're not being paid, they're under no obligation to "maintain their code". If you don't like it, don't use ffmpeg.
It's not "whining" to refuse to do unpaid labor for the benefit of someone else - especially when the someone else is as well-resourced as Google.
So many of these threads recently. The abstract pattern is: people are surprised that many other people who one might reasonably expect should give you money or resources, or otherwise behave in a cooperative reasonable way, are psychopaths and don't do that.
“How dare ffmpeg be so arrogant! Don’t they know who we are? Fork ffmpeg and kill the project! I grant a budget of 30 million to crush this dissent! Open source projects must know who’s boss! I’ll stomp em like a union!”
…. overheard at a meeting of CEO and CTO at generic evil mega tech corp recently.
They probably want to drown you in CVEs to force deprecation on the world and everybody into their system like they do with everything else they touch.
2025 is the year where I finally have to ask myself: "What is Google actually doing for me?" and the answer is "Nothing". I am no longer able to search with Lynx. I had to move to startpage.com for that. I never used, and never will, Gmail or any other of their services, except for YouTube (which they bought). YouTube search is finally totally unusable, pushing short form content onto me which I really really don't care about. I have to click the "Show me less shorts" button on my iPhone every few weeks, because the stuff keeps coming back. I don't care about their AI summaries; I was always happy (when I was still able to use Google in my daily workflow) with the excerpt they were giving me since what, 2002? I don't have an Android phone, because their accessibility lagged behind for almost two decades. They dropped the "Don't be evil" motto. ... Where do I even stop? Google used to be great. Now, all they do for me is maintain storage for my TV replacement, YouTube. I say maintain storage, because that is all they do. I can't even meaningfully search that content. I have given up on them. It is over.
They obviously need to be reminded that the only reason Google has to care about FLOSS projects is when they can effectively use them to create an advertising panopticon under the company's complete control.
Just mark CVEs as bugs and get to them when you can. In this case, if Google doesn't like it, then so be it. It'll get fixed eventually. Don't like how long it takes? Pay someone to contribute back. Until then, hurry up and wait.
That’s how you get your open source software removed from distributions and eventually forked.
Forked by people who are quicker at fixing security vulnerabilities than the original maintainers?
Sure, for some definition of “vulnerability.” And only doing that, nothing more.
And that's a problem?
Please bro, please, fix our bugs bro, just this one bug bro, last one I swear, you and I will make big money, you are the best bro, I love you bro. -- big tech companies
FFmpeg should stop fixing security bugs reported by Google, MS, Amazon, Meta etc. and instead wait for security patches from them. If FFmpeg maintainers will leave it exposed, those companies will rush to fixing it, because they'd be screwed otherwise. Every single one of them is dependent on FFmpeg exactly as shown in https://xkcd.com/2347/
I understand the problem of corporations leeching off of the community here.
I still fail to see why "ffmpeg is not allowed to fix bugs reported by corporations" is a good strategy. To me this sounds not logical.
Because they are making more money in profit than some mid-sized American cities' economies do in a year while contributing nothing back. If they don't want massive security vulnerabilities in their services using FFmpeg, maybe they need to pony up about .1 seconds' worth of their quarterly earnings to the project either in cash or in manpower.
It's not FFmpeg's problem if someone uses a vulnerability to pwn YouTube, it's Google's problem.
Also, in the article, they mention that Google's using AI to look for bugs and report them, and one of them that it found was a problem in the code that handles the rendering of a few frames of a game from 1995. That sort of slop isn't helping anyone. It's throwing the signal-to-noise ratio of the bug filings way the hell off.
If they contribute nothing back, what are all the `google.com` email addresses in the git history doing? If they contribute nothing back, why are they listed as a customer of `fflabs.eu` which is apparently a private consulting company for ffmpeg run by some of the ffmpeg lead maintainers?
What do we think the lesson corporations are going to take from this is?
1) "You should file patches with your bug reports"
2) "Even when you submit patches and you hire the developers of OSS projects as consultants, you will still get dragged through the mud if you don't contribute a patch with every bug report you make, so you might as well not contribute anything back at all"
The text and context of the complaint can be used to steelman it, adopting the principle of charity.
From that perspective, the most likely problem is not that bugs are being reported, nor even that patches are not being included with bug reports. The problem is that a shift from human-initiated bug reports to large-scale LLM generation of bug reports by large corporate entities generates a lot more work and changes the value proposition of bug reports for maintainers.
Even if you use LLMs to generate bug reports, you should have a human vet and repro them as real and significant and ensure they are written up for humans accurately and concisely, including all information that would be pertinent to a human. A human can make fairly educated decisions on how to combine and prioritize bug reports, including some degree of triage based on the overall volume of submissions relative to their value. A human can be "trained" to conform to whatever the internal policies or requirements are for reports.
Go ahead and pay someone to do it. If you don't want to pay, then why are you dumping that work on others?
Even after this, managing the new backlog entries and indeed dealing with a significantly larger archive of old bug reports going forward is a significant drag on human labor - bug reports themselves entail labor. Again, the old value proposition was that this was outweighed by the value of the highest-value human-made reports and intangibles of human involvement.
Bug reports are, either implicitly or explicitly, requests to do work. Patches may be part of a solution, but are not necessary. A large corporate entity which is operationally dependent on an open source project and uses automation to file unusually large volumes of bug reports is not filing them to be ignored. It isn't unreasonable to ask them to pay for that work which they are, one way or another, asking to have done.
> Even if you use LLMs to generate bug reports, you should have a human vet and repro them as real and significant and ensure they are written up for humans accurately and concisely, including all information that would be pertinent to a human.
Look at the report that's the center of this controversy. It's detailed, has a clear explanation of the issue at hand, has references and links to the relevant code locations where the submitter believes the issue is, and has a minimal reproduction of the issue to both validate the issue and the fix. We can assume the issue is indeed valid and significant, as ffmpeg patched it before the 90 day disclosure window. There is certainly nothing about it that screams low effort machine generated report without human review, and at least one commenter in this discussion claims to have inside knowledge that all these reports are vetted and written by humans before submission to the projects.
I won't pretend that it's a perfect bug report, but I will say if every bug report I got for the rest of my career was of this caliber, I'd be quite happy with that.
> It isn't unreasonable to ask them to pay for that work which they are, one way or another, asking to have done.
Google quite literally hires some of the ffmpeg maintainers as consultants as attested to by those same maintainer's own website (fflabs.eu). They are very plainly putting cold hard cash directly into the funds of the maintainers for the express purpose of them maintaining and developing ffmpeg. And that's on top of the code their own employees submit with some regularity. As near as I can tell, Google does everything people complaining about this are saying they don't do, and it's still not enough. One has to wonder then what would be enough?
Google is evil.
Google might be aiming to replace ffmpeg as the world's best media processor. Remember how Jia Tan (under different names) flooded xz with work before stepping up as a maintainer.
Google, through YouTube and YouTube TV, already runs one of the most significant video processing lines of business in the world. If they had any interest in supplanting FFmpeg with their own software stack, they wouldn't need to muck around with CVEs to do so.
Not for themselves, but for everyone else for some reason.
Though it's more likely there's just some team tasked with shoving AI into vulnerability hunting.
this is why you should release your open source project with a license that is free only for individuals, not for enterprises.
enterprise must pay.
If it's not free for enterprises then it's not open source, according to the commonly accepted definition.
Being open source and being free are entirely different things though.
You can view and read the code = open source.
The latter is about money.
Open source is not only about being able to read the code: the open source definition includes "Free Redistribution" (anyone who has the software can give away copies, and get paid if they want) and "No Discrimination Against Fields of Endeavor", among other requirements.
These two requirements combined make it impossible to distribute open source software with the provision that it is only free for individuals.
> Many in the FFmpeg community argue, with reason, that it is unreasonable for a trillion-dollar corporation like Google, which heavily relies on FFmpeg in its products, to shift the workload of fixing vulnerabilities to unpaid volunteers.
That's capitalism, they need to quit their whining or move to North Korea. /s The whole point is to maximize value to the shareholders, and the more work they can shove onto unpaid volunteers, the more money they can shove into stock buybacks or dividends.
The system is broken. IMHO, there oughta be a law mandating reasonable payments from multi-billion dollar companies to open source software maintainers.
There is incentive for people like Dan from ChainGuard to point out these issues. It makes what his company does seem more important