@lxo
Do you genuinely honestly actually audit the source code of every single piece of software running on your system and compile it all yourself, including web code?
Either you have a lot of time on your hands and a lot of skill, or you're running a very minimal system, or you actually don't.
security doesn't work in absolutes, and auditability is an imperfect deterrent, but it's infinitely better than the moves to prevent auditability that hostile vendors adopt
I do audit the rare cases of web blobs that are imposed on me, because I can't count on community for those, and my security depends on it even when my freedom has been unjustly taken away
@lxo
And even if you do, most people* can't. So for them, they need third-party audits, which as I have previously pointed out, can be done without source code. Or otherwise they try to get their software from sources they trust.
*For example, rocket scientists and brain surgeons
>but delivering them in the form of binary blobs mean that the one who accepts them has to trust them blindly and to give up any pretense of security from the vendor
It's the biggest cognitive dissonance in the computer security field and it is very tiresome.
>there is a significant risk in granting the vendor (just like to anyone else) a new round of control over your computer
Just found a good analogy for some range of people.
Security in proprietary computing is like a Rubik's Cube without any colors. It's still a black box, and the rearranging nature of an update means you only have the illusion of a fix: the vulnerability you had at one specific square is just somewhere else, unknown.
@lxo
Then you personally know other programmers that you trust to audit it for you. Again, most people don't have that.
@lxo The nature of side channel attacks is that CPU-imposed barriers are no longer as strict as they should be. This means that hypervisor boundaries are porous - it's possible to extract material from other virtual machines. I understand the perspective that simply avoiding executing any untrusted software removes that risk, but I do not control the software running in other VMs on the same hardware. How do you solve that?
@lxo so I can't run a service that allows others to run software of their choosing on my hardware?
@lxo Real world evidence suggests that from a practical perspective and with appropriate mitigations, the boundaries are solid. But if we're assuming that the security boundaries are porous, why do we bother with the security boundaries at all? Why not allow ptrace() to read memory across user boundaries?
the way I see them, the boundaries are most useful to avoid accidental data corruption and leakage. the information and tactical asymmetries and the general quality of software make it so that an intelligent and resourceful opponent who gains some access can likely find ways to escalate that access, and presuming otherwise is likely self-defeating.
@lxo Do they fix them for good? I doubt it - any more than I doubt any security update fixes all bugs of that category. But they do fix the vulnerabilities that are publicly known and could otherwise be trivially exploited, and the fact that we haven't seen them exploited in the wild despite the relatively low cost and potential high gain strongly suggests that they work well enough.
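(An aside, not part of either poster's words: on x86 Linux you can check which microcode revision your CPU is actually running, since the kernel reports it in /proc/cpuinfo. A minimal sketch, with illustrative sample text rather than output from any specific CPU:)

```python
# Minimal sketch: extract the x86 microcode revision that Linux reports
# in /proc/cpuinfo. The sample text below is illustrative only.
import re

def microcode_revision(cpuinfo_text):
    """Return the first 'microcode' field from /proc/cpuinfo text, or None."""
    m = re.search(r"^microcode\s*:\s*(\S+)", cpuinfo_text, re.MULTILINE)
    return m.group(1) if m else None

sample = """processor\t: 0
vendor_id\t: GenuineIntel
microcode\t: 0xf4
"""
print(microcode_revision(sample))  # -> 0xf4

# On a real x86 Linux system:
# with open("/proc/cpuinfo") as f:
#     print(microcode_revision(f.read()))
```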
Could a state-sponsored attacker still win? Possibly! But that's the all-or-nothing argument again.
@lxo What you're doing here is protecting against a theoretical attack (Intel providing backdoored microcode updates) and leaving yourself open to a known attack (sidechannel data exfiltration). You may well have a use case where that's not a concern to you - you may be the only user on your system, there may be no secret data on the system, that kind of thing - but that's not everyone's case, and people should be able to make an informed choice about that.
They shouldn't have to, but that's the choice that exists in the real world - anyone using an Intel CPU is placing trust in Intel not to have backdoored it in some way (which is true even for non-microcoded CPUs). The threat you're describing is one where Intel is initially trustworthy but becomes untrustworthy - we have no evidence to say that's ever happened, so you're protecting against a theoretical threat while leaving yourself open against a demonstrated threat
@lxo deployment of a back door via CPU microcode update is a theoretical event. Some people will have that in their threat model, and will want to avoid those updates as a result. Absolutely legitimate choice. As you say, those people should also be ensuring every other avenue of untrustworthy software in their system is closed off. But that's not everyone, and that's not a policy decision that should be imposed without ensuring people understand the tradeoffs.
I get it that you don't consider that a threat for your freedom or your security, and so you wish to overlook it.
@lxo I think we're using inconsistent terminology. I'm using "backdoor" to describe CPU behaviour that alters its security properties in an attacker controlled way. With that definition the ability to update microcode is not in itself a backdoor, as making use of it is under the user's control. Obviously it can be used to deliver a backdoor, but that is an event that has never been observed.
sure, you have to open that door for it to sneak its blob in, unlike other vendor-backdooring systems at higher levels of enshittifiability, and it's presumably not universal, unlike other vendor-backdooring systems, but it seems specious to not consider it a backdoor.
but I get that you're speaking of a theoretical backdoor they could conceivably install if you open the preexisting backdoor to them. that amounts to dismissing the known, actual backdoor to distract yourself with a theoretical one.
@lxo It's an advertised feature that does nothing unless the operating system actively engages with it. A backdoor is something that's hidden from the user, and which directly gives someone else access to something they shouldn't. Introducing hardcoded credentials into sshd would be a backdoor - an advertised feature that has no security impact unless someone actively makes use of it isn't.
see, in your post you show you trust Intel to not be an attacker, because you imply Intel should have access to the innards of your computer. well, not mine. if I'm not allowed to change those bits, nobody should.
and if it didn't have any security impact, why do we seem to always end up talking about security when the topic is microcode?
but, sure, if you don't want to call it a backdoor, what kind of door do you want to call it? sneakydoor? sidedoor? bottomdoor? frontdoor? masterdoor?
@lxo it's advertised in the same way as paging is, even if CR3 is never mentioned in user-facing adverts. It's not hidden. If you want to argue that we should do more to educate users on the tradeoffs of using proprietary blobs, then I would absolutely agree with you - but so far we have a track record of Intel shipping updates that do block the demonstrated attacks, and not of them violating existing security assumptions in the process. The available evidence is that they improve security.
but what else do these changes do? can we even tell? can we even look into what other holes and backdoors they may bring about? are these affordances-vs-prohibitions advertised anywhere?
to me, good engineering is about finding balances between conflicting requirements, not about believing and blindly trusting vendors' salespeople and marketing departments. there are far too many examples of vendors abusing "for security" (without as much as mentioning whose security they speak of) for private gains for me to trust them or go along with them without questioning. "what are the ingredients?" "nevermind that, jussst eat it, it'ssssafe, trust ussss" isn't exactly inspiring of confidence to me.
@lxo the majority of the performance loss isn't in the microcode updates, it's in the OS making use of new functionality in those updates - if you pass mitigations=off you regain the performance even with the new microcode, and you can choose the set of mitigations applied to fit the particular threat model you have. By removing the ability to update it you remove the ability for users to make that choice, without reducing the quantity of non-free blobs the system depends on.
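(The per-mitigation choice described above can be inspected on Linux under /sys/devices/system/cpu/vulnerabilities/, one file per known issue. A hedged sketch of classifying the status strings found there; file names and contents vary by kernel version and CPU, and the samples below are only illustrative of the common formats:)

```python
# Hedged sketch: classify the status strings Linux exposes in
# /sys/devices/system/cpu/vulnerabilities/* (one file per issue,
# e.g. "meltdown", "spectre_v2"). Formats vary by kernel and CPU.
def classify(status):
    """Map a vulnerability status string to a coarse category."""
    if status.startswith("Not affected"):
        return "not-affected"
    if status.startswith("Mitigation:"):
        return "mitigated"
    if status.startswith("Vulnerable"):
        return "vulnerable"
    return "unknown"

for s in ("Not affected", "Mitigation: PTI", "Vulnerable"):
    print(s, "->", classify(s))

# On a real Linux system:
# from pathlib import Path
# for f in Path("/sys/devices/system/cpu/vulnerabilities").iterdir():
#     print(f.name, "->", classify(f.read_text().strip()))
```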
@lxo and I completely understand you making the choice not to trust an opaque update! For a bunch of threat models it's probably the right choice. What I object to is you making that choice on behalf of all of your users, and not making it clear to them what the impact may be.
as for the fallacy that we remove that ability, we've already covered it: we don't. it's free software, people can change it so that it does whatever they wish, and they don't even have to use our fork if they don't want to. our policy makes sense to the self-selected set of users, and it's entirely unlike the 'install it or else' model that Intel and you promote.
as for "without reducing", my count is that avoiding the new blob halves the blob count even in your twisted interpretation.
explain to me as if I were 50: why does it make sense for you to grant Intel new powers over your computer, just because they had some power over it at the time they made the component?
@lxo Brings no benefit for you, brings significant benefit for others. And, clearly, the CPU is running non-free microcode whether an update is loaded or not - replacing one blob with another doesn't increase the number of blobs the running system depends on.
But "fallacy"? Obviously it's removed. https://www.fsfla.org/ikiwiki/selibre/linux-libre/ uses the word "removed" several times. You removed the code that allowed someone to update the microcode. The fact that it can be added back doesn't mean it wasn't removed.
@lxo If I don't trust Intel to avoid introducing deliberate security backdoors via microcode updates, I should also not buy any new Intel CPUs - they might have introduced a backdoor. I shouldn't buy an old one either - the old one might have a backdoor that my current one doesn't. Either Intel is trustworthy, in which case the microcode updates are as safe as the microcode the CPU ships with, or they're not, in which case I should never trust any Intel CPUs at all.
or, in your twisted way of perceiving the preinstalled microcode, would double the blob count in the processor.
while the logic is neutered in the kernel, in the process of making the kernel stop demanding or requesting the blob, the ability to load the blob is not gone from the processor, obviously, and you know that. if you wish to deprive yourself of control over your processor, you can load and run a kernel that will help transfer your control over it to the vendor, whether it's the upstream blob-ridden and -pushing kernel, or a further modified version of Linux-libre that (re)implements your wish. that's your sovereign choice, and we're not taking it away, we're just refusing to entice you to "sacrifice freedom for a little security" (you know how that ends, yet you insist on it).
shall we start over, back from the top?
@lxo putting non-free code on a read-only optical disk doesn't stop it being non-free code. Putting it in read-only memory doesn't stop it being non-free code. It's code. You've come up with an entirely arbitrary definition to stop having to care about it.
@lxo Intel has the power over how your CPU behaves, whether you load new microcode or not. If you trust your existing CPU but don't trust future Intel you shouldn't load new microcode and you shouldn't buy new CPUs. The power dynamic has an impact on a number of things, but not your ability to determine whether your CPU is trustworthy.
Today, there is no way to run a computer without non free firmware. The good and proper way to handle that would have been to accept that (as with the non free operating systems in the early 80s) and to fund/promote/encourage projects to produce free replacements.
Instead, the fsf chose to put their heads in the sand and pretend non free firmware doesn't exist when it's burned to ROM.
@mjg59
CC: @mjg59@nondeterministic.computer @wouter@pleroma.debian.social
gnu was built on nonfree unix as part of making a replacement thereof, one of the few acceptable uses of nonfree software. it's acceptable because once the replacement is made, the problem is solved, and we've escaped the prison. making it unacceptable would keep us in prison forever.
(the other acceptable use is for reverse engineering; also acceptable because it solves the problem)
CC: @mjg59@nondeterministic.computer @wouter@pleroma.debian.social
Hardware is typically a black box. It's inescapable that, at some level, the machine will do what its designer made it to do, and there's nothing inherently wrong (as in unethical) about that.
Moreover, there's no expectation that you could be able to change it at that level, for it could be all hardware circuits, that are impossible to modify. Even if you could build another machine or component with the desired change, that wouldn't modify the original machine or component. There's no ethical imperative for that.
Even when you have access to its specifications and source code, which parts got compiled to hardware circuits and which were compiled into instructions for a general- or special-purpose programmable component is immaterial and irrelevant to tell whether the machine is usable.
It's a black box. It could range from all hardware to a qemu layer on top of all hardware or of a qemu layer on top of... You get the idea.
As with AGPLed software on a remote server, even with specifications and source code, you can't generally tell whether there are undisclosed malicious or undesirable features omitted from the sources. That would be unethical, but you can't generally tell, just as you can't generally tell whether friends really like you or just pretend to. It's the nature of black boxes, and if you worry too much about it, you may end up without friends, and without hardware.
Sure, if it exhibits malicious behaviors, you probably don't want it in your life.
For purposes of freedom and ethics, what matters for programmable hardware is whether the machine is faithful to its programming model. If it takes your instructions and carries them out, you can use it as a black box for your computing in freedom, whether it's on-premises hardware or a remote virtual machine.
Now, if you can tell that it takes instructions and commands from others, or send information to others, it's not a black box, and there are grounds for suspicion that those behaviors may be malicious, even if they don't directly interfere with the exposed programming model.
Software components outside the hardware black box bring with them an ethical issue that is not present in components inside the black box: they are visibly and indisputably software, and as such, you deserve control over what they do to your machine.
Being clearly outside the black box, they're not covered by the inescapable nature of hardware, not even theoretically: they're indisputably software, and software is modifiable unless someone prevents you from modifying it by unethical means.
This post is about ethics, the core issue for free software, not about security. For security, opening the black box matters, whether it's software or hardware.
CC: @mjg59@nondeterministic.computer
@lxo does sticking a copy of Linux on a CD and locking the player and attached computer in a black box mean that the owner of that box should have no expectations of being able to modify what is very clearly code? From an external perspective the operation of the box may be indistinguishable from a hardcoded CPU, but if we *know* that it contains free software, why is it ethical to prevent the owner from performing any modifications they desire?
what's incoherent to me is to talk about security while promoting black boxes of all sorts, including unethical ones.
it's specious at best to speak of the children who could be reverse engineering firmware for hardware components that will be obsolete by the time they are finished. someone resourceful enough to carry out such a project in a more timely fashion can also replace or reflash an EEPROM. that argument is bullshit to promote the blobs and their vendors' control over users.
it deflects the anger that should be directed at the component vendors, for not offering free firmware, towards those who point out their unethical stance and come up with an ethical compromise that denies them the control they sought.
CC: @wouter@pleroma.debian.social
@lxo @wouter you encourage users to buy hardware containing software they will never be able to free instead of buying hardware that a sufficiently driven user may be able to free. But even if it's never freed, it is easier in many cases to examine and audit that non-free software if it's loadable and very hard if not impossible if it's embedded in ROM in the device. I have personally done so for various devices I own, and have identified security issues that were rectified by the manufacturer.
if you're asking whether it's legally compliant with the GPL, I'll refer you to a lawyer, which I'm not, but my understanding is that nothing in the GPLv2 prevents this if you otherwise satisfy the GPLv2 requirements; GPLv3 might require another reading to make sure this wouldn't trip the requirement for installation instructions, but I'm pretty sure it wouldn't. a CDROM is not modifiable to begin with, and there's nothing illegal about burning a copy of the kernel Linux onto one. I find it very unlikely that anything in any GPL would break the principle of equivalence to a hardware circuit; it's long been established.
@lxo I'm somewhat bewildered to have an FSF board member say that I should have no ethical expectation to be able to modify GPLed software running on something I own as long as the vendor does a good enough job of nailing the box shut.
CC: @wouter@pleroma.debian.social
again, crack the black box open and you can replace the CD, or the PROM or whatever.
it would be insane to prohibit recording in read-only media just because you can't modify it.
hardware equivalence removes the ethical imperatives normally associated with software, because the software nature ceases to be relevant.
it doesn't necessarily remove copyright issues in as much as they apply, but they don't normally apply to hardware.
@lxo except it's clearly *not* equivalent to a hardware circuit, that's just an assertion you've made. And in your repeated mentioning of replacing ROMs I'm becoming concerned that you don't actually know much about hardware.
When there was no free GNU system yet, most people believed that Emacs was a nice editor but that there was no chance they'd ever succeed in writing a free os.
When there was a free compiler and a free libc, most people were like, this is a nice user space but nobody will ever make it a fully free os.
At every stage, the GNU project proved them wrong.
Why would the situation be different for non free firmware replacements?
@mjg59
It would be one thing if you advocated against firmware that can't be changed without a signature by the hardware manufacturer, but that's not the case here. You're generalizing that all hardware requires signed firmware blobs.
I agree that verifying firmware signatures in hardware is evil and should be outlawed. But hardware that does no verification, or that verifies only a checksum, in hardware? That's perfectly fine.
@mjg59
And although I don't think it works for me, I can understand the argument for Linux-Libre. If there is no free firmware, and you prefer to keep your hardware unmodified, sure.
What I'm saying though is that forbidding any form of update, ever, of the firmware, and doing so in hardware, is wrong, because it makes it equally impossible to replace the non free firmware with a free one.
@mjg59
That's an opinion, not a fact, and one I very much disagree with.
Software is software. It doesn't matter whether the software is burned in ROM, it's still software.
To claim otherwise means you're fine with running non free software.
I'm not. I accept that it's not possible in today's world, but it's still not a good thing.
@mjg59
Let me put it this way.
There are only very few cases in which firmware really needs to be embedded in the hardware and can't be put elsewhere.
The initial few opcodes of a CPU are a good example: the CPU still needs to initialise its own components, so there is no possible way for those to live outside the CPU die, and they're small enough that it doesn't matter if they're not updatable. @mjg59
Anything that can theoretically be updated or loaded from outside the die though is software. It doesn't matter whether it is "indistinguishable" from hardware, the fact is that it *is* software. And as it is software, the only ethical thing for it is to be free.
@mjg59 @lxo
@lxo
@mjg59
Instead, the fsf caved to the people who push non free embedded software and told them it's allowed, as long as the fsf can pretend the embedded software is not there.
I find this sad, and a betrayal of everything the free software movement stands for.
@lxo
@mjg59
now, you want to label free software core philosophy as an assertion of mine, fine, what could I do but accept the honor? thank you!
@lxo yes, it's a fantastical example that's intended to demonstrate that your argument is non-sensical. Your position seems to be that if the box is closed then it's not software, but if someone were to figure out how to open it it would become software. That's clearly not how any of this works.
- resistance from hardware vendors, who were used to having all the pieces strictly in their hands, but were finding advantage in moving some pieces out of the black box, so they started coming up with various tricks to have the cake and eat it too. it looks like we may have just been lucky in blindsiding the OS vendors, who couldn't mount such a resistance in a timely fashion, even when they also made the hardware. now they have restricted boot, walled app stores and whatnot, and they're coming for our primary computing devices too.
- lack of resistance from the community, that welcomed this move and didn't counter it, out of ignorance about the difference between bits inside the black box and those outside, of rationalization over the hardware vendors' gambit, and of evidently not caring enough to go even as far as reverse engineering the bits that weren't guarded by crypto dogs, preferring to ship the binary blobs until someone else came up with an alternative
and firmware restricted in this very way is what got this thread started, maybe you missed that.
I think we are in violent agreement that tivoizing hardware is unethical, and that hardware that does no such thing is acceptable. I guess we even agree that, if such hardware demands blobs to be loaded onto it, those blobs are unethical and shouldn't exist either. it seems to me that our differences are limited to the rarer and rarer case in which a former piece of hardware circuit has been replaced by a programmable circuit with a preloaded program inside the black box. that doesn't strike me as reason for so much heat.
now, I see you're associated with debian, that holds a stance that promotes very heavily, seemingly welcomes the growth of, and forces onto users the very blobs you oppose. you must be very strong to survive in that atmosphere. I hope you're aware of and supportive of such efforts as debian-libre.
CC: @mjg59@nondeterministic.computer
rejecting hardware that requires blobs with signatures sounds like a decent compromise for a certification program that isn't aligned with free software philosophy but that wants to push in that general direction.
how about launching a debian certification program along these lines?
CC: @mjg59@nondeterministic.computer @wouter@pleroma.debian.social
I find that the notion of tolerating components inside the hardware black box makes ethical sense, and that's the line that the free software movement has drawn from the beginning. there's no caving or betrayal involved, it's just a consistent ethical stance, even if you disagree with it and wish it wasn't so. if it comes across as betrayal to you, that follows from misalignment between our philosophy and your expectations. I hope you'll eventually share our values, or at least come to stop hating them and aiming for destruction.
CC: @mjg59@nondeterministic.computer @wouter@pleroma.debian.social
if someone were to figure out how to open it, one would find out whether it's hardware or software, but it would still be of no ethical consequence. to you, AFAICT it retroactively makes a difference, and that makes no sense to me. I think this is where we differ.
your argument comes across to me as if the designer's choice of a programmable circuit and a program for a machine component, instead of a dedicated hardware circuit, were somehow meant to unethically make it harder for others to control what the machine does for them. that seems backwards and, TBH, completely bonkers to me. can you elucidate how your reasoning differs from this, or perhaps make it make sense?
@lxo it makes no retroactive difference - it is software, it always was software, all the normal ethical considerations should apply. Now, in the same way that free software published in a book can't be modified in place, there may be practical considerations that would limit exercise if those freedoms - in which case we should argue that implementations that make their exercise easier are preferable to ones that don't
@lxo yes, you have come up with an incorrect model in order to avoid admitting you're running non-free code.
@lxo if you're willing to call them programs, why do the four freedoms not apply? At minimum, why do you not deserve the right to know what these programs are actually doing?
@lxo (the program in your hard drive can, by the way, be updated by the vendor - but it's different to the microcode case because it's in mutable storage and never in ROM and so the update is permanent)
@lxo the firmware in your WiFi card isn't doing your computing, but RYF insists that the program running there must either be in ROM or free. Why is it different to your hard drive?
@lxo so why is it not relevant to RYF but WiFi is?
I don't think it's a trend. Even so, you said, and I quote, "a piece of software that you don't stand a chance of modifying yourself, because it's digitally signed by the vendor so that you can't".
Not all non free firmware is like that. Yet the ryf campaign requires that no firmware be updatable as though it were.
So, when a replacement free firmware is built, a device that has the RYF badge will be less free than one that doesn't.
@mjg59
Also, no, I'm not just associated with Debian, I've been a Debian Developer for just over 25 years now, and have been a DPL candidate thrice. Trust me when I say that we don't welcome the non free blobs. Our strategy however is one of pragmatism: if people need to buy expensive hardware to run free software, that's a barrier to adoption. We keep the barrier low in hopes that it will convince some people.
@mjg59
This is not the strategy that the fsf chose to make, and that's fine. It also won't work every time, and that's also fine. But some people will be convinced, and become a member of the community, and that's a good thing.
I do think that having blobs is fine if there is no alternative, for very much the same reasons as why the GNU project started off by accepting non free kernels while replacements were being written.
@mjg59
Oh come now. The ryf program is a hardware certification badge. Why would you create a hardware certification badge if you weren't trying to influence hardware manufacturers?
I vehemently disagree with you that your stance is consistent. If it's theoretically updatable, it's software. Artificially crippling your hardware so that you can't update it anymore is like putting your fingers in your ears and going "La La La can't hear you"
@mjg59
It's modifiable, therefore it's software, therefore it should be free.
That's consistent and makes sense.
Yes, that's difficult to reach today. The GNU project has accepted similar compromises in similar situations in the past, and a hardware certification program that encourages that could go a long way toward fixing it.
But sure, tell me I need to go cripple the Debian kernel instead of admitting you were wrong.
@mjg59
hard drive, wifi, keyboard, all that firmware is tolerated if it's within the black box.
but if it's something that would have to be installed and loaded, in a user-visible way, that's no longer part of the black box, and it's known to be software.
@lxo That's not what RYF says:
"The exception applies to software delivered inside auxiliary and low-level processors and FPGAs, within which software installation is not intended after the user obtains the product"
Hard drive firmware is intended to be installed after the user obtains the product. Vendors routinely ship bug fix and reliability updates and won't provide support unless you install it. Hard drives don't meet the RYF guidelines.
@lxo this is important, because people have absolutely reverse engineered this and identified security issues that wouldn't be known if the code was invisible or ignored
but the topic of the thread that you're attempting to hijack was nonfree firmware that is like that.
I don't see that the device will be any less free. it is free when it ships, just as free as a piece of hardware can be. it will just be slightly less flexible, because that component will behave like a hardware circuit even if, deep inside the black box, it turns out to be software.
the point is that there's not a loss of freedom, nor unethical behavior, just the nonrealization of a noncritical but yeah, desirable affordance.
CC: @mjg59@nondeterministic.computer
@lxo It's intended that the software be updated and so the exception doesn't apply, and so it needs to be free software to meet RYF. It's not, so doesn't. Sorry, I didn't write the rules.
GNU used nonfree programs to develop their replacements. that, taken to the end, solved the problem that the nonfree programs posed, and eventually they no longer had to be used.
you're justifying using nonfree programs because there's no alternative. that doesn't carry any kind of expectation of solving the problem, because the alternatives won't come into existence by that kind of acceptance. I pose that this unquestioning acceptance demotes, rather than promotes, the development of alternatives.
CC: @mjg59@nondeterministic.computer @wouter@pleroma.debian.social
I get it that you disagree, if you even understand it. but your claiming that it's not consistent suggests to me that you don't understand it. making hardware behave like hardware is by no means crippling it, that's what hardware has always been supposed to be like. demanding users to install nonfree software would make the hardware incomplete and dependent on nonfree software, and that would be insulting to users who don't want to deal with nonfree software, but who don't mind hardware's being hardware.
CC: @mjg59@nondeterministic.computer
CC: @mjg59@nondeterministic.computer @wouter@pleroma.debian.social
by twisting the rules you are effectively rewriting them.
anyhow, this is also off topic for the thread, and it's already too big for me to deal with.
@lxo what? The device provides an interface to update the software included in it, and it is intended that this occur after the user purchases the device. It's the extremely clear and plain reading of the language. The guideline doesn't say "It's fine if the user chooses not to do this".
except they do, but your twisted reading is somehow right and the intended one isn't. sure.
I've had enough of this. I hope you had fun too.
@lxo or the reviewers were unaware of the update interfaces? The exception doesn't apply to the CPU in any case.
I'm not sure you're getting what I'm saying, so I'm going to give it one more try and then just give up.
I'm not going to insult your intelligence by doing a Socratic dialogue here. I know you know that non free firmware is software, and that as it is not free, it would be better if it were made free.
@mjg59
You (the fsf, not you personally) came up with a rule that allows you to ignore the fact that it is there so you can live your live with computing infrastructure that is under your control for as much as possible. I get that.
But I know that you know, deep down, if you are honest with yourself, that the non free software is still there and that the rule is an illusion.
@mjg59
So your goal, as the fsf, should be to come up with a plan to eradicate that non free software. I know it will be an uphill battle, but the whole GNU project was an uphill battle and that never stopped you before. It shouldn't stop you now.
I already made a few suggestions as to how you could use the ryf program to make that happen. You dismissed some of those options for reasons that I agree make sense.
@mjg59
But there are more things you could do. Here are some more suggestions:
Make the ryf program a multi-tiered program, with the lower tier being the current situation and the higher tier not allowing non free firmware. Initially you won't have many submissions for this higher tier. That's fine. Some people will aspire to get there, and start working on free firmware. Even if they fail, they still may do stuff that improves the world.
@mjg59
In this, have some allowance so that hardware manufacturers don't have to choose between losing the ryf badge or destroying stock when someone announces free firmware for a component in their device.
Introduce bonus points, or stars on the badge, or some such, for each piece of free firmware that's used by the device. Simple, but could be effective.
@mjg59
I just came up with these during my breakfast this morning. I'm sure you can come up with more if you need to.
The point is to reward and encourage people to rid the world of free software. I know you want this. I want this.
Or, I don't know, you can keep your head in the sand and wait until the fsf is completely irrelevant because everyone knows they don't really care about free software.
🤷
@mjg59
CC: @mjg59@nondeterministic.computer @wouter@pleroma.debian.social
CC: @wouter@pleroma.debian.social @mjg59@nondeterministic.computer
Great. I'm glad that you find my suggestions acceptable and that you can see the benefits in them.
If there's anything I can do to help with making that a reality... I can't promise to do much but I can try, and would be happy to be kept in the loop regardless. s/pleroma.debian.social/debian.org/ for my email.
Also, thanks for ignoring the accidentally missed 'non' in my last post. No we don't want to rid the world of free software 😲😂
@mjg59