pleroma.debian.social

@lxo
Do you genuinely honestly actually audit the source code of every single piece of software running on your system and compile it all yourself, including web code?
Either you have a lot of time on your hands and a lot of skill, or you're running a very minimal system, or you actually don't.

I don't have to. that's the power of community.

security doesn't work in absolutes, and auditability is an imperfect deterrent, but it's infinitely better than the moves to prevent auditability that hostile vendors adopt

I do audit the rare cases of web blobs that are imposed on me, because I can't count on community for those, and my security depends on it even when my freedom has been unjustly taken away

@lxo
And even if you do, most people* can't. So for them, they need third-party audits, which as I have previously pointed out, can be done without source code. Or otherwise they try to get their software from sources they trust.

*For example, rocket scientists and brain surgeons

@lxo
>but delivering them in the form of binary blobs means that the one who accepts them has to trust them blindly and to give up any pretense of security from the vendor

It's the biggest cognitive dissonance in the computer security field and it is very tiresome.

>there is a significant risk in granting the vendor (just like to anyone else) a new round of control over your computer
Just found a good analogy for a certain range of people.
Security in proprietary computing is like a Rubik's cube without any colors. It's still a black box, but the rearranging nature of an update means you only get the illusion of a fix: the vulnerability you had at one specific square is just somewhere else now, in the unknown.

@lxo
Then you personally know other programmers that you trust to audit it for you. Again, most people don't have that.

that's missing the point. auditability alone is already quite a deterrent. that some of us actually engage in auditing is a bonus that benefits everyone, even if it doesn't happen very often. it's kind of the panopticon effect, but for the better.

@lxo The nature of side channel attacks is that CPU-imposed barriers are no longer as strict as they should be. This means that hypervisor boundaries are porous - it's possible to extract material from other virtual machines. I understand the perspective that simply avoiding executing any untrusted software removes that risk, but I do not control the software running in other VMs on the same hardware. How do you solve that?

knowing that the hypervisor and virtual machine boundaries are porous, with or without the fixes to already-published problems, you realize it's insecure to allow untrusted others to run untrusted software on your machine, even within a virtual machine. javascript virtual machines are an example that I explicitly mentioned, but the same logic applies to other cases.
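
The porousness claim above can be illustrated with a toy, deterministic model of a prime+probe cache side channel. This is a sketch only, not a real exploit: the `TinyCache` class and `leak_bit` helper are invented for illustration, and real attacks measure access latency on shared hardware rather than inspecting cache state directly.

```python
# Toy, deterministic model of a prime+probe cache side channel.
# TinyCache and leak_bit are invented names for illustration only.

class TinyCache:
    """Direct-mapped cache model: each set holds one (owner, line) tag."""

    def __init__(self, nsets=2):
        self.sets = [None] * nsets

    def access(self, owner, line):
        """Access a cache line; return True on hit, False on miss."""
        s = line % len(self.sets)
        hit = self.sets[s] == (owner, line)
        self.sets[s] = (owner, line)  # fill (and evict) on miss
        return hit


def leak_bit(secret_bit):
    cache = TinyCache()
    # prime: the attacker fills every set with its own lines
    for line in range(2):
        cache.access("attacker", line)
    # the victim runs on the same hardware; which set it touches
    # depends on its secret
    cache.access("victim", secret_bit)
    # probe: the set the victim used now misses for the attacker
    # (in a real attack, the miss shows up as higher access latency)
    misses = [not cache.access("attacker", line) for line in range(2)]
    return misses.index(True)
```

The attacker never reads the victim's data, yet `leak_bit(b)` recovers `b` purely from which of its own lines got evicted: the isolation boundary leaks through shared microarchitectural state.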

@lxo so I can't run a service that allows others to run software of their choosing on my hardware?

sure you can. but both you and they should be aware of the porousness of the boundaries, and of the security implications thereof, so as to do your own computing there accordingly. and please don't SaaSS them.

@lxo Real world evidence suggests that from a practical perspective and with appropriate mitigations, the boundaries are solid. But if we're assuming that the security boundaries are porous, why do we bother with the security boundaries at all? Why not allow ptrace() to read memory across user boundaries?

that sort of all-or-nothing argument doesn't make much sense to me, just as much as "we need the fixes!" I mean, do you seriously believe that the existing fixes fix all side-channel problems for good? given the ongoing stream of such problems, IMHO it would be irresponsible to act as if this was the case and not take other isolation precautions. at which point the so-called fixes are more like wasteful overhead.

the way I see them, the boundaries are most useful to avoid accidental data corruption and leakage. the information and tactical asymmetries and the general quality of software make it so that an intelligent and resourceful opponent who gains some access can likely find ways to escalate that access, and presuming otherwise is likely self-defeating.

@lxo Do they fix them for good? I doubt it - any more than I doubt any security update fixes all bugs of that category. But they do fix the vulnerabilities that are publicly known and could otherwise be trivially exploited, and the fact that we haven't seen them exploited in the wild despite the relatively low cost and potential high gain strongly suggests that they work well enough.

Could a state-sponsored attacker still win? Possibly! But that's the all-or-nothing argument again.

@lxo What you're doing here is protecting against a theoretical attack (Intel providing backdoored microcode updates) and leaving yourself open to a known attack (side-channel data exfiltration). You may well have a use case where that's not a concern to you - you may be the only user on your system, there may be no secret data on the system, that kind of thing - but that's not everyone's case, and people should be able to make an informed choice about that.

they shouldn't have to make themselves vulnerable to Intel to use Intel processors for their computing, whatever their computing is. but Intel doesn't want it to be this way. it wants the backdoors it places on computers to remain (whether or not you upgrade the microcode), and it wants to control how users get to use the computing equipment it sells. while some programs demand users to accept that submission, others remind users that they don't deserve to be so mistreated. all of these facts are important for decision making, and the choices are there regardless of what any of us says. the one choice that isn't is the one that most should be, that users most deserve.

FWIW, I don't doubt they fix the known problems. I doubt they fix the problems we still don't know about, that are likely to be there because of the pressure for performance and the long tradition of designing such things without taking timing side-channel attacks into account. I wouldn't count on others' not knowing about likely problems in a category whose specifics we still don't know about if my security depended on it. so I make my decisions under the logical reasoning that those problems exist, I plan accordingly, and as a consequence, the fixes would bring me no security, more vulnerability, and plenty of undesirable overhead. that's lose-lose-lose, so I won't go there. I guess that's not the case for you, if you're more willing to take other risks that may compromise your security, and I'm glad you have choices that enable you to do your computing as you wish, whether that's patching linux-libre to enable the loading of the blobs you want, or booting any of the nonfree kernels out there that would reprimand you if you didn't have those blobs handy for it to load for (or against) you.

They shouldn't have to, but that's the choice that exists in the real world - anyone using an Intel CPU is placing trust in Intel not to have backdoored it in some way (which is true even for non-microcoded CPUs). The threat you're describing is one where Intel is initially trustworthy but becomes untrustworthy - we have no evidence to say that's ever happened, so you're protecting against a theoretical threat while leaving yourself open to a demonstrated threat.

to be more abundantly clear, neither attack is theoretical. the proprietary microcode update is also a known attack. and so is the technical measure that prevents you from installing your own fixes.

@lxo deployment of a back door via CPU microcode update is a theoretical event. Some people will have that in their threat model, and will want to avoid those updates as a result. Absolutely legitimate choice. As you say, those people should also be ensuring every other avenue of untrustworthy software in their system is closed off. But that's not everyone, and that's not a policy decision that should be imposed without ensuring people understand the tradeoffs.

the backdoor ships with the microprocessor, and it will only allow Intel's changes in. there's nothing theoretical about it.

I get it that you don't consider that a threat for your freedom or your security, and so you wish to overlook it.

@lxo I think we're using inconsistent terminology. I'm using "backdoor" to describe CPU behaviour that alters its security properties in an attacker controlled way. With that definition the ability to update microcode is not in itself a backdoor, as making use of it is under the user's control. Obviously it can be used to deliver a backdoor, but that is an event that has never been observed.

how doesn't the door that Intel uses to alter the CPU properties in an Intel-controlled way fit your description of backdoor? just because they're upfront about its existence, so we might as well call it a front door?

sure, you have to open that door for it to sneak its blob in, unlike other vendor-backdooring systems at higher levels of enshittifiability, and it's presumably not universal, unlike other vendor-backdooring systems, but it seems specious to not consider it a backdoor.

but I get that you're speaking of a theoretical backdoor they could conceivably install if you open the preexisting backdoor to them. that amounts to dismissing the known, actual backdoor to distract yourself with a theoretical one.

@lxo It's an advertised feature that does nothing unless the operating system actively engages with it. A backdoor is something that's hidden from the user, and which directly gives someone else access to something they shouldn't. Introducing hardcoded credentials into sshd would be a backdoor - an advertised feature that has no security impact unless someone actively makes use of it isn't.

I suppose you'll find that most users aren't aware and have never heard of this "advertised" feature. can you point to an ad that even mentions something like "get your processor fixed or broken or downgraded through your operating system vendor!"?

see, in your post you show you trust Intel to not be an attacker, because you imply Intel should have access to the innards of your computer. well, not mine. if I'm not allowed to change those bits, nobody should.

and if it didn't have any security impact, why do we seem to always end up talking about security when the topic is microcode?

but, sure, if you don't want to call it a backdoor, what kind of door do you want to call it? sneakydoor? sidedoor? bottomdoor? frontdoor? masterdoor?

@lxo it's advertised in the same way as paging is, even if CR3 is never mentioned in user-facing adverts. It's not hidden. If you want to argue that we should do more to educate users on the tradeoffs of using proprietary blobs, then I would absolutely agree with you - but so far we have a track record of Intel shipping updates that do block the demonstrated attacks, and not of them violating existing security assumptions in the process. The available evidence is that they improve security.

to me, the available evidence is that they slow things down massively to accomplish this presumed security improvement. so much that Intel added obnoxious terms to microcode licenses to prohibit the publication of performance benchmarks, as I'm sure you know. would you say that this fact was advertised too?

but what else do these changes do? can we even tell? can we even look into what other holes and backdoors they may bring about? are these affordances-vs-prohibitions advertised anywhere?

to me, good engineering is about finding balances between conflicting requirements, not about believing and blindly trusting vendors' salespeople and marketing departments. there are far too many examples of vendors abusing "for security" (without as much as mentioning whose security they speak of) for private gains for me to trust them or go along with them without questioning. "what are the ingredients?" "nevermind that, jussst eat it, it'ssssafe, trust ussss" isn't exactly inspiring of confidence to me.

@lxo the majority of the performance loss isn't in the microcode updates, it's in the OS making use of new functionality in those updates - if you pass mitigations=off you regain the performance even with the new microcode, and you can choose the set of mitigations applied to fit the particular threat model you have. By removing the ability to update it you remove the ability for users to make that choice, without reducing the quantity of non-free blobs the system depends on.
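
On Linux, the per-mitigation choice mentioned above is visible at runtime: the kernel reports each known vulnerability's mitigation state under `/sys/devices/system/cpu/vulnerabilities`. A minimal sketch (the `mitigation_status` helper name is mine; the sysfs path is the kernel's standard interface):

```python
# Read the kernel's self-reported side-channel mitigation status.
# mitigation_status() is a sketch; the sysfs directory is the standard
# Linux interface and is absent on non-Linux systems.
from pathlib import Path

VULN_DIR = Path("/sys/devices/system/cpu/vulnerabilities")


def mitigation_status():
    """Map each vulnerability name to the kernel's status string,
    e.g. 'Mitigation: ...', 'Vulnerable', or 'Not affected'.
    Returns an empty dict where the sysfs directory doesn't exist."""
    if not VULN_DIR.is_dir():
        return {}
    return {p.name: p.read_text().strip()
            for p in sorted(VULN_DIR.iterdir())}


if __name__ == "__main__":
    for name, status in mitigation_status().items():
        print(f"{name}: {status}")
```

Booting with `mitigations=off` (or a more selective set of per-mitigation parameters) changes what these files report, which is the user-level trade-off being discussed here.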

@lxo and I completely understand you making the choice not to trust an opaque update! For a bunch of threat models it's probably the right choice. What I object to is you making that choice on behalf of all of your users, and not making it clear to them what the impact may be.

do you even listen to yourself? how foolish would it be to put something in that brings no benefit, causes a slowdown and potentially brings other problems just because you have the option to disable part of the damage it causes?

as for the fallacy that we remove that ability, we've already covered it: we don't. it's free software, people can change it so that it does whatever they wish, and they don't even have to use our fork if they don't want to. our policy makes sense to the self-selected set of users, and it's entirely unlike the 'install it or else' model that Intel and you promote.

as for "without reducing", my count is that avoiding the new blob halves the blob count even in your twisted interpretation.

explain to me as if I were 50: why does it make sense for you to grant Intel new powers over your computer, just because they had some power over it at the time they made the component?

one of the benefits of being a small community of freedom-conscious people is that we get to know our users better, instead of perceiving them as a mass that needs to be all pushed one way or another. the latter thinking would typically get them into walled gardens or slaughterhouses.

@lxo Brings no benefit for you, brings significant benefit for others. And, clearly, the CPU is running non-free microcode whether an update is loaded or not - replacing one blob with another doesn't increase the number of blobs the running system depends on.

But "fallacy"? Obviously it's removed. https://www.fsfla.org/ikiwiki/selibre/linux-libre/ uses the word "removed" several times. You removed the code that allowed someone to update the microcode. The fact that it can be added back doesn't mean it wasn't removed.

@lxo If I don't trust Intel to avoid introducing deliberate security backdoors via microcode updates, I should also not buy any new Intel CPUs - they might have introduced a backdoor. I shouldn't buy an old one either - the old one might have a backdoor that my current one doesn't. Either Intel is trustworthy, in which case the microcode updates are as safe as the microcode the CPU ships with, or they're not, in which case I should never trust any Intel CPUs at all.

the behavior of the CPU would be the same if the preloaded microcode were purely hardware circuits. it can be technically and ethically regarded as a component of the hardware, which makes it equivalent to neither free software nor nonfree software, but to a hardware circuit. which is not great if it's not freedom-respecting hardware, but it's what's there. over the years, its features and failings have been largely understood, and it seems that the processor equipped with it does follow our instructions after all, so it is usable in freedom. the same cannot be said of any of the many updates pushed onto users, that are quite clearly and self-evidently nonfree software, and that would turn the ethical equivalent of a piece of hardware into a piece of hardware with nonfree software, granting the one who made the blob further unjust, exclusive and renewed power over the user.

or, in your twisted way of perceiving the preinstalled microcode, loading an update would double the blob count in the processor.

while the logic is neutered in the kernel, in the process of making the kernel stop demanding or requesting the blob, the ability to load the blob is not gone from the processor, obviously, and you know that. if you wish to deprive yourself of control over your processor, you can load and run a kernel that will help transfer your control over it to the vendor, whether it's the upstream blob-ridden and -pushing kernel, or a further modified version of Linux-libre that (re)implements your wish. that's your sovereign choice, and we're not taking it away, we're just refusing to entice you to "sacrifice freedom for a little security" (you know how that ends, yet you insist on it).

oh, my, here's that binary thinking of what others should or shouldn't do again, what should or shouldn't be trusted, without a hint of a grasp about the ethics of the power dynamics involved.
shall we start over, back from the top?

@lxo
Software loaded to ROM chips is still software. Otherwise I can make Windows be freedom-respecting by burning it to ROM chips.
@mjg59

@lxo putting non-free code on a read-only optical disk doesn't stop it being non-free code. Putting it in read-only memory doesn't stop it being non-free code. It's code. You've come up with an entirely arbitrary definition to stop having to care about it.

@lxo Intel has the power over how your CPU behaves, whether you load new microcode or not. If you trust your existing CPU but don't trust future Intel you shouldn't load new microcode and you shouldn't buy new CPUs. The power dynamic has an impact on a number of things, but not your ability to determine whether your CPU is trustworthy.

@lxo
In the early 80s, the fsf accepted having to run non free operating systems as there was no option at the time to run a free operating system instead. As soon as that stopped being true, they stopped accepting that. This was good and proper.
@mjg59

@lxo
Today, there is no way to run a computer without non free firmware. The good and proper way to handle that would have been to accept that (as with the non free operating systems in the early 80s) and to fund/promote/encourage projects to produce free replacements.

Instead, the fsf chose to put their heads in the sand and pretend non free firmware doesn't exist when it's burned to ROM.
@mjg59

@lxo
Worse, they required that firmware not be updatable for a piece of hardware to achieve the 'restricts your freedom' badge. As a result, if someone builds a free replacement for a bit of non free firmware in the device, you can't even make it free anymore.

This is sad.
@mjg59

your vendetta misses the point that this discussion is about a piece of software that you don't even stand a chance of modifying yourself, because it's digitally signed by the vendor so that you can't. the theoretical arguments about building a free replacement just don't apply to this program. conversely, RYF doesn't stand in the way of replacing hardware components, or equivalent-to-hardware components, but do you really care about that beyond spewing hatred and reinforcing the power imbalance that favors hardware vendors' control over users?

CC: @mjg59@nondeterministic.computer @wouter@pleroma.debian.social

in the early 80s the fsf didn't exist

gnu was built on nonfree unix as part of making a replacement thereof, one of the few acceptable uses of nonfree software. it's acceptable because once the replacement is made, the problem is solved, and we've escaped the prison. making it unacceptable would keep us in prison forever.

(the other acceptable use is for reverse engineering; also acceptable because it solves the problem)

CC: @mjg59@nondeterministic.computer @wouter@pleroma.debian.social

again, you're missing the point that most firmware is digitally signed to prevent us from solving the problem, or internal to the hardware black box and (technically and ethically) equivalent to a hardware circuit (so there's no ethical difference between its being software or hardware). the good and proper way to handle that is to promote the development of hardware that uses free firmware, which we do, not give hardware makers a free pass to push enshittifiable nonfree blobs onto their victims, which would be self-defeating, but many others insist on doing.

CC: @mjg59@nondeterministic.computer @wouter@pleroma.debian.social

@lxo @wouter but you're happy to endorse hardware that contains code that can never be modified, even to the extent of promoting it over hardware that runs non-free code that *could* be freed. I accept this isn't the case for Intel microcode, but it's still an incoherent position.

Pondering about the true nature of the soul of hardware components doesn't strike me as a useful way to reason about whether hardware is usable.

Hardware is typically a black box. It's inescapable that, at some level, the machine will do what its designer made it to do, and there's nothing inherently wrong (as in unethical) about that.

Moreover, there's no expectation that you could be able to change it at that level, for it could be all hardware circuits, which are impossible to modify. Even if you could build another machine or component with the desired change, that wouldn't modify the original machine or component. There's no ethical imperative for that.

Even when you have access to its specifications and source code, which parts got compiled to hardware circuits and which were compiled into instructions for a general- or special-purpose programmable component is immaterial and irrelevant to tell whether the machine is usable.

It's a black box. It could range from all hardware to a qemu layer on top of all hardware or of a qemu layer on top of... You get the idea.

As with AGPLed software on a remote server, even with specifications and source code, you can't generally tell whether there are undisclosed malicious or undesirable features omitted from the sources. That would be unethical, but you can't generally tell, just as you can't generally tell whether friends really like you or just pretend to. It's the nature of black boxes, and if you worry too much about it, you may end up without friends, and without hardware.

Sure, if it exhibits malicious behaviors, you probably don't want it in your life.

For purposes of freedom and ethics, what matters for programmable hardware is whether the machine is faithful to its programming model. If it takes your instructions and carries them out, you can use it as a black box for your computing in freedom, whether it's on-premises hardware or a remote virtual machine.

Now, if you can tell that it takes instructions and commands from others, or sends information to others, it's not a black box, and there are grounds for suspicion that those behaviors may be malicious, even if they don't directly interfere with the exposed programming model.

Software components outside the hardware black box bring with them an ethical issue that is not present in components inside the black box: they are visibly and indisputably software, and as such, you deserve control over what they do to your machine.

Being clearly outside the black box, they're not covered by the inescapable nature of hardware, not even theoretically: they're indisputably software, and software is modifiable unless someone prevents you from modifying it by unethical means.

This post is about ethics, the core issue for free software, not about security. For security, opening the black box matters, whether it's software or hardware.

see the post that starts "Pondering about the true nature of the soul of hardware components doesn't strike me as a useful way to reason about whether hardware is usable." at https://snac.lx.oliva.nom.br/lxo/p/1772012447.506809

CC: @mjg59@nondeterministic.computer

@lxo does sticking a copy of Linux on a CD and locking the player and attached computer in a black box mean that the owner of that box should have no expectations of being able to modify what is very clearly code? From an external perspective the operation of the box may be indistinguishable from a hardcoded CPU, but if we *know* that it contains free software, why is it ethical to prevent the owner from performing any modifications they desire?

come to think of it, "install now" or "maybe later" aren't very desirable choices to be pestered with frequently, whether from programs or from blob pushers. some people just prefer to say "no way, leave me alone", and then they install GNU Linux-libre.

I am happy to endorse hardware with unmodifiable software equivalent to a hardware circuit inside the black box, which creates no power dynamics (only power statics, which is an inescapable given), and to reject outside-the-box nonfree unjust software that vendors want users to install and run to further their control over the users, yes. what's wrong or incoherent with that?

what's incoherent to me is to talk about security while promoting black boxes of all sorts, including unethical ones.

it's specious at best to speak of the children who could be reverse engineering firmware for hardware components that will be obsolete by the time they are finished. someone resourceful enough to carry out such a project in a more timely fashion can also replace or reflash an EEPROM. that argument is bullshit to promote the blobs and their vendors' control over users.

it deflects the anger that should be directed at the component vendors, for not offering free firmware, towards those who point out their unethical stance and come up with an ethical compromise that denies them the control they sought.

CC: @wouter@pleroma.debian.social

@lxo @wouter you encourage users to buy hardware containing software they will never be able to free instead of buying hardware that a sufficiently driven user may be able to free. But even if it's never freed, it is easier in many cases to examine and audit that non-free software if it's loadable and very hard if not impossible if it's embedded in ROM in the device. I have personally done so for various devices I own, and have identified security issues that were rectified by the manufacturer.

@lxo @wouter would this have been better if I could fix it myself and share those improvements with others? Yes! Absolutely! Would it have been worse if that code had been fixed in the hardware, making it impossible to rectify those issues? I think so!

if you're asking whether it's ethically equivalent to a hardware circuit, I'm inclined to say that it is.

if you're asking whether it's legally compliant with the GPL, I'll refer you to a lawyer, which I'm not, but my understanding is that nothing in the GPLv2 prevents this if you otherwise satisfy the GPLv2 requirements; GPLv3 might require another reading to make sure this wouldn't trip the requirement for installation instructions, but I'm pretty sure it wouldn't. a CD-ROM is not modifiable to begin with, and there's nothing illegal about burning a copy of the kernel Linux onto one. I find it very unlikely that anything in any GPL would break the principle of equivalence to a hardware circuit; it's long been established.

@lxo @wouter (and nothing with code in ROM is doing it with a replaceable chip these days, it's all going to be in mask on the SoC or fused in at manufacturing time)

@lxo I'm somewhat bewildered to have an FSF board member say that I should have no ethical expectation to be able to modify GPLed software running on something I own as long as the vendor does a good enough job of nailing the box shut.

no bullshit, please. a sufficiently driven user may be able to crack the black box, get at the firmware, and do as you say. of course if it's hardware, it's hardware, and then replacing it might require making a replacement hardware component, which is not viable for most. I can see the appeal. that's the reason why vendors love it, while guarding their firmware with encryption and digital signatures so that nobody else gets the benefits.

CC: @wouter@pleroma.debian.social

I can relate, I was just as shocked when I learned that Windows (of the time) burned on ROM would be equivalent to a hardware circuit.

again, crack the black box open and you can replace the CD, or the PROM or whatever.

it would be insane to prohibit recording in read-only media just because you can't modify it.

hardware equivalence removes the ethical imperatives normally associated with software, because the software nature ceases to be relevant.

it doesn't necessarily remove copyright issues in as much as they apply, but they don't normally apply to hardware.

@lxo @wouter there's plenty of firmware that isn't signed or encrypted, but even signed firmware gives the ability to inspect the security of the device in a way having it in mask ROM doesn't. For me, that's an important freedom.

@lxo except it's clearly *not* equivalent to a hardware circuit, that's just an assertion you've made. And in your repeated mentioning of replacing ROMs I'm becoming concerned that you don't actually know much about hardware.

@lxo
When there was no free GNU system yet, most people believed that Emacs was a nice editor but that there was no chance they'd ever succeed in writing a free os.

When there was a free compiler and a free libc, most people were like, this is a nice user space but nobody will ever make it a fully free os.

At every stage, the GNU project proved them wrong.

Why would the situation be different for non free firmware replacements?
@mjg59

@lxo
It would be one thing if you advocated against firmware that can't be changed without a signature by the hardware manufacturer, but that's not the case here. You're generalizing that all hardware requires signed firmware blobs.

I agree that verifying firmware signatures in hardware is evil and should be outlawed. But hardware that does no verification, or that verifies only a checksum, in hardware? That's perfectly fine.
@mjg59

@lxo
And although I don't think it works for me, I can understand the argument for Linux-Libre. If there is no free firmware, and you prefer to keep your hardware unmodified, sure.

What I'm saying though is that forbidding any form of update, ever, of the firmware, and doing so in hardware, is wrong, because it makes it equally impossible to replace the non free firmware with a free one.
@mjg59

@lxo
That's an opinion, not a fact, and one I very much disagree with.

Software is software. It doesn't matter whether the software is burned in ROM, it's still software.

To claim otherwise means you're fine with running non free software.

I'm not. I accept that it's not possible in today's world, but it's still not a good thing.
@mjg59

@lxo
Let me put it this way.

There are only very few cases in which firmware really needs to be embedded in the hardware and can't be put elsewhere.

The initial few opcodes of a CPU are a good example: those can't be outside the CPU, because it still needs to initialise its components, so there is absolutely no possible way they can be outside the CPU die; and they're small enough that it doesn't matter if they're not updatable. @mjg59

That's software, but it can't really be free software (because you can't change it even if you wanted to)

Anything that can theoretically be updated or loaded from outside the die though is software. It doesn't matter whether it is "indistinguishable" from hardware, the fact is that it *is* software. And as it is software, the only ethical thing for it is to be free.
@mjg59 @lxo

I would have been fine with an RYF campaign that said, say: at least X amount of the embedded software in the device must be free software; no device can be used that requires non free software if an alternative for the same function that doesn't require it exists; and no device, whether with free software or not, may require cryptographic signatures for changes unless the owner can control the keys.
@lxo
@mjg59

Doing this incentivises the device manufacturers, who are in a better position than anyone to write free replacement firmware, to actually do so.

Instead, the fsf caved to the people who push non free embedded software and told them it's allowed, as long as the fsf can pretend the embedded software is not there.

I find this sad, and a betrayal of everything the free software movement stands for.
@lxo
@mjg59

hey, hey, it was you, not me, who used the fantastic example of a CD-ROM locked inside the hardware. when I follow you along down that path to see where you're going with that, you can't just turn around and ask me how we got there.

now, you want to label free software core philosophy as an assertion of mine, fine, what could I do but accept the honor? thank you!

@lxo yes, it's a fantastical example that's intended to demonstrate that your argument is nonsensical. Your position seems to be that if the box is closed then it's not software, but if someone were to figure out how to open it it would become software. That's clearly not how any of this works.

two main reasons, I suspect:

  • resistance from hardware vendors, who were used to having all the pieces strictly in their hands, but were finding advantage in moving some pieces out of the black box, so they started coming up with various tricks to have the cake and eat it too. it looks like we may have just been lucky in blindsiding the OS vendors, who couldn't mount such a resistance in a timely fashion, even when they also made the hardware. now they have restricted boot, walled app stores and whatnot, and they're coming for our primary computing devices too.
  • lack of resistance from the community, which welcomed this move and didn't counter it, out of ignorance about the difference between bits inside the black box and those outside, out of rationalization over the hardware vendors' gambit, and out of evidently not caring enough to go even as far as reverse engineering the bits that weren't guarded by crypto dogs, preferring to ship the binary blobs until someone else came up with an alternative
CC: @mjg59@nondeterministic.computer

it's a trend that I'm detecting, not a generalization I'm making out of my own volition.

and firmware restricted in this very way is what got this thread started, maybe you missed that.

I think we are in violent agreement that tivoizing hardware is unethical, and that hardware that does no such thing is acceptable. I guess we even agree that, if such hardware demands blobs to be loaded onto it, those blobs are unethical and shouldn't exist either. it seems to me that our differences are limited to the rarer and rarer case in which a former piece of hardware circuit has been replaced by a programmable circuit with a preloaded program inside the black box. that doesn't strike me as reason for so much heat.

now, I see you're associated with debian, that holds a stance that promotes very heavily, seemingly welcomes the growth of, and forces onto users the very blobs you oppose. you must be very strong to survive in that atmosphere. I hope you're aware of and supportive of such efforts as debian-libre.

CC: @mjg59@nondeterministic.computer

I don't think drawing a line at a percentage would be in line with the free software philosophy of ethics, even if we could somehow tell what's inside black boxes, so I don't see that a free software-aligned program would go down that path.

rejecting hardware that requires blobs with signatures sounds like a decent compromise for a certification program that isn't aligned with free software philosophy but that wants to push in that general direction.

how about launching a debian certification program along these lines?

CC: @mjg59@nondeterministic.computer @wouter@pleroma.debian.social

I'm not really sure what the incentive is that you're speaking of (this thread has become unmanageable to me), but I don't have a lot of illusion as to how much influence we have over hardware manufacturers. our resistance needs to be planned accordingly, and it must not betray our primary value, ethics, for tactical or even strategic gains.

I find that the notion of tolerating components inside the hardware black box makes ethical sense, and that's the line that the free software movement has drawn from the beginning. there's no caving or betrayal involved, it's just a consistent ethical stance, even if you disagree with it and wish it weren't so. if it comes across as betrayal to you, that follows from misalignment between our philosophy and your expectations. I hope you'll eventually share our values, or at least come to stop hating them and aiming for their destruction.

CC: @mjg59@nondeterministic.computer @wouter@pleroma.debian.social

if the box is closed then the fact that the designer controls its functioning at some level is a given, whether the components are hardware or software. I'm with you so far.

if someone were to figure out how to open it, one would find out whether it's hardware or software, but it would still be of no ethical consequence. to you, AFAICT it retroactively makes a difference, and that makes no sense to me. I think this is where we differ.

your argument comes across to me as if the designer's choice of a programmable circuit and a program for a machine component, instead of a dedicated hardware circuit, were somehow meant to unethically make it harder for others to control what the machine does for them. that seems backwards and, TBH, completely bonkers to me. can you elucidate how your reasoning differs from this, or perhaps make it make sense?

@lxo it makes no retroactive difference - it is software, it always was software, all the normal ethical considerations should apply. Now, in the same way that free software published in a book can't be modified in place, there may be practical considerations that would limit the exercise of those freedoms - in which case we should argue that implementations that make their exercise easier are preferable to ones that don't

that it always was software is not true; the scenario (at least in my mind) had it as a hardware circuit at first. that's how firmware came about historically. and most of the time you don't even know, or can't even tell, what's inside the box, so the only normal ethical considerations that could apply are those that apply to hardware. how could it be different without magical divination powers?

@lxo yes, you have come up with an incorrect model in order to avoid admitting you're running non-free code.

except I have no problem whatsoever admitting that there are likely programs running in my hard drives, keyboards, screens, TV sets, microwave ovens, digital clocks and whatnot. it's just that this possibility, or even certainty, doesn't trigger the same ethical issues that installable software does. heck, they're not even doing my computing for me, they're doing their own thing, that they were designed to do, so there aren't even any grounds for me to demand control over them under free software philosophy. it's like you and we don't even share the same philosophical foundations or the same goals. of course we'd come to different conclusions, and draw different lines, and find each other's positions inconsistent. the mistake was to assume the common ground.

@lxo if you're willing to call them programs, why do the four freedoms not apply? At minimum, why do you not deserve the right to know what these programs are actually doing?

@lxo (the program in your hard drive can, by the way, be updated by the vendor - but it's different to the microcode case because it's in mutable storage and never in ROM and so the update is permanent)

I said it: because they're not doing my computing. the four freedoms are essential for me to have control over the programs that I use to do my computing. that control is what the free software movement stands for. I have no entitlement, claim or right to control anyone else's computing.

@lxo the firmware in your WiFi card isn't doing your computing, but RYF insists that the program running there must either be in ROM or free. Why is it different to your hard drive?

yup

@lxo so why is it not relevant to RYF but WiFi is?

@lxo
I don't think it's a trend. Even so, you said, and I quote, "a piece of software that you don't stand a chance of modifying yourself, because it's digitally signed by the vendor so that you can't".

Not all non free firmware is like that. Yet the ryf campaign requires that no firmware be updatable as though it were.

So, when a replacement free firmware is built, a device that has the RYF badge will be less free than one that doesn't.
@mjg59

@lxo
Also, no, I'm not just associated with Debian, I've been a Debian Developer for just over 25 years now, and have been a DPL candidate thrice. Trust me when I say that we don't welcome the non free blobs. Our strategy however is one of pragmatism: if people need to buy expensive hardware to run free software, that's a barrier to adoption. We keep the barrier low in hopes that it will convince some people.
@mjg59

@lxo
This is not the strategy that the fsf chose to take, and that's fine. It also won't work every time, and that's also fine. But some people will be convinced, and become a member of the community, and that's a good thing.

I do think that having blobs is fine if there is no alternative, for very much the same reasons as why the GNU project started off by accepting non free kernels while replacements were being written.
@mjg59

@lxo
But in order for that strategy to work, you need to encourage and promote the production of free firmware. The ryf thing doesn't do that, on the contrary.
@mjg59

@lxo
Oh come now. The ryf program is a hardware certification badge. Why would you create a hardware certification badge if you weren't trying to influence hardware manufacturers?

I vehemently disagree with you that your stance is consistent. If it's theoretically updatable, it's software. Artificially crippling your hardware so that you can't update it anymore is like putting your fingers in your ears and going "La La La can't hear you"
@mjg59

@lxo
It's modifiable, therefore it's software, therefore it should be free.

That's consistent and makes sense.

Yes, that's difficult to reach today. The GNU project has accepted similar compromises in similar situations in the past, and a hardware certification program that encourages that could go a long way toward fixing it.

But sure, tell me I need to go cripple the Debian kernel instead of admitting you were wrong.
@mjg59

it's not different at all.

hard drive, wifi, keyboard, all that firmware is tolerated if it's within the black box.

but if it's something that would have to be installed and loaded, in a user-visible way, that's no longer part of the black box, and it's known to be software.

@lxo That's not what RYF says:

"The exception applies to software delivered inside auxiliary and low-level processors and FPGAs, within which software installation is not intended after the user obtains the product"

Hard drive firmware is intended to be installed after the user obtains the product. Vendors routinely ship bug fix and reliability updates and won't provide support unless you install it. Hard drives don't meet the RYF guidelines.

@lxo this is important, because people have absolutely reverse engineered this and identified security issues that wouldn't be known if the code was invisible or ignored

the drives do. installing the firmware update on them doesn't. that's exactly the distinction between what goes in the black box and what is outside and would have to be installed separately. you can't possibly be that dense.

of course I know there's nonfree firmware that isn't like that. I also know about free firmware.

but the topic of the thread that you're attempting to hijack was nonfree firmware that is like that.

I don't see that the device will be any less free. it is free when it ships, just as free as a piece of hardware can be. it will just be slightly less flexible, because that component will behave like a hardware circuit even if, deep inside the black box, it turns out to be software.

the point is that there's not a loss of freedom, nor unethical behavior, just the nonrealization of a noncritical but yeah, desirable affordance.

CC: @mjg59@nondeterministic.computer

@lxo It's intended that the software be updated and so the exception doesn't apply, and so it needs to be free software to meet RYF. It's not, so doesn't. Sorry, I didn't write the rules.

that's not even close to the same reason. that's making up an excuse to tolerate unethical behavior based on a misunderstanding of others' behaviors.

GNU used nonfree programs to develop their replacements. that, taken to the end, solved the problem that the nonfree programs posed, and eventually they no longer had to be used.

you're justifying using nonfree programs because there's no alternative. that doesn't carry any kind of expectation of solving the problem, because the alternatives won't come into existence by that kind of acceptance. I posit that this unquestioning acceptance demotes, rather than promotes, the development of alternatives.

CC: @mjg59@nondeterministic.computer @wouter@pleroma.debian.social

hardware manufacturers operate at different layers. the ones we stand a chance to appeal to are at one end of the chain; the ones that impose blobs are at the opposite end.

I get it that you disagree, if you even understand it. but your claiming that it's not consistent suggests to me that you don't understand it. making hardware behave like hardware is by no means crippling it; that's what hardware has always been supposed to be like. demanding that users install nonfree software would make the hardware incomplete and dependent on nonfree software, and that would be insulting to users who don't want to deal with nonfree software, but who don't mind hardware's being hardware.

CC: @mjg59@nondeterministic.computer

the Debian kernel has been fine for a while, freedom-wise, AFAIK. that's not where the main freedom problem with Debian lies. I expect you to already know that, so why do you bring such nonsense into this conversation?

CC: @mjg59@nondeterministic.computer @wouter@pleroma.debian.social

@lxo @wouter if it's fine freedom-wise, why does linux-libre need to remove features that are present in it?

I get it that you intend to install such nonfree software on your systems, to the point that you can't stand the notion that neither the manufacturer nor the buyer intend to do that to themselves.

by twisting the rules you are effectively rewriting them.

anyhow, this is also off topic for the thread, and it's already too big for me to deal with.

@lxo what? The device provides an interface to update the software included in it, and it is intended that this occur after the user purchases the device. It's the extremely clear and plain reading of the language. The guideline doesn't say "It's fine if the user chooses not to do this".

right. and the microprocessor is also out (to bring the thread a little back on track), because it offers an interface for alternate microcode to be updated. that's why the certified products ship without processor and without hard drives.

except they do, but your twisted reading is somehow right and the intended one isn't. sure.

I've had enough of this. I hope you had fun too.

@lxo or the reviewers were unaware of the update interfaces? The exception doesn't apply to the CPU in any case.

@lxo
I'm not sure you're getting what I'm saying, so I'm going to give it one more try and then just give up.

I'm not going to insult your intelligence by doing a Socratic dialogue here. I know you know that non free firmware is software, and that as it is not free, it would be better if it were made free.
@mjg59

@lxo
You (the fsf, not you personally) came up with a rule that allows you to ignore the fact that it is there, so you can live your life with computing infrastructure that is under your control as much as possible. I get that.

But I know that you know, deep down, if you are honest with yourself, that the non free software is still there and that the rule is an illusion.
@mjg59

@lxo
So your goal, as the fsf, should be to come up with a plan to eradicate that non free software. I know it will be an uphill battle, but the whole GNU project was an uphill battle and that never stopped you before. It shouldn't stop you now.

I already made a few suggestions as to how you could use the ryf program to make that happen. You dismissed some of those options for reasons that I agree make sense.
@mjg59

@lxo
But there are more things you could do. Here are some more suggestions:

Make the ryf program a multi tiered program, with the lower tier being the current situation and the higher tier not allowing non free firmware. Initially you won't have many submissions for this higher tier. That's fine. Some people will aspire to get there, and start working on free firmware. Even if they fail, they still may do stuff that improves the world.
@mjg59

@lxo
Introduce a rule that if there is a piece of hardware that works with free firmware, no alternative that performs the same function may be used if that requires non free firmware, even if the non free option disables firmware updates to keep up the black box illusion.
@mjg59

@lxo
In this, have some allowance so that hardware manufacturers don't have to choose between losing the ryf badge or destroying stock when someone announces free firmware for something in their device.

Introduce bonus points, or stars on the badge, or some such, for each piece of free firmware that's used by the device. Simple, but could be effective.
@mjg59

@lxo
I just came up with these during my breakfast this morning. I'm sure you can come up with more if you need to.

The point is to reward and encourage people to rid the world of free software. I know you want this. I want this.

Or, I don't know, you can keep your head in the sand and wait until the fsf is completely irrelevant because everyone knows they don't really care about free software.

🤷
@mjg59

thanks for the suggestions, I'll pass them on. I'm pretty sure some of them may have already been considered before my time, and my understanding is that they set out to address something that we don't think of as an ethical problem, but I think they may be useful to form a stronger coalition without any setbacks to our cause.

CC: @mjg59@nondeterministic.computer @wouter@pleroma.debian.social

sorry, my response came up a little too terse and truncated as I typed it in on my way out to an appointment. I meant to also highlight that I like the notion of shifting the focus out of the bare minimum that would be acceptable, and point at a more positive direction for development, even as (or especially because) the industry in general is pushing in the opposite direction. we probably won't change where we draw the line, but we should definitely aspire for higher goals than that. thanks again,

CC: @wouter@pleroma.debian.social @mjg59@nondeterministic.computer

@lxo
Great. I'm glad that you find my suggestions acceptable and that you can see the benefits in them.

If there's anything I can do to help with making that a reality... I can't promise to do much but I can try, and would be happy to be kept in the loop regardless. s/pleroma.debian.social/debian.org/ for my email.

Also, thanks for ignoring the accidentally missed 'non' in my last post. No we don't want to rid the world of free software 😲😂
@mjg59