When it comes to the Intel Management Engine, I actually think it’s not a threat if you neutralize it, i.e. just set the HAP bit. Because if that weren’t enough, it would mean every computer in the world with an Intel CPU can be accessed by the NSA. But if the NSA had that much power, why wouldn’t they be using it? And there’s no sign that they are.
There’s a GitHub project to neutralize/disable the Intel ME: https://github.com/corna/me_cleaner. Disabling means overwriting as much of the ME firmware as possible with zeros, leaving only the minimum needed for the computer to boot. The newer the Intel chipset, the less of the ME can be removed this way. But all chipsets can be neutralized, which means setting the HAP bit, an official Intel feature. In theory we can’t actually trust the HAP bit to really disable the ME permanently; since the firmware is proprietary, it’s more like asking Intel to do what they have promised. But I think it really does disable it, because otherwise the NSA would be abusing this power.
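For reference, here is a rough sketch of the usual me_cleaner workflow. Treat it as an outline rather than a recipe: the flashrom programmer argument depends on your board (many machines require an external SPI programmer), and the exact me_cleaner flags should be checked against the project’s own documentation before use.

```shell
# Dump the current SPI flash contents. '-p internal' assumes the
# internal programmer can read the ME region, which is often not
# the case; an external programmer may be needed instead.
flashrom -p internal -r original_dump.bin

# Always keep an untouched backup before modifying anything.
cp original_dump.bin backup.bin

# Neutralize only: '-s' sets the HAP/AltMeDisable bit without
# removing ME firmware modules, writing the result to a new file
# with '-O'. This is the option that works even on newer chipsets.
python me_cleaner.py -s -O neutralized.bin original_dump.bin

# Write the modified image back, then power-cycle the machine.
flashrom -p internal -w neutralized.bin
```

A bricked flash is recoverable only by reflashing the backup with an external programmer, which is why the backup step is not optional.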
That’s why I think the newer laptop models are better: it’s probably not necessary to disable the ME, it’s enough to just neutralize it with the HAP bit. And newer laptops can also ship open source Embedded Controller firmware, which is better than a proprietary Embedded Controller.
I’m interested to hear what you think as well.
Buying other hardware that you (well… not me ;) can inspect and verify, e.g. RISC-V?
For now the performance is pretty terrible, BUT one can imagine, given the right discipline and mental model, doing what’s actually personal (e.g. browsing and reading email) on a verifiable processor, and doing what’s not (e.g. watching a TV show) on another machine with an unverifiable CPU/GPU architecture.
PS: I have a Precursor and a Banana Pi BPI-F3 with the SpacemiT K1 8-core RISC-V chip, and that’s the main idea behind them both, i.e. knowing, as a community, how it works all the way down.
How do you want to verify that a RISC-V core isn’t doing something funny?
The same way you would do it with a black box, while optionally taking as many shortcuts as one is comfortable with, by virtue of having a better understanding of how it’s been built?
Get it audited by tools (e.g. OneSpin) or people (e.g. Bunnie) that one trusts?
I’m not saying it’s intrinsically safer than other architectures, but it is at least more inspectable, and for people who do value trust, for whatever reason, that trust can again be federated.
I assume if you’re asking the question you’re skeptical about it, so I’m curious what you believe is a better alternative and why.
I mean, can’t they just submit a version for audit that doesn’t have the backdoor/snooping? Verifying the audited design against the actual silicon is probably very hard.
I imagine it’s like everything else: you can only realistically verify a random sample. It’s like trucks crossing a border; they should ALL be checked, but in practice only a few get checked, and violators are punished in the hope that punishment will deter the others.
Here, if 1 chip is checked per million produced and a single problem is found with it, be it a backdoor or “just” a security flaw that is NOT present in the original design, then trust in the company producing them is shattered. Nobody who can afford alternatives will want to work with them.
I imagine in a lot of situations the economic risk is not worth it. Even if, say, a state actor commissions a backdoor and tells the producing company it will cover their losses, as soon as the news is out nobody will use the chips, so even for the state actor it doesn’t work.
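The sampling odds above can be made concrete. If a fraction p of produced chips are tampered with and an auditor inspects n chips chosen independently at random, the chance of catching at least one is 1 − (1 − p)^n. A small sketch with hypothetical numbers:

```python
def detection_probability(p: float, n: int) -> float:
    """Chance of finding at least one tampered chip when a fraction p
    of production is tampered and n chips are sampled independently."""
    return 1 - (1 - p) ** n

# Checking 1 chip when tampering is rare (1 in a million) is hopeless:
print(detection_probability(1e-6, 1))     # about 0.000001

# But if tampering has to be widespread to be useful (say 1% of chips),
# auditing a thousand random chips almost guarantees detection:
print(detection_probability(0.01, 1000))  # about 0.99996
```

This is the asymmetry in the argument above: a backdoor only pays off if it ships in many chips, and the more chips it ships in, the more likely a single random audit is to expose it and shatter the company’s reputation.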
That’s true, but that sadly won’t help against a state forcing a company to put these things into the silicon. Not saying they do right now, but it’s a real possibility.