What insecure by design means: Lessons from a security researcher

Highlighting a blog post by Jeffrey Paul, a security researcher who observes that modern Macs cannot be used in fully offline environments because the hardware itself requires internet connectivity.

There is a security-specific OS that runs on the M1 chip (or, on Intel machines, the T2 chip), and even with a fully blanked and wiped internal disk, that OS requires a cryptographic signature from Apple in order to activate. Paul highlights that this makes these machines unusable for airgapped deployments, for systems that must maintain cryptographic integrity, and for systems that remain offline for extended periods yet may require repair, such as remote research stations, ships at sea, and spacecraft.

Why do we care?

There’s a statement at the end of the blog post that struck me: “These systems are now insecure by design: there is no way for them to be made secure.”

The data point about which use cases Apple has ruled out is important and notable on its own, but it’s not why I flagged this.

I’m not specifically calling out Apple here. What disturbs me is that security researchers are observing companies deliberately making choices that render systems insecure by design. We spend so much time talking about security issues on this show, and while this particular issue is not at its core an example of those problems, it does highlight something: unless the people developing these products truly engineer security into them, it doesn’t matter how many solutions, processes, or people we layer on top. If the core is broken because of a business decision made by the vendor, you’re out of luck.

Or rather, the market has to push back.

Source: Jeffrey Paul’s Sneak.Berlin