I was looking at recovering my account, but it seems I need either the FileVault recovery key (protected by the system password, which I apparently no longer have), or to log in as another user and reset with my Apple ID... but there are no other users on the system, and with Full Disk Encryption I’d need the recovery key to add one.
I seem to have locked myself out of 1 of 2 company-issued macOS machines while setting up Intune-based MDM.
One of two things may have happened:
- it forced me to reset my length-over-complexity password, and I might have mistyped the new password twice (I don’t think that’s the case, though, since I believe I logged in again after the reset), or...
- as an experiment, I clicked the “rotate FileVault key” button in the Intune portal, which failed.
Could that failure present as an account lockout?
How do you train for the technical aspects of IT incident response?
Diagnosing issues seems to be a mix of recognizing known patterns and, where that fails, methods of deduction.
Are there effective ways to teach these things without throwing someone into a real incident? If not, what’s the ideal balance between handing someone the solution, and allowing them to fail catastrophically?
After much more digging, I learned that they actually read-only mount the zip as the wwwroot directory with this subtly-divergent deploy technique.
This is actually kind of a cool optimization. It probably wouldn't be as much of a sharp edge if it wasn't so divergent, though. Would be nice if you could mix-and-match by allowing zip-deploy from URL without the read-only mount thing.
Playing with Azure Functions for an infrastructure automation thing today - weekend project stuff, not for work.
They cram so many features into app services infrastructure that it’s hard to sort out the specifics in the docs.
For example: you can deploy a nodejs function app with source control, and it will npm install for you. But if you ship your code as a zip and provide a URL, you have to package your dependencies with your code.
It’s documented, but it adds to the learning curve.
Is there a proper/formal term for the security permissions model where the creator of a resource has full permissions on it regardless of their access to the system at large?
Most common example I can think of is document or file systems.
Is there any good literature contrasting that to other models of permission?
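For the POSIX flavor of this, a small Go sketch (file and directory names are made up) shows the owner-privilege idea concretely: even after every permission bit on a file is revoked, the owner can still chmod it back, because the right to change the mode follows ownership rather than the mode bits themselves.

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// demo creates a file, revokes every permission bit, then shows that
// the file's owner can still restore the mode: on POSIX systems the
// right to chmod follows ownership, not the current permission bits.
func demo() (revoked, restored string, err error) {
	dir, err := os.MkdirTemp("", "owner-demo")
	if err != nil {
		return "", "", err
	}
	defer os.RemoveAll(dir)

	path := filepath.Join(dir, "doc.txt")
	if err := os.WriteFile(path, []byte("draft"), 0o600); err != nil {
		return "", "", err
	}

	// Revoke all permission bits, including the owner's own read/write.
	if err := os.Chmod(path, 0o000); err != nil {
		return "", "", err
	}
	info, err := os.Stat(path)
	if err != nil {
		return "", "", err
	}
	revoked = info.Mode().String()

	// The creator/owner can still change the mode back, regardless of
	// what the mode bits currently grant.
	if err := os.Chmod(path, 0o600); err != nil {
		return "", "", err
	}
	info, err = os.Stat(path)
	if err != nil {
		return "", "", err
	}
	return revoked, info.Mode().String(), nil
}

func main() {
	revoked, restored, err := demo()
	if err != nil {
		panic(err)
	}
	fmt.Println("mode after revoking all bits:", revoked)
	fmt.Println("mode after owner's chmod:", restored)
}
```

That implicit, non-revocable owner capability is exactly the property that seems to distinguish this model from ones where all rights are granted explicitly.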
It’s so tempting to fall down this crypto rabbit hole.
I keep looking for ways to interop with Azure KeyVault for signing and encryption with more generic applications, but the software doesn’t seem to exist.
For example, wouldn’t it be nice to generate a PGP detached signature in CI using an HSM-protected key, for release signing? Looking at Go’s crypto libraries, it would “just” need to connect the dots between the Azure SDK and the crypto.Signer interface.
It was incredibly frustrating getting to this, though. The YubiKey documentation was pretty ambiguous in its use of terms for the OpenPGP and PIV interfaces. Turns out they’re conceptually similar, but use different PINs on the device. That resulted in me locking myself out a bunch of times.
Good thing I backed up my keys offline before moving them to the key.
After playing with my YubiKey a little last weekend - getting it set up as a smart card for GPG - I finally have it working kinda smoothly.
It's kinda nice having the touch policy enabled. The agent seems to remember my PIN for a little while, so when I git commit, the little "y" on the key glows, and when I touch it my commit goes through signed.
I’m so glad I dug deep into PGP message and packet formats for a nonsense side project.
Now I’m working on a simple download verification mechanism based on detached signatures. Earlier in my career this would have been a bunch of magic functions... now I have the foundation to understand and avoid any obvious mistakes.
I especially love how they went to the trouble of making so many things that are familiar, but different enough to feel alien. The obvious example is books with the corners cut off.
But another great example is Helo and Boomer arriving at a fallout shelter: the door has a symbol on it that has a lot of the elements of our symbol for radioactivity, but adapted to be kind of triangular instead of circular.
There were some other good nuggets in the paper as well, including a conclusion that complex systems with many controls that allow a degree of freedom will inevitably reach an error state as operators explore those degrees of freedom, and that designed safeguards and training/documentation aren’t sufficient to prevent this.
The best place to put effort is in monitoring for error conditions and recovering as quickly as possible.
This from a paper in 1990. There is no such thing as a new idea. 🤯
Last week I read an accident analysis paper by Jens Rasmussen from 1990. There was a citation to a conference keynote from 1988 on the idea of “defence-in-depth”. Seriously, how old is this term?
From context, it seems like it didn’t originate in IT or even Security, but may have migrated from industrial systems design. This bears further historical investigation. I wonder how much interpretation has changed in re-telling over the decades.
I have the week off, and planning some activities with the kids.
It always seems like all the great things to do require five or six times more planning than the activities themselves last. I can’t wait till they’re old enough to learn the basics and run with it on their own. It’ll happen any time now with the older one.