<?xml version="1.0" encoding="utf-8"?><feed xmlns="http://www.w3.org/2005/Atom" ><generator uri="https://jekyllrb.com/" version="4.4.1">Jekyll</generator><link href="https://conic.al/feed.xml" rel="self" type="application/atom+xml" /><link href="https://conic.al/" rel="alternate" type="text/html" /><updated>2026-03-26T11:44:05+00:00</updated><id>https://conic.al/feed.xml</id><title type="html">Sean Byrne</title><subtitle>Security engineering leader. Building and scaling security programs at high-growth companies.</subtitle><author><name>Sean Byrne</name></author><entry><title type="html">The Fence Is Down</title><link href="https://conic.al/writing/the-fence-is-down/" rel="alternate" type="text/html" title="The Fence Is Down" /><published>2026-03-08T00:00:00+00:00</published><updated>2026-03-08T00:00:00+00:00</updated><id>https://conic.al/writing/the-fence-is-down</id><content type="html" xml:base="https://conic.al/writing/the-fence-is-down/"><![CDATA[<p><img src="/assets/jurassic_park_perimeter_fence.webp" alt="Jurassic park Dinosaur Enclosures and Electric Fence." /></p>

<p>Everything feels uncertain right now. Security is no different.</p>

<p>Here’s what’s been nagging at me. For all the talk about AI transforming cybersecurity, we’re not seeing the attacks we should be seeing, given what the technology can do. The phishing is better and the reconnaissance is faster, but the truly AI-native attacks (bespoke exploit chains, automated zero-day discovery across entire ecosystems) haven’t materialised at scale. Not yet.</p>

<p>There’s a scene in Jurassic Park where the electric fences go down. The dinosaurs don’t charge through. They don’t know the fence is off. They’ve been conditioned to stay put, so they do. For a while.</p>

<p>That’s where we are. The fence is down. Most threat actors just haven’t copped on yet.</p>

<p>This week, Anthropic published the results of a <a href="https://www.anthropic.com/news/mozilla-firefox-security">security collaboration with Mozilla</a>. They pointed Claude Opus 4.6 at the Firefox codebase, and in two weeks it found 22 previously unknown vulnerabilities, 14 of them classified as high-severity. That’s nearly a fifth of all high-severity Firefox vulnerabilities remediated in 2025. Found in a fortnight by a model. When they tested whether Claude could also write working exploits, it managed only two crude successes across several hundred attempts. Discovery is dramatically cheaper and more effective than exploitation right now. Anthropic themselves noted the gap is unlikely to last, but for the moment the window favours defence.</p>

<p>That’s the good news.</p>

<p>The bad news is what’s coming. Alex Stamos laid this out at Reddit’s SnooSec conference in his talk <em><a href="https://www.linkedin.com/in/alexstamos/">AI is Eating Security</a></em>. Open-source models are less than a year behind frontier models in bug discovery. Once those models are widely available, thousands of adversary groups will have tooling that was until recently the preserve of nation-states. As Stamos put it: we have no historical precedent for this many adversary groups having this kind of capability while we have a massive dearth of skilled defenders. Attacker benefits are going exponential. Defender benefits are geometric.</p>

<p>The answer isn’t more headcount. AI pushes toward a different model: smaller, narrower teams, with fewer but more senior people sitting on top of AI agents. AI coding tools already make security engineering dramatically faster. The organisations that make this transition keep pace. The ones that don’t are left operating at human speed against machine-speed attackers.</p>

<p>If you’re a security leader, the harder part is getting the rest of leadership there. Especially when you’re asking for more money or restructured teams. The <a href="https://www.anthropic.com/news/mozilla-firefox-security">Anthropic blog post</a> helps. It’s 22 zero-days in a production browser, found by a model in two weeks. That cuts through boardroom scepticism.</p>

<p>We might be in for a tough time ahead, but in the immortal words of Toto: <a href="https://www.youtube.com/watch?v=htgr3pvBr-I">hold the line, love isn’t always on time</a>.</p>]]></content><author><name>Sean Byrne</name></author><summary type="html"><![CDATA[Anthropic pointed Claude at the Firefox codebase and found 22 zero-days in two weeks. The capability gap between AI and widespread exploitation won't last. The window to prepare is now.]]></summary></entry><entry><title type="html">Giving iOS Lockdown Mode Another Look</title><link href="https://conic.al/writing/giving-ios-lockdown-mode-another-look/" rel="alternate" type="text/html" title="Giving iOS Lockdown Mode Another Look" /><published>2026-03-06T00:00:00+00:00</published><updated>2026-03-06T00:00:00+00:00</updated><id>https://conic.al/writing/giving-ios-lockdown-mode-another-look</id><content type="html" xml:base="https://conic.al/writing/giving-ios-lockdown-mode-another-look/"><![CDATA[<p>There was an interesting story this week in <a href="https://www.wired.com/story/coruna-iphone-hacking-toolkit-us-government/">Wired</a> about an unsettling development in the iPhone security world.</p>

<p>A sophisticated iPhone exploit framework known as <a href="https://cloud.google.com/blog/topics/threat-intelligence/coruna-powerful-ios-exploit-kit">“Coruna”</a> appears to have originated from tools developed for US government use, before making its way through the murky exploit market. From there it seems to have ended up first in the hands of Russian espionage groups targeting Ukrainians, and later with cybercriminal operations stealing cryptocurrency from victims. The toolkit bundles multiple full iOS exploit chains and more than twenty individual exploits, and can compromise a vulnerable device when its owner simply visits a malicious website.</p>

<p>What caught my eye, though, was a small but notable detail: the exploit kit apparently checks for Lockdown Mode and does not attempt infection if it’s enabled.</p>

<p>This isn’t the first time Lockdown Mode has stopped an attack dead in its tracks. When the FBI raided the home of Washington Post reporter Hannah Natanson earlier this year, court records show their forensics team <a href="https://www.404media.co/fbi-couldnt-get-into-wapo-reporters-iphone-because-it-had-lockdown-mode-enabled/">couldn’t extract a thing from her iPhone</a>. Lockdown Mode made the device a non-starter from the outset.</p>

<p><img src="/assets/posts/giving-ios-lockdown-mode-another-look/lockdown-mode.png" alt="iOS Lockdown Mode confirmation dialog" /></p>

<p>I tried Lockdown Mode when Apple first released it a few years back. At the time I left it on for a few days, but eventually turned it off. Some sites I rely on simply weren’t behaving properly, and it felt a bit too restrictive for everyday use.</p>

<p>Since then, though, Apple has gradually improved the feature’s usability, for example by allowing exceptions for trusted websites, while keeping the stronger security posture in place. Reading about a real-world exploit chain like this one finding its way from state actors to ordinary criminals is a good reminder that these defences aren’t purely theoretical.</p>

<p>For people like myself who tried Lockdown Mode early on and gave up, it may well be worth another look now that the rough edges have been smoothed out a bit.</p>

<p>For the full story, the <a href="https://www.wired.com/story/coruna-iphone-hacking-toolkit-us-government/">Wired article</a> is well worth a read.</p>]]></content><author><name>Sean Byrne</name></author><summary type="html"><![CDATA[A leaked iPhone exploit framework avoids devices with Lockdown Mode enabled. For anyone who tried the feature early and gave up, it may be worth revisiting.]]></summary></entry><entry><title type="html">Passkeys and the Quiet Revolution in Corporate Crypto</title><link href="https://conic.al/writing/passkeys-and-the-quiet-revolution-in-corporate-key-material/" rel="alternate" type="text/html" title="Passkeys and the Quiet Revolution in Corporate Crypto" /><published>2026-02-23T00:00:00+00:00</published><updated>2026-02-23T00:00:00+00:00</updated><id>https://conic.al/writing/passkeys-and-the-quiet-revolution-in-corporate-key-material</id><content type="html" xml:base="https://conic.al/writing/passkeys-and-the-quiet-revolution-in-corporate-key-material/"><![CDATA[<p>Most Passkey commentary stops at “better MFA”.</p>

<p>B2B developers, corporate IT, and security teams should open their minds about what passkeys can do for them. Not because they finally kill passwords, but because they can change who controls cryptographic key material inside organizations. After all the blood, sweat, and tears of deploying a better MFA solution, it would be nice to get more in return. And if you haven’t deployed them yet, maybe this will give you a few more reasons to do so.</p>

<p>For decades, serious cryptography in enterprises lived in narrow domains: PKI teams, HSMs, code signing infrastructure, smart cards for a subset of employees. Everyone else got passwords plus MFA bolted on top. Keys were expensive, specialized, and centrally managed.</p>

<p>Passkeys invert that model. Every modern phone and laptop ships with a secure enclave or TPM capable of generating and protecting asymmetric keys. WebAuthn exposes a standard interface for creating and using those keys. The user experience is solved: biometric prompt, done.</p>

<p>The result is simple but profound: every employee now carries hardware-backed key material by default.</p>

<h2 id="authentication-is-the-obvious-win">Authentication Is the Obvious Win</h2>

<p>Passkeys eliminate entire categories of enterprise pain. There are no shared secrets to reset, no TOTP codes to phish, no push notifications to fatigue-attack, and no hardware token inventory to manage. The private key never leaves the device. Authentication is origin-bound and gated by biometrics or a device PIN.</p>

<p>For many organizations, password reset support is one of the most expensive recurring IT costs. Passkeys materially reduce that surface area.</p>

<p>But authentication is just the surface.</p>

<h2 id="hardware-backed-cryptography">Hardware-Backed Cryptography</h2>

<p>Underneath the UX, passkeys are hardware-protected asymmetric keys accessed through WebAuthn. That matters less because of how they log users in, and more because of what they normalize.</p>

<p>For the first time, enterprises have ubiquitous access to hardware-protected signing and key derivation capabilities on employee devices, without issuing smart cards or deploying client certificates. Every enrolled device can hold non-exportable private keys, gate their use behind biometrics, and produce signatures over structured data.</p>

<p>Historically, if an organization wanted hardware-backed keys on endpoints, it meant provisioning smart cards, managing certificates, or distributing tokens. Now the capability ships by default on iPhones, Android devices, Windows laptops, and Macs.</p>

<p>That changes what is economically feasible. And in enterprise security, money decides most things.</p>

<h2 id="moxies-direction-using-passkeys-for-encryption-not-just-login">Moxie’s Direction: Using Passkeys for Encryption, Not Just Login</h2>

<p>The most interesting extension of this idea comes from Signal creator Moxie Marlinspike’s recent work with Confer. In <a href="https://confer.to/blog/2025/12/passkey-encryption/">Passkey Encryption</a>, he describes using the WebAuthn PRF extension to derive durable encryption key material from a passkey. The private key remains protected by the secure enclave. The server never receives the derived root secret.</p>

<p>Instead of using passkeys only to authenticate to a server that holds the real keys, Confer uses them to generate client-side encryption keys. The service stores ciphertext. Decryption requires local biometric authorization on the user’s device.</p>

<p><img src="/assets/passkey-prompt.png" alt="Passkey prompt from Confer" style="max-width: 35%;" /> <em>Image: <a href="https://confer.to">Confer.to AI</a></em></p>

<p>Moxie is trying to do for AI what he did for messaging: make it private by design. The important move is subtle. The passkey is not just proving identity. It is controlling access to encryption keys. The server cannot read user data, even if it wants to.</p>

<p>In practical terms, this replaces a lot of the awkward machinery behind encrypted systems. End-to-end messaging usually requires long-lived identity keys, recovery phrases, or some form of server-assisted key escrow. Encrypted SaaS products often rely on password-derived keys or server-stored wrapped keys for recovery. Using passkeys and the WebAuthn PRF shifts that root of trust into hardware-backed credentials that already exist on user devices, reducing both system complexity and the number of high-value secrets stored on servers.</p>

<p>That relocation of trust is what should matter to enterprises.</p>

<p>If employee devices already contain hardware-backed keys capable of deriving stable secrets, signing structured data, participating in key agreement, and attesting to hardware properties, those keys can gate access to encrypted documents, internal AI systems, sensitive knowledge bases, and collaboration tools without standing up traditional enterprise PKI for every use case.</p>

<p>Passkeys are not just a cleaner login flow. They are a client-side cryptographic foundation that now ships, by default, on every endpoint your organization owns.</p>]]></content><author><name>Sean Byrne</name></author><summary type="html"><![CDATA[Passkeys solve the authentication problem corporate IT has been fighting for decades. But the more interesting story is what happens when every employee has a hardware-backed key generation and storage facility in their pocket.]]></summary></entry><entry><title type="html">AMD’s Remote Execution Bug and the Limits of Responsible Disclosure</title><link href="https://conic.al/writing/amds-remote-execution-bug/" rel="alternate" type="text/html" title="AMD’s Remote Execution Bug and the Limits of Responsible Disclosure" /><published>2026-02-16T00:00:00+00:00</published><updated>2026-02-16T00:00:00+00:00</updated><id>https://conic.al/writing/amds-remote-execution-bug</id><content type="html" xml:base="https://conic.al/writing/amds-remote-execution-bug/"><![CDATA[<p><img src="/assets/1efa9401-029f-44e1-8bbf-645023b993e9.jpeg" alt="AMD remote execution vulnerability" /></p>

<p>I’ve led coordinated disclosure processes within organizations and participated as a reporter, so I’m sympathetic to the people trying to make the process work. It is a difficult task.</p>

<p>However, we are well over two decades into responsible disclosure, and one of the world’s largest processor manufacturers is committing failures like this. If the reporting is accurate, AMD’s auto-updater downloaded software updates insecurely and did not verify signatures. That is inexcusable for a company of this scale. Authenticated updates and signature verification are the basics. These are not exotic research problems. They are baseline engineering responsibilities.</p>

<p>What concerns me most is how we, as a community, treat the completion of the responsible disclosure process as the end of the matter. A bug is reported. A patch is issued. There is a brief news cycle. The reporter may receive credit. And then we collectively move on, as if fixing the bug and absorbing a modest amount of bad press is sufficient.</p>

<p>But the fact that issues like this exist at this scale should prompt harder questions. How did this pass design review? Where were the guardrails? What incentives allowed this to ship? What organizational decisions made this acceptable? Instead, the process itself becomes the story. The completion of disclosure is treated as evidence that the system works, when in reality it often just contains the damage.</p>

<p>These are billion-dollar firms. Issues like this should not exist at this level of maturity. In other industries, when bridges collapse or ships crash due to professional negligence, there are investigations, accountability, and reform. In software, the cost is frequently externalized and quietly absorbed, poisoning the system while no one feels the pain directly.</p>

<p>Responsible disclosure remains essential. But as it exists today, it too often serves to contain reputational damage rather than raise engineering standards. It should not function as closure. It should be the starting point for accountability and structural improvement. Otherwise, what are we doing?</p>]]></content><author><name>Sean Byrne</name></author><summary type="html"><![CDATA[On AMD's insecure auto-updater, the responsible disclosure process, and why fixing the bug should be the beginning of accountability, not the end.]]></summary></entry><entry><title type="html">SMS 2FA Will Die, But Not for the Reason You Think</title><link href="https://conic.al/writing/sms-2fa-will-die-but-not-for-the-reason-you-think/" rel="alternate" type="text/html" title="SMS 2FA Will Die, But Not for the Reason You Think" /><published>2024-03-18T00:00:00+00:00</published><updated>2024-03-18T00:00:00+00:00</updated><id>https://conic.al/writing/sms-2fa-will-die-but-not-for-the-reason-you-think</id><content type="html" xml:base="https://conic.al/writing/sms-2fa-will-die-but-not-for-the-reason-you-think/"><![CDATA[<p>The security community has long argued that SMS-based two-factor authentication is insecure. SIM swapping, SS7 interception, and social engineering are real and well documented.</p>

<p>But SMS 2FA is unlikely to disappear because of those attacks. It is becoming untenable because sending a single authentication text in the United States now requires navigating a bureaucratic system that small companies and independent developers struggle to clear.</p>

<h2 id="a2p-10dlc">A2P 10DLC</h2>

<p><img src="/assets/a2p-10dlc-rejection.png" alt="A2P 10DLC registration rejection" /></p>

<p>Under the A2P 10DLC framework, businesses must register before sending application-to-person SMS. That means registering a brand, registering a campaign, describing the use case, and waiting for approval from providers and carriers.</p>

<p>In theory, this reduces spam. In practice, it introduces fees, delays, and opaque review processes.</p>

<p>I spent over a week attempting to register a simple OTP-only application with two providers. The application does one thing: send six-digit authentication codes. It was rejected multiple times for vague or shifting procedural reasons. There is no fast path for a minimal, legitimate use case.</p>

<p>The issue is not that OTP over SMS is controversial. It is that the approval system is inconsistent and difficult to navigate.</p>

<h2 id="the-economics">The Economics</h2>

<p>The 10DLC process adds monthly brand fees, campaign registration fees, and per-message surcharges. For large enterprises, these costs are minor. For small teams and side projects, they are meaningful.</p>

<p>The fee structure is layered and frequently updated. Calculating the true cost of sending a single OTP often requires reading multiple documentation pages or speaking to sales.</p>

<p>This shifts SMS from a simple utility into a gated service where compliance overhead and recurring fees discourage small senders. The system may reduce some abuse, but it also raises the barrier to entry for everyone else.</p>

<h2 id="the-rejections">The Rejections</h2>

<p>The use case in question is the most basic SMS flow on the internet: send a code, verify the code, complete authentication.</p>

<p>Yet repeated rejections cited insufficient detail, concerns about projected volume, or requests for documentation not mentioned in the initial application. The objections were procedural, not substantive. The system is not selectively filtering bad actors. It is filtering broadly.</p>

<h2 id="what-replaces-sms">What Replaces SMS</h2>

<p>Meanwhile, the industry has been moving toward stronger alternatives: authenticator apps, push-based approvals, passkeys, WebAuthn, and FIDO2. These methods are more secure and more resistant to phishing.</p>

<p>Migration away from SMS was expected to happen because better technologies emerged. Instead, it is being accelerated by friction in the delivery infrastructure itself.</p>

<p>Developers who cannot reliably send SMS will switch. Many will adopt stronger options. Some may implement weaker fallbacks or skip a second factor altogether because the simplest path no longer runs through a phone number.</p>

<h2 id="the-endgame">The Endgame</h2>

<p>The 10DLC framework was introduced to combat A2P spam. Spam remains common. What has clearly increased is cost and complexity for legitimate senders.</p>

<p>The approval process is slow and inconsistent. Fees scale in ways that favor high-volume incumbents. Small developers face disproportionate friction for straightforward authentication use cases.</p>

<p>SMS 2FA is not failing primarily because of cryptographic weaknesses. It is failing because the surrounding regulatory and commercial structure has made it difficult to use.</p>

<p>It will not be remembered as the second factor that was too insecure. It will be remembered as the one that became too annoying to set up.</p>]]></content><author><name>Sean Byrne</name></author><summary type="html"><![CDATA[SMS two-factor authentication won't be killed by SIM swapping or SS7 attacks. It will be killed by the bureaucratic impossibility of sending a text message.]]></summary></entry><entry><title type="html">End-to-End Encryption and the Signal Protocol</title><link href="https://conic.al/writing/end-to-end-encryption-and-the-signal-protocol/" rel="alternate" type="text/html" title="End-to-End Encryption and the Signal Protocol" /><published>2019-03-22T00:00:00+00:00</published><updated>2019-03-22T00:00:00+00:00</updated><id>https://conic.al/writing/end-to-end-encryption-and-the-signal-protocol</id><content type="html" xml:base="https://conic.al/writing/end-to-end-encryption-and-the-signal-protocol/"><![CDATA[<div class="video-embed">
  <iframe src="https://www.youtube.com/embed/7WnwSovjYMs" title="Trevor Perrin - TextSecure (Signal) Protocol: Present and Future" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen=""></iframe>
</div>

<p>Trevor Perrin’s presentation on the TextSecure protocol, what the world now knows as the Signal Protocol, is one of the most important talks in applied cryptography from the past decade. It is also, characteristically, understated.</p>

<h2 id="the-signal-protocol-and-the-authentication-problem">The Signal Protocol and the Authentication Problem</h2>

<p>Most discussion focuses on the Double Ratchet, which combines Diffie-Hellman and symmetric ratchets to provide forward and future secrecy. That matters. But the more interesting and underappreciated contribution is how the protocol approaches authentication, and the limits of what authentication can realistically achieve.</p>

<h2 id="what-encryption-alone-cannot-provide">What Encryption Alone Cannot Provide</h2>

<p>Signal provides strong end-to-end encryption. Messages are encrypted using keys derived from X3DH and the Double Ratchet. Even if long-term keys are compromised, past messages remain protected and future sessions recover once new ephemeral keys are introduced.</p>

<p>But encryption does not guarantee you are encrypting to the right person. If a server provides a malicious public key, you can send a perfectly encrypted message directly to an attacker. The encryption works. The identity assurance fails.</p>

<p>Authentication is a separate and harder problem.</p>

<h2 id="usable-security">Usable Security</h2>

<p>PGP addressed authentication through manual key verification and webs of trust. It worked for experts, but it never reached mass adoption because the process was burdensome.</p>

<p>Signal made encryption invisible. Key generation, prekeys, rotation, and session setup happen automatically. You install the app and messages are encrypted.</p>

<p>Authentication cannot be made fully invisible. As Perrin notes, users must be involved somehow. The question becomes: what do you do when most users will not complete a formal verification ceremony?</p>

<h2 id="two-approaches">Two Approaches</h2>

<p>One approach is strict authentication before encryption. No verification, no secure channel. In practice, this means only a small subset of users get protection, and those users become identifiable as security-conscious targets.</p>

<p>Signal chooses the opposite model: encrypt everything by default and make verification optional. All users get encrypted transport. Some verify fingerprints or scan QR codes. Most do not. An outside observer cannot distinguish between them.</p>

<p>This design has a subtle property. Users who verify are protected by not being distinguishable from those who do not. Users who do not verify are protected by blending into a population where verification might have occurred. The crowd provides cover.</p>

<h2 id="trust-on-first-use">Trust on First Use</h2>

<p>When you first fetch a contact’s key, you trust it and store it. If that key changes unexpectedly, you receive a warning.</p>

<p>TOFU is imperfect. Users often ignore warnings. But in this model, the initial key is silently recorded and only changes introduce friction. The realistic goal is not perfect fingerprint checking. It is detecting suspicious key changes at least sometimes.</p>
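A toy version of that bookkeeping fits in a dozen lines. This is my sketch of the general TOFU pattern, not Signal’s implementation.

```javascript
// Minimal trust-on-first-use store: the first key seen for a contact is
// recorded silently; only an unexpected change surfaces a warning.
class TofuStore {
  constructor() {
    this.known = new Map(); // contact -> fingerprint
  }
  // "first-use" on first sight, "match" if unchanged, "changed" on mismatch.
  check(contact, fingerprint) {
    const prev = this.known.get(contact);
    if (prev === undefined) {
      this.known.set(contact, fingerprint);
      return "first-use";
    }
    return prev === fingerprint ? "match" : "changed";
  }
}
```

Only the "changed" case interrupts the user, which is exactly the friction profile described above.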

<p>For stronger guarantees, users can verify keys out of band by scanning QR codes. The option exists without being mandatory. Skipping verification does not weaken encryption itself. It means trusting the key directory, which is an acceptable tradeoff for many conversations.</p>

<h2 id="cryptographic-engineering">Cryptographic Engineering</h2>

<p>The primitives behind Signal are well understood. The innovation is in how they are composed into a system that works under real-world constraints: asynchronous messaging, key exhaustion, multi-device use, group chats, and potentially untrusted servers.</p>

<p>The authentication philosophy illustrates the difference between theoretical cryptography and cryptographic engineering. A purely rigorous solution might protect a small, disciplined minority. Signal instead encrypts everything, relies on ubiquity for cover, and offers stronger verification to those who want it.</p>]]></content><author><name>Sean Byrne</name></author><summary type="html"><![CDATA[On Trevor Perrin's presentation about the TextSecure (Signal) protocol, and what it reveals about building cryptographic systems that actually protect people.]]></summary></entry><entry><title type="html">The UNIX Philosophy and Why It Still Matters</title><link href="https://conic.al/writing/the-unix-philosophy-and-why-it-still-matters/" rel="alternate" type="text/html" title="The UNIX Philosophy and Why It Still Matters" /><published>2017-09-14T00:00:00+00:00</published><updated>2017-09-14T00:00:00+00:00</updated><id>https://conic.al/writing/the-unix-philosophy-and-why-it-still-matters</id><content type="html" xml:base="https://conic.al/writing/the-unix-philosophy-and-why-it-still-matters/"><![CDATA[<div class="video-embed">
  <iframe src="https://www.youtube.com/embed/tc4ROCJYbm0" title="AT&amp;T Archives: The UNIX Operating System" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen=""></iframe>
</div>

<p>In 1982, AT&amp;T produced a short documentary about UNIX. It features Ken Thompson, Dennis Ritchie, Brian Kernighan, and others explaining what they built and, more importantly, <em>why</em> they built it the way they did. Over three decades later, the film remains one of the clearest articulations of systems design thinking I’ve encountered.</p>

<p>I recommend watching the documentary. It is unpretentious and technical, qualities that are as rare and valuable now as they were in 1982.</p>

<h2 id="small-sharp-tools">Small, Sharp Tools</h2>

<p>The central idea of UNIX is composition. Small programs, each doing one thing well, connected through a universal interface: the pipe. This wasn’t just an engineering convenience. It was a philosophy about managing complexity.</p>

<p>In security engineering, we face the same problem at a different scale. Complex systems are hard to reason about, and hard to secure. The instinct to build monolithic solutions (a single platform that handles authentication, authorization, logging, alerting, and compliance) creates exactly the kind of opacity that attackers exploit.</p>

<p>The UNIX approach suggests an alternative: build smaller, well-defined components with clear boundaries. Make the interfaces explicit. Let the pieces be inspected, replaced, and composed independently.</p>

<p>One of the things that strikes me about the documentary is how much emphasis the designers placed on being able to <em>see</em> what the system was doing. Text as the universal format. Human-readable configuration. Tools that could be chained together to answer questions about the system’s own behavior.</p>

<p>This is the foundation of security observability. Before we had the term, UNIX had the practice. <code class="language-plaintext highlighter-rouge">ps</code>, <code class="language-plaintext highlighter-rouge">ls</code>, <code class="language-plaintext highlighter-rouge">grep</code>, <code class="language-plaintext highlighter-rouge">awk</code>, <code class="language-plaintext highlighter-rouge">find</code>: these are instruments for understanding a running system. The principle that a system should be inspectable by its operators is one of the most security-relevant design decisions you can make.</p>

<h2 id="durability-of-good-design">Durability of Good Design</h2>

<p>Perhaps the most remarkable thing about UNIX is its longevity. The ideas in this 1982 documentary describe an approach that still works. Not because the technology hasn’t changed (it has, profoundly) but because the design principles were sound.</p>

<p>When I think about building a security program at a company experiencing rapid growth, I think about this kind of durability. The specific tools and vendors will change. The compliance frameworks will evolve. The threat landscape will shift. What endures are the structural decisions: how you decompose problems, where you draw boundaries, what you make visible, and what you make composable.</p>

<p>UNIX teaches that lesson better than most textbooks on the subject.</p>]]></content><author><name>Sean Byrne</name></author><summary type="html"><![CDATA[Reflections on the design principles behind UNIX, prompted by the AT&T Archives documentary, and why they remain relevant to security engineering today.]]></summary></entry></feed>