jlcases 18 hours ago

What impresses me most about technical documentation like this is how it structures knowledge into comprehensible layers. This article manages to explain an extremely complex system by establishing clear relationships between components.

I've been experimenting with similar approaches for documentation in open source projects, using knowledge graphs to link concepts and architectural decisions. The biggest challenge is always keeping documentation synchronized with evolving code.

Has anyone found effective tools for maintaining this synchronization between documented architecture and implemented code? Large projects like Darwin must have established processes for this.

  • rollcat 16 hours ago

    > Has anyone found effective tools for maintaining this synchronization between documented architecture and implemented code?

    Yes, it's called structure, discipline, and iterative improvement.

    Keep the documentation alongside the code. Think in BSD terms: the OS is delivered as a whole; if I modify /bin/ls to support a new flag, then I update the ls.1 man page accordingly, preferably in the same commit/PR.

    The man pages are a good reference if you already have familiarity with the employed concepts, so it's good to have an intro/overview document that walks you through those basics. This core design rarely sees radical changes; it tends to evolve - so adapt the overview as you make strategic decisions. The best benchmark is always a new hire. Find out what it is that they didn't understand, and task them with improving the document.

    • worik 13 hours ago

      > Has anyone found effective tools for...

      Managing management?

      Code comments and documentation make no money, only features make money.

      Bitter experience...

bch a day ago

> Mach’s virtual memory (VM) system was influential beyond the project – it was adopted by 4.4BSD and later FreeBSD as their memory management subsystem.

…and NetBSD[0], OpenBSD[1], but apparently not DragonFly BSD[2].

[0] https://netbsd.org/docs/kernel/uvm.html

[1] https://man.openbsd.org/OpenBSD-3.0/uvm.9

[2] https://www.dragonflybsd.org/mailarchive/kernel/2011-04/msg0...

  • inkyoto a day ago

    Sadly, that is not entirely correct.

    Whilst all three BSDs (386BSD, FreeBSD, and NetBSD; there was no OpenBSD in the beginning) did inherit the legacy Mach 2.5-style design, it did not live on in FreeBSD, whose core team fairly quickly started replacing all remaining vestiges of the Mach VM[0] with a complete, modern, and highly performant rewrite of the entire VM. FreeBSD 4 had none of the original Mach code left in the kernel codebase, and that happened in the late 1990s. Therefore, FreeBSD can only be linked to Mach at the initial separation/very early foundation stage.

    NetBSD carried the Mach design for a while longer but also hit a wall with it (performance, SMP/scalability, networking), and likewise set out on a complete rewrite, UVM (unified virtual memory), designed and led by Chuck Cranor, who wrote his dissertation on it. OpenBSD later adopted the UVM implementation, which remains in use today.

    So out of all living BSDs[1], only XNU/Darwin continues to use Mach - and not Mach 2.5 but Mach 3. There have been Mach 2.5, 3 and 4 (GNU/Hurd uses Mach 4), and the compatibility between them is rather low, remaining mostly at the overall architectural level. They are better treated as distinct designs with shared influence.

    [0] Of which there were not that many to start off with.

    [1] I am not sure whether DragonFly BSD is dead or alive today at all.

    • bch a day ago

      > I am not sure whether DragonFly BSD is dead or alive today at all.

      Oof, yeah [0][1]. I hope they're doing alright - technically fascinating, and charming as they march to the beat of their own accordion.[2][3][4][5]

      [0] https://www.dragonflybsd.org/release64/

      [1] https://gitweb.dragonflybsd.org/dragonfly.git

      [2] https://www.dragonflybsd.org/mailarchive/kernel/2012-03/msg0...

      [3] http://www.bsdnewsletter.com/2007/02/Features176.html

      [4] https://en.wikipedia.org/wiki/Vkernel

      [5] https://en.wikipedia.org/wiki/HAMMER_(file_system)

      • inkyoto a day ago

        The last release being «Version 6.4.0 released 2022 12 30», links from 2007 and 2012 do not lend much assurance that the project is still alive in 2025 – compared to other similar projects.

        Also note that HAMMER (the previous design) and HAMMER2 (the current design, since 2018) are two distinct, incompatible file system designs. I am not sure what the value is of mentioning the previous, abandoned design in this context.

        • bch a day ago

          > The last release being «Version 6.4.0 released 2022 12 30», links from 2007 and 2012 do not lend much assurance that the project is still alive in 2025 – compared to other similar projects.

          Right - the git repo has commits from yesterday, but it ain’t no NetBSD… (h/t ‘o11c)

          > Also note that HAMMER (the previous design) and HAMMER2 (the current design, since 2018) are two distinct, incompatible file system designs. I am not sure what the value is of mentioning the previous, abandoned design in this context.

          Sure - I linked to the first for the general intro, which mentions Hammer2 in the first paragraph if anybody reads through… my mistake.

    • o11c a day ago

      > I am not sure whether DragonFly BSD is dead or alive today at all.

      It seems to have about the same level of activity as NetBSD. Take that how you will.

  • tansanrao a day ago

    Ohhh interesting! I’ll update the post to include this soon, thanks!

agentkilo a day ago

The article states that the "pager daemons" that manage swap files run in user space, and that kernel memory can also get swapped out, but it never explains how a user space daemon swaps out kernel memory. Do they have hard-coded exceptions for special daemons, or use special system calls? Where can I find out more details about the user space memory management specifically?

  • comex a day ago

    The claim is inaccurate and mixes together multiple different things:

    - The Mach microkernel originally supported true userland paging, like mmap but with an arbitrary daemon in place of the filesystem. You can see the interface here:

    https://web.mit.edu/darwin/src/modules/xnu/osfmk/man/memory_...

    But I'm not sure if Darwin ever used this functionality; it certainly hasn't used it for the last ~20 years.

    - dynamic_pager never used this interface. It used a different, much more limited Mach interface where xnu could alert it when it was low on swap; dynamic_pager would create swap files, and pass them back into the kernel using macx_swapon and macx_swapoff syscalls. But the actual swapping was done by the kernel. Here is what dynamic_pager used to look like:

    https://github.com/apple-oss-distributions/system_cmds/blob/...
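
    To make the old flow concrete, here is a rough sketch of its shape (a condensed illustration, not the real source linked above; the macx_swapon/macx_swapoff prototypes are assumptions based on how the old tool declared them):

      /* Sketch of a dynamic_pager-style helper: create a swap file, then
         hand it to the kernel, which does the actual paging. */
      #include <fcntl.h>
      #include <unistd.h>

      extern int macx_swapon(char *filename, int flags, int size, int priority);
      extern int macx_swapoff(char *filename, int flags);

      static int add_swap_file(char *path, int size)
      {
          int fd = open(path, O_CREAT | O_TRUNC | O_RDWR, 0600);
          if (fd < 0)
              return -1;
          ftruncate(fd, size);   /* reserve space for the new swap file */
          close(fd);
          return macx_swapon(path, 0, size, 0);
      }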

    But that functionality has since moved into the kernel, so now dynamic_pager does basically nothing:

    https://github.com/apple-oss-distributions/system_cmds/blob/...

    - The vast majority of kernel memory is wired and cannot be paged out. But the kernel can explicitly ask for pageable memory (e.g. with IOMallocPageable), and yes, that memory can be swapped to disk. It's just rarely used.
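
    As a minimal sketch of that distinction (illustrative kext-side code, assuming the IOMallocPageable/IOFreePageable KPIs from IOKit/IOLib.h):

      #include <IOKit/IOLib.h>

      /* Most kernel allocations are wired; this one is explicitly pageable,
         so the VM system is allowed to push it out to swap under pressure. */
      static void pageable_demo(void)
      {
          vm_size_t len = 16 * PAGE_SIZE;
          void *buf = IOMallocPageable(len, PAGE_SIZE);
          if (buf == NULL)
              return;
          /* ... use buf, tolerating page faults on access ... */
          IOFreePageable(buf, len);
      }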

    Still, any code that does this needs to be careful to avoid deadlocks. Even though userland is no longer involved in "paging" per se, it's still possible and in fact common for userland to get involved one or two layers down. You can have userland filesystems with FSKit (or third-party FUSE). You can have filesystems mounted on disk images which rely on userland to convert reads and writes to the virtual block device into reads and writes to the underlying dmg file (see `man hdiutil`). You can have NFS or SMB connections going through userland networking extensions. There are probably other cases I'm not thinking of.

    EDIT: Actually, I may be wrong about that last bit. You can definitely have filesystems that block on userspace, but it may not be supported to put swap on those filesystems.

    • krackers a day ago

      >xnu could alert it when it was low on swap; dynamic_pager would create swap files, and pass them back into the kernel

      What's the benefit of this indirection through userspace for swap file creation? Can't the kernel create the swap file itself?

      • comex a day ago

        Today the kernel does create the swap file itself. I don't know why it behaved differently in the past, given that the version of dynamic_pager I linked is only 355 lines of code, not obviously complex enough to be worth offloading to userspace. But this was written back in 1999 and maybe there was more enthusiasm for being microkernel-y (even if they had already backed away from full Mach paging).

        • delusional a day ago

          Looking at some of the contemporary documentation, it does look like it was essentially a historical accident. The interface was built to be all microkernel, but when it was adapted into a real system the microkernel concepts fell by the wayside as they were no longer useful; where they didn't impose too much of a burden (as in the pager interface), they were allowed to stick around.

      • rollcat 15 hours ago

        IMHO a kernel managing a file (any file) all on its own imposes too many assumptions about hardware and user space. This could unexpectedly bite you if you're in a rescue system trying to fsck, booting from external RO media, running diskless or from NFS, etc.

        Meanwhile Linux allows you to swapon(2) just about anything. A file, a partition, a whole disk, /dev/zram, even a zvol. (The last one could lead to a nasty deadlock, don't do it.)

        Perhaps the XNU/NeXT/Darwin/OSX developers wanted a similar level of flexibility? Have the right piece in place, even just as a stub?
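
        For comparison, the Linux side really is a one-liner from userspace; a minimal sketch (the /swapfile path is a placeholder and has to be prepared with mkswap(8) first):

          #include <stdio.h>
          #include <sys/swap.h>   /* swapon(2), SWAP_FLAG_* */

          int main(void)
          {
              /* higher-priority swap areas are used before lower ones */
              if (swapon("/swapfile",
                         SWAP_FLAG_PREFER | (5 << SWAP_FLAG_PRIO_SHIFT)) != 0) {
                  perror("swapon");
                  return 1;
              }
              return 0;
          }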

llincerd 15 hours ago

Darwin is interesting because of the pace of radical changes to its core components. From dropping syscall backwards compatibility to mandatory code signing to dyld_shared_cache eliminating individual system library files to speed up dynamic executable loading. It's a very results-oriented design approach with no nostalgia and no sacred cows. I think only a big hardware vendor like Apple could pull it off.

swatson741 a day ago

Whenever I see the Darwin kernel brought into the discussion I can't help but wonder how different things could have been if Apple had just forked Linux and ran their OS services on top of that.

Especially when I think about how committed they are to Darwin, it really paints a poor image in my mind: the loss that open source suffers from that, and the time and money Apple has to dedicate to this for a disproportionate return.

  • wtallis a day ago

    There was never a right time for Apple to make such a switch. NeXTSTEP predates Linux, and when it was adapted into Mac OS X, Apple couldn't afford a wholesale kernel replacement project on top of everything else, and Linux in the late 1990s was far from being an obviously superior choice. Once they were a few versions in to OS X and solidly established as the most successful UNIX-like OS for consumer PCs, switching to a Linux base would have been an expensive risk with very little short-term upside.

    Maybe if Apple had been able to keep classic MacOS going five years longer, or Linux had matured five years earlier, the OS X transition could have been very different. But throwing out XNU in favor of a pre-2.6 Linux kernel wouldn't have made much sense.

    • swatson741 a day ago

      I agree with all of this. Moreover, depending on what Torvalds chose to do, Apple might have ended up with a more expensive XNU in the end, which would have been a disaster. Although I think Apple could have dealt with Torvalds just fine, who really knows how that would have played out.

      • inopinatus a day ago

        It would not be fine. It would have never been fine. It would have been a titanic clash of egos and culture, producing endless bickering and finger-pointing, with little meeting of minds. Apple runs the most vertically integrated general systems model outside of mainframes. Linux and its ecosystem represent the least.

        In any case, as others have noted, the timeline here w.r.t. NeXTSTEP is backwards.

    • lunarlull a day ago

      Making a switch is one thing, but using Linux from the start for OS X would have made more sense. The only reason that didn't happen is because of Jobs' attachment to his other baby. It wasn't a bad choice, but it was a choice made from vanity and ego over technical merit.

      • dagmx a day ago

        You haven’t really expanded on why basing off the Linux kernel would have made more sense, especially at the time.

        People have responded to you with timelines explaining why it couldn’t have happened but you seem to keep restating this claim without more substance or context to the time.

        Imho Linux would have been the wrong choice and perhaps even the incorrect assumption. Mac is not really BSD based outside of the userland. The kernel was and is significantly different and would’ve hard forked from Linux if they did use it at the time.

        Often when people say Linux they mean the oft-memed GNU/Linux, except GNU diverged significantly from the POSIX command line tools (in that sense macOS is truer), and the GPLv3 license is anathema to Apple.

        I don’t see any area where basing off Linux would have resulted in materially better results today.

        • jart 21 hours ago

          Well for starters, it would have better memory management. The XNU kernel's memory manager has poor time complexity. If I create a bunch of sparse memory maps using mmap() then XNU starts to croak once I have 10,000+ of them.
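
          To reproduce the pattern being described (a sketch; the map count is just the figure quoted above), create many small anonymous mappings with holes between them so they cannot be coalesced, and time it:

            #include <stdio.h>
            #include <sys/mman.h>
            #include <time.h>

            int main(void)
            {
                enum { N = 10000, LEN = 4096 };
                struct timespec t0, t1;
                clock_gettime(CLOCK_MONOTONIC, &t0);
                for (int i = 0; i < N; i++) {
                    void *p = mmap(NULL, 2 * LEN, PROT_READ | PROT_WRITE,
                                   MAP_PRIVATE | MAP_ANON, -1, 0);
                    if (p == MAP_FAILED)
                        return 1;
                    /* unmap the second half so neighbouring entries can't merge,
                       forcing the kernel to track each mapping separately */
                    munmap((char *)p + LEN, LEN);
                }
                clock_gettime(CLOCK_MONOTONIC, &t1);
                printf("created %d maps in %.3f s\n", N,
                       (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9);
                return 0;
            }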

          • dagmx 18 hours ago

            Please re-read the comment you're responding to about how the kernel would have diverged significantly even if they had used the Linux kernel. Unless you think a three-decade-old kernel would have the same characteristics as today's.

            What benefit would it have had at the time? What guarantees would it have given at the time that would have persisted three decades later?

      • andrewf a day ago

        This presumes that Apple brought in Jobs as a decision maker, and NeXTSTEP was attached baggage. At the time, the reverse was true - Apple purchased NeXTSTEP as their future OS, and Jobs came along for the ride. Given the disaster that was Apple's OS initiatives in the 90s, I doubt the Apple board would have bought into a Linux adventure.

        • lunarlull a day ago

          Why wouldn't Apple have been interested in a Linux option? They bought NeXTSTEP because of Jobs. Linux was already usable as a desktop OS in 2000, and they could have added the UX stuff and drivers for their particular Macs on top of it. There wouldn't have been any downsides for them, and it would have strengthened something that was hurting their biggest rival.

          • pjmlp a day ago

            Not only was the acquisition during the 1990's - as someone who happened to be a Linux zealot up to around 2004, I can say that "usable" was quite relative in 2000, and only if one had the right desktop parts.

            And it only became usable as a Solaris/AIX/HP-UX replacement thanks to the money IBM, Oracle and Compaq pumped into Linux's development around 2000; it is even in the official timeline history.

          • musicale a day ago

            > Linux was already useable as a desktop OS in 2000

            Apple made its decision in 1996.

          • icedchai 8 hours ago

            In the early 2000's, Linux was practically unusable as a desktop OS because the only "fully functional" web browser was Internet Explorer. Netscape 4.x "worked" but was incredibly unstable and crashed roughly every half hour. Mozilla / Phoenix / Firefox wasn't done yet. Chrome didn't exist.

            It was a very different world. We won't even talk about audio and video playback. I was an early Linux user, having done my first install in 1993, and sadly ran Windows on my desktop then because the Linux desktop experience was awful.

            • f33d5173 8 hours ago

              Safari came out in 2003.

              • icedchai 5 hours ago

                Yeah, but I didn't use a Mac back then. And early 2000's web development was heavily biased towards IE.

          • wpm 16 hours ago

            Jobs initially did not want to come back to Apple. Apple bought NeXTSTEP because, between it and BeOS, Jean-Louis Gassee overplayed his hand and asked way too much money for the acquisition; Apple then defaulted to NeXT. Jobs thought Apple was hopeless just like everyone else did at the time and didn't want to take over a doomed company to steer it into the abyss, and it's not like NeXT was doing great at the time.

            >There wouldn't have been any downsides for them

            Really? NO downsides???

            - throwing away a decade and a half of work and engineering experience (Avie Tevanian helped write Mach; this is like having Linus as your chief of software development and saying "just switch to Hurd!")

            - uncertain licensing (Apple still ships ancient bash 3.2 because of GPL)

            - increased development time to a shipping, modern OS (it already took them 5 years to ship 10.0, and it was rough)

            That's just off the top of my head. I believe you think there wouldn't have been any downsides because you didn't stop to think of any, or are ideologically disposed to present the Linux kernel in 1996 as being better or safer than XNU.

            • _rpf 12 hours ago

              > Jean-Louis Gassee overplayed his hand

              Well, there’s a parallel universe! Beige boxes running BeOS late-90s-cool maybe, but would we still have had the same upending results for mobile phones, industrial design, world integration, streaming media services…

          • DeathArrow 16 hours ago

            >it would have strengthened something that was hurting their biggest rival.

            If by biggest rival you mean Microsoft, it was Microsoft who saved Apple from bankruptcy in 1997.

            • philistine 8 hours ago

              The investment Microsoft famously made in Apple in 1997 did not prevent Apple from going bankrupt. By the time the money was in Apple's accounts, its fortunes were already reversed.

              The fact Microsoft announced they were investing, and that they were committed to continue shipping Office to Mac, definitely helped.

            • kfir 10 hours ago

              Microsoft did that not out of charity to Apple but as an attempt to fend off the DOJ trial accusing it of being a monopoly.

      • musicale a day ago

        In 1996, Apple evaluated the options and decided (quite reasonably) that NeXTSTEP - the whole OS including kernel, userland, and application toolkit – was a better starting point than various other contenders (BeOS, Solaris, ...) to replace the failed Copland. Moreover, by acquiring NeXT, Apple got NeXTSTEP, NeXT's technical staff (including people like Bud Tribble and Avie Tevanian), and (ultimately very importantly) Steve Jobs.

  • GianFabien a day ago

    Back in the days when Apple acquired NeXT, Linux was undergoing lots of development and wasn't well established. Linux being a monolithic kernel didn't offer the levels of compartmentalization that Mach did.

    As things now stand, FreeBSD represents many of the benefits of Darwin and the open source nature of Linux. If you seek a more secure environment without Apple's increasing levels of lock-in, then FreeBSD (and the other BSDs) merit consideration for deployment.

    • laurencerowe a day ago

      Isn’t FreeBSD a monolithic kernel? I don’t believe it provides the compartmentalisation that you talk about.

      As I understand it, Mach was based on BSD and was effectively a hybrid, with much of the existing BSD kernel running as a single big task under the microkernel. Darwin has since updated the BSD kernel under the microkernel with the current developments from FreeBSD.

      • TickleSteve a day ago

        Mach was never based on BSD; it replaced it. Mach is the descendant of the Accent and Aleph kernels. BSD came into the frame for the userland tools.

        "Mach was developed as a replacement for the kernel in the BSD version of Unix," (https://en.wikipedia.org/wiki/Mach_(kernel))

        Interestingly, MkLinux was the same type of project but for Linux instead of BSD (i.e. Linux userland with Mach kernel).

    • finnjohnsen2 a day ago

      Is the driver support fit for using FreeBSD as a desktop OS these days?

      Last I tried (~10 years ago) I gave up and I assumed FreeBSD was a Server OS, because I couldn't for the life of me get Nvidia drivers working in native resolution. I don't recall specifics but Bluetooth was problematic also.

    • inkyoto a day ago

      > As things now stand, FreeBSD represents many of the benefits of Darwin and the open source nature of Linux.

      No. FreeBSD committed the original sin of UNIX by deliberately dropping support for all non-Intel architectures, intending to focus on optimising FreeBSD for the Intel ISA and platforms. UNIX portability and support for a diverse range of CPUs and hardware platforms are ingrained in the DNA of UNIX, however.

      I would argue that FreeBSD has paid the price for this decision – FreeBSD has faded into irrelevance today (despite having introduced some of the most outstanding and brilliant innovations in UNIX kernel design) – because the FreeBSD core team bet heavily on Intel remaining the only hardware platform in existence, and they missed the turn (ARM, RISC-V, and marginally MIPS in embedded). Linux stepped in and filled the niche very quickly, and it now runs everywhere. FreeBSD is faster but Linux is better.

      And it does not matter that Netflix still runs FreeBSD on its servers serving up the content at the theoretical speed of light – it is a sad living proof of FreeBSD having become a niche within a niche.

      P.S. I would also argue that the BSD core teams (Free/Net/Open) were a major factor in the downfall of all BSD's, due to their insular nature and, especially in the early days, a near-hostile attitude towards outsiders. «Customers» voted with their feet – and chose Linux.

      • adrian_b 21 hours ago

        Having used continuously both FreeBSD and Linux, wherever they are best suited, since around 1995 until today, I disagree.

        In my opinion the single factor that has contributed the most to a greater success for Linux than for FreeBSD has been the transition to multithreaded and multicore CPUs even in the cheapest computers, which has started in 2003 with the SMT Intel Pentium 4, followed in 2005 by the dual-core AMD CPUs.

        Around 2003, FreeBSD 4.x was the most performant and the most reliable operating system for single-core single-thread CPUs, for networking or storage applications, well above Linux or Microsoft Windows (source: at that time I was designing networking equipment and we had big server farms on which the equipment was tested, under all operating systems).

        However it could not use CPUs with multiple cores or threads, so on such CPUs it fell behind Linux and Windows. The support introduced in FreeBSD 5.x was only partial and many years have passed until FreeBSD had again a competitive performance on up-to-date CPUs. Other BSD variants were even slower in their conversion to multithreaded support. During those years the fraction of users of *BSD systems has diminished a lot.

        The second most important factor has been the much smaller set of device drivers for various add-on interface cards compared to Linux. Only a few hardware vendors have provided FreeBSD device drivers for their products, mostly just Intel and NVIDIA, and for the products of other vendors there have been few FreeBSD users able to reverse engineer them and write device drivers, in comparison with Linux.

        The support for non-x86 ISAs has also been worse than in Linux, but this was just one detail within the generally narrower range of hardware supported compared to Linux.

        All this has been caused by positive feedback: FreeBSD started with fewer users, because by the time the lawsuits had been settled favorably for FreeBSD, most potential users had already started to use Linux. The smaller number of users was then less capable of porting the system to new hardware devices and newer architectures, which has led to even lower adoption.

        Nevertheless, there have always been various details in the *BSD systems that have been better than in Linux. A few of them have been adopted in Linux, like the software package systems that are now ubiquitous in Linux distributions, but in many cases Linux users have invented alternative solutions, which in enough cases were inferior, instead of studying the *BSD systems and seeing whether an already existing solution could be adopted rather than inventing yet another alternative.

        • tedunangst 11 hours ago

          Not quite accurate history of SMP. FreeBSD had SMP well before 5.0, but not "fine grained" which is what the 5.0 release was all about. But the conversion led to many regressions.

        • tzs 17 hours ago

          I don't know if this had much effect on anything, but another thing that hindered using FreeBSD for some users was that Linux worked better as a dual boot system with DOS/Windows on a typical home PC.

          There were two problems.

          The first was that FreeBSD really wanted to own the whole disk. If you wanted to dual boot with DOS/Windows you were supposed to put FreeBSD on a separate disk. Linux was OK with just having a partition on the same disk you had DOS/Windows on. For those of us whose PCs only had one hard disk, buying a copy of Partition Magic was cheaper than buying a second hard disk.

          The reason for this was that the FreeBSD developers felt that multiple operating system on the same disk was not safe due to the lack of standards for how to emulate a cylinder/head/sector (CHS) addressing scheme on disks that used logical block addressing (LBA). They were technically correct, but greatly overestimated the practical risks.

          In the early days PC hard disks used CHS addressing, and the system software such as the PC BIOS worked in those terms. Software using the BIOS such as DOS applications and DOS itself worked with CHS addresses and the number of cylinders, heads, and sectors per track (called the "drive geometry") they saw matched the actual physical geometry of the drive.

          The INT 13h BIOS interface for low level disk access allowed for a maximum of 1024 cylinders, 256 heads, and 63 sectors per track (giving a maximum possible drive size of 8 GB if the sectors were 512 bytes).

          At some point as disks got bigger drives with more than 63 sectors per track became available. If you had a drive with for example 400 cylinders, 16 heads, and 256 sectors per track you would only be able to access about 1/4 of the drive using CHS addressing that uses the actual drive geometry.

          It wasn't really practical to change the INT 13h interface to give the sectors per track more bits, and so we entered the era of made up drive geometries. The BIOS would see that the disk geometry is 400/16/256 and make up a geometry with the same capacity that fit within the limits, such as 400/256/16.

          Another place with made up geometry was SCSI disks. SCSI used LBA addressing. If you had a SCSI disk on your PC whatever implemented INT 13h handling for that (typically the BIOS ROM on your SCSI host adaptor) would make up a geometry. Different host adaptor makers might use different algorithms for making up that geometry. Non-SCSI disk interfaces for PCs also moved to LBA addressing, and so the need to make up a geometry for INT 13h arose with those too, and different disk controller vendors might use a different made up geometry.

          So suppose you had a DOS/Windows PC, you repartitioned your one disk to make room for FreeBSD, and went to install FreeBSD. FreeBSD does not use the INT 13h BIOS interface. It uses its own drivers to talk to the low level disk controller hardware and those drivers use LBA addressing.

          It can read the partition map and find the entry for the partition you want to install on. But the entries in the partition map use CHS addressing. FreeBSD would need to translate the CHS addresses from the partition map into LBA addresses, and to do that it would need to know the disk geometry that whatever created the partition map was using. If it didn't get that right and assumed a made up geometry that didn't match the partitioner's made up geometry the actual space for DOS/Windows and the actual space for FreeBSD could end up overlapping.

          In practice you can almost always figure out from looking at the partition map what geometry the partitioner used with enough accuracy to avoid stomping on someone else's partition. Partitions started at track boundaries, and typically the next partition started as close as possible to the end of the previous partition and that sufficiently narrows down where the partition is supposed to be in LBA address space.

          That was the approach taken by most SCSI vendors and it worked fine. I think eventually FreeBSD did start doing this too but by then Linux had become dominant in the "Dual boot DOS/Windows and a Unix-like OS on my one disk PC" market.

          The other problem was CD-ROM support. FreeBSD was slow to support IDE CD-ROM drives. Even people who had SCSI on their home PC and used SCSI hard disks were much more likely to have an IDE CD-ROM than a SCSI CD-ROM. SCSI CD-ROM drives were several times more expensive and it wasn't the interface that was the bottleneck so SCSI CD-ROM just didn't make much sense on a home PC.

          For many, then, it came down to this: with Linux they didn't need a two-disk system and could install from a convenient CD-ROM, while for FreeBSD they would need a dedicated disk and would have to deal with a stack of floppies.

          • LargoLasskhyfv 10 hours ago

            Related fun fact from up to maybe a decade ago: if you had a disk labeled/partitioned in FreeBSD's 'dangerously dedicated' style, and tried to image it, or to read the image of it with a forensic tool called Encase (running under Windows of course, how else could it be?), the tool would crash Windows with an irrecoverable blue screen :)

            I loved that!

        • inkyoto 21 hours ago

          Whilst I do agree with most of your insights and the narrative of historic events, I also believe that BSD core teams were a major contributing factor to the demise of BSD's (however unpopular such an opinion might be).

          The first mistake was that all BSD core teams flatly refused to provide native support for the JVM back in its heyday. They eventually partially conceded and made it work using Linux emulation; however, it was riddled with bugs, crashes and other issues for years before it could run Java server apps. Yet, users clamoured to run Java applications, like, now and vociferously.

          The second grave mistake was to flatly refuse to support containerisation (Docker) due to it not being kosher. Linux based containerisation is what underpins all cloud computing today. Again, FreeBSD arrived too late, and with too little.

          P.S. I still hold the view that FreeBSD made matters even worse by dropping support for non-Intel platforms early on – at a stage when its bleak future was already all but certain. New CPU architectures are enjoying a renaissance, whilst FreeBSD nervously sucks its thumb by the roadside of history.

          • usrnm 19 hours ago

            Docker was created in 2013, long after BSDs had lost all their popularity. And, fwiw, FreeBSD pioneered containers long before Linux: https://en.m.wikipedia.org/wiki/FreeBSD_jail

            • pjmlp 13 minutes ago

              And HP-UX before them, with HP-UX Vaults already in 1999.

            • inkyoto 18 hours ago

              FreeBSD jails are advanced chroot++. Although they did set a precedent as a predecessor of true containers, they have:

                1. Minimal kernel isolation.
              
                2. Optional network stack isolation via VNET (but not used by default).
              
                3. Rudimentary resource controls with no default enforcement (important!).
              
                4. Simple capability security model.
              
              Most importantly, since FreeBSD was a very popular choice for hosting providers at the time, jails were originally invented to fully support partitioned-off web hosting, rather than to run self-sufficient, fully contained (containerised) applications as first-class citizens.

              The claim to have invented true containers belongs to Solaris 10 (not Linux) and its zones. Solaris 10 was released in January 2005.

              • vermaden 9 hours ago

                I believe you have a wrong view of how secure FreeBSD Jails are - they are definitely a lot more secure than rootless Podman, for a start.

                Isolation: With rootless Podman it seems to be on the same level as Jails - but only if you run Podman with SELinux or AppArmor enabled. Without SELinux/AppArmor the Jails offer better isolation. When you run Podman with SELinux/AppArmor and then add the MAC Framework (like mac_sebsd/mac_jail/mac_bsdextended/mac_portacl), the Jails are more isolated again.

                Kernel Syscalls Surface: Even rootless Podman has 'full' syscall access unless blocked by seccomp (SELinux). Jails have restricted use of syscalls without any additional tools - and that can be also narrowed with MAC Framework on FreeBSD.

                Firewall: You cannot run a firewall inside a rootless Podman container. You can run an entire network stack and any firewall like PF or IPFW independently from the host inside a VNET Jail - which means more security.

                TL;DR: FreeBSD Jails are generally more secure out-of-the-box compared to Podman containers and even more secure if you take the time to add additional layers of security.

                > How battle-tested are FreeBSD Jails?

                Jails are in production since 1999/2000 when they were introduced - so 25 years strong - very well battle tested.

                Docker has been with us since 2014, so that means about 10 years less - but we must compare to Podman ...

                Rootless support for Podman first appeared in late 2019 (1.6), so only less than 6 years to test.

                That means Jails are the most battle tested of all of them.

                Hope that helps.

                Regards,

                vermaden

      • danieldk a day ago

        I am very skeptical that it's primarily caused by the focus on Intel CPUs. FreeBSD already fell into obscurity way before RISC-V. And even though they missed the ARM router/appliance boat, Linux already overtook FreeBSD when people were primarily using Linux for x86 servers and (hobbyist) desktops. The "Netcraft has confirmed: BSD is dying" Slashdot meme was from the late 90s or early 2000s. Also, if this were the main reason, we would all be using OpenBSD or NetBSD.

        IMO it's really a mixture of factors, some I can think of:

        - BSD projects were slowed down by the AT&T lawsuit in the early 90s.

        - FreeBSD focused more on expert users, whereas Linux distributions focused on graphical installers and configuration tools early on. Some distributions had graphical installers at the end of the 90s. So, Linux distributions could onboard people who were looking for a Windows alternative much more quickly.

        - BSD had forks very early on (FreeBSD, NetBSD, OpenBSD, BSDi). The cost is much higher than multiple Linux distributions, since all BSDs maintain their own kernel and userland.

        - The BSDs (except BSDi) were non-profits, whereas many early Linux distributions were by for-profit companies (Red Hat, SUSE, Caldera, TurboLinux). This gave Linux a larger development and marketing budget and it made it easier to start partnerships with IBM, SAP, etc.

        - The BSD projects were organized as cathedrals and were more hierarchical, which made it harder for new contributors to step in.

        - The BSD projects provided full systems, whereas Linux distributions would piece together systems. This made Linux development messier, but allowed quicker evolution and made it easier to adapt Linux for different applications.

        - The GPL put a lot more pressure on hardware companies to contribute back to the Linux kernel.

        Besides that there is probably also a fair amount of randomness involved.

        • inkyoto 21 hours ago

          The AT&T lawsuits are a moot point, as they were all settled in the early 1990s. They are the sole reason why FreeBSD and NetBSD even came into existence – by forking the 4.4BSD-Lite codebase after the disputed code had been eliminated or replaced with non-encumbered reimplementations. Otherwise, we would all be running on descendants of 4.4BSD-Lite today.

          Linux has been running uninterruptedly on s/390 since October 1999 (31-bit support, Linux v2.2.13) and since January 2001 for 64-bit (Linux v2.4.0). Linux mainlined PPC64 support in August 2002 (Linux v2.4.19), and it has been running on ppc64 happily ever since, whereas FreeBSD dropped ppc64 support around 2008–2010. Both s/390 and ppc64 (as well as many others) are hardly hobbyist platforms, and both remain in active use today. Yes, IBM was behind each port, although the Linux community has been a net $0 beneficiary of the porting efforts.

          I am also of the opinion that licensing is a red herring, as BSD/MIT licences are best suited for proprietary, closed-source development. However, the real issue with proprietary development is its siloed nature, and the fact that closed-source design and development very quickly start diverging from the mainline and become prohibitively expensive to maintain in-house long-term. So the big wigs quickly figured out that they could make a sacrifice and embrace the GPL to reduce ongoing costs. Now, with the *BSD core team-led development, new contributors (including commercial entities) would be promptly shown the door, whereas the Linux community would give them the warmest welcome. That was the second major reason for the downfall of all things BSD.

          • danieldk 12 hours ago

            > The AT&T lawsuits are a moot point, as they were all settled in the early 1990s. They are the sole reason why FreeBSD and NetBSD even came into existence – by forking the 4.4BSD-Lite codebase after the disputed code had been eliminated or replaced with non-encumbered reimplementations. Otherwise, we would all be running on descendants of 4.4BSD-Lite today.

            The lawsuit was settled in Feb 1994; FreeBSD was started in 1993. FreeBSD was started because development on 386BSD was too slow. It took FreeBSD until Nov 1994 to rebase on 4.4BSD-Lite (in FreeBSD 2.0.0).

            At the time, 386BSD and then FreeBSD were much more mature than Linux, but it took from 1992 until the end of 1994 for the legal situation around 386BSD/FreeBSD to clear up. So Linux had about three years to catch up.

      • _paulc a day ago

        > FreeBSD has committed the original sin of UNIX by deliberately dropping support for all non-Intel architectures, intending to focus on optimising FreeBSD for the Intel ISA and platforms.

        FreeBSD supports amd64 and aarch64 as Tier 1 platforms and a number of others (RISC-V, PowerPC, ARMv7) as Tier 2:

        https://www.freebsd.org/platforms/

        • inkyoto 21 hours ago

          It is irrelevant what FreeBSD supports today.

          FreeBSD started demoting non-Intel platforms around 2008-2010, with FreeBSD 11, released in 2016, supporting only x86. Non-Intel architecture support was first reinstated in April 2021, with the official release of FreeBSD 13 - over a decade irrevocably lost.

          Plainly, FreeBSD has missed the boat – the first AWS Graviton CPU was released in 2018, and it ran Linux. Everything now runs Linux, but it could have been FreeBSD.

      • pjmlp a day ago

        Not really everywhere - exactly because of the GPL, most embedded FOSS OSes are either Apache or BSD licensed.

        It is not only Netflix; Sony is also quite fond of cherry-picking stuff from the BSDs for their Orbis OS.

        Finally, I would assert that the Linux kernel as we know it today is only relevant because the ones responsible for its creation still walk this planet; like every project, when the creators are no longer around it will be taken in directions that no longer match the original goals.

  • linguae a day ago

    Interestingly enough, Apple did contribute to porting Linux to PowerPC Macs in the mid-1990s under the MkLinux project, which started in 1996 before Apple’s purchase of NeXT later that year:

    https://en.m.wikipedia.org/wiki/MkLinux

    I don’t think there was any work done on bringing the Macintosh GUI and application ecosystem to Linux. However, until the purchase of NeXT, Apple already had the Macintosh environment running on top of Unix via A/UX (for 68k Macs) and later the Macintosh Application Environment for Solaris and HP-UX; the latter ran Mac OS as a Unix process. If I remember correctly, the work Apple did for creating the Macintosh Application Environment laid the groundwork for Rhapsody’s Blue Box, which later became Mac OS X’s Classic environment. It is definitely possible to imagine the Macintosh Application Environment being ported to MkLinux. The modern FOSS BSDs were also available in 1996, since this was after the settlement of the lawsuit affecting the BSDs.

    Of course, running the classic Mac OS as a process on top of Linux, FreeBSD, BeOS, Windows NT, or some other contemporary OS was not a viable consumer desktop OS strategy in the mid 1990s, since this required workstation-level resources at a time when Apple was still supporting 68k Macs (Mac OS 8 ran on some 68030 and 68040 machines). This idea would've been more viable in the G3/G4 era, and by the 2000s it would have been feasible to give each classic Macintosh program its own Mac OS process running on top of a modern OS, but I don't think Apple would have made it past 1998 without Jobs' return, not to mention that the NeXT purchase brought other important components to the Mac such as Cocoa, IOKit, Quartz (the successor to Display PostScript) and other now-fundamental technologies.

    • threeseed a day ago

      Completely forgot about MkLinux. The timing is fascinating.

      MkLinux was released in February 1996 whilst Copland got officially cancelled in August 1996.

      So it's definitely conceivable that internally they were considering just giving up on the Copland microkernel and running it all on Linux. And maybe this was a legitimate third option to BeOS and NeXT that was never made public.

      • kalleboo a day ago

        What's crazy is that MkLinux was actually Linux-on-Mach, not just a baremetal PowerPC Linux. The work they did to port Mach to PowerPC for MkLinux was then reused in the port of NeXTSTEP Mach to PowerPC. Everything was very intertwined.

        • masswerk 21 hours ago

          Also, MkLinux wasn't that stable. I experimented a bit with it at the time and it wasn't really ripe for production. It kind of worked, but there would have been lots of work to be invested (probably more than Apple could afford) to turn this into a mainstream OS.

    • CharlesW a day ago

      > I don’t think there was any work done on bringing the Macintosh GUI and application ecosystem to Linux.

      QTML (which became the foundation of the Carbon API) was OS agnostic. The Windows versions of QuickTime and iTunes used QTML, and in an alternate universe Apple could've empowered developers to bring Mac OS apps to Windows and Linux with a more mature version of that technology.

  • surajrmal a day ago

    Why would we want more of a monoculture? We've put so many eggs in one basket already. I hope we see more diversity in kernels, not further consolidation.

    Taken a different way, it feels similar to suggesting Apple should rebase Safari on Chromium.

  • skissane a day ago

    > Whenever I see the Darwin kernel brought into the discussion I can't help but wonder how different things could have been if Apple had just forked Linux

    XNU is only partially open sourced – the core is open sourced, but significant chunks are missing, e.g. APFS filesystem.

    Forking Linux might have legally compelled them to make all kernel modules open source – which, while it would likely be a positive for humanity, isn't what Apple wants to do.

    • mattl a day ago

      At one point NeXT considered distributing GCC under the GPL with some proprietary parts linked at first boot into the binary.

      Stallman after speaking with lawyers rejected this.

      https://sourceforge.net/p/clisp/clisp/ci/default/tree/doc/Wh...

      Look for "NeXT" on this page.

      • leoh a day ago

        Stallman's insistence that a judge would side with him is pretty arrogant in my opinion; e.g. looking at Oracle v. Google decades later and how the folks deciding the case seemed to be confused about technical matters.

        • skissane a day ago

          I don't think it was "arrogant" – if you read the link, he explains that he originally thought differently, but he changed his mind based on what his lawyer told him. I don't think you can label a non-lawyer "arrogant" for accepting the legal advice of their own attorney – whether that advice is correct or not can be debated, but it isn't arrogant for someone to trust the correctness of their own lawyer's advice.

  • threeseed a day ago

    1) We are talking about the late 90s, well before Ubuntu, when Desktop Linux was pretty poor in terms of features and polish.

    2) Apple had no money or time to invest in rewriting NeXTStep for a completely new kernel they had no experience with. Especially when so much of the dev team was involved in sorting out Apple's engineering and tech strategy as well as all the features needed to make it more Mac-like.

    3) Apple was still using PowerPC at the time, which NeXTStep supported but Linux did not. It took IBM a couple of years to get Linux running.

    • bigger_cheese 7 hours ago

      >1) We are talking about the late 90s, well before Ubuntu, where Desktop Linux was pretty poor in terms of features and polish.

      I think it's hard to overstate how much traction Linux had in the late 90's/early 2000's. It felt like groundbreaking stuff was happening pretty much all the time; major things were changing rapidly every release, and it felt exciting and genuinely revolutionary to download updates and try out all the new things. It really felt like you were on the bleeding edge - your system would break all the time, but it was fun and exciting.

      I remember reading Slashdot daily being excited to try out every new distribution I'd see on distrowatch, I'd download and build kernels fairly regularly etc.

      Things I can remember from back in those days:

      - LILO to GRUB boot loader changes

      - Going from EXT2 to EXT3 and all the other experimental filesystems that kept coming out.

      - Sound system changing from OSS to ALSA

      - Introduction of /sys

      - Gentoo and all the memes (funroll-loops website)

      - Udev and being able to hotplug usb devices

      - Signalfd

      - Splice/VMsplice

      - Early wireless support and the cursed "ndiswrapper"

      Nowadays Linux is pretty stable and, dare I say it, "boring" (in a good way). It's probably mostly because I've gotten older and have way less free time to spend living on the bleeding edge. It feels like Linux has gone from something you had to wrestle with constantly to have a working system to a spot where nowadays everything "mostly works" out of the box. I can't remember the last time I've had to ctrl + alt + backspace my desktop, for example.

      Last major thing I can remember hearing about and being excited for was io_uring.

      • pjmlp 11 minutes ago

        Yes, and all of that was completely uninteresting for Apple's customer base.

    • kergonath a day ago

      > Apple had no money or time to invest in rewriting NeXTStep for a completely new kernel they had no experience in.

      I broadly agree, but it is more nuanced than that. They actually had experience with Linux. Shortly before acquiring NeXT, they did the opposite of what you mentioned and ported Linux to the Mach microkernel for their MkLinux OS. It was cancelled at some point, but had things turned a bit differently, it could have ended up more important than it actually did.

  • hypercube33 21 hours ago

    Keep in mind they were also looking at BeOS, which is more real-time and notably not Unix/Linux. I wish I lived in the timeline where they went with it, as I'm a huge Be fan.

  • askvictor a day ago

    Diverse systems are more resilient. It's probably a good thing for IT in a general sense, even if it's not the most efficient

  • phendrenad2 17 hours ago

    Control is important. Apple has never had to fight with Torvalds or IBM or Microsoft over getting something added to the kernel. Just look at the fiasco when Microsoft wanted to add a driver for their virtualization system to the kernel.

    Also, one thing you'll notice about big companies - they know that not only is time valuable, worst-case time is important too. If someone in an open-source ecosystem CAN delay your project, that's almost as bad as if they regularly DO delay your project. This is why big companies like Google tend to invent everything themselves. E.g. Google may have "invented Kubernetes" (really, an engineer at Google uninvolved with the progenitor of K8s - Borg - invented it based on Borg), but they still use Borg, which every Xoogler here likes to say is "not as good as k8s". Yet they still use it. Because it gives them full control, and no possibility of outsiders slowing them down.

  • mycall 9 hours ago

    One must consider the loss of control that moving to Linux would bring. Even Google is reconsidering, with Fuchsia in line to replace Linux on Android.

  • toast0 a day ago

    Based on how often they pull in updated bits from FreeBSD (pretty much never), an Apple fork of Linux would be more or less Linux 2.4 today.

    I don't know what the loss that open source suffers would be in this context.

    I don't think Apple would need to spend less time or money on their kernel grafted on top of Linux 2.4 vs their kernel grafted on top of FreeBSD 4.4.

    • WD-42 a day ago

      Because presumably the GPL would force them to release their modifications. Apple gets/got away with leeching off the BSDs because of the permissive license.

      • toast0 17 hours ago

        They release their kernel source more or less in a timely manner without the GPL.

      • zigzag312 a day ago

        A bit off topic, but are there any data or estimates of how often big companies use modified versions of GPL software/libraries for their web services without releasing their modifications?

  • piuantiderp 16 hours ago

    The world is better with multiple flavors instead of one bloated one.

  • DeathArrow 17 hours ago

    >Whenever I see the Darwin kernel brought into the discussion I can't help but wonder how different things could have been if Apple had just forked Linux and ran their OS services on top of that.

    They have a long history with XNU and BSD. And Linux has a GPL license, which might not suit Apple.

    >Especially when I think about how committed they are to Darwin it really paints a poor image in my mind. The loss that open source suffers from that, and the time and money Apple has to dedicate to this with a disproportionate return.

    They share a lot of code with FreeBSD, NetBSD and OpenBSD, which are open source. And Darwin is open source, too. So there's no loss that open source suffers.

  • palata 21 hours ago

    Seen differently, I think it's great that there is yet another kernel being maintained out there.

    Imagine if Apple decided to open source Darwin: wouldn't that be a big win for open source?

pjmlp a day ago

Lots of love and work went into this article. As someone who was around for most of this history - ported code from NeXTSTEP into Windows, dived into the GNUstep attempts to clone the experience, remembers YellowBox and OpenStep, read the internals books, and is a regular consumer of WWDC content - I would say the article pretty much matches my recollection of how most of these systems have evolved.

wuming2 4 hours ago

Where is Scott Forstall's mark in all of this evolution, I wonder. He was responsible for adapting macOS, hence XNU, to the iPhone. The most successful Apple product of all time. And where is he now?

larusso a day ago

I'm not sure if I/O Kit was written in this C++ subset just for speed. There was a controversy at the time: Apple announced Mac OS X and said that it wouldn't be compatible with current software; all partners would need to rewrite their software in Objective-C. This didn't go over well. Apple back-pedalled and introduced "Carbon", an API layer for C/C++ applications, as well as "Core Foundation", an underpinning of the Objective-C base framework "Foundation". This is also the reason why we have Obj-C++. The interesting part is that they managed to make the memory management toll-free, meaning an object allocated in the C/C++ world can be passed to Obj-C without extra overhead.

  • comex a day ago

    IOKit C++ is running in the kernel, so it's not really related to any of the technologies you mentioned which are all userland-only.

    • larusso 4 hours ago

      You may underestimate how many drivers had to be shipped and developed by external companies compared to today. For software / hardware companies that was a huge deal.

    • dcrazy 15 hours ago

      Being able to port your existing C++ driver to IOKit instead of rewriting it in Objective-C is a selling point. For some reason a lot of people seem to dislike writing an Objective-C shell around their C++.

darksaints 15 hours ago

It seems like the XNU kernel is architecturally super close to the Mach kernel, and XNU drivers architecturally work like Mach drivers, except that they are compiled into the kernel instead of running in userspace as separate processes. And it seems like the only reason for doing so is performance.

That makes me wonder: how hard would it be to run the XNU kernel in something like a “Mach mode”, where you take the same kernel and drivers but run them separately as the Mach microkernel was intended?

I feel like from a security standpoint, a lot of situations would gladly call for giving up a little bit of performance for the process isolation security benefits that come from running a microkernel.

Is anybody here familiar enough with XNU to opine on this?

mike_hearn 21 hours ago

That's a good history, but it skips over a lot of the nice security work that really distinguishes Apple's operating systems from Linux or Windows. There's a lack of appreciation out there for just how far ahead Apple now is when it comes to security. I sometimes wonder if one day awareness of this will grow and people working in sensitive contexts will be required to use a Mac by their CISO.

The keystone is the code signing system. It's what allows apps to be granted permissions, or to be sandboxed, and for that to actually stick. Apple doesn't use ELF like most UNIXs do, they use a format called Mach-O. The differences between ELF and Mach-O aren't important except for one: Mach-O supports an extra section containing a signed code directory. The code directory contains a series of hashes over code pages. The kernel has some understanding of this data structure and dyld can associate it with the binary or library as it gets loaded. XNU checks the signature over the code directory and the VMM subsystem then hashes code pages as they are loaded on demand, verifying the hashes match the signed hash in the directory. The hash of the code directory therefore can act as a unique identifier for any program in the Apple ecosystem. There's a bug here: the association hangs off the Mach vnode structure so if you overwrite a signed binary and then run it the kernel gets upset and kills the process, even if the new file has a valid signature. You have to actually replace the file as a whole for it to recognize the new situation.
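
For a sense of what the kernel is actually parsing, here is an abridged sketch of the code directory header along the lines of xnu's cs_blobs.h (fields trimmed and from memory, so treat names and order as approximate):

  #include <stdint.h>

  /* Abridged sketch of the signed code directory blob. The per-page hash
     slots follow this header; the VM layer hashes code pages as they fault
     in and compares them against these slots. */
  typedef struct {
      uint32_t magic;          /* CSMAGIC_CODEDIRECTORY */
      uint32_t length;         /* total length of the blob */
      uint32_t version;
      uint32_t flags;
      uint32_t hashOffset;     /* offset of hash slot 0 */
      uint32_t identOffset;    /* offset of the identifier string */
      uint32_t nSpecialSlots;  /* hashes of Info.plist, entitlements, ... */
      uint32_t nCodeSlots;     /* one hash per page of code */
      uint32_t codeLimit;      /* bytes of the file covered by the hashes */
      uint8_t  hashSize;       /* e.g. 32 for SHA-256 */
      uint8_t  hashType;
      uint8_t  platform;
      uint8_t  pageSize;       /* log2(page size), e.g. 12 for 4 KiB pages */
      /* ... later versions append scatter, team ID and exec segment fields ... */
  } CS_CodeDirectory_sketch;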

On top of this foundation Apple adds code requirements. These are programs written in a small expression language that specifies constraints over aspects of a code signature. You can write a requirement like, "this binary must be signed by Apple" or "this binary can be of any version signed by an entity whose identity is X according to certificate authority Y" or "this binary must have a cdhash of Z" (i.e. be that exact binary). Binaries can also expose a designated requirement, which is the requirement by which they'd like to be known by other parties. This system initially looks like overkill but enables programs to evolve whilst retaining a stable and unforgeable identity.
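
Requirements can be evaluated programmatically too. A minimal sketch (the path and the requirement text are just examples):

    #include <stdio.h>
    #include <CoreFoundation/CoreFoundation.h>
    #include <Security/Security.h>

    int main(void) {
        CFURLRef url = CFURLCreateWithFileSystemPath(NULL,
            CFSTR("/Applications/Safari.app"), kCFURLPOSIXPathStyle, true);
        SecStaticCodeRef code = NULL;
        SecRequirementRef req = NULL;

        SecStaticCodeCreateWithPath(url, kSecCSDefaultFlags, &code);
        /* "this binary must be signed by Apple", in the requirement language */
        SecRequirementCreateWithString(CFSTR("anchor apple"), kSecCSDefaultFlags, &req);

        OSStatus status = SecStaticCodeCheckValidity(code, kSecCSDefaultFlags, req);
        printf("requirement %s\n", status == errSecSuccess ? "satisfied" : "not satisfied");
        return 0;
    }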

The kernel exposes the signing identity of tasks to other tasks via ports. Requirements can then be imposed on those ports using a userspace library that interprets the constraint language. For example, if a program stores a key in the system keychain (which is implemented in user space) the keychain daemon examines the designated requirement of the program sending the RPC and ensures it matches future requests to use the key.

This system is abstracted by entitlements. These are key=value pairs that express permissions. Entitlements are an open system and apps can define their own. However, most entitlements are defined by Apple. Some are purely opt-in: you obtain the permission merely by asking for it and the OS grants it automatically and silently. These seem useless at first, but allow the App Store to explain what an app will do up front, and more generally enable a least-privilege stance where apps don't have access to things unless they need them. Some require additional evidence like a provisioning profile: this is a signed CMS data structure provided by Apple that basically says "apps with designated requirement X are allowed to use restricted entitlement Y", and so you must get Apple's permission to use them. And some are basically abused as a generic signed flags system; they aren't security related at all.
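
Entitlements are also queryable at runtime through the SecTask API. A small sketch for the current process (a daemon vetting an RPC peer would instead build the SecTaskRef from the sender's audit token with SecTaskCreateWithAuditToken; the entitlement key here is just an example):

    #include <stdio.h>
    #include <CoreFoundation/CoreFoundation.h>
    #include <Security/SecTask.h>

    int main(void) {
        SecTaskRef task = SecTaskCreateFromSelf(kCFAllocatorDefault);
        CFTypeRef value = SecTaskCopyValueForEntitlement(task,
            CFSTR("com.apple.security.app-sandbox"), NULL);
        printf("entitlement is %s\n", value ? "present" : "absent");
        if (value) CFRelease(value);
        CFRelease(task);
        return 0;
    }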

The system is then extended further, again through cooperation of userspace and XNU. Binaries being signable is a start but many programs have data files too. At this point the Apple security system becomes a bit hacky IMHO: the kernel isn't involved in checking the integrity of data files. Instead a plist is included at a special place in the slightly ad-hoc bundle directory layout format, the plist contains hashes of every data file in the bundle (at file not page granularity), the hash of the plist is placed in the code signature, and finally the whole thing is checked by Gatekeeper on first run. Gatekeeper is asked by the kernel if it's willing to let a program run and it decides based on the presence of extended attributes that are placed on files and then propagated by GUI tools like web browsers and decompression utilities. The userspace OS code like Finder invokes Gatekeeper to check out a program when it's been first downloaded, and Gatekeeper hashes every file in the bundle to ensure it matches what's signed in the binaries. This is why macOS has this slow "Verifying app" dialog that pops up on first run. Presumably it's done this way to avoid causing apps to stall when they open large data files without using mmap, but it's a pity because on fast networks the unoptimized Gatekeeper verification can actually be slower than the download itself. Apple doesn't care because they view out-of-store distribution as legacy tech.
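
The extended attribute in question is com.apple.quarantine, and you can read it directly. A small sketch, with a hypothetical download path:

    #include <stdio.h>
    #include <sys/xattr.h>

    int main(void) {
        const char *path = "/Users/me/Downloads/SomeApp.dmg";   /* hypothetical */
        char buf[256];
        /* note: macOS getxattr takes extra position/options args vs. Linux */
        ssize_t len = getxattr(path, "com.apple.quarantine", buf, sizeof buf - 1, 0, 0);
        if (len < 0) {
            perror("getxattr");   /* no attribute => the file isn't quarantined */
            return 1;
        }
        buf[len] = '\0';
        /* the value is roughly "flags;hex timestamp;downloading agent;UUID" */
        printf("com.apple.quarantine = %s\n", buf);
        return 0;
    }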

Finally there is Seatbelt, a Lisp-based programming language for expressing sandbox rules. These files are compiled in userspace to some sort of bytecode that's evaluated by the kernel. The language is quite sophisticated and lets you express arbitrary rules for how different system components interact and what they can do, all based on the code signing identities.
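
For a taste of what that looks like in practice, here's a sketch using the old sandbox_init(3) C interface, which is deprecated but (as far as I know) still ships; the Lisp-like SBPL syntax is shown only in the comment, since custom profiles are normally compiled and loaded for you by sandbox-exec or the system daemons:

    #include <stdio.h>
    #include <sandbox.h>

    /* Custom Seatbelt/SBPL profiles look roughly like:
           (version 1)
           (deny default)
           (allow file-read* (subpath "/usr/lib"))
       Here we just adopt one of the predefined named profiles. */
    int main(void) {
        char *err = NULL;
        if (sandbox_init(kSBXProfilePureComputation, SANDBOX_NAMED, &err) != 0) {
            fprintf(stderr, "sandbox_init: %s\n", err);
            sandbox_free_error(err);
            return 1;
        }
        /* From here on the process can compute, but little else... */
        if (fopen("/etc/hosts", "r") == NULL)
            perror("fopen (expected to fail under the sandbox)");
        return 0;
    }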

The above scheme has an obvious loophole that was only closed in recent releases: data files might contain code and they're only checked once. In fact for any Electron or JVM app this is true because the code is in a portable format. So, one app could potentially inject code into another by editing data files and thus subvert code signing. To block this in modern macOS Seatbelt actually sandboxes every single app running. AFAIK there is no unsandboxed code in a modern macOS. One of the policies the sandbox imposes is that apps aren't allowed to modify the data files of other apps unless they've been granted that permission. The policy is quite sophisticated: apps can modify other apps if they're signed by the same legal entity as verified by Apple, apps can allow others matching code requirements to modify them, and users can grant permission on demand. To see this in action go into Settings -> Privacy & Security -> App Management, then turn it off for Terminal.app and (re)start it. Run something like "vim /Applications/Google Chrome.app/Contents/Info.plist" and observe that although the file has rw permissions vim thinks it's read-only.
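
You can reproduce the same check from a few lines of C; whether the open succeeds depends on what you've granted under App Management (same example path as above):

    #include <stdio.h>
    #include <errno.h>
    #include <string.h>
    #include <fcntl.h>
    #include <unistd.h>

    int main(void) {
        const char *path = "/Applications/Google Chrome.app/Contents/Info.plist";
        int fd = open(path, O_WRONLY);
        if (fd < 0) {
            /* with App Management denied this typically fails (e.g. EPERM),
               even though the POSIX permission bits say rw */
            printf("open for write failed: %s\n", strerror(errno));
            return 1;
        }
        printf("open for write succeeded\n");
        close(fd);
        return 0;
    }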

Now, I'll admit that my understanding of how this works ends here because I don't work for Apple. AFAIK the kernel doesn't understand app bundles, and I'm not sure how it decides whether an open() syscall should be converted to read only or not. My guess is that the default Seatbelt policy tells the kernel to do an upcall to a security daemon which understands the bundle format and how to read the SQLite permission database. It then compares the designated requirement of the opener against the policies expressed by the bundle and the sandbox to make the decision.

  • adrian_b 21 hours ago

    I do not think that "security" is the appropriate name for such features.

    In my opinion "security" should always refer to the security of the computer owners or users.

    These Apple features may be used for enhancing security, but the main purpose for which they were designed is to give the computer vendor enhanced control over how the computer they have sold (and which supposedly no longer belongs to them) is used by its nominal owner, i.e. by allowing Apple to decide which programs the end user may run.

    • mike_hearn 20 hours ago

      On macOS the security system is open even though the codebase is closed. You can disable SIP and get full root access. Gatekeeper can be configured to trust some authority other than Apple, or disabled completely. You can write and load your own sandbox policies. These things aren't well known and require reading obscure man pages, but the capabilities are there.

      Even in the default out-of-the-box configuration, Apple isn't exercising editorial control over what apps you can run. Out of store distribution requires only a verified identity and a notarization pass, but notarization is a fully automated malware scan. There's no human in the loop. The App Store is different, of course.

      Could Apple close up the Mac? Yes. The tech is there to do so and they do it on iOS. But... people have been predicting they'd do this from the first day the unfortunately named Gatekeeper was introduced. Yet they never have.

      I totally get the concern, and in the beginning I shared it, but at some point you have to just stop speculating and give them credit for what they've actually done. It's much easier to distribute an app Apple executives don't like to a Mac than it is to distribute an app Linux distributors don't like to Linux users, because Linux app distribution barely works if you go "out of store" (distro repositories). In theory it should be the other way around, but it's not.

      • p_ing 13 hours ago

        > Even in the default out-of-the-box configuration, Apple isn't exercising editorial control over what apps you can run

        Perhaps not in the strictest sense, but Apple continues to ramp up the editorial friction for the end user to run un-notarized applications.

        I feel (and felt, before macOS 15) that right-click Open was an OK approach, but as we know that's gone; now it's xattr or Settings.app. More egregious is the monthly reminder that an application is doing something that you want it to do.

        A level between "disable all security" and what macOS 15 introduces would be appreciated.

        • mike_hearn an hour ago

          More knobs would be nice, yes. Still, nothing stops you from using a customized file manager, web browser, archiver, etc. that doesn't set the xattrs at all.

    • saagarjha 20 hours ago

      I think you went for a lazy reply rather than actually reading the comment through. Most of the things mentioned here directly improve security for the computer's owner.

      • lapcat 20 hours ago

        > I think you went for a lazy reply rather than actually reading the comment through.

        https://news.ycombinator.com/newsguidelines.html

        Your reply could have omitted the first sentence.

        Many years ago, at Macworld San Francisco, I met "Perry the Cynic", the Apple engineer who added code signing to Mac OS X. Nice person, but I also kind of hate him and wish I could travel back in time to stop this all from happening.

        • saagarjha 4 hours ago

          It could have, but I would just replace it with the same link you posted. And we all hate Perry sometimes :)

mannyv 15 hours ago

I suppose with unified memory there's no real difference between the kernel and userspace; it's just different security zones.

The MMU era used separate memory spaces to enforce security, but it's probably safer in the long run to actually have secure areas instead of "accidentally secure areas" that aren't that secure.

  • dcrazy 14 hours ago

    I think you’re misunderstanding “unified memory”. That term refers to whether the GPU has its own onboard memory chips which must be populated by a DMA transfer. It doesn’t refer to whether the system has an MMU.

emchammer a day ago

Couldn't Apple have used ZFS instead of inventing APFS? Maybe modifying it to use less physical memory?

  • linguae a day ago

    I remember reading back in 2007-2008 that Apple was interested in bringing ZFS support to Mac OS X, but discussions ended once Oracle purchased Sun. This was a bummer; I would’ve loved ZFS on a Mac.

    After a cursory Google search, I found this article:

    https://www.zdnet.com/article/zfs-on-snow-leopard-forget-abo...

    • leoh a day ago

      Kind of surprising that the Oracle deal would have killed it given that Jobs and Ellison were such close friends.

      • krger 21 hours ago

        They were probably close friends because they weren't business competitors.

  • cosmic_cheese a day ago

    IIRC that was something they had been working on, but it got axed when ZFS changed hands and licensing became potentially thorny. My memory may be failing me though.

  • wpm a day ago

    Around the time of Snow Leopard, it was rumored. I assume the Oracle buyout of Sun around the same time had a big part in killing that particular idea.

  • inkyoto a day ago

    Supporting ZFS in a UNIX kernel requires excessively extensive modifications to the design and implementation of the VMM, namely:

      1. Integration of the kernel's VM with ZFS's adaptive replacement cache (ARC), which manages its own caching outside the kernel's page cache – memory pressure cooperation, page accounting and unified memory management. It also requires extensive VM modifications to support ZFS's controlled page eviction, fine-grained dirty page tracking, plus other stuff.
    
      2. VMM alignment with the ZFS transactional semantics and intent logs – delayed write optimisations, efficient page syncing.
    
      3. Support for large memory pages and proper memory page alignment – support for superpages (to reduce TLB pressure and to map large ZFS blocks efficiently) and I/O alignment awareness (to ensure proper alignment of memory pages to avoid unnecessary copies).
    
      4. Memory-mapped I/O: different implementation of mmap and support for lazy checksumming for mmap pages.
    
      5. Integration with kernel thread management and scheduling, and co-operation with the VMM memory allocators.
    
      6. … and the list goes on and on.
    
    ZFS is just not the right answer for consumer-facing and mobile/portable devices: it is a heavyweight server design with vastly different design provisions, and the answer to an entirely different question.

    • AndrewDavis a day ago

      > Supporting ZFS in a UNIX kernel requires excessively extensive modifications to the design and implementation of the VMM, namely:

      FYI: Apple did a bunch of that work. They ported ZFS to OS X shortly after it was open sourced, with read-only support landing in 10.5, and with full support listed as an upcoming feature in 10.6.

      But something happened and they abandoned it. The rumour is a sun exec let the cat out of the bag about it being the next main filesystem for osx (ie not just support for non root drives) and this annoyed Jobs so much he canned the whole project.

      • inkyoto a day ago

        Yes, they did, but… it was more of a proof of concept and a promise rather than a production-quality release. They also had the OS X Server product line back then (no more), which ZFS would have been the best fit for, and they released the OS X ZFS port before the advent of the first iPhone.

        It is not a given that ZFS would have performed well within the tight hardware constraints of the first ten or so generations of the iPhone – file systems such as APFS, btrfs or bcachefs are better suited for the needs of mobile platforms.

        Another conundrum with ZFS is that ZFS disk pools really, really want a RAID setup, which is not a consumer-grade thing, and Apple is a consumer company. Even if ZFS had seen the light of day back then, there is no guarantee it would have lived on – I am not sure, anyway.

      • lunarlull a day ago

        > The rumour is a sun exec let the cat out of the bag about it being the next main filesystem for osx (ie not just support for non root drives) and this annoyed Jobs so much he canned the whole project.

        Very petty if true.

        • MBCook a day ago

          It would fit Jobs, though.

          That’s one of the famous rumors.

          As others here have said, Oracle bought Sun two years later. Between the increased memory requirements, uncertainty about Sun's status as a going concern, and who knows what else, maybe it really did make sense not to go forward.

ladyanita22 an hour ago

I'm a bit disappointed at the lack of interest Apple seems to have in Rust, given their focus on performance, UX and security.

  • pjmlp 7 minutes ago

    They created Swift as a replacement for C, C++, and Objective-C; why should they bother with Rust?

    Even Google, on Android and ChromeOS, is not exposing Rust to userspace; Java, Kotlin, C, C++, JavaScript, and TypeScript remain the official userspace languages.

  • melodyogonna an hour ago

    I believe their plan is to make Swift good enough to use in high-performance, low-level scenarios. Some of their recent work has been to reduce implicit copies.

comex a day ago

At the risk of nitpicking, there are a bunch of things that are not quite right. Nonexhaustive list:

- Discussion of paging mixes together some concepts as I described in [1].

- Mach port "rights" are not directly related to entitlements. Port rights are part of the original Mach design; entitlements are part of a very different, Apple-specific security system grafted on much later. They are connected in the sense that Mach IPC lets the receiver get an "audit token" describing the process that sent them, which it can then use to look up entitlements.

- All IOKit calls go through Mach IPC, not just asynchronous events.

- "kmem" (assuming this refers to the kmem_* functions) is not really a “general-purpose kernel malloc”; that would be kalloc. The kmem_* functions are sometimes used for allocations, but they’re closer to a “kernel mmap” in the sense that they always allocate new whole pages.

- It’s true that xnu can map the same physical pages into multiple tasks read-only, but that’s nothing special. Every OS does that if you use mmap or similar APIs. What does make the shared cache special is that it can also share physical page tables between tasks.

- The discussion about “shared address space” is mixing things up.

The current 64-bit behavior is the same as the traditional 32-bit behavior: the lower half of the address space is reserved for the current user process, and the upper half is reserved for the kernel. This is typically called a shared address space, in the sense that the kernel page tables are always loaded, and only page permissions prevent userland from accessing kernel memory. Though you could also think of it as a 'separate' address space in the sense that userland and kernel stick to separate addresses. Anyway, this approach is more efficient (because you don't have to swap page tables for every syscall) and it's the standard thing kernels do.

What was tricky and unusual was the intermediate 32-bit behavior where the kernel and user page tables actually were completely independent (so the same address would mean one thing in user mode and another thing in kernel mode). This allowed 32-bit user processes to use more memory (4GB rather than 2GB), but at the cost of making syscalls more expensive.

Even weirder, in the same era, xnu could even run 64-bit processes while itself being 32-bit! [2]

- The part about Secure Enclave / Exclaves does not explain the main difference between them: the Secure Enclave is its own CPU, while Exclaves are running on the main CPU, just in a more-trusted context.

- Probably shouldn't describe dispatch queues as a "new technique". They're more than 15 years old, and now they're sort of being phased out, at least as a programming model you interact with directly, in favor of Swift Concurrency. To be fair, Swift Concurrency uses libdispatch as a backend.

[1] https://news.ycombinator.com/item?id=43599230

[2] https://superuser.com/questions/23214/why-does-my-mac-os-x-1...

  • saagarjha a day ago

    Also,

    > As of iOS 15, Apple even allows virtualization on iOS (to run Linux inside an iPad app, for example, which some developers have demoed), indicating the XNU hypervisor is capable on mobile as well, though subject to entitlement.

    Apple definitely does not allow this; in fact the hypervisor code has been removed from the kernel as of late.

lapcat a day ago

Question for the author, who is here in the comments: for clarification, to what extent is the article a deep dive into the OS itself (e.g., reverse engineering) vs. a deep dive into the extant literature on the OS?

whalesalad a day ago

I’ve been wanting to understand Darwin at this depth for a long time. Great read!

  • jshier a day ago

    Mac OS X Internals by Singh is one of my favorite books, such a great in depth examination of Mac OS X circa 10.4. I really wish there was an updated version.

    Edit: I see it's even cited at the end of this article. Truly a source for the (macOS) ages.

    • wpm a day ago

      Jonathan Levin’s three part series “*OS Internals” is that update, but they stopped working on and writing about Darwin around Catalina.

  • kccqzy a day ago

    I've also wanted to understand Windows NT at this depth for a while. Skip the Win32 stuff, and discuss what's underneath it. As I understand Win32 is just one personality; there was also Windows Services for UNIX in the Windows XP days and Subsystem for UNIX-based Applications in Windows Vista. The underlying NT kernel is flexible enough to allow POSIX compliance. That would be an interesting read.

    • p_ing a day ago

      Windows Internals is the book you want.

      Or Inside Windows NT, if you want "version 1" of the Internals series. Or read the Windows NT OS/2 Design Workbook - https://computernewb.com/~lily/files/Documents/NTDesignWorkb....

      Yes, Win32 is just one personality, but a required one. OpenNT, Interix, SFU, SUA will ride alongside Win32. And of course there was the official OS/2 personality.

      • nunez 16 hours ago

        100%. Russinovich, who now heads up Azure, co-wrote many of the follow-on books, and David Solomon, who co-wrote the NT kernel, co-wrote the first few. The latest version of this book covers Windows 10/Server 2016. They are very, very good.

    • skissane a day ago

      > As I understand Win32 is just one personality

      Not really... although NT was designed to run multiple "personalities" (or "environment subsystems" to use the official term), relatively early in its development they decided to make Win32 the "primary" environment subsystem, with the result that the other two subsystems (OS/2 and POSIX) ended up relying on Win32 for essential system services.

      I think this multiple personalities thing was the original vision but it never really took off in the way its original architects intended – although there used to be OS/2 and POSIX subsystems, Microsoft never put a great deal of effort into them, and now they are both dead, so Win32 is the only environment subsystem left.

      Yes, there is WSL, but: WSL1 is not an environment subsystem in the classic NT sense – it has a radically different implementation from the old OS/2 and POSIX subsystems, a "picoprocess provider". And WSL2 is just a Linux virtual machine.

      • ForOldHack a day ago

        "At the same time, NT (up to and including Windows 2000) shipped with an OS/2 subsystem which ran character-mode 16-bit OS/2 applications." From OS/2 museum.

        • ForOldHack 19 hours ago

          Turns out this is not true. Confirmed that os/2 2.0 was a skinning and compatibility layer for NT it came out for OS/2, not with windows and not from Microsoft. It came with OS/2 and from IBM. No idea whether it supported HPFS+ but it was not a subsystem.

          • skissane 16 hours ago

            > Turns out this is not true.

            No, what you quoted in your comment you are replying to is accurate. What you are saying in this comment isn’t.

            > Confirmed that os/2 2.0 was a skinning and compatibility layer for NT it came out for OS/2,

            This is confused. OS/2 was not a “skinning and compatibility layer for NT” it was a completely separate operating system.

            I think at one point NT was going to be OS/2 2.0, and then it was going to be OS/2 3.0 - but the OS/2 2.0 which eventually ended up shipping had nothing to do with NT, it was IBM’s independent work, in which Microsoft was uninvolved (except maybe in its early stages).

      • ForOldHack a day ago

        WSL2 is just a Linux VM, and the POSIX subsystem is just a kludge. I never heard of an OS/2 subsystem for NT, which Cutler would take extreme umbrage at.

        I have three charts on my wall (now four): the Unix timeline, the Windows timeline, the Linux distribution tree, and now a very decent Mac OS X timeline.

        The personalities became containers, which are just the Windows version of common subsystem virtualization. Containers were based on Virtual PC, but with the genius of Mark Russinovich.

        • skissane a day ago

          > I never heard of an OS/2 subsystem for NT,

          It was there from NT 3.1 until Windows 2000; it was removed in Windows XP onwards.

          It was very limited – it only supported character mode 16-bit OS/2 1.x applications. 32-bit apps, which IBM introduced with OS/2 2.0, were never supported. Microsoft offered an extra cost add-on called "Microsoft OS/2 Presentation Manager For Windows NT" aka "Windows NT Add-On Subsystem for Presentation Manager", which added support for GUI apps (but still only 16-bit OS/2 1.x apps) – which was available for NT version 3.1 thru 4.0, I don't believe it was offered for Windows 2000.

          The main reason why it existed – OS/2 1.x was jointly developed by IBM and Microsoft, with both having the right to sell it – so some business customers bought Microsoft OS/2 and then used it as the basis for their business applications – when Microsoft decided to replace Microsoft OS/2 with Windows NT, they needed to provide these customers with backward compatibility and an upgrade path, lest they jump ship to IBM OS/2 instead. But Microsoft never tried to support 32-bit OS/2, since Microsoft never sold it, and given their "divorce" with IBM they didn't have the rights to ship it (possibly they might have retained rights to some early in-development version of OS/2 2.0 from before the breakup, but definitely not the final shipped OS/2 2.0 version) – the OS/2 subsystem wasn't some completely from-scratch emulation layer, it was actually based off the OS/2 code, with the lower levels rewritten to run under Windows NT, but higher level components included OS/2 code largely unchanged.

          > which Cutler would take extreme umbrage at.

          Windows NT was originally called NT OS/2, because it was originally going to be Microsoft OS/2 3.0. Part way through development – but at which point Cutler and his team had already got the basics of the OS up and running on Microsoft Jazz workstations (in-house Microsoft workstation design using Intel i860 RISC CPUs) – Microsoft and IBM had a falling out and there was a change of strategy, instead of NT providing a 32-bit OS/2 API, they'd extend the 16-bit Windows 3.x API to 32-bit and use that. So I doubt Cutler would take "extreme umbrage" at something which was the plan at the time he was hired, and remained the plan through the first year or two of NT's development.

          > The personalities became containers which is just the windows version of common subsystem virtualization.

          Containers and virtualization are (at least somewhat) successors to personalities / environment subsystems in terms of the purpose they serve – but in terms of the actual implementation architecture, they are completely different.

          • p_ing 20 hours ago

            > 32-bit apps, which IBM introduced with OS/2 2.0, were never supported.

            This was obviously due to the divorce, but also because the Cruiser API wasn't finalized.

            > Our initial OS/2 API set centers around the evolving 32-bit Cruiser, or OS/2 2.0 API set. (The design of Cruiser APIs is being done in parallel with the NT OS/2 design.)

            ...

            > Given the nature of OS/2 design (the joint development agreement), we have had little success in influencing the design of the 2.0 APIs so that they are portable and reasonable to implement on non-x86 systems.

  • tansanrao a day ago

    It’s my first time condensing my research notes into a blog post like this, glad you liked it!

    • ForOldHack a day ago

      It was easily comprehensive enough that I printed it out and was showing people today what the differences were in the last couple of OS releases, while ruminating about Snow Leopard. No one mentions that the Blue Box became Rosetta, and Rosetta 2 did the same thing in the switch from Intel to ARM.

      There are some very tiny points, but this is easily the best to date. (I started with Rhapsody, Linux in Swedish, and NT 3.1.) (Ran MKLinux on a 7100, but never got accelerated video to work.)

      • ForOldHack a day ago

        I should add, I have never seen accelerated video in Linux, except for built-in Intel video, and it's fine.

opengears a day ago

Does this give us any indication on when Apple will be ditching x86 support?

  • snovymgodym 16 hours ago

    Late 2027 or 2028, most likely. A macOS version's lifespan is about 3 years, give or take, so we'll have a better idea once they announce whether or not macOS 16 (Sequoia's successor) will support Intel Macs.

    I have a hunch we'll get one more MacOS version with Intel support since they were still making Mac Minis and Pros with Intel chips in the first half of 2023.

  • SG- 20 hours ago

    Any day now; support usually ends 6-7 years after the hardware's release.

    • icedchai 8 hours ago

      We probably have a few more years to go then. Consider that they were still selling Mac Pro 2019 Intel systems as late as June, 2023.

      • philistine 8 hours ago

        I'm not convinced this year's OS release will support Intel. When they switched to Intel, Apple was quicker to dump PowerPC than people remember.

        • icedchai 5 hours ago

          I'd be surprised if they don't support it for at least one more release. With Intel, they discontinued the PowerPC systems much faster (by the end of 2006.) They were still selling Intel hardware 2 years ago, a couple years into the ARM / Apple Silicon transition.

fithisux a day ago

They should have fostered a better FOSS community around XNU; now that they've moved to ARM, there should have been a runnable distribution for x64.

adolph 14 hours ago

Can someone speak to the statement below from the article? I thought Objective-C did not have a runtime, unlike memory-managed languages such as C#.

> avoid the runtime overhead of Objective-C in the kernel

From Apple docs[0]:

You typically don’t need to use the Objective-C runtime library directly when programming in Objective-C. This API is useful primarily for developing bridge layers between Objective-C and other languages, or for low-level debugging.

0. https://developer.apple.com/documentation/objectivec/objecti...

  • twoodfin 14 hours ago

    Objective-C does have a runtime that maintains all the state necessary to implement the APIs in the documentation you linked.

    For example, how to map class objects to string representations of their names.
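
    A tiny sketch of that C-level runtime interface (NSObject lives in libobjc itself, so this links with just -lobjc):

        #include <stdio.h>
        #include <objc/runtime.h>

        int main(void) {
            Class cls = objc_getClass("NSObject");              /* name -> class object */
            if (cls)
                printf("class name: %s\n", class_getName(cls)); /* class object -> name */
            return 0;
        }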

    • dcrazy 13 hours ago

      Yep, ObjC programs call into the runtime every time they call a method.

devmtk a day ago

oh interesting