Kary
28⟡247
Zea Pou
1369
Gregorian 2024-09-11
Khayyamian 976/06/21
Shamsi 1403/06/21
Quotes & Excerpts

For the longest time, Unix only offered the simplistic read/write/execute permission sets for each file. NT, on the other hand, came with advanced ACLs from the get go—something that’s still a sore spot on Unix. Even though Linux and the BSDs now have ACLs too, their interfaces are inconsistent across systems and they feel like an alien add-on to the design of the system. On NT, ACLs work at the object level, which means they apply consistently throughout all kernel features.

The thing that makes the I/O subsystem of NT much more advanced than Unix’s is the fact that its interface is asynchronous in nature and has been like that since the very beginning.

A major goal of NT was to be compatible with applications written for legacy Windows, DOS, OS/2, and POSIX.

This need for compatibility forced NT’s design to be significantly different than Unix’s. In Unix, user-space applications talk to the kernel directly via its system call interface, and this interface is the Unix interface. Oftentimes, but not always, the C library provides the glue to call the kernel and applications never issue system calls themselves—but that’s a minor detail.

Contrast this to NT where applications do not talk to the executive (the kernel) directly. Instead, each application talks to one specific protected subsystem, and these subsystems are the ones that implement the APIs of the various operating systems that NT wanted to be compatible with. These subsystems are implemented as user-space servers (they are not inside the NT “microkernel”).

Unix systems have never been big on supporting arbitrary drivers: remember that Unix systems were typically coupled to specific machines and vendors. NT, on the other hand, intended to be an OS for “any” machine and was sold by a software company, so supporting drivers written by others was critical. As a result, NT came with the Network Driver Interface Specification (NDIS), an abstraction to support network card drivers with ease. To this day, manufacturer-supplied drivers are just not a thing on Linux, which leads to interesting contraptions like the ndiswrapper, a very popular shim in the early 2000s to be able to reuse Windows drivers for WiFi cards on Linux.

Even though protobuf and gRPC may seem like novel ideas due to their widespread use, they are based on old ideas. On Unix, we had Sun RPC from the early 1980s, primarily to support NFS. Similarly, NT shipped with built-in RPC support via its own DSL—MIDL, used to specify interface definitions and to generate code for remote procedures—and its own facility to implement RPC clients and servers.

One important piece of the NT executive is the Hardware Abstraction Layer (HAL), a module that provides abstract primitives to access the machine’s hardware and that serves as the foundation for the rest of the kernel. This layer is the key that allows NT to run on various architectures, including i386, Alpha, and PowerPC. To put the importance of the HAL in perspective, contemporary Unixes were coupled to a specific architecture: yes, Unix-the-concept was portable because there existed many different variants for different machines, but the implementation was not.

Named pipes are a local construct in Unix: they offer a mechanism for two processes on the same machine to talk to each other with a persistent file name on disk. NT has this same functionality, but its named pipes can operate over the network. By placing a named pipe on a shared file system, two applications on different computers can communicate with each other without having to worry about the networking details.

NTFS was a really advanced file system for its time even if we like to bash on it for its poor performance (a misguided claim). The I/O subsystem of NT, in combination with NTFS, brought 64-bit addressing, journaling, and even Unicode file names. Linux didn’t get 64-bit file support until the late 1990s and didn’t get journaling until ext3 launched in 2001. Soft updates, an alternate fault tolerance mechanism, didn’t appear in FreeBSD until 1998. And Unix represents filenames as nul-terminated byte arrays, not Unicode.

Other features that NT included at launch were disk striping and mirroring—what we know today as RAID—and device hot plugging. These features were not a novelty given that SunOS had included RAID support since the early 1990s, but what’s interesting is that these were all accounted for as part of the original design.

The NT kernel has various interrupt levels (SPLs in BSD terminology) to determine what can interrupt what else (e.g. a clock interrupt has higher priority than a disk interrupt) but, more importantly, kernel threads can be preempted by other kernel threads. This is “of course” what every high-performance Unix system does today, but it’s not how many Unixes started: those systems began with a kernel that supported neither preemption nor multiprocessing; then they added support for user-space multiprocessing; and then they added kernel preemption. [...] So it is interesting to see that NT started with the right foundations from its inception.

As much as we like to bash Windows for security problems, NT started with a security design that was advanced by early-Internet standards, given that the system works, basically, as a capability-based system. The first user process that starts after logon gets an access token from the kernel representing the privileges of the user session, and the process and its subprocesses must supply this token to the kernel to assert their privileges. This is different from Unix where processes just have identifiers and the kernel needs to keep track of what each process can do in the process table.

One thing that put NT ahead of contemporary Unix systems is that the kernel itself can be paged out to disk too. Obviously not the whole kernel—if it all were pageable, you’d run into the situation where resolving a kernel page fault requires code from a file system driver that was paged out—but large portions of it are. This is not particularly interesting these days because kernels are small compared to the typical installed memory on a machine, but it certainly made a big difference in the past, when every byte was precious.


Day's Context
Open Books