• 0 Posts
  • 6 Comments
Joined 3 years ago
Cake day: November 6th, 2022

  • From watching the questions the author was asking others just before writing this, I think at least part of the purpose of this article is to draw attention to how the shell, the kernel, the terminal emulator and other components work together to provide these features. Or, to look at it another way, which of the components is the one responsible for each part, so that the reader might know which part they need to reconfigure if they want to achieve a particular result.


  • I guess the part I’m having trouble with is that the microkernel vs. monolith question is mostly about what privileges different parts of the system run under, rather than how they are compiled and loaded into memory and what CPU features they use.

    I would agree that any sort of modular kernel could make it possible to choose on a driver-by-driver basis whether to load a module built with vector instructions or without. But I don’t think it’s greatly important whether those modules are running as isolated userspace processes with limited privilege or all inside the kernel with supervisor privilege: the important thing is that you be able to decide dynamically (at driver load time) whether to load the version built for the vector extension or the version that does not expect it.

    Whatever strategy you use, there is some variation of the problem of making sure different parts of the system can cooperate in their use of the vector registers. For drivers running as processes, microkernel-style, that probably looks like normal context switching over the whole vector register set. For a Linux-style kernel module, where the driver is just a bunch of functions loaded and linked into the main kernel address space, it’s a question of defining and following a consistent calling convention. That could have less overhead, because the callee knows which subset of the registers it uses and so only needs to preserve those across a call, whereas a full context switch would presumably need to save and restore all of them.

    (Since we’re talking hypotheticals here, I’m intentionally ignoring the detail that the Linux kernel typically avoids using extensions that involve additional CPU state, because that avoids the need to save and restore all of those extra registers on kernel entry and exit. I expect that this question of what Ubuntu supports is more about normal userspace programs and how they are compiled, rather than the kernel itself. I don’t know that for certain, though, since I don’t know exactly what they are planning to change about the distribution build process under the new policy.)


  • Perhaps I’ve missed something that disagrees with this, but as far as I know Ubuntu’s decision to target the latest profile is primarily a question of how they configure the compiler when building the binary packages: it will be configured so that, for example, the compiler’s auto-vectorization optimizations are allowed to transform scalar code into vector code when that’s productive, and so the resulting binaries are not guaranteed to run on a processor that doesn’t support V.

    The kernel itself has code to preserve the contents of all the extra registers when context switching between userspace processes that use V, but the kernel can detect support for V at runtime, so it doesn’t require separate kernel builds. This is similar to how adding kernel support for the various x86 SIMD extensions didn’t prevent the kernel from running on processors without them.

    Therefore I don’t think the kernel’s internal architecture makes a great deal of difference to this situation. Ubuntu could, if they wanted to, keep building packages with the compiler configured to target the old profile, in which case only software that explicitly uses the new extensions in its own source code (rather than relying on the compiler’s automatic optimizations) would exploit them, and everything else would leave the new extensions unused even when they are available. I assume Ubuntu just wants to maximise the performance benefits of using V and is betting that new hardware will become available soon enough that they can get away with not maintaining two parallel sets of packages targeting different RISC-V profiles for the entire LTS period.

    (Separately, I don’t think there’s anything to prevent having a Linux loadable kernel module containing V instructions and loading it into a kernel that doesn’t otherwise use them, as long as the code in that module is careful not to leave the V registers in an inconsistent state when control returns to the rest of the kernel. The main kernel would be oblivious, unless the module were buggy.)


  • In the early days (before everyone started cloning the IBM PC) replacing IO.SYS was indeed how MS-DOS was ported to other platforms, and so I suppose in theory that could work. However:

    • UEFI, unlike BIOS, separates the boot phase from the runtime phase and provides far less functionality in the runtime phase. To get functionality comparable to BIOS I expect this DOS port would need to remain in the boot phase for its entire runtime.
    • Since UEFI expects calls to be made in protected mode while the BIOS API is real mode, this compatibility layer would presumably need to switch into protected mode each time a BIOS service is called, in order to forward the request to UEFI. That’s amusing because it’s the opposite of the typical arrangement, where DPMI was used to call the real-mode API from protected-mode software.
    • Because most of the commercial success of MS-DOS and friends was on IBM PC clones rather than on the early systems that relied on the IO.SYS abstraction layer, there’s not much extant DOS software that targets only the DOS API. Some of it expects to call directly into the IBM ROM BIOS, and lots of it bypasses even that layer and talks directly to legacy hardware devices that might not exist on a pure-UEFI system without a BIOS compatibility layer, so it’s not clear to me that much software would actually run on this hypothetical FreeDOS port to the UEFI API.

    Honestly, if I were trying to do something like this I’d probably shoot for a very minimal Linux image that boots straight into something like QEMU/KVM rendering against the Linux framebuffer/KMS API, since the kernel would then provide drivers for the real hardware (instead of relying on the more limited drivers in the UEFI firmware) and QEMU can already emulate the various legacy hardware that software of the DOS era tends to expect to communicate with directly.
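    A sketch of the kind of invocation I mean, assuming SDL2 built with its kmsdrm backend (the disk image name is a placeholder; the QEMU flags are standard ones):

```shell
# Run QEMU straight on the console via SDL's kmsdrm backend, with no
# X11 or Wayland in between. freedos.img is a placeholder disk image;
# -machine pc gives the classic PC machine type with its legacy
# devices, and -vga cirrus is a card DOS-era software broadly knows.
SDL_VIDEODRIVER=kmsdrm qemu-system-i386 \
    -machine pc -vga cirrus \
    -fda freedos.img -boot a
```

    The minimal Linux image would just need enough of an init to launch something like this and respawn it on exit.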

    Of course, that’s not nearly as satisfying a solution as running directly as a UEFI application! I’m just concerned that UEFI isn’t really designed to provide equivalent services to IBM-style BIOS, so it would be an uphill struggle.


  • Indeed, I was thinking about OSes like DOS that use the BIOS API even at runtime, for tasks like accessing disks.

    As you say, Linux is built for the same world that UEFI was built for, where the firmware is mostly used only to boot the system and for low-level tasks like power management. In that case, the “boot services” in UEFI help get the kernel loaded, and then the kernel takes over most of the hardware interactions. Linux uses BIOS in the same limited way it uses UEFI.

    But IO.SYS in DOS (on IBM PC-compatible platforms, at least) is effectively a wrapper around the BIOS interrupts, and applications running under DOS also expect to be able to interact with the BIOS directly sometimes. So I think doing what was asked would mean the OS effectively running inside the UEFI “boot services” environment, rather than the usual approach of the UEFI application dealing only with early boot and then transferring control fully to the OS.

    (UEFI does have a legacy compatibility layer that I’ve been ignoring for the sake of this discussion, because it’s normally built into your firmware rather than something you can add yourself. But it is technically possible for a BIOS implementation to run in that environment. I don’t think it’s possible for a normal UEFI application to use that facility, but I might be wrong about that.)


  • OSes that expect BIOS have some expectations that would be hard to meet in the UEFI application execution environment: BIOS ROM at a specific memory address, processor in real mode, and probably expecting to find some other legacy hardware even though that’s not strictly a BIOS thing.

    Maybe you could use the CPU’s virtualization features to implement a low-level virtual machine with a BIOS implementation in it, launched directly from the UEFI environment, but would the entire OS then be running in that VM? 🤔