

I guess the part I’m having trouble with is that the microkernel vs. monolith question is mostly about what privileges different parts of the system run with, rather than how they are compiled, how they are loaded into memory, or what CPU features they use.
I would agree that any sort of modular kernel could make it possible to choose, driver by driver, whether to load a module built with vector instructions or without them. But I don’t think it matters much whether those modules run as isolated userspace processes with limited privilege or all inside the kernel with supervisor privilege: the important thing is that you be able to decide dynamically, at driver load time, whether to load the version built for the vector extension or the version that doesn’t expect it.
Whatever strategy you use, there is some variation of the problem of making sure different parts of the system cooperate in their use of the vector registers. For drivers in processes, microkernel-style, that probably looks like normal context switching of the whole vector register set. For a Linux-style kernel module, where the driver is just a bunch of functions loaded and linked into the main kernel address space, it’s a question of defining and following a consistent calling convention. That could have less overhead, because the callee knows which subset of the registers it is using and so only needs to preserve those across a call, whereas a full context switch would presumably need to save and restore all of them.
(Since we’re talking hypotheticals here, I’m intentionally ignoring the detail that the Linux kernel typically avoids using extensions that involve additional CPU state, because that avoids the need to save and restore all of those extra registers on kernel entry and exit. I expect this question of what Ubuntu supports is more about normal userspace programs and how they are compiled, rather than the kernel itself. I don’t know that for certain, though, since I don’t know exactly what they plan to change about the distribution’s build process under the new policy.)
From watching the questions the author was asking others just before writing this, I think at least part of the purpose of this article is to draw attention to how the shell, the kernel, the terminal emulator and other components work together to provide these features. Or, to look at it another way, which of the components is the one responsible for each part, so that the reader might know which part they need to reconfigure if they want to achieve a particular result.