One big difference that I’ve noticed between Windows and Linux is that Windows does a much better job ensuring that the system stays responsive even under heavy load.

For instance, I often need to compile Rust code. Anyone who writes Rust knows that the Rust compiler is very good at using all your cores and all the CPU time it can get its hands on (which is good, you want it to compile as fast as possible after all). But that means that for a time while my Rust code is compiling, I will be maxing out all my CPU cores at 100% usage.

When this happens on Windows, I’ve never really noticed. I can use my web browser or my code editor just fine while the code compiles, so I’ve never really thought about it.

However, on Linux when all my cores reach 100%, I start to notice it. It seems like every window I have open starts to lag and I get stuttering as the programs struggle to get a little bit of CPU that’s left. My web browser starts lagging with whole seconds of no response and my editor behaves the same. Even my KDE Plasma desktop environment starts lagging.

I suppose Windows must be doing something clever to somehow prioritize user-facing GUI applications even in the face of extreme CPU starvation, while Linux doesn’t seem to do a similar thing (or doesn’t do it as well).

Is this an inherent problem of Linux at the moment or can I do something to improve this? I’m on Kubuntu 24.04 if it matters. Also, I don’t believe it is a memory or I/O problem as my memory is sitting at around 60% usage when it happens with 0% swap usage, while my CPU sits at basically 100% on all cores. I’ve also tried disabling swap and it doesn’t seem to make a difference.

EDIT: Tried nice -n +19, still lags my other programs.

EDIT 2: Tried installing the Liquorix kernel, which is supposedly better for this kinda thing. I dunno if it’s placebo but stuff feels a bit snappier now? My mouse feels more responsive. Again, dunno if it’s placebo. But anyways, I tried compiling again and it still lags my other stuff.

    • SorteKanin@feddit.dk (OP) · 1 year ago

      “they never know what you intend to do”

      I feel like if Linux wants to be a serious desktop OS contender, this needs to “just work” without having to look into all these custom solutions. If there is a desktop environment with windows and such, that obviously is intended to always stay responsive. Assuming no intentions makes more sense for a server environment.

      • BearOfaTime@lemm.ee · 1 year ago

        Even for a server, the UI should always get priority, because when you gotta remote in, most likely shit’s already going wrong.

        • SirDimples@programming.dev · 1 year ago

          Totally agree. I’ve been in the situation where a remote host is pegged at 100% and when I want to SSH in to figure out why and possibly fix it, I can’t, because sshd is unresponsive! That leaves only one way out: hard reboot and hope I didn’t lose data.

          This is a fundamental issue in Linux; it needs a scheduler from this century.

          • eyeon@lemmy.world · 1 year ago

            You should look into IPMI console access, that’s usually the real ‘only way out of this’

            SSH has a lot of complexity, and even on its happy path it has a lot of dependencies that can get in your way: is it waiting on a reverse DNS lookup of your IP? Trying to read files like your authorized keys from a saturated or failing disk? Syncing logs?

            With that said, I am surprised people are having responsiveness issues under full load. Are you sure you weren’t running out of memory and relying heavily on swap?

    • 0x0@programming.dev · 1 year ago

      I’d say nice alone is a good place to start, without delving into the scheduler rabbit hole…

  • Amanda@aggregatet.org · 1 year ago (edited)

    Lots of bad answers here. Obviously the kernel should schedule the UI to be responsive even under high load. That’s doable; just prioritise running those over batch jobs. That’s a perfectly valid demand to have on your system.

    This is one of the cases where Linux shows its history as a large shared unix system and its focus as a server OS; if the desktop is just a program like any other, who’s to say it should have more priority than Rust?

    I’ve also run into this problem. I never found a solution for this, but I think one of those fancy new schedulers might work, or at least is worth a shot. I’d appreciate hearing about it if it does work for you!

    Hopefully in a while there are separate desktop-oriented schedulers for the desktop distros (and ideally also better OOM handlers), but that seems to be a few years away maybe.

    In the short term you may have some success in adjusting the priority of Rust with nice, an incomprehensibly named tool to adjust the priority of your processes. High numbers = low priority (the task is “nicer” to the system). You run it like this: nice -n5 cargo build.
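
    For instance (a sketch; assuming a cargo project, but the same pattern works for make or any other command):

```shell
# Start the build at the lowest priority (19); no root needed to lower priority.
nice -n 19 cargo build

# Check the niceness (NI column) of the running compiler processes:
ps -o pid,ni,comm -C rustc
```

    Note that nice only helps with CPU contention; if the build is hammering the disk instead, ionice (e.g. ionice -c3 for the idle IO class) is the equivalent knob for IO.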

    • 0x0@programming.dev · 1 year ago

      Obviously the kernel should schedule the UI to be responsive even under high load.

      Obviously… to you.

      This is one of the cases where Linux shows its history as a large shared unix system and its focus as a server OS; if the desktop is just a program like any other,

      Exactly.

      • SorteKanin@feddit.dk (OP) · 1 year ago (edited)

        Obviously… to you.

        No. I’m sorry but if you are logged in with a desktop environment, obviously the UI of that desktop needs to stay responsive at all times, also under heavy load. If you don’t care about such a basic requirement, you could run the system without a desktop or you could tweak it yourself. But the default should be that a desktop is prioritized and input from users is responded to as quickly as possible.

        This whole “Linux shouldn’t assume anything”-attitude is not helpful. It harms Linux’s potential as a replacement for Windows and macOS and also just harms its UX. Linux cannot ever truly replace Windows and macOS if it doesn’t start thinking about these basic UX guarantees, like a responsive desktop.

        This is one of the cases where Linux shows its history as a large shared unix system and its focus as a server OS; if the desktop is just a program like any other,

        Exactly.

        You say that like it’s a good thing; it is not. The desktop is not a program like any other, it is much more important that the desktop keeps being responsive than most other programs in the general case. Of course, you should have the ability to customize that but for the default and the general case, desktop responsiveness needs to be prioritized.

        • BearOfaTime@lemm.ee · 1 year ago

          Even for a server, the UI should always be priority. If you’re not using the desktop/UI, what’s the harm?

          When you do need to remote into a box, it’s often when shit’s already sideways, and having an unresponsive UI (or even a sluggish shell) gets old.

          A person interacting with a system needs priority.

  • FizzyOrange@programming.dev · 1 year ago

    No. And even worse is Linux’s OOM behaviour - 99% of the time it just reboots the machine! Yes I have swap and zswap.

    Linux is just really bad at desktop.

  • agilob@programming.dev · 1 year ago (edited)

    EDIT: Tried nice -n +19, still lags my other programs.

    Yeah, this is the wrong way of doing things. You should have better results with CPU pinning. Fiddling with the priority of YOUR threads (the ones constantly interacting with disk IO, memory caches, and display IO) is the wrong end of the stick: the build still needs to display compilation progress, emit warnings, and access IO.
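
    A sketch of the pinning approach (the core numbers are illustrative; check yours with nproc or lscpu):

```shell
# Pin the build to cores 2-15, leaving cores 0 and 1 free for the
# desktop and other interactive programs:
taskset -c 2-15 cargo build

# Confirm the affinity of an already-running process:
taskset -cp "$(pgrep -n rustc)"
```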

    There’s no way of knowing why your system is so slow without profiling it first. Taking any advice from here or elsewhere without telling us first what your machine is doing is missing the point. You need to find out what the problem is and report it at the source.

  • tatterdemalion@programming.dev · 1 year ago (edited)

    Sounds like Kubuntu’s fault to me. If they provide the desktop environment, shouldn’t they be the ones making it play nice with the Linux scheduler? Linux is configurable enough to support real-time scheduling.
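
    For example, the compositor could in principle be given real-time priority with chrt. This is only a sketch: the process name kwin_wayland depends on your session (kwin_x11 on X11), and a runaway real-time task can starve the whole system, so use with care.

```shell
# Show the current scheduling policy and priority of the compositor:
chrt -p "$(pidof kwin_wayland)"

# Move it to SCHED_FIFO with a modest real-time priority (needs root):
sudo chrt --fifo -p 10 "$(pidof kwin_wayland)"
```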

    FWIW I run NixOS and I’ve never experienced lag while compiling Rust code.

  • crispy_kilt@feddit.de · 1 year ago

    nice -n 5 cargo build

    nice is a program that sets priorities for the CPU scheduler. Default is 0. The range goes from -20, which is max priority, to +19, which is min priority.

    This way other programs will get CPU time before cargo/rustc.
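
    If the build is already running, the same adjustment can be made after the fact with renice (unprivileged users can only lower priority, i.e. raise the number):

```shell
# Push all running rustc processes down to the lowest priority:
pgrep rustc | xargs -r renice -n 19 -p
```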

      • crispy_kilt@feddit.de · 1 year ago

        No. This will wreak havoc. At most at -1 but I’d advise against that. Just spawn the lesser-prioritised programs with a positive value.

          • crispy_kilt@feddit.de · 1 year ago (edited)

            Critical operating system tasks run at -19. If they don’t get priority it will create all kinds of problems. Audio often runs below 0 as well, at perhaps -2, so music doesn’t stutter under load. Stuff like that.

              • crispy_kilt@feddit.de · 1 year ago

                Default is 0. Also, processes inherit the priority of their parent.

                This is another reason why starting the desktop environment as a whole with a different prio won’t work: the compiler is started as a child of the editor or shell, which is a child of the DE, so it will inherit the changed prio too.
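
                The inheritance is easy to demonstrate (running nice with no arguments prints the current niceness):

```shell
nice                     # prints 0 in a normal shell
nice -n 10 sh -c nice    # the child sh inherits the adjustment and prints 10
```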

  • Valmond@lemmy.world · 1 year ago

    My work windoz machine clogged up quite badly recompiling large projects (GBs of C/C++ code), so I set it to use 19 of 20 “cores”. That worked okay-ish but it was not a snappy experience IMO (64 GB RAM & SSD).

    • SorteKanin@feddit.dk (OP) · 1 year ago (edited)

      If that is the case, Linux will never be a viable desktop OS alternative.

      Either that needs to change or distributions targeting the desktop need to do it. Maybe we need desktop and server variants of Linux. It kinda makes sense as these use cases are quite different.

      EDIT: I’m curious about the down votes. Do people really believe that it benefits Linux to deprioritise user experience in this way? Do you really think Linux will become an actual commonplace OS if it keeps focusing on “performance” instead of UX?

  • JATth@lemmy.world · 1 year ago

    The kernel runs out of time trying to solve the NP-complete scheduling problem.

    More responsiveness requires more context-switching, which then subtracts from the available total CPU bandwidth. There is a point where the task scheduler and CPUs get so overloaded that a non-RT kernel can no longer guarantee timed events.

    So web browsing is basically poison for the task scheduler under high load, unless you reserve some CPU bandwidth beforehand (with cgroups, etc.) for the foreground task.
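
    On a systemd distro (Kubuntu included), a transient cgroup is one way to sketch that reservation. CPUWeight is relative (default 100), so the desktop session wins whenever both want CPU; the property names below are systemd's, the command is just an example.

```shell
# Run the build with a low CPU weight instead of a low nice value:
systemd-run --user --scope -p CPUWeight=20 cargo build

# Or hard-cap it, e.g. to 12 cores' worth of time on a 16-thread box:
systemd-run --user --scope -p CPUQuota=1200% cargo build
```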

    Since SMT threads also aren’t real cores (each is worth roughly 0.4 to 0.7 of an actual core), putting 16 tasks on a machine with 16 threads on 8 physical cores is only going to slow down the execution of all the other tasks on the shared cores. I usually leave one CPU thread for “housekeeping” if I need to do something else. If I don’t, some random task is going to be very pleased by not having to share a core. That “spare” CPU thread will be running literally everything else, so it may get saturated by the kernel tasks alone.

    nice -n 5 is more of a suggestion: “please run this task with worse latency on a contended CPU”.

    (I think I should benchmark make -j15 vs. make -j16 to see what the difference is)

    • SorteKanin@feddit.dk (OP) · 1 year ago

      That’s all fine, but as I said, Windows seems to handle this situation without a hitch. Why can Windows do it when Linux can’t?

      Also, it sounds like you suggest there is a tradeoff between bandwidth and responsiveness. That sounds reasonable. But shouldn’t Linux then allow me to easily decide where I want that tradeoff to lie? Currently I only have workarounds. Why isn’t there some setting somewhere to say “Yes, please prioritise responsiveness even if it reduces bandwidth a little bit”. And that probably ought to be the default setting. I don’t think a responsive UI should be questioned - that should just be a given.

      • FizzyOrange@programming.dev · 1 year ago

        You’re right, of course. I think the issue is that Linux doesn’t care about the UI. As far as it is concerned, the GUI is just another program. That’s the same reason you don’t have things like Ctrl-Alt-Del on Linux.

        • JATth@lemmy.world · 1 year ago

          To be fair, there should be some heuristic to boost the priority of anything that has just received hardware input (a button click, for example). The latency-insensitive jobs can be delayed indefinitely.

      • JATth@lemmy.world · 1 year ago (edited)

        Why can Windows do it when Linux can’t?

        Windows lies to you. The only way they avoid this problem is by reserving some CPU bandwidth for the UI beforehand, which would explain y-cruncher results being 1-2% worse on Windows.

        • SorteKanin@feddit.dk (OP) · 1 year ago

          If that’s the solution to the problem, it’s a good solution. Linux ought to do the same thing, because none of the suggestions in this thread have worked for me.