• 0 Posts
  • 22 Comments
Joined 1 year ago
Cake day: June 15th, 2023

  • Not many people know the history of the treaty. It was essentially signed under duress. Prior to the meeting where it was signed, all but one of the Maori tribal leaders were against signing the treaty, even the Maori version. What was said at the signing was purposely never recorded, but considering the existential threat of the New Zealand Company (NZC) on the horizon (the primary reason a treaty was even being discussed), it is believed that the Maori leaders were basically given the choice of ‘sign this treaty and be a part of the British empire, or don’t and have no legal rights against the whims of the New Zealand Company’.

    The New Zealand Company was a private British company with the goal of obtaining as much land as possible at any cost, and the Maori would have had zero legal protections unless they were part of the British empire. Without a treaty the NZC would have been able to push out the Maori entirely with no repercussions. The British people who brought the treaty to the Maori leaders knew this was coming, and wanted to avoid it.

    Signing the treaty was a quick and dirty solution to the rapidly approaching NZC and was responsible for preventing the worst of the damage, but it is a very flawed document. The translations were rushed and vague. Basically everyone was against signing it, but they knew it was the least bad option available. It was never designed to be the core document underpinning a nation, merely a speed bump to stall the private annexation of New Zealand.


  • The MSP430 is just the chip I happen to use at work. If you’re not convinced, you could look for an actual ultra-low-power chip; I found the STM32U0 at 70uA/MHz and the STM32U5 at 16uA/MHz in the first search result.

    Even ignoring selecting a more efficient micro, a smattering of tiny ceramic caps will buy you a few hundred microjoules for bursts. If you’re already operating at 2V you can get a 6V-rated 100uF cap in a 1210 package - and that’s after considering the capacitance drop with DC biasing. Each one of those would buy you 200 microjoules. Even just one ought to be plenty to wake up for a few tens of milliseconds every second, get a reading from some onboard peripheral (as an example), then go to sleep again.
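
    Rough numbers, assuming a 1mA active draw at 2V (just a round figure for illustration, not a measured value):

    # Energy in one 100uF cap at 2V: E = 1/2 * C * V^2, then runtime at the assumed load
    awk 'BEGIN {
      C = 100e-6; V = 2              # one 100uF cap charged to 2V
      E = 0.5 * C * V * V            # stored energy in joules
      P = 1e-3 * V                   # assumed 1mA draw -> 2mW
      printf "%.0f uJ stored, ~%.0f ms of runtime\n", E * 1e6, (E / P) * 1e3
    }'
    # prints: 200 uJ stored, ~100 ms of runtime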

    For sure, you’re not going to be doing any heavy lifting and external peripherals could be tricky, but there are certainly embedded sensor use cases where this could be sufficient.



  • Unfortunately not. The major difference between an honest bot and a regular user is a single text string (the user agent). There’s no reason that bots have to be honest, though, and anyone can modify their user agent. You can go further and use something like Selenium to make your bot appear even more like a regular user, including random human-like mouse movements. There are also plenty of tools to fool captchas now. It’s getting harder by the day to differentiate.
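
    To give an idea of how thin that line is, here’s the same request made honestly and then with a spoofed user agent (example.com is just a placeholder URL):

    # Honest: the server sees a user agent like "curl/8.x" and can block it on sight
    curl https://example.com/page
    # Spoofed: the exact same request now claims to be Firefox on Linux
    curl -A "Mozilla/5.0 (X11; Linux x86_64; rv:120.0) Gecko/20100101 Firefox/120.0" https://example.com/page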




  • When I say upstream, that’s technically upstream of upstream - I mean the application repositories. Manjaro has in the past applied their own patches on top and broken functionality. The example that comes to mind is the most heinous one, where a Manjaro maintainer patched in three pull requests (including CLOSED ones) and pushed the result to their stable repo: https://source.puri.sm/Librem5/chatty/-/merge_requests/986 https://source.puri.sm/Librem5/chatty/-/merge_requests/1035 https://source.puri.sm/Librem5/chatty/-/merge_requests/1060 https://forum.manjaro.org/t/manjaro-arm-beta25-with-phosh-pinephone-pinephonepro/116529/11 . Applying patches to upstream software is not unheard of, but you don’t do it without contacting the developers, because they are the ones who are going to get the bug reports. Manjaro did not notify the developers. It’s this recurring trend of unprofessionalism which has tainted Manjaro’s reputation, whether it’s letting their SSL cert expire FOUR separate times (once, maybe twice is understandable, but more speaks to underlying structural issues), or applying patches to applications without the developers’ knowledge and shipping them to users, or the two separate times they DDoSed the AUR servers with a poorly thought out pamac feature, etc…

    I can’t give concrete examples because this all occurred almost two years ago for me at this point. I’m not out to capsize Manjaro or bring about its demise, so I don’t write down every package that breaks for use as ammunition in internet debates. I just want a distro that works for me. Manjaro wasn’t that for me, so I moved on. You asked why some people don’t like Manjaro and I’m simply explaining why.

    The AUR issue happened often enough for me to consider it frequent. It happened most often with niche packages, like the various MSP430 toolchain packages which I often needed, but I explicitly remember it happening at least once on fairly mainline packages like cemu (or was it yuzu?).

    The problem is not that Manjaro allows you to pick whichever major release kernel you like, but rather that it doesn’t account for this in the packaging system. You could be running kernel 6.4 (i.e. no longer officially supported) and update your packages, resulting in a broken system with no warning. By decoupling the kernel version from the package system Manjaro unleashes a whole new failure mode. This would be fine if they accounted for it in their packaging model, but they don’t (presumably because Arch doesn’t, and it would be too much work to implement and support themselves - it sounds quite tough). This tool, which as you say is designed to make the system more stable, can actually make it less stable!
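
    If you want to check whether you’ve drifted into that situation, the mismatch is visible from a terminal (the grep pattern assumes the usual Arch/Manjaro kernel package naming, e.g. linux66, linux-lts):

    # The kernel you are actually booted into
    uname -r
    # The kernel packages your package manager currently knows about
    pacman -Q | grep -E '^linux[0-9]*(-lts)? '
    # If the running kernel's series is no longer shipped in the repos, package
    # updates can break against it with no warning.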

    Manjaro was sold to me as ‘Beginner Arch’, so I don’t know what to tell you on that front. I don’t think this is at all related to why people dislike Manjaro, though: nobody hates Ubuntu because it’s based on Debian, they hate it because of its decisions, like Snaps. Likewise, nobody hates Manjaro because it’s Arch-based, they hate it because of the decisions they’ve made. Manjaro isn’t the only distro getting hate, but it is probably the lowest-hanging fruit due to all of its administrative fumbles.


  • My DE broke because Manjaro added untested/beta patches from upstream, sometimes even against the developer’s word. This is something that Manjaro is known for. Guess who inspired dont-ship.it?

    Also, I would appreciate you not calling my statements on the AUR false. I have personal experience on the matter, so we can play my experiences against yours if you like, or we can listen to the official Manjaro maintainers recommending that it not be used, as it is incompatible with the Manjaro repos. By design Manjaro holds back Arch packages, which means AUR package dependencies often do not match what is expected. This is not false. Can you use the AUR? Sure, but you must keep in mind that Manjaro was not designed for it and it will break AUR packages sometimes. Sometimes it’s as simple as waiting a couple of weeks for Manjaro to let new packages through, but sometimes you can’t just wait several weeks and you need to fix it yourself.

    And yes, Manjaro does hold kernels back because you have to specify when you want to move off a major release. You can accidentally be using an unsupported kernel and not even notice. Ask me how I know. Manjaro literally requires more maintenance than Arch on this front.

    I can’t comment on what maintenance Arch requires that Manjaro doesn’t, as I run EndeavourOS. I’ve found it to be everything Manjaro wishes it was - a thin, user-friendly wrapper around Arch.

    Just remember that Manjaro’s official response to them forgetting to update their SSL certs was to roll back your clock, putting everyone at risk of accepting invalid certs in the process.


  • During my six month usage of Manjaro (my introduction to Arch-based distros), my desktop broke four times and booted me to the terminal. Almost once a month. I told myself this was the price you paid for living on the edge, using a rolling release. I switched to EndeavourOS and have not had a broken desktop in two whole years.

    Manjaro’s handling of AUR packages is fundamentally wrong and with their design decisions it cannot be fixed. You either give up the AUR entirely, or resign yourself to constantly breaking AUR packages and having to try and fix them.

    Manjaro’s handling of kernels via a GUI sounds good until you realise it’s entirely manual, and if you don’t keep checking you will end up running an unsupported, out-of-date kernel with Arch packages that expect a newer one. Again, Manjaro violates Arch’s golden rule of avoiding partial upgrades by holding your kernels back until you manually update them in their GUI. If you’re running an Arch-based distro, 99% of the time you want the latest kernel plus an LTS kernel as a backup, but these are already in Arch as packages (and are thus updated in lockstep with your other packages, as designed) so you don’t need Manjaro’s special GUI. Now if you wanted a particular kernel for some reason then sure, but Manjaro’s GUI doesn’t even let you pick the exact version you want anyway! All you can pick is the latest version of each major release.
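
    For reference, that ‘latest plus LTS fallback’ setup on plain Arch is just two ordinary packages, updated alongside everything else:

    # Install both the mainline and the LTS kernel; no separate kernel GUI needed
    sudo pacman -S linux linux-lts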

    If you’re anything like I was at the time, you think you like Manjaro but what you actually like is Arch. Manjaro just gets in the way.


  • That’s fair. Because I explicitly mentioned &'static str later on, my explanation of &str implicitly assumes that it’s a non-static lifetime str, so it isn’t stored in the executable, which would only leave the stack. I didn’t want to get into lifetimes in what’s supposed to be a high-level description of types for non-Rust programmers, though. I mentioned ‘stack’ and ‘heap’ explicitly here because people understand that they mean ‘fast’ and ‘slow’, respectively. Otherwise the first question out of people’s mouths is ‘why have a non-growable string type at all??’.



  • Rossphorus@lemmy.world to C Programming Language@programming.dev · Ode to C · 10 months ago

    I haven’t used Ada myself, but I have heard it brought up before. One of the huge advantages Rust has is its packaging, versioning, and build system. I’d argue it’s second to none.

    Rust is permissively licensed (dual MIT/Apache 2.0), not GPL. As I understand it, licensing was a major blocker for Ada and potentially hampered its uptake in the past.

    Rust has modern sensibilities, like first-class iterator support and built-in UTF-8 strings. It also has a lot more of a functional style, rather than a purely procedural one.

    More subjectively, Ada’s syntax looks very… unflattering to my eyes. I much prefer Rust in that regard. Looking at Ada reminds me of my time with VHDL, which is never a flattering comparison.

    Ada actually found itself implementing Rust’s ownership and borrowing system, as pointers were not formally verifiable using SPARK before, so Rust must be doing something right!



  • Rossphorus@lemmy.world to C Programming Language@programming.dev · Ode to C · 10 months ago

    Firstly, I’m not sure where you got the impression that Rust is designed to replace C. It’s definitely targeted at the C++ crowd.

    The string comparison with Rust actually points out one of my problems with C: All those Rust types exist for a reason - they should behave differently. That means that in C these differences are hidden, implicit and up to the programmer to remember. Guess who is responsible for every bug ever? The programmer. Let’s go through the list:

    &str - a reference to a UTF-8 string on the stack, hence fixed size.

    String - a handle to a UTF-8 string on the heap. As a result, it’s growable.

    &[u8] - a reference to a dynamically sized slice of u8s. They’re not even ASCII characters, just u8s.

    &[u8;N] - a reference to an array of u8s. Unlike above they have a fixed size.

    Vec<u8> - a handle to a heap-allocated, growable array of u8s.

    &u8 - a reference to a u8. This isn’t a string type at all.

    OsStr - a compatibility layer for stack-allocated operating system strings. No-one can agree on what these should look like - Windows is special, as usual.

    OsString - a compatibility layer for heap-allocated OS strings. Same as above.

    Path - a compatibility layer for file paths, on the stack. Again, Windows being the special child demands special treatment.

    PathBuf - a heap-allocated version of Path.

    CStr - null-terminated stack-allocated string.

    CString - null-terminated heap-allocated string.

    &'static str - a string stored in the data segment of the executable.

    If you really think all of these things should be treated the same then I don’t know what to tell you. Half of these are compatibility layers that C doesn’t even distinguish between, others are for UTF-8, which C also doesn’t support, and the rest also exist in C, but C’s weaker type system can’t distinguish between them, leaving it up to the programmer to remember. You know what I would do as a C dev if I had to deal with all these different use cases? I would make a bunch of typedefs so the compiler could help me with types. Oh, wait…

    I dislike C because it plays loosey-goosey with a lot of rules, and not in an opt-in ‘void*’ kind of way. You have to keep in your head that C is barely more than a user-friendly abstraction over assembly in a lot of cases. 90% of the bugs I see on a day to day basis are integer type mismatches that result in implicit casts that silently screw up logic. I see for loops that don’t loop over all the elements they should. I see sentinel values going unchecked. I see absolutely horrible preprocessor macros that have no type safety, often resulting in unexpected behaviour that can take hours or days to track down.

    These are all problems that have been solved in other, newer languages. I have nothing personal against C, but we’ve had 40+ years to come up with great features that not only make the programmer’s life easier, but make for more robust programs too. And at this point the list is getting uncomfortably long: We have errors as types, iterators, type-safe macro systems, compile-time code, etc… C is falling behind, not just in safety, but in terms of ease of use as well.


  • I was surviving with Ubuntu. I had my complaints, but I figured ‘that’s just how it is’ on Linux, that it was the same everywhere. I didn’t even realise what I was missing until I switched.

    I got a hardware upgrade at one point, so in order to get those new drivers ASAP I tried an Arch-based distro, with plans to switch back once drivers became available. I never moved back.

    The two big reasons I stayed were, ironically enough, the lack of good Ubuntu documentation, and the PPA system. Ubuntu is used a lot, but there’s not really formal documentation anywhere, only random tutorials online (most likely out of date and never updated) and people on forums talking about their problems. By contrast, the Arch wiki is the gold standard of Linux documentation; there’s just no comparison. Even on Ubuntu I found myself using it as a reference from time to time.

    Regarding PPAs, the official Ubuntu package list is strangely small, so if you’re like me and find yourself needing other software, even mainstream software like Docker, you’ll be faffing about with PPAs and third-party repos. So if you want to install Docker, instead of typing sudo apt install docker you have to type:

    # Add Docker's official GPG key:
    sudo apt-get update
    sudo apt-get install ca-certificates curl gnupg
    sudo install -m 0755 -d /etc/apt/keyrings
    curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
    sudo chmod a+r /etc/apt/keyrings/docker.gpg

    # Add the repository to Apt sources:
    echo \
      "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
      $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
      sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
    sudo apt-get update
    

    These are the official install instructions, by the way. This is intended behaviour. The end user shouldn’t have to deal with all this. This feels right out of the 90’s to me.
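
    For contrast, Docker is in the official repos on Arch-based distros, so the same install really is just:

    sudo pacman -S docker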

    Instead of PPAs, Arch has the Arch User Repository (AUR). Holy moly is the AUR way nicer to work with. Granted, we’re not quite comparing apples to apples here since the AUR (typically) builds packages from source, but bear with me. You install an AUR package manager like yay (which comes preinstalled on my flavour of Arch, EndeavourOS). yay can manage both your system and AUR packages. Installing a package (either official or AUR) looks like yay packageNameHere. That’s it. A full system upgrade like sudo apt update; sudo apt upgrade is a single command: yay -Syu, a bit cryptic but much shorter. The AUR is fantastic not just for the ease of use, but for sheer breadth of packages. If you find some random project on github there’s probably an AUR package for it too. Because it builds from source an AUR package is essentially just a fancy build script based on the project’s own build instructions, so they’re super easy to make, which means there’s a lot of them.
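
    To show what I mean by ‘fancy build script’, a stripped-down PKGBUILD looks roughly like this - the project name and build steps here are made up, a real one just mirrors whatever the upstream project’s own build instructions say:

    # PKGBUILD - hypothetical example for some small GitHub project
    pkgname=some-tool
    pkgver=1.2.3
    pkgrel=1
    arch=('x86_64')
    url="https://github.com/example/some-tool"
    license=('MIT')
    makedepends=('gcc' 'make')
    source=("$url/archive/v$pkgver.tar.gz")
    sha256sums=('SKIP')

    build() {
      cd "$pkgname-$pkgver"
      make
    }

    package() {
      cd "$pkgname-$pkgver"
      install -Dm755 some-tool "$pkgdir/usr/bin/some-tool"
    }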

    You might argue ‘but building from source might fail! Packages are more reliable!’, which is somewhat true. Sometimes AUR builds can fail (very rarely in my experience), but so can PPAs. Because PPAs are often made to share one random package they can become out of date easily if their maintainer forgets or simply stops updating it. By contrast AUR packages can be marked out of date by users to notify the maintainer, and/or the maintainer role can be moved to someone else if they go silent. If a PPA goes silent there’s nothing you can do. Also, since an AUR package is just a fancy build script you can edit the build script yourself and get it working until the package gets an update, too. PPAs by comparison are just a black box - it’s broken until it gets updated.
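
    The ‘edit the build script yourself’ part is usually as simple as this (the package name is a placeholder):

    # Grab the build files for an AUR package without installing it
    yay -G some-aur-package
    cd some-aur-package
    # Tweak the PKGBUILD (bump a version, fix a dependency, add a patch...)
    $EDITOR PKGBUILD
    # Build and install it, pulling in any missing dependencies
    makepkg -si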

    Moral of the story? Don’t be afraid to just give something a go. Mint will always be waiting for you if you don’t like it.


  • People have made distro recommendations already, so I want to talk a bit about what makes a distro a distro: application repositories and management, update cadence, and what’s installed by default. That’s pretty much it. Anything else can likely be transplanted from distro to distro.

    Out of the default applications, by far the most important is the desktop environment. Have a look at Gnome, KDE, and others like Cinnamon. Pick something you like the look of. Gnome is known to be closer to Mac styling and sentiments, including the our-way-or-the-highway philosophy, limited customisability in the name of consistency, etc… KDE is the ‘we heard you like customisation so we put customisations on your customisations’ kind of environment.

    Update cadence really boils down to one of two things - do you want a new OS version every few months, where the distro maintainers manually release a bunch of software all tested together (e.g. Debian, Ubuntu, Fedora), or do you want each application released individually once it’s been tested to work with everything else (Arch)? Note that the former are sometimes called ‘stable’ releases, not because they are less likely to crash but because there are simply fewer updates. The latter are called ‘rolling’ releases.

    The application management philosophies are a lot harder to nail down, especially as a newbie. You will probably just have to accept that the first distro you try will likely not be the one you settle on. For instance, I started with Ubuntu until I got fed up with how difficult it was to install anything not found in the main repository (a surprising amount of software): in Debian-based distros (like Ubuntu) unofficial software is fragmented across thousands of ‘personal’ repositories that you must manually add URLs and signing keys for; it feels very clunky. Because they are personal repositories it’s easy for the owner to abandon one or just stop pushing updates, and you won’t even notice until it breaks after a system update. Once I had some Linux experience under my belt I found the Arch repository style much easier to work with: one central official repository, and one ‘unofficial’ repository. I’ve heard Fedora has a similar system.

    But the single most important piece of advice - just pick something. The great thing about Linux is it makes hopping distros easy: A package manager makes it trivial to export a list of installed programs so you can reinstall them on your next distro. You won’t be enslaved to a distro once you decide, so just pick something and use it for a bit. Learn what you like and what you don’t. Use that to decide on your next pick.
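
    As a concrete example of that export step on an Arch-based distro (the apt/dnf equivalents differ in syntax, but the idea is the same):

    # On the old install: dump the list of explicitly installed packages
    pacman -Qqe > pkglist.txt
    # On the new install: reinstall everything on that list
    sudo pacman -S --needed - < pkglist.txt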


  • Rossphorus@lemmy.world to 3DPrinting@lemmy.world · How? · 1 year ago

    As people have said already, this is a somewhat common failure mode, especially when swapping nozzles. It happened to me on two out of three nozzle swaps. The first time was a major leak like yours; the other time it was only slight (which I then made worse in my attempt to fix it…). I was obviously doing something wrong, but I came fully prepared the second time, with video guides for my specific machine and everything, and still couldn’t get it perfect.

    If you never want to think about this failure mode again (like me) then consider swapping your hotend for a Revo. A Revo nozzle is also the heatbreak, so there’s no possibility of a bad connection between them. The ‘nozzles’ are more expensive, but they can be hotswapped (coldswapped, even) by hand with no special tools. Before, I did everything in my power to avoid nozzle swaps, so I ended up settling for a jack-of-all-trades (but master of none) nozzle that I would never have to swap. Since moving to Revo, however, I find myself swapping nozzles way more now that it’s easy and there’s no chance of destroying my hotend. For instance, I have a high-flow 1mm nozzle for quickly doing big structural prints; they print in about a third of the time and are way stronger than equivalent prints on a smaller nozzle. I also have a 0.25mm nozzle for miniature model prints, with better resolution than I could ever get before. I’m still waiting for a high-flow abrasion-resistant Revo nozzle, though.


  • Selective enforcement is one of those concepts that isn’t talked about much outside of legal ethics circles, unfortunately, but I think it’s an important concept to be aware of, along with the potential issues around it. I first heard about it from The Dictator’s Handbook, which explores many behaviours of politicians and those in power, including how and why corrupt nations often employ selective enforcement. It’s an interesting read, would recommend. It definitely changed how I looked at the world.


  • I have no strong feelings on which particular weapons should be legal to carry, even if it’s just pepper spray or brass knuckles or something. The main thing is that it should be legal to carry something.

    Also, selectively enforced laws are a terrible, HORRIBLE concept and should be avoided at all costs. They give police and those in power the ability to selectively punish (or pardon) whomever they choose, often at the whim of their personal biases. Passing and exploiting selectively enforced laws is a common tactic of corrupt nations and can be used to silence political opponents, target selected groups, promote agendas, and so forth. The law should not rely on cops ‘being nice’ and choosing not to arrest you.


  • For a start, it shouldn’t be a crime to merely carry something for self-defense. The current laws say that carrying anything for the express purpose of self-defense is illegal. There’s a bizarre cat-and-mouse game where the law says ‘it’s fine to defend yourself’ while simultaneously expressly forbidding you from carrying anything that you might be able to use for self-defense. It puts anyone actually in a life-threatening situation at a supreme disadvantage: an attacker is already breaking the law, so they’ll be armed to some extent, but the law is designed to leave the victim defenseless. If they do decide to arm themselves against the law and use that to defend themselves, they can be prosecuted for carrying a weapon after the fact.