also I just realized that Brazil did NOT make a programming language entirely in Spanish and call it “Si” and that my professor was making a joke about C… god damn it

this post is probably too niche but I feel like Lemmy is nerdy enough that enough people will get it lol

  • DarkAri@lemmy.blahaj.zone · 1 day ago

    C does one thing really well, and that’s everything, fast, with complete control. Python is cool for people just trying to bang out some scripts or learning to program, but interpreted languages have no place in mainstream software. Devices are starting to become slower than computers 30 years ago because there is so much garbage included in apps written in interpreted Java and Python and other nonsense. It’s not just bad for the user, it’s bad for the planet. It shouldn’t take a million times the energy to run a simple program because someone doesn’t know how to write in a proper language. Python is okay for some things; the world has become too reliant on it, though. Also, for purely selfish reasons if you’re the type: interpreted languages kill your battery life and RAM and stuff. Modern Android phones, besides all their problems with Google ruining them like Microsoft, are also just becoming incredibly slow and stupid. You can barely even open two apps without most Android phones panicking and closing apps to save memory. A calculator app is 100 MB now. The phone feels like it’s going to catch fire when you open a notepad.

    • Jarix@lemmy.world · 15 hours ago

      For us Luddite lurkers who couldn’t figure it out from context alone, which one is the interpreted language? I got lost on that detail lol

      • DarkAri@lemmy.blahaj.zone · 9 hours ago

        Interpreted languages are run by an interpreter that translates the code at run time (Python is the interpreted one here). Compiled languages like C are turned into binary ahead of time by a compiler and then distributed as binaries.

        Basically, with interpreted languages there is huge overhead because the code has to be translated into machine code while the program is running. This is why games and operating systems are written in C, but people learn how to write Python and Java in their college classes. Interpreted languages are often much easier to learn than C, and cross-platform, but C is fast and powerful.

    • edinbruh@feddit.it · 1 day ago

      I like many of your points, but your comment is facetious.

      You said it yourself, “it’s good for someone trying to bang out scripts”… and that’s it, that’s the main point, that’s the purpose of Python. I will argue to my dying breath that Python is a trillion times better than sh/bash/zsh/fish/bat/powershell/whatever for writing scripts, in all aspects except availability, and if that’s a concern, the only options are the old Unix shell and bat (even with PowerShell you never know if you are stuck on PS 5 or can use PS 7).

      I have a Python script running 24/7 on a Raspberry Pi that listens on some MQTT topics and reacts accordingly, asynchronously. It uses like 15 KiB (literally less than 4 pages) of RAM, mostly for the interpreter, and it’s plenty responsive. It uses about two minutes of CPU time a day. I could have written it in Rust or Go, I know enough of both to do it; it would have been faster and more efficient, but it would have taken three times as long to write, and it would have been a bitch to modify. I could have done it in C and it would have been even worse. For that little extra efficiency it makes no sense.

      You argue it has no place in mainstream software, but that’s not really a matter of Python, more a matter of bad software engineers. Ok, cool that you recognise the issue, but I’d rather you went after the million people shipping a full browser in every GUI application than after the guys wasting 10 kiB of your RAM to run Python. And even in that case, it’s not an issue of JavaScript, but an issue of bad practices.

      P.S. “does one thing well” is a smokescreen to hide doing less stuff; you shouldn’t base your whole design philosophy on a quote from the 70s. That is the kind of shit systemd haters shout while running a display server that also manages input, OpenGL, a widget toolkit, remote desktop, and the entire printer stack. The more a high-profile tool does, the less your janky glue-code scripts need to do.

      • DarkAri@lemmy.blahaj.zone · 17 hours ago

        Python is okay for some things. It’s just that software in general has become terrible, because there is so much wasted power now that people have access to fast hardware. In the 90s your entire environment would use a few MBs of RAM. I know with high-res images some of this stuff would increase, but people are so wasteful with how they write stuff these days. We are evolving backwards because we spend hundreds or thousands on amazing hardware only to have it run like trash in a world where everything is written in Java and Python and Electron. No longer do developers optimize. They just get their webpage to run at an inconsistent 30 FPS on your $2000 computer, a machine with more computing power than every computer in the world put together in the 90s, and collect their 150k salary.

        It’s not just bad for your time and sanity. It’s bad for the environment, it’s bad for the economy, and this same rot is working its way into operating systems and game engines. Every game written for UE5 seems to run at 50 FPS regardless of how good your PC hardware is, because of these same low-quality programmers and terrible tools. Idk, Linux to me has been a breath of fresh air in recent times, as bad as it can be. It’s mostly C code with tiny binaries that are like 1-3 MB usually. I guess there is a silver lining in that all of these evil corporations like Google and Meta and Apple are dying because of this. Maybe the internet will go back to being centered around user content in a distributed fashion and not just a couple of highly controlled websites that try to brainwash you into supporting your corporate-backed government. It already seems like every triple-A game studio sucks and all the best games that have come out in the past 15 years have been from small indie studios.

        • squaresinger@lemmy.world · 15 hours ago

          Have you heard of the term “Software crisis”?

          We don’t really talk all that much about it any more, because it’s become so normal, but the software crisis was the point where computers became faster than the humans programming them could keep up with. That problem came up in the 1960s.

          Up until then a computer was simple enough that a single human being could actually understand everything that happened under the hood and could write near-optimal code by hand.

          Since then computers doubled in performance and memory every few years, while developers have largely stayed human with human performance. It’s impossible for a single human being to understand everything that happens inside a computer.

          That’s why ever since we have tried optimizing for developer time over execution time.

          We have been using higher-level languages, frameworks, middlewares and so on to cut the time it takes to develop stuff.

          I mean, sure, we could develop like in the 90s, the tools are all still there, but neither management nor customers would accept that, for multiple reasons:

          • Everyone wants flashy, pretty, animated things. That takes an ungodly amount of performance and work. Nobody is ok with just simple flat unanimated stuff, let alone text-based tools.
          • Stuff needs to run on all sorts of devices: ARM smartphones, ARM/x86 tablets, ARM/x86 PCs, all supporting various versions of Windows, Mac, Android, iOS and preferably also Linux. But we also need a web version, at best running on Chrome, Firefox and Safari. You could develop all of these apps natively, but then you’d need roughly 20 apps, all of them developed natively by dedicated experts. Or you develop the app on top of browsers/electron and have a single app.
          • Stuff needs to work. Win95-level garbage software is not ok any more. If you remember Win95/98 fondly, I urge you to boot it up in a virtual machine some time. That shit is unacceptably buggy. Every time the OS crashes (which happens all the time) you are playing Russian roulette with your partition.
          • Did I mention that everything needs to be free? Nobody wants to pay for software any more. Win95 was $199 ($432 in 2025 money) and Office was $499 ($1061 in 2025 money). Would you pay 1.5k just for Win11 and the current office?

          So there’s more and more to do with less and less time and money.

          We can square that circle by either reducing software quality into nothingness, or by using higher-level developer tools that allow for faster and less error-prone development while utilizing the performance that still grows exponentially.

          What would you choose?


          But ultimately, it’s still the customer’s choice. You don’t have to use VSCode (which runs on Electron). You can still use Kate. You don’t have to use Windows or Gnome or MacOS. You can use Linux and run something like IceWM on it. You don’t have to use the newest MS Office, you can use Office 2013 or Libre Office.

          For pretty much any Electron app out there, there’s a native alternative.

          But it’s likely you don’t use them. Why is that? Do you actually prefer the flashy, pretty, newer alternative, that looks and feels better?

          And maybe question why it feels so hard to pay €5 for a mobile app, and why you choose the free option over the paid one.

          • DarkAri@lemmy.blahaj.zone · 8 hours ago

            I actually pay for software, but I run Linux; I’m not paying for Windows because it’s bad. I prefer to pay rather than have ads. I value my time.

            What you are saying is somewhat true. There are hundreds of thousands of programmers these days, if not millions. The quality of the person who writes software just isn’t what it used to be. Not that they don’t work hard, just that they aren’t capable of writing C.

            You also can understand everything in a system, at least some people can. I understand those people are rare and expensive to hire.

            One thing C really lacks is modern libraries to do these things. It’s not a limitation of C itself it’s just that most modern tools are targeted towards other languages. I understand that writing webapps in C isn’t the best idea because you don’t want web stuff running on hardware directly most of the time if you care about security anyways, but it’s really just a trend where the industry moved away from C with all of its frameworks and stuff which has not been good for the users.

            Windows 98 was really good if you knew how it worked. I never had any issues really with stuff like XP. It always worked, it was always fast, it was always stable. I used XP for probably 10 years and never had any issues with instability and stuff and I was constantly modifying stuff, overclocking, patching drivers, modding bios, doing weird stuff that others didn’t do coming up with my own solutions. It worked really well. It’s modern windows that’s a buggy mess that crashes all the time.

            To get back to the other point though, to move away from C was a mistake. It’s not that much more complicated than using other languages. Most of the complexity was just in setting up the environment which was admittedly terrible under C. Trying to link libraries and stuff. The actual code itself is not really that much more difficult than say python, but it’s a different paradigm. You are getting closer to the hardware, and it’s not automatic that your code is going to be cross platform unless you use platform agnostic libraries. It’s entirely possible to write multiplatform code in C and most programs could be written in a multiplatform way if users use libraries that target multiplatform development and let users compile them ahead of time. It’s just that companies like Microsoft created proprietary junk like .net and direct X which made writing multiplatform code much harder if you didn’t start with libraries like qt or gtk, and openGL. Again, this was never a fault of C. You could even have a standard in CPUs that would run any code to bootstrap a compiler and you could have platform agnostic binaries, which is just something that never happened because there was not really a point to it since so much code was written in lockdown .net and directx.

            Interpreted languages were intended to solve those issues: making platform-agnostic code, and making code that was safe to run from websites without compromising the integrity of the user’s root filesystem. But these are terrible solutions, especially as interpreted languages moved beyond web stuff and small simple apps to being used everywhere and integrated into every part of the system.

            Python is a scripting language. It’s best used to call C libraries or to write very lightweight apps that don’t depend on low level hardware access. Java is like C but worse. JavaScript is like the worst of all worlds, strongly typed, verbose, picky about syntax, slow, interpreted, insecure, bloated, but it is cross platform which was originally probably why it was so popular. That should have just been added to C however. When you have code that runs 10x-10,000 times slower and you have bad programmers who don’t know how to write code that doesn’t destroy the bus, or use 100% of your system resources for no benefit, you end up in this mess we have today, for every app that uses 100% of your memory bandwidth, that halves the speed of the next program. If you have 3 programs running that peg then Emory bus, that means your next program is going to run at 0.25 the speed roughly. This is not how software should be written.

            Python can also be great for prototyping algorithms and stuff, automating things that run once, not in loops. However once you figure it out, it should be written in C. All of these libraries that are written for the modern web should have been written to target C.

            The cool thing about C is you can use it like basic if you really want. With a bit more syntax, but you don’t have to use it with classes. You can just allocate memory on stack and heap and then delete all of it with like one class if you really want to. Everything that’s cool about other languages mostly just already exists in C.

            It’s kind of amazing to see the difference between a Linux smartphone and an android smartphone these days. A Linux smartphone running terrible hardware by today’s standard is just instant. 32 GBs of storage is enough to add everything you want to the operating systems because binaries are like 2 MB. Then that all goes away as soon as you open a web browser. A single website just kills it. Then you sit down on a modern windows machine and everything is slow and buggy as shit. It draws 500w of power on a 2nm process node. It’s a real issue. No amount of computer power will ever overcome interpreted languages because people will always do the minimum possible work to get it to run at an unstable 30 FPS and call it good.

            • squaresinger@lemmy.world · 7 hours ago

              You also can understand everything in a system, at least some people can. I understand those people are rare and expensive to hire.

              No. No, you seriously can’t, not even if you are deploying to one single PC. Your code includes libraries and frameworks. With some studying, you might be able to familiarize yourself to the point where you know every single flow through the frameworks and libraries down to each line that’s being executed. Then it goes through the compiler. Compiler building is an art unto itself. Maybe there are a handful of people who understand everything GCC does in the roughly 200MB of its source code. But let’s say you are a super crack programmer, who can memorize source code with about as many characters as 42x all of the Harry Potter books.

              Now your code gets executed by the OS. If you are on Windows: Sucks to be you, because it’s all closed source. All you can manage to understand is the documentation, unless you decompile all of Windows. If you are on Linux you at least have the source code. That’s only 300MB of source code, shouldn’t be hard to completely understand everything in there and keep it in memory, right? And you aren’t running your code directly on the bare Linux kernel, so please memorize everything your DE and other relevant components do.

              But we aren’t done yet, we are just through the software part. Hardware is important too, since it might or might not implement everything exactly like in the documentation. So break out your hex editor and reverse-engineer the latest microcode update, to figure out how your CPU translates your x64 calls to whatever architecture your CPU uses internally. An architecture that, btw, doesn’t have any public documentation at all. Might be time to break out the old electron microscope and figure out what the 20 billion transistors are doing on your CPU.

              Now we are done, right? Wrong. The CPU is only one component in your system. Now figure out how all other components work. Did you know that both your GPU and your network interface controller are running full embedded operating systems inside them? None of that is publicly documented or open source, so back to the electron microscope and reading binary code in encrypted update files.

              If you think all this knowledge fits into a single human’s brain in a way that this human actually knows what all of these components do in any given circumstance, then I don’t really know what to say here.

              It’s not a matter of skill. It’s just plain impossible. It is likely easier to memorize every book ever written.

              One thing C really lacks is modern libraries to do these things. It’s not a limitation of C itself it’s just that most modern tools are targeted towards other languages. I understand that writing webapps in C isn’t the best idea because you don’t want web stuff running on hardware directly most of the time if you care about security anyways, but it’s really just a trend where the industry moved away from C with all of its frameworks and stuff which has not been good for the users.

              You can write webapps in C using WebAssembly. Nobody does it because it takes much more time and has basically no upsides.

              Windows 98 was really good if you knew how it worked. I never had any issues really with stuff like XP. It always worked, it was always fast, it was always stable. I used XP for probably 10 years and never had any issues with instability and stuff and I was constantly modifying stuff, overclocking, patching drivers, modding bios, doing weird stuff that others didn’t do coming up with my own solutions. It worked really well. It’s modern windows that’s a buggy mess that crashes all the time.

              I would recommend that you revisit these old OSes if you think that. Fire it up in a VM and use it for a few weeks or so. Nostalgia is a hell of a drug. I did run Win98 for a while to emulate games, and believe me, your memory doesn’t reflect reality.


              Reading what you are writing about programming, may I ask about your experience? It sounds to me like you dabbled in a bit of hobby coding a while ago, is that right?

              Because your assessments don’t really make much sense otherwise.

              To get back to the other point though, to move away from C was a mistake. It’s not that much more complicated than using other languages. Most of the complexity was just in setting up the environment which was admittedly terrible under C. Trying to link libraries and stuff. The actual code itself is not really that much more difficult than say python, but it’s a different paradigm.

              No, the problem was not setting up the environment. The main problem with C is that it doesn’t do memory management for you, and thus you constantly have to deal with stuff like buffer overflows, use-after-free bugs, memory leaks, dangling pointers and so on. If you try to write past a buffer in any modern language, either the compiler or the runtime will catch it and throw an error. You cannot write past e.g. the length of an array in Java, Python or any other higher-level language like that. C/C++ will happily let you write straight across the stack or heap, no questions asked. This leads to C programs being incredibly vulnerable to attacks and instabilities. That’s the main issue with C/C++.

              and it’s not automatic that your code is going to be cross platform unless you use platform agnostic libraries. It’s entirely possible to write multiplatform code in C and most programs could be written in a multiplatform way if users use libraries that target multiplatform development and let users compile them ahead of time.

              C is just as much “inherently multiplatform” as Python: Use pure C/Python without dependencies and your code is perfectly multi-platform. Include platform-specific dependencies and you are tied to a platform that supplies these dependencies. Simple as that. Same thing for every other language that isn’t specifically tied to a platform.

              You could even have a standard in CPUs that would run any code to bootstrap a compiler and you could have platform agnostic binaries, which is just something that never happened because there was not really a point to it since so much code was written in lockdown .net and directx.

              That standard exists, it’s called LLVM, and there are alternatives to that too. And there are enough platform-agnostic binaries and stuff, but if you want to do platform-specific things (e.g. use a GPU or networking or threads or anything hardware- or OS-dependent) you need to do platform-specific stuff.

              Python is a scripting language. It’s best used to call C libraries or to write very lightweight apps that don’t depend on low level hardware access. Java is like C but worse. JavaScript is like the worst of all worlds, strongly typed, verbose, picky about syntax, slow, interpreted, insecure, bloated, but it is cross platform which was originally probably why it was so popular. That should have just been added to C however. When you have code that runs 10x-10,000 times slower and you have bad programmers who don’t know how to write code that doesn’t destroy the bus, or use 100% of your system resources for no benefit, you end up in this mess we have today, for every app that uses 100% of your memory bandwidth, that halves the speed of the next program. If you have 3 programs running that peg then Emory bus, that means your next program is going to run at 0.25 the speed roughly. This is not how software should be written.

              I don’t even know what kind of bus you are talking about. Emory bus is a bus line in Atlanta.

              If you are talking about the PCIe bus, no worries, your python code is not hogging the PCIe bus or any other bus for that matter. It’s hard to even reply to this paragraph, since pretty much no single statement in there is based in fact.

              The cool thing about C is you can use it like basic if you really want. With a bit more syntax, but you don’t have to use it with classes. You can just allocate memory on stack and heap and then delete all of it with like one class if you really want to. Everything that’s cool about other languages mostly just already exists in C.

              You cannot use C with classes. That’s C++. C doesn’t have classes.

              It’s kind of amazing to see the difference between a Linux smartphone and an android smartphone these days. A Linux smartphone running terrible hardware by today’s standard is just instant. 32 GBs of storage is enough to add everything you want to the operating systems because binaries are like 2 MB. Then that all goes away as soon as you open a web browser. A single website just kills it.

              Hmm, nope. Linux smartphones run fast because they have no apps. Do a factory reset on your Android phone and disable all pre-installed apps. No matter what phone it is, it will run perfectly fast.

              But if you run tons of apps with background processes, it will cost performance.

              Then you sit down on a modern windows machine and everything is slow and buggy as shit. It draws 500w of power on a 2nm process node. It’s a real issue. No amount of computer power will ever overcome interpreted languages because people will always do the minimum possible work to get it to run at an unstable 30 FPS and call it good.

              I use Linux as my main OS, but I have Windows as a dual-boot system for rare cases. My PC draws 5W at idle on Windows or on Linux. The 500W is what your PSU is rated for, or maybe what the PC can draw at full load with the GPU running at full speed (e.g. if you play a photo-realistic game), not what is used when the PC idles or just has a few hundred tabs open in the browser.

              • DarkAri@lemmy.blahaj.zone · 6 hours ago

                I’m on mobile so it’s hard to keep track of all the things you wrote. Maybe I’ll clarify a few points that bothered me. You are obviously very knowledgeable about these things, even more than me in many areas. I’m a hobbyist, not a professional programmer, but I have been programming since I was 12 or so and I’m in my 30s now, and I’ve also always been a white-hat hacker.

                I don’t mean you can literally understand everything about a computer, just that you can understand everything you need to in order to do 99% of things, and this isn’t some crazy thing. You would obviously use OpenGL or Vulkan or DirectX to access the GPU instead of writing binaries.

                Modern machines do use several hundred watts just doing regular things. Not at idle, sure, unless you have tons of junk running in the background, but even basic tasks on modern machines, which run code written in languages like Python and Java and Electron and web stuff, will absolutely use much of your system’s hardware for simple tasks.

                Managing memory in C++ is easy, but you have to not be stupid. C++ isn’t stupid-proof. It’s also not a good fit for some things, because people make mistakes or just take advantage of the fact that C is low level and has direct access to exploit things. The issue is really that if you aren’t on a certain level of programming then C++ can be really unsafe. You need to understand concepts like creating your own node graph with inheritance to make managing memory easy. It is easy once you understand these things. Garbage collectors are not a good solution for many things, I would argue most things. They’re easy, sure, but also buggy, and they break the idea of just having smooth-running software. You should be freeing your memory just as you allocated it, in an organized and thoughtful way.

                By memory bus I mean the front-side bus, which gets hammered if you have programs running at uncapped speeds, or just programs running with 100x the overhead they would have if written in C. Again, this is just basic knowledge that any programmer should know without even being taught, really. There is no reason to have programs bottleneck your machine when we live in an era of multitasking.

                Writing code in C++ also doesn’t take longer once you’re past the first five minutes; it’s actually much quicker, because you can just write it and not have it complain about indentation or anything. It is a bit verbose with brackets and stuff, but those are there to facilitate having a powerful language that can do pretty much anything. There are also string libraries and stuff that handle strings without the security issues.

                Linux also is tiny without being devoid of software. It’s because it’s written in C, and stuff is only as large as it needs to be. My entire Linux OS for my phone, with all its files that I’m working on, is less than 10 GB, and it has emulators, many libraries, many applications, several web browsers, wine, virtual machines, servers, several different development environments, different window managers, and all kinds of other stuff. On Android, installing a web browser can take hundreds of MBs for essentially zero benefit.

                No benefits to WebAssembly? I guess to you that may be true, because you don’t care about optimization, download size, energy use and stuff like this. It does have benefits, because, one, not everyone has thousands to upgrade their computer every two years, and where I live, in a Republican state in America, the internet maxes out at 200 KBps, and on a good day maybe 500 KBs.

                The first step on fixing a problem is admitting you have a problem. Software is only going to get worse if devs are in denial about it.

                • squaresinger@lemmy.world · 6 hours ago

                  There’s one big difference between hobby work and professional work: If you do hobby stuff, you can spend as much time on it as you want and you are doing what you want. So likely, you will do the best you can do with your skill level, and you are done when you are done, or when you give up and stop caring.

                  If you do professional work, there’s a budget and a deadline. There’s a dozen things you need to do RIGHT NOW and that needed to be finished yesterday. There’s not one person working on things, but dozens, and your code doesn’t live for weeks or months but for years or decades, and will be worked on by someone else when you are long gone. It’s not rare to stumble upon 10 or 20 year old code in most bigger applications. There are parts of the Linux kernel that are 30 years old.

                  Also in professional work, you have non-technical managers dictating what to do and often even the technical implementation. You have rapidly shifting design goals, stuff needs to be implemented in crunch time, but then gets cancelled a day before release. Systems are way more complex than they need to be.

                  I am currently working on the backend of the website and app for a large retail business. The project is simple, really: get content from the content managers, display a website with a webshop, handle user logins and loyalty program data. Not a ton of stuff.

                  Until you realize:

                  • The loyalty program is handled by what used to be a separate company but got folded into our company.
                  • The webshop used to be run by the same team, but the team got spun out into its own organisation in the company.
                  • The user data comes from a separate system, managed by a team in a completely different organization unit in the company.
                  • That team doesn’t actually manage the user data, but only aggregates the user data and provides it in a somewhat standardized form for the backends of user-facing services. The data itself lives in an entirely separate environment managed by a different sub-company in a different country.
                  • They actually don’t own the data either. They are just an interface that was made to provide customer data to the physical stores. They get their customer data from another service, managed by another sub-company, that was originally made to just handle physical newsletter subscriptions, 20 years ago.

                  We are trying to overhaul this right now, and we just had a meeting last week, where we got someone from all of these teams around a table to figure out how different calls to the customer database actually work. It took us 6 hours and 15 people just to reverse-engineer the flow of two separate REST calls.

                  If you see bugs and issues in a software, that’s hardly ever due to bad programmers, but due to bad organizations and bad management.

                  I don’t mean you can literally understand everything about a computer, just that you can understand everything you need to in order to do 99% of things, and this isn’t some crazy claim. You would obviously use OpenGL or Vulkan or DirectX to access the GPU instead of writing binaries.

                  This is exactly what the software crisis is, btw. With infinite time and infinite brain capacity, one could program optimally. But we don’t have infinite time, we don’t have infinite budget, and while processors get faster each year, developers just disappointingly stay human.

                  So we abstract stuff away. Java is slower than C (though not by a ton), mostly because it has managed memory. Managed memory means no memory management issues. That’s a whole load of potential bugs, vulnerabilities and issues just removed by changing the language. Multitasking is also much, much easier in Java than in C.

                  Now you choose a framework like Spring Boot. Yes, it’s big, yes you won’t use most of it, but it also means you don’t need to reimplement REST and request handling. Another chunk of work and potential bugs just removed by installing a dependency. And so on.

                  To put it differently: how much does a, let’s say, 20% slowdown due to managed memory cost a corporation?

                  How much does a critical security vulnerability due to a buffer overflow cost a corporation?

                  Hobby development and professional development aren’t that similar when it comes to all that stuff.

                  • DarkAri@lemmy.blahaj.zone
                    link
                    fedilink
                    arrow-up
                    1
                    ·
                    edit-2
                    5 hours ago

                    Maybe it’s sort of a tragedy-of-the-commons thing. Maybe new standards should have support for limiting the resource use of stuff, with defaults low enough that companies are forced to allow time to write good code or it will be unusable on 90% of machines. That might actually fix the issue: companies could still force programmers to churn out terrible code as fast as possible, but they would have to actually allow time to optimize and clean up. Idk. I just deal with it by avoiding all that stuff, because I actually have enough willpower to stop using something out of principle even when it’s more convenient, which I realize is rare. Most people just want their TikTok OS, and they don’t care if they have to pay $1000 for a device that’s a glorified streaming media player.

                    I’m glad Linux exists and that it’s still written in C. I’m going to release a game some day, and I’m going to target Linux as the native client, and I don’t care if I lose 80% of my customers. I want to be part of the solution and not the problem, but I understand survival and keeping a job is important to someone like you. Anyways, good talk. Windows XP and 7 were much better than any modern operating system ever will be. Linux is catching up fast, and we will probably all be on Linux running C code before long given the state of the industry. I can’t even use Windows anymore. Much of the web is becoming that way as well: purely profit driven, run by publicly traded companies that hate humans, always brownnosing the state and their corporate sponsors so the gestapo doesn’t come for their profits next.

    • realitista@lemmus.org
      link
      fedilink
      English
      arrow-up
      9
      arrow-down
      1
      ·
      edit-2
      1 day ago

      Uses less memory until it inevitably springs a memory leak. And it’s not a million times the memory, it’s ~10x. You should check out assembly language; it beats C in all the metrics you care about.

      • DarkAri@lemmy.blahaj.zone
        link
        fedilink
        arrow-up
        1
        arrow-down
        1
        ·
        edit-2
        17 hours ago

        Assembly isn’t really faster than C in most cases, although it can be in some: C compiles down to assembly anyway. The speedups you get from assembly come from hand-optimizing things that compilers miss, which can be useful when writing a very high-performance part of a program, like a game’s rendering loop.

        Python uses 10x the memory but probably 100x–1000x the CPU cycles to do the same thing. Also, using libraries written for interpreted languages is going to bloat your memory footprint, whereas C libraries are tiny and efficient.

        Memory leaks are an issue in all programming languages, although some use what’s called a garbage collector to handle memory, which can be okay for some things but terrible for others: real-time software, operating systems, video games, or just anything you don’t want to hitch and lag and run like a turd. Garbage collectors aren’t some magic fix for memory management; if they were, they would be part of C. There are huge tradeoffs to not managing your own memory. If you are using C with objects, you are pretty safe: the object-oriented style makes it very easy to manage memory. You organize your data as a node graph, storing data under owner objects, so when you want to remove something you just call a recursive free function on the topmost parent object. Besides reducing redundant code (if you are using inheritance like you are supposed to), that’s mostly what the style is there for.

        The difference really is that C code is efficient, in the sense that it doesn’t waste anything. Everything that seems low-level about C is there for a reason. It came from a time when it was important to write code efficiently, when every MB and every cycle counted, and when having a garbage collector pause (and potentially crash) your operating system was unacceptable, as well as extremely slow. It’s still slow, btw, because programs have scaled with the hardware’s ability to run them, so garbage collectors are still mostly as terrible as they have always been.

        C is only low-level in the sense that it actually runs on the hardware. There aren’t layers of stuff between it and the hardware. There is no good reason for such layers, outside of maybe security in some contexts: you don’t want web resources running on your hardware directly.

        All the other stuff that comes with modern languages is mostly nonsense. Runtime type checking is for lazy programmers: it multiplies the time needed to do an operation, and there is no good reason for it to exist other than programmers being bad at their job. C is loosely typed, btw. It checks types in the compiler, where it belongs.

        If your Android phone were written in C++, your battery would last for days, you could play games on it for hours, and everything would be extremely fast, with nearly instant loading. The reason web pages were written in JIT languages was mainly compatibility across many different types of hardware and browsers, and they were also relatively small programs. Scripting can be useful sometimes, and garbage collectors can be useful for script-kiddie stuff, but they have no place in mainstream software and definitely not in operating systems.

        Google went from “Don’t be evil” to “let’s build an entire operating system out of Java and spyware.” It’s not good. At this rate we aren’t even going to have GUIs anymore in 10 years, because no hardware will be able to run them without destroying itself, needing to be plugged in constantly, and requiring $1000 worth of RAM from some slave economy that has overpowered us as we have become so unproductive, since most people are using Windows 12 or some nonsense.

        • squaresinger@lemmy.world
          link
          fedilink
          arrow-up
          1
          ·
          edit-2
          3 hours ago

          Perfect C is faster than perfect Python, same as perfect assembly is faster than perfect C.

          But in the real world we don’t write perfect code. We have deadlines, we have constantly shifting priorities, we have non-technical managers dictating technical implementations. We have temporary prototype code that ended up being the backbone of a 20 year project because management overpromised and some overworked developer had to deliver real fast. We have managers still believing in the mythical man month (“If one woman can make a baby in 9 months, 9 women only need a single month to make one”) and we have constant cycles of outsourcing and then insourcing again.

          With all that garbage going on we end up with a lot of “good enough for now”, completely independent of “laziness” or “low-skill” of developers. In fact, burnout is incredibly common among software developers, because they aren’t lazy and they want to write good software, but they get caught in the gears of the grind until they get to a mental breakdown.

          And since nobody has the time to write perfect code, we end up with flawed and suboptimal code. And suboptimal assembly is much worse than suboptimal C, which is again much worse than suboptimal Python.

          If your Python code is suboptimal it might consume 10x as much RAM as it needs. If your C code is suboptimal, it memory-leaks and uses up all RAM your PC has.

          If your Python code is buggy, something in the app won’t work, or worst case the app crashes. If your C code is buggy, some hacker just took over your PC because they exploited a buffer overflow to execute any code they want.


          The main issues with software performance are:

          • Management doesn’t plan right and developers need to do with what they have
          • Companies don’t want to spend incredible amounts of money on development to make things perfect
          • Companies want products to be released in time
          • Customers aren’t happy with simple things. They want pretty, fancy things
          • Customers don’t want to pay for software. In today’s money, Win95 cost around €500 and Office cost around €1000. Would you want to spend that? Or do you expect everything to be free? How much did you pay for all the software on your phone?
          • DarkAri@lemmy.blahaj.zone
            link
            fedilink
            arrow-up
            1
            ·
            edit-2
            2 hours ago

            Perfect assembly is marginally faster than perfect C, but perfect C is way faster than perfect Python. You have many valid points, though. Also, C can be very safe if you don’t use the standard string libraries and stuff. It depends on what you are trying to do. Sometimes it’s worth giving up safety to have a better program.

            Also as a side note, when God instructed Terry Davis to build his third temple do you know what language God told him to use? C… That’s right. God himself endorses C as the greatest language mankind has ever developed.

        • realitista@lemmus.org
          link
          fedilink
          English
          arrow-up
          4
          ·
          edit-2
          15 hours ago

          Python uses 10x the memory but probably 100x–1000x the CPU cycles to do the same thing. Also, using libraries written for interpreted languages is going to bloat your memory footprint, whereas C libraries are tiny and efficient.

          You’ve obviously never looked at benchmarks because you’re one or two orders of magnitude off.

          The same reason you don’t use assembly is the reason many use Python instead of C.

          As someone who was trained in C and did most of my programming in it: yes, it does everything you need, but it’s a major pain in the ass to do it well. It’s slow to get things done, and you need decades to get competent at it. Python lets you get up and running a lot faster.

          As CPU and RAM are cheap compared to the days when C was a necessity, most programmers have decided that getting things going fast and easy is worth the trade-off. The market has spoken. There is still a place for C or Rust, but there’s also a place for Python and other interpreted languages. You can make good programs in both, but it’s a lot easier to make a garbage program in C.

          I’ve used at least 20 computer OSes dating back to the ’70s, and despite all your fearmongering, computers keep getting cheaper, easier to use, and, for the most part, faster. I’ve got old Macs and PCs and Linux boxes lying around from 20–30 years ago, and trust me, they aren’t faster or easier to use. There were some good OSes like AmigaOS, and windowing systems like FVWM, that were surprisingly responsive for the time, but Windows and MacOS were all pretty garbage until about Windows 7 and Mac OS X. And they cost $4000+ in today’s dollars. You can get laptops these days for $150.

          • DarkAri@lemmy.blahaj.zone
            link
            fedilink
            arrow-up
            1
            ·
            9 hours ago

            Python really is that much slower. It has actually come a long way in the past few years, but it’s still an interpreted, strongly typed language. You can use libraries written in C or Rust or something to make Python run much faster, but anything you write in actual Python is extremely slow. It can be okay for scripting, basically like Bash, but writing applications in it is not good unless it’s a small project made by one programmer that does one specific, useful thing.

            Software really is getting terrible. We are hitting a wall in terms of refining process nodes further, because we are at 2 nm and it’s really difficult to keep going, and there is already way too much terrible code out there dragging down really powerful systems. We are evolving backwards: boot times in the early 2000s on low-end hardware were a few seconds for Windows XP, and when I clicked an application, it either opened nearly instantly or within a couple of seconds. It was a much better operating system than Windows 10 ever will be.

            The issue is that even a single piece of Python or Java or Electron can completely saturate your memory bus and halve the speed of every operation you do. I had a PC a while back with spotty thermal paste, and opening Discord would overheat it lol.

            All I’m saying is that writing this type of code for production shouldn’t really be acceptable. It would be nice if we actually benefited from advancing computer technology, and new hardware weren’t just an excuse to write worse software. I think operating systems should warn the user when running terrible code: that this program is low quality and will slow down the system, or is taking as many resources as it can. We are in the age of 1000 W computers with billions of transistors being taken out by web pages and OS spyware. The standards are just far too low. There is too much terrible software being written because companies, many of them billion-dollar companies, are desperate to hire people who have no idea how to program in real languages instead of paying for real programmers or helping people learn those languages.

            Like I said, it’s bad for the user, it’s bad for the environment, it’s bad for your hard drives, and it’s bad for the economy. Not to go full Terry Davis on you, but computers should boot in under a second these days.