• ulterno@programming.dev
    13 hours ago

    If something uses a lot of if/else statements to do stuff like play as a “COM” player in a game, it is called an Expert System.
    That is essentially what in-game “AI” used to be. It was not an LLM.

    Tools like clazy and clang-tidy are neither ML nor LLMs.
    They don’t rely on curve fitting or mindless grouping of data points.
    Their parameters are decided based on the programming language specification, and tokenisation is done directly using the features of the language. How the tokens are used is also determined by hard logic rather than fuzzy logic, and that is why the options you get in the completion list end up being valid syntax for said language.
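    To be clear, this is not how clang-tidy is actually built (it sits on Clang’s real parser); it is just a toy sketch of the rule-based idea: the token categories are written down by a human from the language spec, so every token the tool emits is valid by construction rather than learned from data.

```python
import re

# Toy rule-based tokeniser: the rules come from a (hypothetical)
# language specification, not from training on example code.
KEYWORDS = {"if", "else", "while", "return"}  # fixed by the spec

TOKEN_RULES = [
    ("NUMBER", re.compile(r"\d+")),
    ("IDENT",  re.compile(r"[A-Za-z_]\w*")),
    ("OP",     re.compile(r"[+\-*/=<>!]=?")),
    ("PUNCT",  re.compile(r"[(){};,]")),
]

def tokenize(source):
    tokens, i = [], 0
    while i < len(source):
        if source[i].isspace():
            i += 1
            continue
        for kind, rule in TOKEN_RULES:
            m = rule.match(source, i)
            if m:
                text = m.group()
                # reclassify identifiers that the spec reserves as keywords
                if kind == "IDENT" and text in KEYWORDS:
                    kind = "KEYWORD"
                tokens.append((kind, text))
                i = m.end()
                break
        else:
            # hard logic: anything outside the spec is rejected outright
            raise SyntaxError(f"unexpected character {source[i]!r}")
    return tokens

tokens = tokenize("if (x >= 2) return x;")
```

    Nothing here is fuzzy: the same input always yields the same tokens, and invalid input fails loudly instead of producing a plausible-looking guess.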


    Now if you are using Cursor for code completion, of course that is AI.
    It is not programmed using the features of the language; it is iterated on until it produces output that happens to match the features of the language.

    It is like putting a billion monkeys in front of typewriters, selecting the one that makes something Shakespeare-ish, and killing off all the others. Then you clone the selected one, and rinse and repeat.

    And that is why it takes a stupendously disproportionate amount of energy, time and money to train something whose output could often be produced more easily, and better, by a simple bash script.
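    That select-and-clone loop can be sketched as a toy genetic algorithm, essentially Dawkins’ “weasel program” (the target string and mutation rate here are illustrative, not anything an actual LLM does):

```python
import random

# "Monkeys at typewriters": clone the best attempt, mutate the
# clones, keep only the fittest, repeat until it looks Shakespeare-ish.
TARGET = "METHINKS IT IS LIKE A WEASEL"
CHARS = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def score(attempt):
    # how many characters already match the target
    return sum(a == t for a, t in zip(attempt, TARGET))

def mutate(parent, rate=0.05):
    # each character has a small chance of being retyped at random
    return "".join(random.choice(CHARS) if random.random() < rate else c
                   for c in parent)

best = "".join(random.choice(CHARS) for _ in TARGET)  # the first monkey
generations = 0
while best != TARGET:
    generations += 1
    # clone the survivor; keeping the parent means fitness never regresses
    brood = [best] + [mutate(best) for _ in range(100)]
    best = max(brood, key=score)  # "kill off" everyone else
```

    Note how much wasted work there is per generation: a hundred mutants are produced and thrown away for each tiny improvement, which is the energy argument in miniature.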

    • Legianus@programming.dev
      9 hours ago

      To be honest, I feel like what you describe in the second part (the monkey analogy) is more of a genetic algorithm than a machine learning one, but I get your point.

      Quick side note: I wasn’t including energy consumption in the discussion at all, and on that front ML-based algorithms, whatever form they take, will mostly consume more energy (assuming the “classical” algorithms aren’t completely inefficient). I admit I am not sure how much more (especially after training), but the LLMs at least, with their large vector/matrix-based approaches, eat a lot (I mean in the sense of cross-checking tokens across different vectors and such). Non-LLM ML may be much more power-efficient.
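      A rough pure-Python sketch of why that token cross-checking gets expensive (this is only the attention-score idea in miniature, with made-up toy embeddings): every token’s vector is compared against every other token’s, so the work grows quadratically with sequence length.

```python
# Toy attention-style cross-check: dot every token embedding
# against every other, giving an n x n score matrix.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def score_matrix(embeddings):
    # one row per query token, one column per key token
    return [[dot(q, k) for k in embeddings] for q in embeddings]

tokens = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.5]]
scores = score_matrix(tokens)
comparisons = len(tokens) ** 2  # 16 dot products for just 4 tokens
```

      Double the sequence length and the comparisons quadruple, which is one reason the matrix-heavy approaches eat so much compute compared to a single-pass rule-based tool.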

      My main point, however, was that people only remember AI from ~2022 onwards and forget about things from before (e.g. non-LLM ML algorithms) that were actively used in code completion. Obviously, there are things like ruff and clang-tidy (as you rightfully mentioned) and more that can work without any machine learning. Although I didn’t check whether there is literally none in them, I assume so.

      On the point of game “AI”, as in AI opponents: I wasn’t talking about that at all (though since DeepMind, they have tended to be a bit more ML-based as well, and better at games, see StarCraft 2, instead of only cheating to get an advantage).