• NaibofTabr@infosec.pub · 1 day ago

    AI coding tools can write common, simple functions reasonably well, because there are lots of examples of those to steal from real programmers on the Internet. There is a large corpus of data to train with.

    AI coding tools can’t produce sophisticated, case-specific solutions very well, because there aren’t many examples of those for any given use case to steal from real programmers on the Internet. There is a small corpus of data to train with.

    AI coding tools can’t solve new problems at all, because there are no examples of those to steal from real programmers on the Internet. There is no corpus of data to train with.

    AI coding tools have already ingested all of the code available on the Internet to train with. There is no more new data to feed in. AI coding tools will not get substantially better than they are now. All of the theft that could be committed has been committed, which is why the AI development companies are attempting to feed generated training material into their models. Every review of this shows that it makes the output from generative models worse rather than better.
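
    That last claim (recursive training degrading output, often called model collapse) is easy to illustrate with a toy simulation. This is a minimal sketch in which the “model” is just a Gaussian fit, not a test of any real product:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Generation 0: "real" data from the true distribution.
    data = rng.normal(loc=0.0, scale=1.0, size=50)

    for generation in range(30):
        # "Train" a model: estimate mean and std from the current corpus.
        mu, sigma = data.mean(), data.std()
        print(f"gen {generation:2d}: mu={mu:+.3f}, sigma={sigma:.3f}")
        # The next generation trains only on this model's output,
        # i.e. generated data fully replaces the original corpus.
        data = rng.normal(loc=mu, scale=sigma, size=50)
    ```

    Each generation under-samples the tails of the previous one, so sigma typically drifts downward and the rare cases vanish first; the corpus narrows toward a bland center, which mirrors the degradation reported when generative models train on their own output.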

    Programming is not about writing code; that is what a manager thinks it is.
    Programming is about solving problems. Generative AI doesn’t think, so it cannot solve problems. All it can do is regurgitate material it has previously ingested that is hopefully close-ish to the problem you’re trying to solve at the moment - material written by a real, thinking human who solved that problem (or a similar one) at some point in the past.

    If you patronize a generative AI system like Claude Code, you are paying into, participating in, and complicit in the largest example of labor theft in history.

    • undeffeined@lemmy.ml · 1 day ago

      Programming is not about writing code… Programming is about solving problems

      Very well put. Good managers know that, but they are very rare… And top management the world over is completely bought into this snake oil; it makes me feel insane.

    • Scubus@sh.itjust.works · 21 hours ago

      I’m not entirely convinced this is accurate. I do see your point, and I had not considered that there is no more training data to use, but at the end of the day our current AI is just pattern recognition. Hence, couldn’t you use a hybrid system where you set up billions of use cases (translate point A to point B, apply a force such that object A rolls a specified distance, set up a neural network using backpropagation with 3 hidden layers, etc.) and then have two adversarial AIs: one that attempts to “solve” each use case by randomly trying stuff, and another that basically just says “you’re not doing good enough, and here’s why”. Once the first is doing a good job with that very specific use case, index it. Now when people ask for that specific use case, or for a larger problem that includes it, you don’t even need AI; you just plug in the already-solved solution. Your code base basically becomes AI filling out every possible question on Stack Overflow.
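
      As a sketch of what that loop might look like (every name here is a hypothetical stub, not any real system’s API), it reduces to: propose, criticize, retry, and index on success:

      ```python
      import random

      # Hypothetical stubs standing in for the two adversarial models:
      # a solver that proposes candidate programs, and a critic that
      # scores them and explains why they aren't good enough.
      def solver(task, feedback):
          return f"candidate_{random.randint(0, 999)} for {task!r}"

      def critic(task, candidate):
          score = random.random()  # a real critic would run tests, check constraints, etc.
          reason = "ok" if score >= 0.95 else "fails edge cases; try again"
          return score, reason

      solved_index = {}  # use case -> best known solution

      def solve_and_index(task, max_rounds=10_000):
          """Run the adversarial loop until the critic is satisfied, then index the result."""
          feedback = None
          for _ in range(max_rounds):
              candidate = solver(task, feedback)
              score, feedback = critic(task, candidate)
              if score >= 0.95:
                  solved_index[task] = candidate  # indexed: reusable with no AI at all
                  return candidate
          return None  # this use case never converged

      solve_and_index("set up a neural network with 3 hidden layers")
      ```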

      Obviously this isn’t actually coding with AI; at the end of the day you’re still doing all the heavy lifting. It’s effectively no different from how most coders code today, just stealing code from Stack Overflow XD. The only difference is that this Stack Overflow would basically be filled with every conceivable question, and if yours isn’t answered, you could just request that a new pair of adversarial AIs be set up to solve the new problem.
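
      Under that framing, answering a question is just a cache lookup with a fallback that spawns a fresh adversarial pair, continuing the hypothetical sketch above:

      ```python
      def answer(task):
          if task in solved_index:
              return solved_index[task]  # already solved: just plug it in
          # New problem: spin up a fresh solver/critic pair for it.
          return solve_and_index(task)
      ```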

      Secondarily, you are the first person to give me a solid reason why the current paradigm is unworkable. Despite my mediocre recall, I have spent most of my life studying AI, well before all this LLM stuff, so I like to think I was at least well educated on the topic at one point. I appreciate your response. I am somewhat curious about what architecture changes would be needed to allow for actual problem solving. The entire point of a neural network is to replicate the way we think, so why do current AIs only seem to be good at pattern recognition and not even the most basic problem solving? Perhaps the architecture is fine, but we simply need to train up generations of AIs that specifically focus on problem solving instead of pattern recognition?

      • pinball_wizard@lemmy.zip · 6 hours ago

        Perhaps the architecture is fine, but we simply need to train up generations of AIs that specifically focus on problem solving instead of pattern recognition?

        I mean, the architecture clearly isn’t fine. We’re getting very clever results, but we are not seeing even basic reasoning.

        It is entirely possible that AGI can be achieved within our lifetime. But there is substantial evidence that our current approach is a complete and total dead end.

        Not to say that we won’t use pieces of today’s solution. Of course we will. But something unknown, and clearly necessary for AGI, appears to be completely missing right now.