• towerful@programming.dev
    3 days ago

    Programming isn’t about syntax or language.
    LLMs can’t do problem solving.
    Once a problem has been solved, the syntax and language are easy.
    Reasoning about the problem is the hard part.

    Take the classic case of "how many 'r's in 'strawberry'": LLMs would state there are 2.

    Just check Google's AI Mode.
    The strawberry problem was found and reported on, and has been specifically solved.

    Prompted "how many 'r's in the word 'strawberry'":

    There are three 'r’s in the word ‘strawberry’. The letters are: S-T-R-A-W-B-E-R-R-Y.

    Prompted "how many 'c's in the word 'occurrence'":

    The word “occurrence” has two occurrences of the letter ‘c’.

    So, the specific case has been solved. But not the problem.
    In fact, I could slightly alter my prompt and get either 2 or 3 as the answer.
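    The general problem really is trivial once you write code instead of asking an LLM to eyeball tokens. A minimal sketch (the words and expected counts are just the two examples from this thread):

    ```python
    def count_letter(word: str, letter: str) -> int:
        """Count occurrences of a single letter in a word, case-insensitively."""
        return word.lower().count(letter.lower())

    # 'strawberry' -> s-t-r-a-w-b-e-r-r-y: three 'r's
    print(count_letter("strawberry", "r"))  # 3

    # 'occurrence' -> o-c-c-u-r-r-e-n-c-e: three 'c's, not two
    print(count_letter("occurrence", "c"))  # 3
    ```

    A one-line solution that works for every word and every letter, which is exactly the kind of general reasoning the LLM's pattern-matching doesn't give you.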

    • Part4@infosec.pub
      3 days ago

      None of this contradicts anything in my post.

      Edit - but I will add that the AI agent is written to manage the limitations of the LLM: to do, in a very loose sense, the kind of 'thinking' (they don't really think) that the LLM can't do on its own. That briefly addresses the point in your post.