• holomorphic@lemmy.world

    Those models will almost certainly be essentially the same transformer architecture as any of the LLMs use, simply because transformers beat most other architectures in almost any field people have tried them in. An LLM is, after all, just a classifier with an unusually large set of classes (all possible tokens) which gets applied repeatedly.
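
    To make that concrete, here's a minimal greedy-decoding sketch (assuming a causal model that maps token ids to per-position logits; `model` and the shapes are illustrative, not any specific library's API):

    ```python
    import torch

    def generate(model, tokens, n_new):
        # tokens: (1, T) tensor of token ids
        for _ in range(n_new):
            logits = model(tokens)            # (1, T, vocab_size)
            next_logits = logits[:, -1, :]    # classify what the next token should be
            next_id = next_logits.argmax(dim=-1, keepdim=True)  # pick among all classes
            tokens = torch.cat([tokens, next_id], dim=1)        # feed it back in, repeat
        return tokens
    ```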

    • MajinBlayze@lemmy.world

      I’m not talking about the specifics of the architecture.

      To the layman, “AI” refers to a range of general-purpose language models that are trained on “public” data and possibly enriched with domain-specific datasets.

      There’s a significant material difference between using that kind of probabilistic language completion and using a model that directly predicts the results of a complex process (like what’s likely being discussed in the article).
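
      The difference shows up clearly in what the final layer of each kind of model outputs (the dimensions here are made up, purely to illustrate):

      ```python
      import torch.nn as nn

      # "Language completion": a classifier over every possible token,
      # applied one step at a time.
      lm_head = nn.Linear(4096, 100_000)   # hidden state -> token logits

      # "Predicting a process directly": a head that outputs the quantity
      # of interest itself, e.g. 3D coordinates per atom in a structure.
      structure_head = nn.Linear(4096, 3)  # hidden state -> (x, y, z)
      ```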

      It’s not specific to the article in question, but it is really important for people not to conflate these approaches.

    • FatCrab@slrpnk.net

      A quick search turns up that AlphaFold 3, which is what they’re using for this, is a diffusion architecture, not a transformer. It works more like the image generators than like the GPT-style text generators. It isn’t really the same thing as “the LLMs”.
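
      Schematically, the diffusion-style sampling loop looks like this (a rough sketch only; the step count, schedule, and `denoiser` interface are placeholders, not AlphaFold 3’s actual API):

      ```python
      import torch

      def sample(denoiser, shape, n_steps=50):
          x = torch.randn(shape)       # start from pure noise, e.g. atom coordinates
          for t in reversed(range(n_steps)):
              x = denoiser(x, t)       # refine the whole structure at once
          return x                     # no token-by-token generation anywhere
      ```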