In early June, shortly after the beginning of the Atlantic hurricane season, Google unveiled a new model designed specifically to forecast the tracks and intensity of tropical cyclones.

Part of the Google DeepMind suite of AI-based weather research models, the “Weather Lab” model for cyclones was a bit of an unknown for meteorologists at its launch. In a blog post at the time, Google said its new model, trained on a vast dataset that reconstructed past weather and on a specialized database of key information about hurricane tracks, intensity, and size, had performed well during pre-launch testing.

“Internal testing shows that our model’s predictions for cyclone track and intensity are as accurate as, and often more accurate than, current physics-based methods,” the company said.

Google said it would partner with the National Hurricane Center, an arm of the National Oceanic and Atmospheric Administration that has provided credible forecasts for decades, to assess the performance of its Weather Lab model in the Atlantic and East Pacific basins.

  • katy ✨@piefed.blahaj.zone · 19 hours ago

    who could have predicted that hurricanes, which have been getting worse and worse every year, would be worse this year

  • Mereo@piefed.ca · 1 day ago

    Sigh. It’s not AI; it’s a machine learning algorithm. Nevertheless, this is the right use of the technology. Machine learning is all about finding correlations in big data, and this is a good example of that.

      • Mereo@piefed.ca · 22 hours ago

        It’s not AI. LLMs are not intelligent, they do not think. It’s only a marketing term.

        • t3rmit3@beehaw.org · 5 hours ago

          How precisely does human thought operate that is distinct from pattern-recognition, inference, and pattern output? I ask this rhetorically, because we don’t actually have a proven model of how our own intelligence functions.

          I agree that neural networks are obviously not AGI (which requires consciousness), but I think the visceral “this isn’t intelligence” reactions I see tend to be more about the belief that human intelligence is special or unique. We know now that humans aren’t really that distinct from other animals in our ability to think, even ones that we would normally assume are “reaction-driven,” like insects.

          Unless we can prove that we ourselves are not just really really complex calculators that do pattern-matching, inference, and reproduction, we can’t actually assert that machine learning is not a rudimentary version of intelligence.

        • LukeZaz@beehaw.org · 20 hours ago

          Are you commenting on AI as we knew it before LLMs entered the picture, or AI as companies refer to it today? Between your comments, I can’t tell.

          Personally, I’d argue that ML qualifies as AI if we’re using the former definition, but not if we’re using the latter, if only because the latter is a horrifically useless corporate buzzword that has no place in any sane human lexicon.

          • TehPers@beehaw.org · 20 hours ago

            I think their point is that there’s no intelligence here. It’s a bunch of matrix multiplication, functions being executed against elements of vectors and matrices, convolutions, etc. All of it is math.

            “AI” is a meaningless term. With prior definitions of AI, an implementation of Dijkstra’s algorithm could be considered AI.
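The claim above, that model inference is "all math", can be sketched in a few lines of NumPy: a single dense neural-network layer is nothing more than a matrix-vector product plus a nonlinearity. The shapes and values here are illustrative, not taken from any real model.

```python
import numpy as np

rng = np.random.default_rng(0)

# A single dense layer: output = activation(W @ x + b).
W = rng.standard_normal((3, 4))   # weights: 3 outputs, 4 inputs
b = rng.standard_normal(3)        # biases
x = rng.standard_normal(4)        # input vector

relu = lambda v: np.maximum(v, 0.0)  # elementwise max(0, v)
y = relu(W @ x + b)                  # the entire "inference" step

print(y.shape)  # (3,)
```

Stacking many such layers (with learned `W` and `b`) is, structurally, all a feed-forward network does; the debate is over whether that arithmetic constitutes intelligence.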

            • LukeZaz@beehaw.org · 15 hours ago

              Artificial intelligence as a term has had decades of use in videogames as a word to describe many different imitations and appearances of intelligence, as well as the many stepping stones on the long road toward intelligence. Claiming it was a meaningless term is doing a disservice to history. And something being math doesn’t make it any less real, else our own intelligence would be questionable; after all, sufficiently complicated math can represent our own brains, too.

              I weep for what chatbots have done to the image of this field.

              • TehPers@beehaw.org · 2 hours ago

                Artificial intelligence as a term has had decades of use in videogames as a word to describe many different imitations and appearances of intelligence, as well as the many stepping stones on the long road toward intelligence.

                I am aware. I literally studied AI in university.

                Claiming it was a meaningless term is doing a disservice to history.

                It has never had a meaningful definition. Many people used it to mean ML, while the common usage (the “AI” in a video game) meant something that could perform actions on behalf of a human. Pathfinding, which is one element of that in video games (see literally any NPC that moves in a game), is purely algorithmic, and a lot of people who used “AI” to refer to ML would disagree that Dijkstra’s algorithm is “AI”.

                AI in pop culture generally meant some weird Terminator/RoboCop-esque replacement of people. We have never had anything that does this, and the term more accurately used for that concept is AGI.

                AI has always just been an opaque term meaning “something tech-related I don’t understand”. It’s just the tech word for “magic”.
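The pathfinding point above can be made concrete: the "AI" that moves an NPC is often just Dijkstra's algorithm over a weighted graph. A minimal, generic sketch follows; the `rooms` graph and node names are invented for illustration.

```python
import heapq

def dijkstra(graph, start, goal):
    """Cheapest path via Dijkstra's algorithm.

    `graph` maps each node to a list of (neighbor, edge_cost) pairs.
    Returns (total_cost, path), or (float('inf'), []) if unreachable.
    """
    # Priority queue of (cost_so_far, node, path_taken).
    frontier = [(0, start, [start])]
    visited = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, edge_cost in graph.get(node, []):
            if neighbor not in visited:
                heapq.heappush(frontier, (cost + edge_cost, neighbor, path + [neighbor]))
    return float("inf"), []

# A small map an NPC might navigate: nodes are rooms, weights are distances.
rooms = {
    "spawn": [("hall", 1), ("yard", 4)],
    "hall": [("yard", 1), ("vault", 5)],
    "yard": [("vault", 1)],
}
cost, path = dijkstra(rooms, "spawn", "vault")
print(cost, path)  # 3 ['spawn', 'hall', 'yard', 'vault']
```

Nothing here learns or infers; it deterministically exhausts cheapest options first, which is the sense in which such game "AI" is purely algorithmic.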

                • LukeZaz@beehaw.org · 22 seconds ago

                  I am aware. […] Pathfinding, which is one element of that in video games (see literally any NPC that moves in a game), is purely algorithmic, and a lot of people who used “AI” to refer to ML would disagree that Dijkstra’s algorithm is “AI”.

                  Yes, I know what Dijkstra’s is. I’ve spent enough of my life in game development to know that. You don’t need to link it, and it wasn’t relevant to either of my points. I’m not, nor was I ever, arguing that people regularly called pathfinding algorithms “AI.”

                  Anyway: I find it befuddling that you’d claim AI is an “opaque term” equivalent to technobabble right after describing

                  the common usage (the “AI” in a video game) meant something that could perform actions on behalf of a human

                  …which would indicate that a great number of people find the word perfectly understandable and useful. I wouldn’t expect it to fit the category of “common usage” otherwise. And I’d further find it strange to believe that a term having multiple meanings is somehow to its discredit, if that’s what you’re suggesting. “AI” had a solid place in language well before chatbots took off, as have many words describing broad categories.