In early June, shortly after the beginning of the Atlantic hurricane season, Google unveiled a new model designed specifically to forecast the tracks and intensity of tropical cyclones.

Part of the Google DeepMind suite of AI-based weather research models, the “Weather Lab” cyclone model was a bit of an unknown for meteorologists at its launch. In a blog post at the time, Google said the new model, trained on a vast dataset of reconstructed past weather and a specialized database of key information about hurricane tracks, intensity, and size, had performed well during pre-launch testing.

“Internal testing shows that our model’s predictions for cyclone track and intensity are as accurate as, and often more accurate than, current physics-based methods,” the company said.

Google said it would partner with the National Hurricane Center, an arm of the National Oceanic and Atmospheric Administration that has provided credible forecasts for decades, to assess the performance of its Weather Lab model in the Atlantic and East Pacific basins.

  • Mereo@piefed.ca · 1 day ago

    Sigh. It’s not AI; it’s a machine learning algorithm. Nevertheless, this is the right use of the technology. Machine learning is all about finding correlations in big data, and this is a good example of that.

      • Mereo@piefed.ca · 19 hours ago

        It’s not AI. LLMs are not intelligent; they do not think. “AI” is only a marketing term.

        • t3rmit3@beehaw.org · 2 hours ago

          How, precisely, does human thought operate in a way that is distinct from pattern recognition, inference, and pattern output? I ask this rhetorically, because we don’t actually have a proven model of how our own intelligence functions.

          I agree that obviously neural networks are not AGI (which requires consciousness), but I think the visceral “this isn’t intelligence” reactions I see tend to be more about the belief that human intelligence is special or unique. We know now that humans aren’t really that distinct from other animals in our ability to think, even ones that we would normally assume are “reaction-driven” like insects.

          Unless we can prove that we ourselves are not just really really complex calculators that do pattern-matching, inference, and reproduction, we can’t actually assert that machine learning is not a rudimentary version of intelligence.

        • LukeZaz@beehaw.org · 17 hours ago

          Are you commenting on AI as we knew it before LLMs entered the picture, or AI as companies refer to it today? Between your comments, I can’t tell.

          Personally, I’d argue that ML qualifies as AI if we’re using the former definition, but not if we’re using the latter, if only because the latter is a horrifically useless corporate buzzword that has no place in any sane human lexicon.

          • TehPers@beehaw.org · 17 hours ago

            I think their point is that there’s no intelligence here. It’s a bunch of matrix multiplication, functions being executed against elements of vectors and matrices, convolutions, etc. All of it is math.

            “AI” is a meaningless term. With prior definitions of AI, an implementation of Dijkstra’s algorithm could be considered AI.
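            For example, a textbook Dijkstra sketch (illustrative Python, not any particular library’s implementation) is nothing but a priority queue and addition, yet classic AI textbooks file this kind of graph search under “AI”:

```python
import heapq

def dijkstra(graph, source):
    """Shortest-path distances from source in a weighted graph.

    graph: dict mapping node -> list of (neighbor, weight) pairs.
    No learning, no inference -- just a heap and arithmetic.
    """
    dist = {source: 0}
    heap = [(0, source)]  # (distance-so-far, node)
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry; a shorter path was already found
        for neighbor, weight in graph.get(node, []):
            new_d = d + weight
            if new_d < dist.get(neighbor, float("inf")):
                dist[neighbor] = new_d
                heapq.heappush(heap, (new_d, neighbor))
    return dist
```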

            • LukeZaz@beehaw.org · 12 hours ago

              Artificial intelligence as a term has had decades of use in videogames, describing many different imitations and appearances of intelligence, as well as the many stepping stones on the long road toward it. Claiming it is a meaningless term does a disservice to that history. And something being math doesn’t make it any less real; otherwise our own intelligence would be questionable too, since sufficiently complicated math can represent our own brains as well.

              I weep for what chatbots have done to the image of this field.