In early June, shortly after the beginning of the Atlantic hurricane season, Google unveiled a new model designed specifically to forecast the tracks and intensity of tropical cyclones.
Part of the Google DeepMind suite of AI-based weather research models, the “Weather Lab” model for cyclones was a bit of an unknown for meteorologists at its launch. In a blog post at the time, Google said its new model, trained on a vast dataset that reconstructed past weather and a specialized database containing key information about hurricane tracks, intensity, and size, had performed well during pre-launch testing.
“Internal testing shows that our model’s predictions for cyclone track and intensity are as accurate as, and often more accurate than, current physics-based methods,” the company said.
Google said it would partner with the National Hurricane Center, an arm of the National Oceanic and Atmospheric Administration that has provided credible forecasts for decades, to assess the performance of its Weather Lab model in the Atlantic and East Pacific basins.
I think their point is that there’s no intelligence here. It’s a bunch of matrix multiplication, functions being executed against elements of vectors and matrices, convolutions, etc. All of it is math.
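To make that concrete, here’s a toy sketch of what a neural network “prediction” actually is under the hood: a couple of matrix-vector products and an elementwise function. The layer sizes and random weights are made up for illustration; this has nothing to do with Google’s actual model.

```python
import numpy as np

# Toy illustration: a "neural network" forward pass is nothing but
# matrix multiplication plus simple elementwise functions.
# Shapes and weights here are arbitrary, purely for demonstration.

rng = np.random.default_rng(0)

x = rng.standard_normal(8)            # input vector
W1 = rng.standard_normal((16, 8))     # first layer weights
b1 = rng.standard_normal(16)          # first layer bias
W2 = rng.standard_normal((4, 16))     # second layer weights
b2 = rng.standard_normal(4)           # second layer bias

h = np.maximum(0.0, W1 @ x + b1)      # matrix-vector product, then ReLU
y = W2 @ h + b2                       # another matrix-vector product

print(y)  # the "prediction": just the result of arithmetic
```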
“AI” is a meaningless term. With prior definitions of AI, an implementation of Dijkstra’s algorithm could be considered AI.
“Artificial intelligence” has had decades of use in videogames as a term describing many different imitations and appearances of intelligence, as well as the many stepping stones on the long road toward intelligence. Claiming it’s a meaningless term does a disservice to history. And something being math doesn’t make it any less real, else our own intelligence would be questionable; after all, sufficiently complicated math can represent our own brains, too.
I weep for what chatbots have done to the image of this field.
I am aware. I literally studied AI in university.
It has never had a meaningful definition. Many people used it to mean ML, while the common usage (the “AI” in a video game) meant something that could perform actions on behalf of a human. Pathfinding, which is one element of that in video games (see literally any NPC that moves in a game), is purely algorithmic, and a lot of people who used “AI” to refer to ML would disagree that Dijkstra’s algorithm is “AI”.
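For what it’s worth, here’s what that kind of “purely algorithmic” pathfinding looks like: a plain Dijkstra’s algorithm, the sort of thing game developers have long filed under “AI.” The level graph below is a made-up example, not from any particular game.

```python
import heapq

def dijkstra(graph, start):
    """Return the shortest known distance from `start` to every reachable node.

    `graph` maps each node to a list of (neighbor, edge_cost) pairs.
    """
    dist = {start: 0}
    heap = [(0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry; a shorter path was already found
        for neighbor, cost in graph.get(node, []):
            nd = d + cost
            if nd < dist.get(neighbor, float("inf")):
                dist[neighbor] = nd
                heapq.heappush(heap, (nd, neighbor))
    return dist

# An NPC working out how to reach each room from its spawn point:
level = {
    "spawn": [("hall", 1), ("vent", 4)],
    "hall": [("armory", 2), ("vent", 1)],
    "vent": [("armory", 1)],
    "armory": [],
}
print(dijkstra(level, "spawn"))  # {'spawn': 0, 'hall': 1, 'vent': 2, 'armory': 3}
```

No learning, no model, no data: just a priority queue and some bookkeeping.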
AI in pop culture generally meant some weird Terminator/RoboCop-esque replacement of people. We do not have, nor have we ever had, anything that does this, and the term more accurately used for that is AGI.
AI has always just been an opaque term meaning “something tech-related I don’t understand”. It’s just the tech word for “magic”.
Yes, I know what Dijkstra’s is. I’ve spent more than enough of my life in game development to know that. You don’t need to link it, and it wasn’t relevant to either of my points. I’m not, nor was I ever, arguing that people regularly called pathfinding algorithms “AI.”
Anyway: I find it befuddling why you’d claim that AI is an “opaque term” equivalent to technobabble right after describing its common usage, which would indicate that a great number of people find the word perfectly understandable and useful. I wouldn’t expect it to fit the category of “common usage” otherwise. And I’d further find it strange to believe that a term having multiple meanings is somehow to its discredit, if that’s what you’re suggesting. “AI” had a solid place in language well before chatbots took off, as have many words describing broad categories.