Why AI hasn’t made the smart home smarter

  • Tinidril@midwest.social
    2 hours ago

    The smart home has been broken for over a decade. From day one the goal was always to lock in users to an ecosystem and invade their privacy. Actually providing useful and reliable products didn’t even register as a goal.

    The only way to do decent home automation is with locally run Home Assistant and Zigbee or Z-Wave. It should only rely on the Internet for resources that are truly non-local, like weather reports.
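
    For anyone curious what that looks like in practice, here's a minimal sketch of a local-only Home Assistant automation. The entity IDs (`binary_sensor.hallway_motion`, `light.hallway`) are made-up placeholders; substitute your own Zigbee or Z-Wave devices. Nothing in this rule touches the Internet:

    ```yaml
    # Hypothetical example -- entity IDs are placeholders for your own
    # Zigbee/Z-Wave devices. The whole trigger -> action path runs on
    # the local Home Assistant instance, no cloud round-trip.
    automation:
      - alias: "Hallway light on motion"
        trigger:
          - platform: state
            entity_id: binary_sensor.hallway_motion
            to: "on"
        action:
          - service: light.turn_on
            target:
              entity_id: light.hallway
        mode: single
    ```

    If the radio is a local Zigbee coordinator (ZHA or Zigbee2MQTT), this keeps working even when your ISP is down.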

    Thread/Matter might also be becoming an option. At this point I’m still watching to see what they do with it.

  • SkyNTP@lemmy.ml
    5 hours ago

    Society has been steadily forgetting the importance of reliability, all in the name of convenience. And in the end, you get neither.

    “They don’t make it like they used to”. Sure. Sure. Old man yelling at clouds. Blah blah. But when your light switches stop working because of some overly complex system that requires the switching data to travel twice around the world just to fucking turn a light on (or an AI to invent 15 Python scripts and a mathematical proof just to add two integers together), you’ve got a really fucking fragile system.

    And you know what isn’t convenient? Fucking fragile products that break as soon as you touch them. Who the fuck wants a hammer made out of salami? Sure, it might look like a hammer, it might taste great, but it can’t drive a nail for shit. That’s a garbage product that belongs in the garbage.

    An LLM can tell me a (lame) joke. So can Bob. Bob can also turn on the lights, and is pretty good at that. But those things together don’t automatically mean an LLM is good at turning on lights. They are fragile, by design, like the salami is!

    Stay in your fucking lane tech companies.

  • atrielienz@lemmy.world
    8 hours ago

    I have a gripe with this article, and it’s the way their “expert” Riedl talks about AI: the anthropomorphic personification inherent in the language he uses.

    AI doesn’t think. It can’t overthink. It doesn’t “misunderstand”. It doesn’t understand. It doesn’t do context. So while I understand that this person is trying to communicate the differences between these two types of technology, this framing gives an unreasonable overestimation of the tech’s capabilities, leading some people to believe the tech is more than it is.

    Some people on another thread about the same article were upset that this writer bought a coffee machine with AI integration. But that’s to be expected of people who write about tech. They try that tech out. Experience it so they can write about it. See what it does. What it’s good at. What it’s bad at. This is how we get reviews.