• null_dot@lemmy.dbzer0.com · +7 / -2 · 1 day ago

    There’s loads to dislike about gen AI, but it’s not completely without virtue.

    Making generalisations like this is just lazy thinking.

  • dream_weasel@sh.itjust.works · +12 / -7 · 1 day ago

    I’m totally happy to self separate from people who make AI dislike a personality trait. This is bordering on deranged.

    Sure, if you chat me up and generative AI is writing the messages you can get lost. If you are generating complex SQL queries through an API written in another language? Whatever, get after it.

    The number of people who see a photoshopped picture and then go “oh my God, AI slop!” is getting ridiculous; take a breath and touch some grass.

  • Boomer Humor Doomergod@lemmy.world · +113 / -9 · 2 days ago

    I just cannot imagine forming a deep, lasting connection with someone who regularly interacts with a technology that’s kneecapping our collective attention spans

    This could have come straight from an article in the 80s about how she won’t date folks who watch television.

    Such a strange article, even if I agree with the premise.

      • Jax@sh.itjust.works · +31 · 2 days ago

        Especially considering all the other empirically proven ways that attention spans have been shortening for decades prior to this.

        10 bucks this woman has used Twitter, Facebook, Instagram, (insert social media here) a hell of a lot more than this guy has used ChatGPT.

        That being said, yes — you probably shouldn’t trust someone that puts blind faith in ChatGPT. But definitely not for that stupid reason.

        Edit: I mean, shit, wasn’t there an odd correlation between Spongebob’s release and attention spans shortening? I think the study was proven inaccurate in the sense that it negatively affected attention spans of children below the target demographic, but still — none of this is a new phenomenon associated with the advent of AI.

        • obsoleteacct@lemmy.zip · +6 · 2 days ago

          I’m in my forties and I’ve been hearing about how X, Y, and Z have been shortening attention spans literally since the 1980s.

          “Nintendo is ruining your attention span,” they shouted as we locked in for hours of Zelda, hand-drawing maps. Music videos were ruining people’s attention spans. CDs being able to skip tracks was going to ruin our attention spans. Instant messaging was definitely going to ruin our attention spans. When I was a kid they were saying our attention spans could be measured in seconds, so if it’s gotten worse I don’t know how anyone finishes a sentence.

          This year I heard how Cocomelon commands too much attention from toddlers. So I guess our brains can be undercooked or overcooked.

          The one thing I wouldn’t expect to be blamed for undermining our attention span would be a long form text-based back and forth conversation with a chatbot.

          • Jax@sh.itjust.works · +2 · 2 days ago

            I wasn’t arguing that AI doesn’t harm people’s brains, I was arguing that there are so many other things to be morally opposed to AI about that attention spans might as well not even be a consideration.

    • TubularTittyFrog@lemmy.world · +21 / -12 · 2 days ago

      It’s virtue signalling.

      ‘I do not participate in the unvirtuous activity, therefore I am superior to those who do.’

      people love to virtue signal based on what they do/don’t do, and do/don’t consume.

      dating profiles on dating apps are loaded with virtue signal nonsense, because it makes the signaler feel they are ‘above’ other people, or that such signalling will ‘prevent’ the non-virtuous heathens from trying to date them.

      • 4grams@lemmy.world · +1 · 1 day ago

        There are lines though, and it can get blurry. As an example, my family accuses me of virtue signaling because I refuse to engage in their racism. Because I refuse to judge people by the color of their skin or their religion, I am called every name in the book.

        Now, this is clearly not my problem, and I am not virtue signaling, but it’s a semi-loaded term these days, used CONSTANTLY by reactionaries.

        Maybe the article is a bit much, I’ll give you that. However, I can’t condemn her for her lines; no skin off my nose. I’m not upset by virtue signalling; if anything it’s good information on how to treat and interact with those people. The virtues my family signals, for example, have been quite valuable in deciding how I interact with them (as little as possible).

      • Jhex@lemmy.world · +10 · 2 days ago

        people love to virtue signal based on what they do/don’t do, and do/don’t consume.

        Hmmm that depends… if I don’t participate in sex trafficking, I think that’s beyond virtue signalling and into being plain old decent.

        Personally I have no issues watching TV (for example) but I wouldn’t want to date someone who tells me their #1 hobby is to watch TV.

        Particular to this case, I hate AI but I would not discard people in my life if they use AI… however, the people that claim out loud that ChatGPT (or any other) is their best friend would raise serious red flags in my head.

      • SpacetimeMachine@lemmy.world · +8 · 2 days ago

        I mean, I feel like signaling the things you value to people you might actually date is the entire purpose of dating apps? Like if you’re just doing it on social media to people at large then sure, I agree, but a dating app seems like the perfect place to say what you value in other people.

    • SCmSTR@lemmy.blahaj.zone · +16 / -1 · 2 days ago

      My hypothesis is that they target the most influential spaces that are not in their favor. So if a place would normally breed leftness, those places are under attack.

      Also, the nature of the fediverse is availability and permanence. So, if you can blanket the earth with your fecal matter automatically, it’s worth it.

      Honestly, until we as a society start crushing this behavior, these values, and this idealism in general, it’s only going to get worse. Enjoy.

    • wizblizz@lemmy.world (mod) · +8 / -1 · 2 days ago

      It’s pretty annoying, honestly. This is the FUCK AI space, not the iTs JUsT sOFtWare bro space.

  • NotMyOldRedditName@lemmy.world · +10 / -1 · 2 days ago

    So, at first from the headline I thought it was about online dating and the messages being sent back and forth were from chatgpt.

    But no, this is just about using it at all.

  • ryven@lemmy.dbzer0.com · +10 / -16 · 2 days ago

    Honestly, I don’t use ChatGPT but I’m under no delusions that my “original thoughts” are any better than its. Most of them are just lifted wholesale from books, TV, social media, conversations I had with people who are smarter than me, etc.

    • TBi@lemmy.world · +11 / -2 · 2 days ago

      Could be you’re having new ideas, but someone in the past just happened to think of something similar. Your idea might be unique, but you may convince yourself it isn’t due to the similarities.

    • Catoblepas@piefed.blahaj.zone · +7 · 2 days ago

      “Original thoughts” matter much less than what you do with them. The steam engine wasn’t an original thought at the time it was developed; the ancient Romans understood the principles of steam power nearly two millennia ago, but it took until the 19th century for someone to take that idea and develop it into industrialization.

  • vga@sopuli.xyz · +7 / -36 · 2 days ago

    Women racing to find new excuses to stay single. I mean I get it, I wouldn’t date men either. And not just because of the sexual attraction thing.

    But I agree with the premise too. It’s distasteful to mention what technology you used to gain some piece of information. It’s just that this is kind of an upper-class thing to do, so it’s kinda funny to me that an article in The Guardian is supporting such snobbery.

  • korendian@lemmy.zip · +21 / -63 · 2 days ago

    “Why I refuse to date someone who uses a calculator”. This is basically what this article equates to.

      • nfreak@lemmy.ml · +56 / -6 · 2 days ago

        Calculators also don’t uphold fascist agendas or drain entire cities’ worth of energy and water just to hallucinate wrong answers in the name of “convenience”

        • korendian@lemmy.zip · +2 / -17 · 2 days ago

          Do you have research to support your claim that AI in general upholds fascist ideologies (aside from those specifically tuned to do so, like Grok)? I don’t condone the current model of data-center-driven AI, but there is such a thing as self-hosted LLMs. Some Linux distros even have them available out of the box.

          • FireRetardant@lemmy.world · +19 · 2 days ago

            You included all the proof we need in your comment. Grok has been proven to do it, and others may be doing it as well. And if it isn’t promoting fascism, maybe it’s promoting some other ideology. The point is these models are not unbiased, and in many cases are being manipulated by their owners.

            • Bane_Killgrind@lemmy.dbzer0.com · +8 · 2 days ago

              It’s not a “maybe”: they have been researched and are highly manipulable.

              You are trusting the host of the model not to introduce biases. Getting a model to regurgitate its hard-coded prompt info has already happened, and different providers have been doing that.

      • vga@sopuli.xyz · +7 / -2 · 2 days ago

        I don’t think the premise of the article is related to how correct or incorrect LLM answers are.

      • Grimy@lemmy.world · +7 / -5 · 2 days ago

        Sure, but what kind of loser chooses his friends based on whether they use one or not?

        • WoodScientist@lemmy.world · +3 / -2 · 2 days ago

          People who use LLMs are just obnoxious to be around. They’re genuinely unpleasant people to interact with. Imagine casually asking someone’s opinion or thoughts on a topic, and they pull out a phone and ask the LLM. I didn’t ask the robot. I asked you. And the same applies to written communication. People addicted to LLMs are just mouthpieces for a soulless machine, fools who have sold their souls and become nothing more than robots themselves.

          And no, I don’t date clankers.

          • Grimy@lemmy.world · +4 · 2 days ago

            I think you may be concentrating on a very small percentage of users, and using that false impression and your own bias to pass broad faulty judgements.

            I use AI but I’ve never even thought of doing the behavior you speak of.

        • ToiletFlushShowerScream@lemmy.world · +4 / -3 · 2 days ago

          Well me. I’m that loser. People addicted to LLMs and their affirmative chatbot nonsense to guide their life and professional choices have proven to be poor colleagues and poorer mates. But I guess I’m the loser.

            • BeeegScaaawyCripple@lemmy.world · +3 / -1 · 2 days ago

              Nah, my idiot brother uses LLMs for everything in his legal practice. He has yet to get in trouble for hallucinated citations, but it’s merely a matter of time.

              He just got dumped by wife 4 and is looking for wife 5. Make of that what you will

      • korendian@lemmy.zip · +6 / -32 · 2 days ago

        Unless you hit the wrong key or do the problem wrong. That’s why you should always check your work, even when using a calculator. Same with AI.

        • Carnelian@lemmy.world · +28 / -3 · 2 days ago

          Checking your work with a calculator is just making sure you pressed the buttons correctly, possibly running through the process a second time if it’s important enough.

          “Checking your work” with an llm is literally just doing the thing you should have done initially when you wanted the answer you were looking for. Involving the llm at all is a totally nonsensical waste of time

          • korendian@lemmy.zip · +4 / -13 · 2 days ago

            It gives you a good starting point. If it’s something simple, like “What are the best night clubs in my area”, then it is useful. It may not be 100% accurate in that case, but you were going to go through them 1 by 1 anyway, and it can give you a quick summary of what they are, so you can decide if you want to look into them more. Or you can further narrow down your results in a way that a simple Google search couldn’t. I’m not saying it’s the be-all and end-all, but this whole “AI is totally useless” thing is just ridiculous.

            • Carnelian@lemmy.world · +11 / -3 · 2 days ago

              I would actually take it a step further and say it’s worse than useless tbh.

              What good is it to have a summary generated about night clubs when literally zero of the details generated can be presumed accurate? Like it will just full on ass pull basic details even down to the hours of operation. This constant confident misinfo actually harms your process.

              you were going to go through them 1 by 1 anyway

              And furthermore, we’re ignoring the fact that no, you were not. Nobody in the history of time has ever run a detailed comparative analysis on a massive list of nightclubs in their area for the purpose of optimizing their night out. You just look at the map for whatever’s gonna be cheapest to uber to and quickly check the reviews lol. Or more likely someone in your group started out wanting to check out a specific place, and that’s that.

              The mere concept of employing AI in this instance was delivered to you by a marketing firm. That’s the bread and butter of these companies: pretending a trivial, routine task that we’ve performed without friction for many years is actually a large project that justifies investment in and deployment of their bloated expensive product.

              You can go back and forth with me all day trying to contrive different random examples where you think maaaaaybe the AI saves you ten seconds of time if you squint, but in reality people who often use it just waste a bunch of their time floundering and walk away less informed than when they started

              • korendian@lemmy.zip · +2 / -9 · 2 days ago

                So you just basically admit that no amount of argument against your point will change your mind. So thanks for letting me know I’m wasting my time here. Have a good one.

                • ZDL@lazysoci.al · +1 · 1 day ago

                  You’re in a group that’s literally called “Fuck AI”.

                  Yes, you’re wasting your time here if you’re pitching AI. Go find a group that doesn’t have “Fuck” immediately preceding “AI”.

                • Carnelian@lemmy.world · +10 · 2 days ago

                  Nice attempt at a copout. Where did I admit anything similar to that?

                  My mind can very easily be changed with evidence. The problem with “AI” is all you have is marketing without substance.

                  The fact that users are wasting their time and ending up confidently ill informed is why I consider it worse than worthless. Literally every study indicates that people are less efficient when they adopt the tech (even the people who incorrectly self report that their numbers are better lmao). Companies across the board are failing to get ROI on this. The results speak.

                  So yes, I am unfortunately not interested in wasting all day on an endless string of improvised hypothetical situations written from the perspective of LLMs being great and then working backwards from there. It’s fruitless and irrational

            • snooggums@piefed.world · +9 / -2 · 2 days ago

              How does the AI know the “best night clubs”?

              It just regurgitates, with a level of randomness, the existing user reviews you would have gotten on a search result. Plus AI leaves a ton of opportunity to obscure advertising by influencing the summary output to favor certain locations in a way that is less obvious than ads and search result ordering.

              Yeah, it is great at getting an answer that looks plausible as long as you don’t care about accuracy. At that point just do a web search for nearby clubs and pick one randomly, the end result is the same except the latter isn’t driving up costs and increasing pollution for everyone else nearly as much as AI.

            • vrek@programming.dev · +1 · 2 days ago

              I actually find the opposite is true. I don’t mean it’s bad, but you get much better results if you give it a basic starting point.

              For example, at my last company we had a database where (to give a VERY small example) one table held device serial, test type, test result (pass/fail), and an ID pointing to another table. For each test, that other table had a series of rows with that ID holding all the details of the test. For example, unit 123 might be a circuit board with 5 test points, with voltages tested at various points, all at one test station.

              So you would go into table 1, select all lines with serial 123 and test type “electrical test”, copy the test ID, go into table 2 and select all results for that ID.

              One day my boss sent me a list of 500 serials and told me to pull all the details and present it in a table.

              Doing that manually would take hours. People with some SQL knowledge might recognize that you could use a subquery. The problem was that the list sent to me was just a table copied and sent over Teams; it would probably have taken at least half an hour to copy that into SSMS and correct all the formatting into valid SQL.

              I wrote a query that pulled the details for one serial using a subquery and pivoted the results, copied that and the list of serials into ChatGPT, and asked it to modify the query to include all the serials in the list in correct SQL format. It worked great (I got results for 500 unique serials, and spot-checking a random 10 of them gave the same results). It took maybe 5 minutes.

              Now, trying to get ChatGPT to do that from scratch would be painful, but with some idea of the structure of the data, an idea of what I wanted to do, and an example to follow, it worked wonderfully.
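
              (For comparison, a minimal sketch of the same IN-list expansion done deterministically in Python, without the LLM step. The table and column names are made up for illustration, and the original subquery-plus-PIVOT shape is simplified to a plain join; this only shows the kind of expansion ChatGPT was asked to perform.)

              ```python
              # Hypothetical sketch: expand a pasted list of serials into the single-serial
              # query template instead of asking ChatGPT to rewrite it. Table and column
              # names are invented for illustration.
              serials = ["123", "124", "125"]  # in practice, the ~500 serials pasted from Teams

              in_list = ", ".join(f"'{s}'" for s in serials)  # fine for a trusted internal list

              query = f"""
              SELECT r.serial, r.test_type, d.test_point, d.measured_value
              FROM test_results AS r
              JOIN test_details AS d ON d.test_id = r.test_id
              WHERE r.test_type = 'electrical test'
                AND r.serial IN ({in_list});
              """

              print(query)  # paste into SSMS, or run it via pyodbc/SQLAlchemy
              ```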

    • WoodScientist@lemmy.world · +7 · 2 days ago

      If someone uses a calculator to add 3 plus 7 together, I wouldn’t want to date them. I do have some standards.

      • Spacehooks@reddthat.com · +2 · 2 days ago

        The number of test points I lost over the years on the simple stuff while focusing on the hard stuff taught me never to underestimate what’s simple.

  • Grimy@lemmy.world · +26 / -39 · 2 days ago

    If you are ready to drop a friend because he used ChatGPT instead of Google, you were never a good friend to begin with. Wtf is this.

    • the_q@lemmy.zip · +31 / -8 · 2 days ago

      How is your assessment of a relationship any different from this author’s? You’re both setting the rules of the friendship’s quality based on your opinions, then presenting those opinions as something more.

      • Grimy@lemmy.world · +14 / -7 · 2 days ago

        I’m saying their rules are stupidly shallow.

        It’s all opinions unless you have the official big book of friendship rules open in front of you.

        Instead of saying “hihi, both of these are opinions”, why don’t you try to justify theirs instead?

        You aren’t actually making a point.

        • chloroken@lemmy.ml · +13 / -8 · 2 days ago

          Okay, I’ll justify their point: people who use LLMs are, on average, dumb as fuck. And more insidiously, they will get more dumb over time.

          • Grimy@lemmy.world · +8 / -8 · 2 days ago

            He was a good intelligent friend, and then became “dumb” because he used something you don’t like.

            Reminds me of teenagers hating each other for the brand of clothing they wear. Incredibly shallow.

            • chloroken@lemmy.ml · +6 / -4 · 2 days ago

              He “became dumb” when he started giving up on knowing the truth about things and instead believing the output of LLMs, yes. And anybody who doesn’t understand that and won’t listen to reason when it comes to how that stunts their intellectual development deserves to be ditched.

              If you stick by your friends regardless of what they do, cool, but some of us have standards and enough friends to enforce those standards without going lonely. It must suck knowing you’re stuck no matter what.

              • Grimy@lemmy.world · +3 / -3 · 2 days ago

                Bro, he used the thing to search for a venue for a show, he didn’t try to get it to spit out a physics paper.

                I dare you to send all your friends a message telling them you can’t be friends with them if they use AI. I bet a lot of them will respond with just question marks because it’s literally unhinged behavior.

                If it affects you this much, it’s because you have become way too emotional about it. Touch grass bro, AI isn’t the greatest evil since Hitler.

                • chloroken@lemmy.ml · +3 · 2 days ago

                  You’re getting ratioed because you are acting delusional about something rather important, FYI.

                • jjjalljs@ttrpg.network · +2 · 2 days ago

                  Bro, he used the thing to search for a venue for a show,

                  That’s an incredibly stupid use for an LLM. If someone’s that stupid in this way, they’re probably stupid in other ways. Some upstream decision making process is broken.

                  There are more fish in the sea. If you’re traveling and looking for a hotel, you could stay at the one with the broken windows. Maybe it’s fine! Maybe there’s a good reason, and the windows aren’t even in the guest rooms. But you could also just not bother, and stay someplace that doesn’t have obvious red flags.

                • Catoblepas@piefed.blahaj.zone · +2 / -1 · 2 days ago

                  It’s really fascinating to me how AI pushers inevitably fall back to accusing anyone who isn’t singing the praises of LLMs as being “too emotional”.

          • Grimy@lemmy.world · +2 / -3 · 2 days ago

            It isn’t much of a challenge if your statement is meaningless.

            Seems like you are falling back to rhetoric. I’m guessing you can’t actually justify their opinion because it is actually brutally shallow.

              • Grimy@lemmy.world · +2 / -3 · 2 days ago

                Do you typically start running around in circles when challenged?

                I mean, if you have a point to make, then make it. It’s just that your previous comment didn’t have any.

                • the_q@lemmy.zip · +3 / -1 · 2 days ago

                  I’ve been pretty clear with my responses, including my original one seeing that it was a question.

    • nfreak@lemmy.ml · +17 / -4 · 2 days ago

      Anyone actually using that shit is either ignorant and completely out of the loop, doesn’t care about the numerous ethical issues it has, or welcomes said issues with open arms.

      The only acceptable scenario would be someone who genuinely hasn’t learned about why this shit sucks so much, and is willing to completely drop it after they learn. Someone who’s aware and still uses it isn’t someone to associate with.

      • oopsgodisdeadmybad@lemmy.zip · +2 / -3 · 2 days ago

        I don’t actually think many people are open to changing anything, even with information that may indicate good reasons to. Even with good reasons proving they should.

        This entire argument could be had over every divisive societal split.

        At first they seem rational.

        “Letting common people learn how to read books is a bad idea.”

        “Listening to the radio is the down fall of this world.”

        “TV is a really bad idea.”

        “I don’t make friends with Nazis.”


        Then they run the gamut, each one sounding ever so slightly smarter than the last.

        “I don’t make friends with conservatives.”

        “I don’t make friends with Republicans.”


        Skipping ahead:

        “I don’t hangout with people who spend all their time watching (insert streaming service here)”

        “I don’t like people who don’t hate AI.”

        “I don’t date people who use AI.”

        “AI use will prevent me from being friends with someone.”

        This is just a smattering of divides, there are plenty in between all these if I had to make a spectrum of them, but you get the point.

        Anyway, somewhere on this spectrum you find your spot, and everything previous to that spot seems absolutely obvious, your exact spot seems reasonable, and everything beyond you seems utter lunacy.

        Currently I’m pretty firmly at the “all AI is bad AI” end of it. I think translating can be useful, though it still isn’t great, and I can easily see how a translation that perfectly preserves meaning would be useful. But I don’t see much actual use or value beyond that. And given its enormous power and water drain just to support something that might be valuable later, this approach is ass-backwards.

        Previous world-shaking technologies were easy to find value in pretty quickly. Language, the printing press, radio, TV… Sure, they could be used for brain rot, but information sharing is generally good (if it’s honest).

        But AI doesn’t really have a killer application (yet, anyway) and devoting this much to it before we figure out any potential way to use it that makes it worth what we’re giving up to use it is absolutely bonkers.

        I don’t personally currently know of anything that’s even possible that it can be used for, but I’m willing to hear use cases.

        Meanwhile we’ve got lazy thinkers who have less than zero reason to believe in God but still do. So just having evidence isn’t all there is to it. You have to be open enough to acknowledge and change with that evidence.

    • WoodScientist@lemmy.world · +5 / -2 · 2 days ago

      I don’t associate with filthy clankers. And if you use LLMs, you are a clanker, as you’ve sold your soul to the machine.

    • korendian@lemmy.zip · +2 / -3 · 2 days ago

      Good to know where most of the people here on the fediverse stand on this topic: they view people who use AI in any way as worthless humans.

      • Catoblepas@piefed.blahaj.zone · +3 · 2 days ago

        You’re the only one that seems to be saying that? Someone not wanting to date or be friends with you doesn’t mean you’re worthless. It’s unreasonable to expect to get along with everyone, or have everyone open to dating you.

  • Blue_Morpho@lemmy.world · +41 / -52 · 2 days ago

    The article’s author is ridiculously pretentious. Yes, AI can be garbage, but the premise was that they were disgusted because their friend used ChatGPT to search for a venue. Using AI as a search engine is no different than using Google.

    Using AI as a search engine has become almost a necessity because Google and Bing have destroyed the usefulness of search engines with ads.

    • snooggums@piefed.world · +62 / -6 · 2 days ago

      Using AI as a search engine is no different than using Google.

      Using AI is like having your friend who may or may not understand what you are looking for provide what they remember off the top of their head and if you get lucky they might have a link to what you are looking for.

      Google (not the AI part) is more like using a phone book where you can find the thing you are looking for and get the answers directly.

      AI search is fucking terrible and amplifies the problem with ads.

      • Scubus@sh.itjust.works · +3 / -3 · 2 days ago

        What? Have you used Google any time in the last year? It’s nothing like a phone book. It’s more similar to an infomercial: completely useless, a waste of time, MALICIOUSLY BAD AT ITS JOB, and it leaves you without anything useful for having interacted. It’s impossible to use. I don’t know what software you are using, but Google’s AI is a million times better than its search engine.

        Earlier today I googled something and got NO RESULTS AT ALL. Google tried to tell me it doesn’t exist. Yet their AI had the exact info I was looking for and even linked me to its source, which is what I had spent the last hour googling to try and find. I know you’re going to downvote because it goes against your narrative, but your narrative is factually inaccurate.

        • snooggums@piefed.world · +3 / -1 · 2 days ago

          No I haven’t used Google in years because the results started to suck.

          I’m talking about how the results are presented.

          • Scubus@sh.itjust.works · +1 / -4 · 2 days ago

            🤦

            I’m not using the software because it looks nice. I don’t care how it’s presented. I’m looking for results, which Google simply can’t do.

            Edit: forgot a word

      • korendian@lemmy.zip · +7 / -21 · 2 days ago

        I’m not saying that AI is without many serious flaws, but your simplification is highly inaccurate. If AI were based on getting “lucky”, it would not be a marketable product, but rather just a parlor trick. What it actually is, is a computer that can search the web much faster than you could and provide results based on that search. It is not 100% accurate, but with a direct web search or a more advanced model it’s pretty damn close for most purposes (especially something simple like the case in the article). It’s like calling someone who uses a calculator lazy. Ridiculous thing to say.

        • Leon@pawb.social · +25 / -1 · 2 days ago

          That’s a misrepresentation of what LLMs do. You feed them a fuckton of data and they, to oversimplify it a bit, put these concepts in a multi-dimensional map. Then based on input, it can give you an estimation of an output by referencing said map. It doesn’t search for anything, it’s just mathematics.

          It’s particularly easy to demonstrate with image models, where you could take two separate concepts, like say “eskimo dog” and “daisy” and add them together.

          When you query ChatGPT for something and it “searches” for it, it’s either fitted enough that it can reproduce a link directly, or it calls a script that performs a web search (likely using Bing) and compiles the result for you.

          You could do the same, just using an actual search engine.

          Hell, you could build your own “AI search engine” with an open weights model and a little bit of time.
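
          (A rough sketch of that DIY version, for the curious: compile search results into a prompt and hand it to a locally hosted open-weights model. Everything here is an assumption for illustration, including the Ollama-style local endpoint, the model name, and the shape of the search results; none of it comes from the thread.)

          ```python
          # Rough sketch of a DIY "AI search engine": stuff search results into a prompt
          # and send it to a locally hosted open-weights model. The endpoint and model
          # name below assume an Ollama-style local server; swap in whatever runner you use.
          import json
          import urllib.request

          def summarize_results(query: str, results: list[tuple[str, str, str]]) -> str:
              sources = "\n".join(f"- {title} ({url}): {snippet}" for title, url, snippet in results)
              prompt = f"Summarize what these search results say about: {query}\n{sources}"

              req = urllib.request.Request(
                  "http://localhost:11434/api/generate",  # assumed local endpoint
                  data=json.dumps({"model": "llama3", "prompt": prompt, "stream": False}).encode(),
                  headers={"Content-Type": "application/json"},
              )
              with urllib.request.urlopen(req) as resp:
                  return json.loads(resp.read())["response"]  # Ollama-style response field
          ```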

          • korendian@lemmy.zip · +1 / -7 · 2 days ago

            It depends on the model, who made it, and what you are asking. If it’s a well-known fact, like “who was president in 1992”, then it is math as you say, and it could be wrong, but it’s right more often than not. If it’s something more current and specific, like “what is the best Italian restaurant in my area”, then it does in fact do the search for you, using Google Maps, reviews, and other data.

          • korendian@lemmy.zip · +4 / -11 · 2 days ago

            They’re accurate enough for simple questions like “When was Bill Clinton president?” Go ahead and prove me wrong: ask that question to an AI and show me one that gets it wrong.

            • chloroken@lemmy.ml · +9 / -1 · 2 days ago

              “Accurate” and “accurate enough” have completely different meanings. Calculators are not “accurate enough”, they are accurate, and the idea that you’re conflating the two notions is exactly why LLMs are useless for most things people employ them for.

              • korendian@lemmy.zip · +2 / -5 · 2 days ago

                I’m not conflating the two notions. I have said that they are not completely accurate, but they are absolutely accurate enough. The gap between the experience of those who actually use AI and those who just regurgitate sensationalized headlines is really very clear. If you think AI is literally “useless”, then you are not living in reality.

                • chloroken@lemmy.ml · +3 · 2 days ago

                  You are indeed conflating the two ideas, and I said “useless for most things they’re utilized for”, but if you quoted the entire sentence your argument would fall apart and you realized that.

      • Blue_Morpho@lemmy.world · +6 / -15 · 2 days ago

        Google (not the AI part) is more like using a phone book where you can find the thing you are looking for and get the answers directly.

        That’s the ideal, but the reality today is that the results aren’t what you searched for but what companies paid for you to see, even when it isn’t relevant.

        AI doesn’t yet have ads, which is why it is useful. They are working hard to enshittify AI, but for right now it’s better than Google at search.

    • Axolotl@feddit.it · +19 / -1 · 2 days ago

      Google and Bing are not the only search engines though, use something else like DuckDuckGo, Ecosia or Searx

    • Get_Off_My_WLAN@fedia.io · +16 · 2 days ago

      I think the point of the article people seem to be missing is that she doesn’t like how people are letting themselves be lazy to the point that they want to offload any and all of their thinking and creativity to an LLM, and not being able to see a problem with that can be quite a turn-off.

      • Blue_Morpho@lemmy.world · +3 / -3 · 2 days ago

        It wasn’t “This person used an Internet service to write me a poem.” It was “This person searched for a winery using an Internet service, so I won’t date them.”

        There’s an absolutely enormous difference between the two use cases.

        • Catoblepas@piefed.blahaj.zone · +8 · 2 days ago

          If you read the article, it also talks about people using ChatGPT for online dating.

          Ali Jackson, a dating and relationship coach based in New York, uses ChatGPT for some tasks – but she is not an evangelist. In the past six months or so, she says “every one” of her clients has come to her complaining about “chatfishing” or people who use AI to generate everything on their dating apps – all the way down to the DMs they send.

          I wouldn’t consider any kind of relationship (romantic or not) with someone that didn’t even want to talk with me.

          • Blue_Morpho@lemmy.world · +1 · 2 days ago

            I agree. But the first paragraph was “I wouldn’t date anyone who picked a winery using the help of an Internet service.”

      • Grimy@lemmy.world · +4 / -5 · 2 days ago

        Do you drop a friend if he sets up a route in Google Maps instead of using a paper map?

        It has been shown that using Google Maps actually reduces your ability to get around and think for yourself when it comes to orientation, too.

        I think a lot in the comments are missing the point. It’s okay to think this, but letting it affect your friendships in such a way just makes you a shitty friend.

        • Get_Off_My_WLAN@fedia.io · +4 · 2 days ago

          I know, but I understand and acknowledge that using Google Maps makes my natural navigation ability worse.

          Similarly with not being able to remember phone numbers because of saving contact info.

          It’d be problematic if I pretended these aren’t issues.

          (But I also don’t think AI is the level of usefulness of Google Maps or even the contacts app on your phone.)

          The friends in the article seem to still be friends; they’ll just be the target of a little teasing for asking AI instead of just thinking, or even Googling it.

          In the context of dates, people have just met, so I don’t know why people keep talking about dropping friends.

          • Grimy@lemmy.world · +2 · 2 days ago

            if my future spouse came to me with wedding input courtesy of ChatGPT, there would be no wedding.

            I think the author’s position is actually very extreme tbh. It’s talking about dropping a fiance.

            If someone drops me, fiance or friend, because of what software I choose to use, I would really be questioning if they ever cared for me. I personally can’t think of anything so mundane I would drop someone over. It seems borderline unthinkable.

            Teasing is fine but that’s not what the headline or even the article seems to be about for the most part.

            I think it’s okay to be aware of the issues, but this seems to be about choosing friends based on this. It seems very wrong, like being told by a vegan that we can’t be friends anymore because I eat meat.

    • very_well_lost@lemmy.world · +9 · 2 days ago

      Using AI as a search engine has become almost a necessity because Google and Bing have destroyed the usefulness of search engines with ads.

      What? Using AI for search is even worse than using a conventional search engine. All the LLM is doing is summarizing data that it did a Google search to get, and its summarization obscures the obvious ads and astroturfing that’s easy to spot when you’re doing the search yourself.

      AI is complete garbage for search unless you know that all of the data you’re searching through is accurate and trustworthy. Data from the public Internet is very much not that.

      • Blue_Morpho@lemmy.world · +1 / -4 · 2 days ago

        LLMs don’t read the SEO keywords and then give you a result filtered through Google’s AdSense. LLMs read absolutely everything, and the results are (as of now) not filtered by who paid the most to show you a particular result.

        • very_well_lost@lemmy.world · +5 · 2 days ago

          LLMs don’t read the SEO keywords and then give you a result filtered through Google’s AdSense.

          Maybe not, but if you don’t think people are already doing “AI optimization” to get AI search tools to prefer their shitty content, then I have a trillion dollar data center I’d like to sell you.

          • Blue_Morpho@lemmy.world · +1 / -1 · 2 days ago

            Yeah, it’s in the news that AI companies are working to add ads. And while SEOs are trying, it’s not like Google’s algorithm, which can be easily gamed. Google used the number of links to a URL as a measure of quality. AIs train by ingesting the entire contents of the internet. They don’t care what is popular or what keywords are in the HTML title. It’s only a chain of text based on the probability of the next token. It’s much harder to game a system where everything is read, not just hyperlinks and keywords.

            • very_well_lost@lemmy.world · +4 · 2 days ago

              I think you’re misunderstanding how AI search actually works. When you ask it to do something timely like “find me a good place to eat”, it’s not looking through its training data for the answer. There might be restaurant reviews in the training data, sure, but that stuff goes stale extremely quickly, and it’s way too expensive to train new versions of the model frequently enough to keep up with that shifting data.

              What they do instead is a technique called RAG, retrieval-augmented generation. With RAG, data from some other system (a database, a search engine, etc) is pushed into the LLM’s context window (basically its short-term memory) so that it can use that data when crafting a response. When you ask AI for restaurant reviews of whatever, it’s just RAGing in Yelp or Google data and summarizing that. And because that’s all it’s doing, the same SEO techniques (and paid advertising deals) that push stuff to the top of a Google search will also push that same stuff to the front of the AI’s working memory. The model’s own training data guides it through the process of synthesizing a response out of that RAG data, but if the RAG data is crap, the LLM’s response will still be crap.
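
              (To make that context-stuffing step concrete, a small sketch below. The function name, snippet format, and character budget are all invented; the point is just that whatever the retriever returns, in whatever order it ranks it, is exactly what the model gets to summarize.)

              ```python
              # Hypothetical RAG assembly step: retrieved results go into the prompt verbatim,
              # in ranked order, until a context budget runs out. If the retriever's top hits
              # are ads or SEO spam, those are what the model ends up summarizing.
              def build_prompt(question: str, results: list[dict], max_chars: int = 4000) -> str:
                  parts, used = [], 0
                  for r in results:  # already in the search engine's ranking order
                      snippet = f"{r['title']} ({r['url']}): {r['snippet']}"
                      if used + len(snippet) > max_chars:
                          break  # budget spent; lower-ranked results never reach the model
                      parts.append(snippet)
                      used += len(snippet)
                  context = "\n".join(parts)
                  return f"Answer using only the sources below.\n\n{context}\n\nQuestion: {question}"
              ```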

              • ZDL@lazysoci.al · +2 · 1 day ago

                Further, you can inject more text into the LLMbecile’s hidden prompt to cause some things to show up more often. Think Grok’s weird period where it was attaching the supposed plight of white people in South Africa into every query, but more subtle.

    • chloroken@lemmy.ml · +4 / -4 · 2 days ago

      It’s much different. If you can’t tell why, you’re not getting a date.

    • Randomgal@lemmy.ca · +7 / -9 · 2 days ago

      The fact that their dating seems to be defined by a single superficial, out-of-context choice about someone’s use of technology, meaningless in a relationship, kinda gave it away there.