I grew up in TN, but I’ve never been to Oak Ridge and never knew much about it. I was just glancing at the Wikipedia page out of curiosity and came across this odd tidbit in the History section:

A popular legend holds that John Hendrix (1865-1915), a largely unknown local man, predicted the creation of the city of Oak Ridge around 40 years before construction on the project began. Hendrix lacked any formal education and was a simple logger for much of his life. Following the death of his youngest daughter, Ethel, to diphtheria, and the subsequent departure of his wife and three remaining children, Hendrix began hearing voices in his head. These voices urged him to stay in the woods and pray for guidance for 40 days and 40 nights, which Hendrix proceeded to do. As the story is told, following these 40 days spent in rugged isolation, Hendrix began seeing visions of the future, and he sought to spread his prophetic message to any who would listen.[19] According to published accounts,[20] one vision that he described repeatedly was a description of the city and production facilities built 28 years after his death, during World War II.

The version recalled by neighbors and relatives reported:

In the woods, as I lay on the ground and looked up into the sky, there came to me a voice as loud and as sharp as thunder. The voice told me to sleep with my head on the ground for 40 nights and I would be shown visions of what the future holds for this land… And I tell you, Bear Creek Valley someday will be filled with great buildings and factories, and they will help toward winning the greatest war that ever will be. And there will be a city on Black Oak Ridge and the center of authority will be on a spot middle-way between Sevier Tadlock’s farm and Joe Pyatt’s Place. A railroad spur will branch off the main L&N line, run down toward Robertsville and then branch off and turn toward Scarborough. Big engines will dig big ditches, and thousands of people will be running to and fro. They will be building things, and there will be great noise and confusion and the earth will shake. I’ve seen it. It’s coming.

Hendrix, in light of his tales of prophetic visions, was considered insane by most and at one point was institutionalized. His grave lies in an area of Oak Ridge now known as the Hendrix Creek Subdivision. There are ongoing concerns over the preservation of his gravestone, as the man who owns the lot adjacent to the grave wishes to build a home there, while members of the Oak Ridge Heritage and Preservation Association are fighting to have a monument placed on the site of his grave.

https://web.archive.org/web/20071025005049/http://www.oakridger.com/stories/031506/com_20060315023.shtml

  • MotoAsh@piefed.social · 1 day ago

    Interesting that ONE of his predictions sorta came true, but the monument thing is CRAZY from modern people!

    Humans REALLY need to grow past glorifying psychopaths who happen to have ONE correct hallucination out of thousands.

    … Y’know, everyone sucking off “AI” while it’s still wrong a great number of times kinda makes sense, when you factor in how fucking stupid most people are…

    • masterspace@lemmy.ca · 1 day ago

      … Y’know, everyone sucking off “AI” while it’s still wrong a great number of times kinda makes sense, when you factor in how fucking stupid most people are…

      Everyone who clowns on AI for being wrong sometimes sounds like my hysterical middle school teachers talking about how you can’t trust Wikipedia because anyone can edit it.

      There are lots of systems that are known to produce errors and rely on error correction mechanisms to accommodate them. The RAM in satellites is constantly bombarded with cosmic rays that randomly flip bits, but it accounts for this with error-correcting memory. At a simplified level, error correction works like this: instead of writing each bit once, you write three copies of it; to read a bit back, you check all three copies and take the majority vote. Real ECC memory uses more sophisticated math so that it only has to store 8 check bits for every 64 data bits, but that is the general principle of error correction.
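      The triple-redundancy scheme described above can be sketched in a few lines of Python (a toy illustration of majority voting, not how real ECC hardware is implemented):

```python
def write_bit(bit):
    """Store three copies of a single bit (triple modular redundancy)."""
    return [bit, bit, bit]

def read_bit(copies):
    """Recover the bit by majority vote, masking a single flipped copy."""
    return 1 if sum(copies) >= 2 else 0

stored = write_bit(1)         # [1, 1, 1]
stored[0] ^= 1                # a "cosmic ray" flips one copy -> [0, 1, 1]
assert read_bit(stored) == 1  # the majority vote still recovers the original bit
```

      Real ECC memory uses Hamming-style codes so the overhead is 8 extra bits per 64 rather than 3x the storage, but the majority vote shows the core idea.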

      Similarly, quantum computers have been proven to have inherent fluctuations and unpredictability in their results due to the underlying nature of quantum mechanics. But they are still so much faster at solving certain problems that you can run them multiple times, discard the outliers, and still get your answer orders of magnitude faster than a classical computer.
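      The run-it-several-times strategy can be sketched the same way; here `noisy_compute` is a hypothetical stand-in for any fast-but-unreliable computation (a toy Python illustration, not an actual quantum workload):

```python
import random
from collections import Counter

rng = random.Random(0)  # seeded for reproducibility

def noisy_compute(x, error_rate=0.2):
    """Hypothetical fast-but-noisy computation: usually returns the right
    answer (x * 2), but occasionally returns a corrupted result."""
    if rng.random() < error_rate:
        return x * 2 + rng.randint(1, 5)  # corrupted answer
    return x * 2

def reliable_compute(x, trials=15):
    """Run the noisy computation many times and keep the most common answer."""
    results = [noisy_compute(x) for _ in range(trials)]
    return Counter(results).most_common(1)[0][0]

assert reliable_compute(21) == 42
```

      Because the errors scatter across different wrong values while the correct answer repeats, the majority answer is almost always right, so repeated runs of a fast noisy process can still beat one slow exact one.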

      AI being wrong sometimes is like this, and that is why not everyone thinks it’s a huge deal. Copilot web can still parse and search the nightmarish spider web of Salesforce docs and give me an answer orders of magnitude faster than I can find one, even using Google. It doesn’t matter that it’s occasionally wrong and its answers require me to double-check them, when it’s that much faster at giving each answer.

      • MotoAsh@piefed.social · 1 day ago

        You misunderstand how LLMs work. If it were simply true that it was capable of determining truth the way an algorithm reliably produces a result, you would have a point.

        But that’s not how LLMs work. At all. Whatsoever. The information that is supposed to come out of them IS NOT derived from some static process that simply needs its bits to not flip sometimes.

        The problem with LLMs is they LITERALLY DO NOT THINK. At all. Period. They’re WORSE than a crazy person. They’re like listening to the ramblings of someone talking in their sleep, and pretending that maybe some day that sleeping person will be able to write a collegiate dissertation on a new topic… In their sleep.

        Add on top of that the fact that people DO think, CAN reason away falsehoods, sarcasm, and misinformation, yet still constantly come to wrong conclusions… To think LLMs are close to replacing humans is the true ignorant fool position.

        • masterspace@lemmy.ca · 1 day ago

          I don’t misunderstand how they work at all.

          Quite frankly, what you’re saying doesn’t matter in the context of my point. It literally does not matter whatsoever that they are language based rather than logic based, as long as they produce helpful results, and they inarguably do. You are making the same type of point that my middle school librarians made about Wikipedia: you’re getting hung up on how it works, and since that’s different from how previous information sources worked, you’re declaring that it can’t be trusted, ignoring the fact that regardless of how it works, it is still right most of the time.

          As I said, it is far faster to ask Copilot web a question about Salesforce and verify its answers than it is to try to search manually through their nightmarish docs. The same goes for numerous other things.

          Everyone seems so caught up in the idea that it’s just a fancy text prediction machine and fails to consider what it says about our intelligence that those text prediction machines are correct so much of the time. Anthropological research has long suggested that language is a core part of why humans are so intelligent, yet everyone clowns on a language-based collection of simulated neurons as if it couldn’t have anything remotely to do with intelligence.

          • MotoAsh@piefed.social · 1 day ago

            rofl you claim to understand how they work, yet fail to realize how they cannot produce truth… Pathetic.

            • masterspace@lemmy.ca · 1 day ago

              Google doesn’t produce truth either, that doesn’t mean it’s not useful for finding information.

            Again, you’re hung up on the idea that it either has to give you a perfect answer or it’s useless, but the situation is not binary.

              • MotoAsh@piefed.social · 22 hours ago

                I agree it’s not a useless tool, but only a fool believes it hasn’t been marketed far, far beyond its actual capability.

                • masterspace@lemmy.ca · 22 hours ago

                  I mean, I agree that it’s probably vastly overvalued as a whole; the leap between current LLM capabilities and an actual trusted engineer is pretty big, and it seems like a lot of people are valuing them as if they already had engineer-level capabilities.

                  But one caveat is that simulated neural networks are a technological avenue that theoretically could get there eventually (probably; there are still a lot of unknowns about cognition, but large AI models are starting to approach the scale of the neurons in the human brain, and as far as we can tell there’s no quantum magic involved in cognition, just synapses firing, which neural networks can simulate).

                  And the other caveat is the bear trash can analogy: the whole park ranger story where they said it’s impossible to make a bear-proof trash can because there’s significant overlap between the smartest bears and the dumbest humans.

                  Now, I don’t think AI is even that close to bear level in terms of general intelligence, but a lot of current jobs don’t require that much intelligence. We only have people doing them because some inherent step in the process is semantic or based on fuzzy pattern matching, and computers and traditional software just couldn’t do it, so humans end up doing stuff like processing applications where they’re just mindlessly reading, looking for a few keywords, and stamping. There are a lot of industries where AI could literally be the key algorithm needed to fully automate the industry, or to radically reduce the number of human workers needed.

                  Crypto was like, ‘hey, that decentralized database implementation is pretty cool; in what situations would that be useful?’ And the answer was basically just ‘laundering money.’

                  Neural network algorithms on the other hand present possible optimizations for a truly massive number of tasks in society that were otherwise unautomatable.

          • peopleproblems@lemmy.world · 1 day ago

            I think you fell for the trick they’re trying to sell that the LLMs are capable of “reasoning.”

            They are not. They are gigantic matrices of numbers that are applied to text after the text has been assigned numbers.

            For example, if we stick a fork in an outlet, we will not do it again. If you don’t believe me, try it yourself.

            An LLM has no means of testing whether or not it is a bad idea.

            Additionally, an LLM doesn’t have the capability to observe others who test an action.

            If LLMs could test their outputs to find the real consequences of their statements, then we would be talking about actual intelligence. For now, they’re just number maps.

            • masterspace@lemmy.ca · 1 day ago

              I’m not falling for any trick about reasoning. I’m pointing out that it doesn’t matter whether or not it’s using reasoning if more often than not it arrives at the right answer, which it does.

  • xxce2AAb@feddit.dk · 1 day ago

    they will help toward winning the greatest war that ever will be.

    Well, I certainly hope he was right about that one.

      • xxce2AAb@feddit.dk · 1 day ago

        But anyway, getting back on topic – it’s probably for the best we stopped using metal to fill cavities and switched to UV-hardened polymers. Of course, broadcast radio isn’t as prevalent as it once was either. :)

  • Basic Glitch@sh.itjust.works (OP) · 1 day ago

    This is really interesting but I have to say this dude also did sound legit nuts.

    His vision quest in the woods started bc he was grieving his family leaving him. But his wife took the surviving kids and left bc she blamed him for the death of their 2 yo daughter, who died of diphtheria. She blamed him bc he had “corrected” or “disciplined” her right before she died. Maybe this is jumping to conclusions, but that honestly sounds pretty awful, and I really can’t imagine a rational reason you would need to “discipline” a 2 year old suffering from diphtheria… Honestly, good for the wife.

    Then once he started telling people about his visions, he was sent to a mental institution but escaped. After he escaped, he said he had another vision that God would destroy that place. Several weeks later it was “struck by lightning” and burned to the ground… K.

    Then he re-married a lady with her own children. His step kids would recount to their own kids their memories of their step dad, and how he would often need to leave and go out into the woods by himself for a while to have his visions…

    I’m sorry to judge, but when reading between the lines being presented about the legend, I have to say he kinda gives off bad vibes. It is definitely an interesting story though.