And I don’t mean stuff like deepfakes/Sora/Palantir/anything like that; I’m talking about why the anti-genAI crowd isn’t providing an alternative where you can get instant feedback when you’re journaling
The Kavernacle has videos on this. He talks about how it’s eroding emotional connection in society and having people offload their thinking onto ChatGPT. I think this is a problem, but my main issue, the one I’m most passionate about, is misinformation.

In the process of writing this post I did an experiment and asked it some questions about autism. I asked it what autistic burnout is. It gave an explanation that’s incorrect and furthers the incorrect assumption a lot of people make that it’s something specific to autistic people, when it’s actually a wider phenomenon of physiological neurocognitive burnout. I confronted it on this, it refined its position, and then I asked it why it said that in the first place. It constantly contradicts itself and will just go “yeah, you are correct, I am wrong” while continuing to repeat the same incorrect claim. https://i.imgur.com/KINH7lV.png https://i.imgur.com/EHtDwNj.png

According to ChatGPT, its own sentence contradicts itself. It also proceeded to invent a new usage of a very obscure medical term that is not widely used, then tried to gaslight me into believing it’s a commonly used term among autistic people when it isn’t. https://i.imgur.com/LStZdNg.png
And what frustrates me even more is that a couple months ago I had someone swear to me up and down that the hallucinations in ChatGPT were fixed and they ain’t that bad anymore. Granted, they were far worse in the past. It literally told me the autism level system was something that no longer exists, despite it being currently widely used.
But here’s the problem: I am an expert on this topic. Most people aren’t asking ChatGPT questions about things they’re an expert in, and on top of that they’re using it as a therapist.
All in all I wasn’t expecting it to have no hallucinations, but I was at least expecting them to not still be a massive issue in basic information retrieval on topics that aren’t even super obscure and that information is widely available about.
Ultimately here’s the issue: the vast majority of pro-genAI people don’t know what genAI actually is, and as a result don’t see why it’s bad to use it the way they do. GenAI is a very advanced form of predictive text. It just predicts what it thinks the words following a query should be, based on the terabytes, maybe even petabytes, of information it’s scraped from the internet. Which means it’s not really useful for anything beyond very basic things, like asking it to generate simple ideas, summarize an article or video, or do very basic coding. I only dabble very lightly in programming, but from what I’ve heard actual experienced programmers say, trying to use ChatGPT for major coding just means having to rewrite most of the code.
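To make the “predictive text” point concrete, here’s a toy sketch of the idea (my own illustration, not anyone’s actual model): count which word most often follows which, then always pick the most common continuation. Real LLMs condition on long token sequences with billions of parameters, but the training objective is the same next-word prediction.

```python
# Toy next-word predictor: "very advanced predictive text"
# boiled down to word-pair counts over a tiny corpus.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept".split()

next_counts = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    next_counts[a][b] += 1  # count how often b follows a

def predict_next(word: str) -> str:
    # pick the statistically most common continuation
    return next_counts[word].most_common(1)[0][0]

word, out = "the", ["the"]
for _ in range(4):
    word = predict_next(word)
    out.append(word)
print(" ".join(out))  # -> "the cat sat on the"
```

Fluent-looking continuations fall out of nothing but frequency statistics; there is no step where it checks whether any of it is true, which is part of why hallucination keeps showing up.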
GenAI is the highest form of commodification of culture so far. It treats all text, images, videos, songs, speech and all other forms of organic cultural expression as slop to be generated over and over without its original context. It provides little to no serious improvement in industry, and is only propped up despite no profits due to either artificial growth in internet platforms or unrealistic expectations from the AGI folks.
And it’s inefficient. We could easily have more therapists rather than wasteful chatbots that cost billions. Such technology can only exist as a bandage over the ailments of neoliberalism, and is not a solution to anything. And that’s not even going into the worsening impact of cultural imperialism due to the tendency of these models to reproduce Northwestern cultural hegemony.
The alternative is actually pretty simple: measures to lower unemployment. Most capitalist countries have issues with unemployment or underemployment. And most tasks of Gen"AI" can be done by paid humans quite well, possibly even at lower cost than what the informatics cartel is tanking in order to ride the bubble.
Human labour is what produces value. All else is secondary.
People fear that they’re gonna lose their job that consists 99% of sending and receiving emails and doing zoom meetings. They know their job is bullshit and replaceable.
This is the correct take from an ML perspective (essentially an extension of the fact that we should not lament the weaver for the loom):
https://redsails.org/artisanal-intelligence/
The problem is not the technology per se (criticisms such as energy consumption or limitations of the tools just mean there’s room for improvement in the tech or how we use it) but capitalism. If you want a flavour of opinions on this, click on my username and order comments by most controversial for the relevant threads.
Artisans who claim they are for Marxist proletarian emancipation but fear the socialisation of their own labour will need to explain why their take is not Proudhonist.
That post really is an excellent article for truly understanding the Marxist critique of reactionary and bourgeois mindsets. Another one that people here should read along with it is Stalin’s Shoemaker; it highlights the dialectical materialist journey of a worker developing revolutionary potential:
Class consciousness means understanding where one sits as a cog in the machine and not being upset because one wasn’t proletarian enough. This is meant to be Marxism, not vibes-based virtue signaling.
Meanwhile in a socialist country: China’s AI industry thrives with over 5,300 enterprises https://lemmygrad.ml/post/9357646
Marxism is a science. People should treat it as such and take the opportunity to study and learn, to develop their human potential beyond what our societies consider acceptable.
Why would you want instant feedback when you’re journaling? The whole point of journaling is to have something that’s entirely your own thoughts.
I dont like writing my own thoughts down and just having them go into the void lol and i want a real hoomin to talk to about these things but i dont have one TwT
I would be extremely cautious about that sort of usage of AI. Commercial AIs are psychopathic sycophants and have been known to drive people insane by constantly gassing them up.
Like you clearly want someone to talk to about your life and such (who doesn’t?) and I understand not having someone to talk to (fewer and fewer do these days). But you’re opting for a corporate machine which certainly has instructions to encourage your dependence on it.
Also i delete my convos about these things after 1 prompt so i dont have a lasting convo on that. But tbh exposure to the raw terms of the topic has let me go from tech allegories to T9 cipher to where i am now, where i can at least prompt a robot using A1Z26 or hex to obscure the raw terms a bit
Have there been cases of DeepSeek causing AI psychosis, or is it just ChatGPT?
No idea. But I’d say it’s less likely, especially if you’re running a local model with Ollama.
I think the key here is to prevent the AI from developing a “profile” on you, and self-controlled Ollama sessions are the surest bet for that.
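For anyone curious what that looks like in practice, here’s a minimal sketch of querying a local model through Ollama’s HTTP API (this assumes `ollama serve` is running locally and a model such as “llama3” has already been pulled; nothing leaves your machine):

```python
# Query a locally running Ollama server; no cloud, no account, and you
# control whatever "memory" exists because it all lives on your disk.
import json
import urllib.request

payload = json.dumps({
    "model": "llama3",  # any model you've pulled locally
    "prompt": "Reflect this journal entry back to me: today was hard.",
    "stream": False,    # return one complete response instead of chunks
}).encode()

req = urllib.request.Request(
    "http://localhost:11434/api/generate",  # Ollama's default endpoint
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```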
What does “go into the void” mean? The LLM may use them as context for a while or it may not use them as context at all, it may even periodically erase its memory of you.
I find talking about heavy or personal things way easier with strangers than with people you know. There are no stakes with a stranger; you can literally walk up to someone on the street or in a park who doesn’t look busy and ask them if they want to talk.
Is it okay if I push back a bit? Your last comment just feels a little dismissive. I don’t know the Free Penguin, but I will point out other reasons why someone might not be able to easily talk to someone. For example, if someone can’t walk or get around, they won’t be able to just talk to someone like that. I’m mainly speaking about my mom before she died, since she had COPD and her health declined after something happened to her at her former workplace. She really hurt her spine and couldn’t really get around. I remember her being very upset with how alone she felt.
Then also speaking for myself, I have a speech impediment plus anxiety, so it is really difficult for me to just approach someone and talk to them, depending on various factors. Along with that, some strangers can be outright hostile and make things worse, and someone else might just have had a lot of bad interactions with strangers. To go back to myself, people do judge how someone speaks and tend to see little of you, like if you have an accent or have trouble speaking.
Chronic loneliness and anxiety are a function of societal arrangements that are exacerbated by capitalist solutions, not inherent and unavoidable parts of the human condition until they are cured by a panacea ex machina.
Believe it or not, before 2022 we did have lots of different approaches around the world to these things. And we are poorer for turning away from all those approaches.
I am a rather awkward person in many ways, I am instantly recognizable by many people as “weird”, I have my own share of anxiety that I’ve gotten better at masking over the years. If I spent ages 19-25 interacting with a digital yes-man instead of with humans, I would have no social skills.
Your response sounds closely analogous to when car proponents use the disabled as a shield. We don’t need everyone to drive, we need to minimize the distance between each other, and making driving (or LLM usage) a necessity for getting by in society only creates bigger problems, because the root problem is not being adequately addressed.
I feel like you might be reading me in bad faith here or misinterpreting me.
Chronic loneliness and anxiety are a function of societal arrangements that are exacerbated by capitalist solutions, not inherent and unavoidable parts of the human condition until they are cured by a panacea ex machina.
I agree? I’m very aware.
Believe it or not, before 2022 we did have lots of different approaches around the world to these things. And we are poorer for turning away from all those approaches.
I would argue that depends. Not everywhere has a lot of different approaches to these things. If anything, all LLMs did was take inherent contradictions and bring them to new heights; these things were already there to begin with, maybe smaller in form.
Your response sounds closely analogous to when car proponents use the disabled as a shield. We don’t need everyone to drive, we need to minimize the distance between each other, and making driving (or LLM usage) a necessity for getting by in society only creates bigger problems, because the root problem is not being adequately addressed.
Again, where do I say that, besides being read in bad faith or misread? All I’m trying to point out is that there are usually reasons why someone would turn to something like an LLM or might not easily talk to someone else. As you said, the root problem is not being addressed. To add, it also just leaves a bad taste in my mouth and kind of hurts to be told that what I said sounds closely analogous to using the disabled as a shield, especially when I was talking about myself or my mom.
For example, when my mom was in the hospital in the last few weeks before she died, she had to communicate on a whiteboard because staff couldn’t understand her. I also had to use the same whiteboard, because staff couldn’t understand what I was saying either. Just to give you an idea of how I have trouble speaking to others. I’m not saying someone shouldn’t try to interact with others and should just go talk to a chatbot instead. People should have another person to talk to.
Yeah but what if i make em uncomfyyyyy
The ability to self-actualize and shape the world belongs to those who are willing to potentially cause momentary discomfort.
Also the default status of many people is lonely and/or anxious; receiving social energy from someone often at least takes their mind off that.
Advancements in material technology in the past half century have often ended up stunting our social development and well-being.
Yeah but the things i ask the robot about a real hoomin would find it really creepy to talk to a stranger about
Do you worry about AI psychosis?
“And I don’t mean stuff like deepfakes/Sora/Palantir/anything like that” bro, we don’t live in a world where LLMs are excluded from those uses
the technology itself isn’t bad, but we live in a shitty capitalist world where every instance of automation, rather than liberating mankind, fucks them over. a thing that can allow one person to do the labor of many is a beautiful thing, but under capitalism increases of productivity only lead to unemployment; though, on the bright side, it consequently also causes a decrease in the rate of profit.
Because we can see what it does without proper regulation, and also it’s very overhyped by tech companies in terms of how much utility it actually has.
Ye imo they’re not regulating it in the right places. They’re so uber-focused on making it reject writing how-to guides for things they don’t like that they don’t see the real problem: technofascist cults like Palantir being able to kill random people with the press of a button.
It’s a toy. I’m not against toys, but the amount of energy and resources we are pouring into this toy is alarming.
My impression is that a lot of people realize this tech will be used against them under capitalism, and they feel threatened by it. The real problem isn’t with the tech itself, but with capitalist relations, and that’s where people should direct their energy.
For myself, it is the projected environmental impact. The power demand for data centers has already been on the rise due to the growth of the internet. With the addition of AI and the training thereof, the amount of power is rising/will rise at an unsustainable rate. The amount of electricity used creates strain on existing power grids, the amount of water that goes into cooling the hardware for the data centers creates strain on water supply, and this all plays into a larger amount of carbon emissions.
Here is a good link that speaks to the environmental impact: genAI Environmental Impact
Beyond the above, the threat of people losing jobs within an already brutal system is a bit terrifying to me, though others have already written at more length here regarding this.
We have to be careful how we wield the environmental arguments. In the first phase, it’s often used to demonize Global South countries that are developing. Many of these countries completely skipped the personal computer step and are heavy consumers of smartphones and 4G data because it came around the time they could begin to afford the infrastructure (it’s why China is developing 6G already), but there’s a lot of arguments people make against smartphones (how the materials for them are produced, how you have to recharge a battery, how they get disposed of, how much electricity 5G consumes etc), but if they didn’t have smartphones then these countries would just not have the internet.
edit: putting it all under the spoiler dropdown because I ended up writing an essay anyway lol.
environmental arguments
In the second phase in regards to LLM environmental impact it really depends and can already be mitigated. I’ll try not to make a huge comment because I don’t want to write an essay, but the source’s claims need scrutiny. Everything consumes energy - even we as human bodies release GHG. Going to work requires energy and using a computer for work requires energy too. If AI can do in 10 seconds what takes a human 2 hours, then you are certainly saving energy, if that’s the only metric we’re worried about.
So it has to be relativized, which most AI environmental articles don’t do. A ChatGPT prompt consumes five times more electricity than a Google search, sure, but that amount is close to zero watt-hours. Watching YouTube also consumes energy; a minute of YouTube consumes much more energy than an LLM query does.
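To put rough numbers on that relativization (the per-action figures here are commonly cited estimates and heavily contested, so treat this as an illustration of orders of magnitude, not as measurements):

```python
# Back-of-envelope energy comparison. SEARCH_WH and PROMPT_WH are
# assumptions: widely circulated estimates, not measured values.
SEARCH_WH = 0.3   # often-quoted figure for one Google search
PROMPT_WH = 3.0   # widely circulated estimate for one ChatGPT prompt

# Boiling 1 litre of water from 20 to 100 C is straight physics:
# 4186 J/(kg*K) * 80 K ~= 335 kJ ~= 93 Wh.
KETTLE_WH = 4186 * 80 / 3600

print(f"one prompt = {PROMPT_WH / SEARCH_WH:.0f}x one search")
print(f"one kettle boil = {KETTLE_WH / PROMPT_WH:.0f} prompts")
```

Five times a tiny number is still a tiny number: on these figures, boiling the water for one cup of tea costs about thirty prompts.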
Some people will say that we need to stop watching Youtube, no more treats or fun for workers, which is obviously not something we take seriously (deleting your emails to make room in data centers was a huge thing on linkedin a few years ago too).
And all of this pales in comparison to the fossil fuel industry that we keep pumping money into in the west or obsolete tech that does have greener alternatives but we keep forcing on people because there’s money to be made.
edit - and the meat and animal industry… Beef is very water-intensive and polluting; AI isn’t even close. If that’s the metric, then those who can should become vegan.
Likewise for the water usage: there was that article about Texas telling people to take fewer showers because it needed the water for data centers… I don’t know if you saw it at the time; it went viral on social media. It was a satirical article against AI that people used as a serious argument. Texas never said to take fewer showers, and these datacenters don’t use a lot of water at all as a share of total consumption in their respective geographical areas. In the US a bigger problem imo is the damming of the Colorado River so that almost no water reaches Mexico downstream, while the water is given out to farmers for free in arid regions so they can grow water-intensive crops like rice or dates (and US dates don’t even taste good).
It also has sort of an anti-civ conclusion… Everything consumes energy and emits pollution, so the most logical conclusion is to destroy all technology and go back to living like the 13th century. And if we can keep some technology how do we choose between AI and Youtube?
Rather I believe investments in research make things better over time, and this is the case for AI too (and we would have much better, safe nuclear power plants too if we kept investing in research instead of giving in to fearmongering and halting progress but I digress). I changed a lot of my point of view on environmentalism when back in 2020 people were protesting against 5G because “microwaves” and “we don’t need it” and I was on board (4G was plenty fast enough) until I saw how in some places they use 5G for remote surgery and that’s a great thing that they couldn’t do with 4G because there was too much latency. A doctor in China with 6G could perform remote surgery on a child in the Congo.
In China electricity is considered a solved problem; at any time the grid has 2-3x more energy than it needs. The west has decided to stop investing in public projects and instead concentrate all surplus value in the hands of a select few. We have stopped building housing, we stopped building roads and rail, but we find the money to build datacenters that could be much greener, but why would they be when that costs money and there’s no laws that mandate it?
Speaking of China, they still use a lot of coal (comparatively speaking), but they also see it as just an outdated means of energy production that can be replaced by newer, better alternatives. It’s very different: they’re doing a lot of solar and wind (in the west, btw, Chinese solar panels are tariffed to hell and back; if they weren’t, every single building in Europe would be equipped with solar panels) and even pioneering new methods of energy production and storage, like the sodium battery or gravity storage. Gravity battery storage (raising and lowering heavy blocks of concrete over the day) is not necessarily Chinese, but in Europe it is still just a prototype, while in China they’re already building them as part of their energy strategy. They don’t demonize coal as uniquely evil like liberals might; rather, once they’re able to, they’ll ditch coal because there are better alternatives now.
In regards to AI in China, there have been a few articles posted on the grad and it’s promising. They are careful about efficiency because they have to be. I don’t know if you saw the article from a few days ago about Alibaba Cloud cutting the number of GPUs needed to host their model farm by 82%. The test was done on Nvidia H20 cards, which is not a coincidence; it’s the best China can get by US decree. The top-of-the-line model is the H100 (the H20 having only 20% of its capabilities), but the US has an order not to export anything above the H20 to China, so they find creative ways to stretch it. And now they’re developing their own GPU industry, and the US shot itself in the foot again.
Speaking of model farms… it’s totally possible to run models locally. I have a 16GB GPU and I can generate realistic pictures (if that’s the benchmark) in 30 seconds; the model only needs 5GB of VRAM, but the architecture inside the card also matters for speed. For LLM generation I can run 12B models, rarely higher, and with new efficiency algorithms I think over time that will stretch to bigger and bigger models, all on the same card. They run model farms for the cloud service because so many people connect to it at the same time, but it’s not a hard requirement for running LLMs. In another comment I mentioned how Iran is interested in LLMs because, like 4G and other modern tech that lags a bit in the west, they see it as a way to stretch their material conditions (being heavily sanctioned economically).
There’s also stuff being done in the open source community. For example, LoRAs are used in image generation and help skew the generation towards a certain result. This means you don’t need to train a whole model; LoRAs are usually trained by people on their own machines with like 100 images, and training one can be done in about 30 minutes. So what we see is comparatively few companies/groups making full models (either LLM or image gen, called checkpoints) and most people making finetunes for these models; a sketch of what using one looks like is below.
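For a sense of how lightweight they are at use time, here’s a minimal sketch of loading a LoRA on top of a base checkpoint with the Hugging Face diffusers library (the model name and LoRA file path are placeholders, not recommendations):

```python
# Apply a small community-trained LoRA on top of a multi-GB base model.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # example base checkpoint
    torch_dtype=torch.float16,
).to("cuda")

# The LoRA file is typically tens of MB and nudges generation toward
# a style or subject; the path/filename here is hypothetical.
pipe.load_lora_weights("path/to/my_style_lora.safetensors")

image = pipe("a harbor at dawn").images[0]
image.save("harbor.png")
```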
Meanwhile in the West there’s a $500 billion “plan” to invest in the big tech companies that already have a ton of money; that’s the best they can muster. Give them unlimited money and expect that they won’t act like everything is unlimited. DeepSeek actually came out shortly after that plan (called Stargate) was announced, and I think it pretty much killed it before it even took off lol. It’s the destiny of capitalism to con the government into giving them money; of course they were not going to say “no, actually, if we put in some of our own investment we could make a model that uses 5x less energy”, because they would not get $500 billion if they did. They also don’t care about the energy grid; that’s an externality for them. The government will take care of it, from their pov.
Anyway it’s not entirely a direct response to your comment because I’m sure you don’t believe in all the fearmongering, but it’s stuff I think is important to keep in mind and I wanted to add here. And I ended up writing an essay anyway lol.
isn’t providing an alternative where you can get instant feedback when you’re journaling
ELIZA was written in the 60s. It’s a natural language processor that’s able to have reflective conversations with you. It’s not incredible, but there have been sixty years of improvements on that front and modern ones are pretty nice.
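To show how little machinery the original idea needs, here’s a tiny ELIZA-style sketch (my own toy, not Weizenbaum’s actual script): nothing but pattern matching and pronoun reflection, no model, no training data.

```python
# Minimal ELIZA-style reflective responder.
import re

# swap first-person words for second-person ones when echoing back
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

def reflect(text: str) -> str:
    return " ".join(REFLECTIONS.get(w, w) for w in text.lower().split())

def respond(line: str) -> str:
    m = re.match(r"i feel (.*)", line, re.IGNORECASE)
    if m:
        return f"Why do you feel {reflect(m.group(1))}?"
    m = re.match(r"i (.*)", line, re.IGNORECASE)
    if m:
        return f"Tell me more about why you {reflect(m.group(1))}."
    return "Go on."

print(respond("I feel stuck with my journaling"))
# -> "Why do you feel stuck with your journaling?"
```

Everything it “says” is your own words handed back to you, which is arguably the honest version of reflective journaling feedback.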
Otherwise, LLMs are a probabilistic tool: the input doesn’t determine the output. This makes them useless at the things tools are good at, which is repeatable results based on consistent inputs. They generate text with an authoritative voice, but domain experts consistently find that they’re wrong more often than they’re right, which makes them unsuitable as automation for white-collar jobs that require any degree of precision.
Further, LLMs have been demonstrated to degrade thinking skills, memory, and self-confidence. There are published stories about LLMs causing latent psychosis to manifest in vulnerable people, and LLMs have encouraged suicide. They present a social harm which cannot be justified by their limited use cases.
Sociopolitically, LLMs are being pushed by some of the most evil people alive and their motives must be questioned. You’ll find oceans of press about all the things LLMs can do that are fascinating or scary, such as the TaskRabbit story (which was fabricated entirely). The media is culpable in the image that LLMs are more capable than they are, or that they may become more capable in the future and thus must be invested in now.
“Providing an alternative where you can get instant feedback while you’re journaling” Forgive me, could you elaborate? I’m a little confused.
I sometimes put some of my more sensitive thoughts into an LLM mainly cuz i dont want em going into the void and i dont want a hoomin getting uncomfy reading them either
It has a flattening effect. The things that come out the other end don’t sound human. They sound like the collective mouth of reddit and blog spam.
I don’t know why you’d use it for journaling. What feedback do you even need for journaling? Shouldn’t that be your own thoughts, not your thoughts filtered through a disembodied machine of averages?
That’s kind of like how people were talking about how genAI always fucks up the hands in pictures. I don’t think that’s permanent.
Yea so i type my natural thoughts in, and tbh i have some that i currently dont share w humans cuz they’re kinda sensitive. i dont use genai for emotional attachment, just to see them written in a different way i guess
There doesn’t need to be an alternative option to offer. I don’t support genAI because it’s flooded the internet with fake content that has no label to differentiate it. It’s irreversible.
What I don’t like is that they’re selling a toy as a tool, and arguably as the One And Only Tool.
You’re given a black box and told to just keep prompting it to get lucky. That’s fine for toys like “give me a fresh low-quality wallpaper every morning.” or “pretend you’re Monkey D. Luffy and write a song from his perspective.”
But it’s not appropriate for high-stakes work. Professional tools have documented rules, behaviours, and limits. They can be learned and steered reliably because they’re deterministic to a fault. They treat the user with respect and prioritize correctness. Emacs didn’t wrap the error in breathless sycophantic language when the code didn’t compile. Lotus 1-2-3 didn’t decide to replace half the “7”s in your spreadsheet with some random katakana because it was close enough. AutoCAD didn’t add a spar in the middle of your apartment building because it was statistically probable after looking at airplane wings all day.
I mean, software glitches all the time; some widespread software has long-standing bugs that its developers or even auditors can’t figure out, and people just learn to work around them. Photoshop is built on 20-year-old legacy code and also uses non-deterministic algorithms that predate AI (the spot healing brush, for example, which you often have to redo several times to get a different result). I agree that there’s a big black-box aspect to LLMs and GenAI (can’t say for all AI), but I don’t think it’s necessarily inherent to the tech or means it shouldn’t be developed further.
Actually, image AI is quite simple in its methods. Provide it with the exact same inputs (including the seed number) and it will output the exact same image every time, with only very minor variations. Should it have no variations? Depends; image gen AI isn’t an engineering tool and doesn’t profess to have a 0.1mm margin of error like other machines might need.
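A minimal sketch of what I mean, using the Hugging Face diffusers library (the model name is just an example):

```python
# Same model, same prompt, same settings, same seed -> (near-)identical
# images every run. The randomness is an input you control, not magic.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

gen = torch.Generator("cuda").manual_seed(1234)
img1 = pipe("a red bicycle", generator=gen).images[0]

gen = torch.Generator("cuda").manual_seed(1234)  # reset to the same seed
img2 = pipe("a red bicycle", generator=gen).images[0]
# img1 and img2 match (up to tiny floating-point/hardware variation).
```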
Back in 2023, China already used an AI (they didn’t say what type exactly) to blueprint the electrical cabling on a new ship model, and it did it with 100% accuracy. It used to take a team of engineers one year to do this, and an AI did it in 24 hours. There are a lot of toy aspects to LLMs, but that is also a trap of capitalism, as the toy aspects are what tech companies in startup mode are banking on. It’s not all that neural models are capable of doing.
You might be interested to know that the Iranian government has recently published guidelines on AI in academia. Unfortunately I don’t have a source, as this comes from an Iranian compsci student I know. They say that you can use LLMs in university, but if you note the specific model used and time of usage, and can prove you understand the topic, then that’s 100% clean by Iranian academic standards.
Iran is investing a lot in tech under heavy sanctions and making everything locally (it is estimated that 40-50% of all uni degrees in Iran are science degrees). To them, AI is a potential way to improve their conditions in this context, and that’s what they’re exploring.
Back in 2023, China already used an AI (they didn’t say what type exactly) to blueprint the electrical cabling on a new ship model, and it did it with 100% accuracy.
Do you have a link to the story? I ask because AI is a broad umbrella that many different technologies fall under, so it isn’t necessarily synonymous with generative AI/machine learning (even if that’s how the term has been used the past few years). Hell, machine learning isn’t even synonymous with neural networks.
Circling back to the Chinese ship, one type of AI I could plausibly see being used is a solver for a constraint satisfaction problem. The techniques I had to learn for these in college don’t even involve machine learning, let alone generative AI.
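For a flavour of what that looks like, here’s a toy backtracking solver for a hypothetical cabling-style constraint problem (ducts and cables as stand-ins for whatever the real constraints were); note there is no learning anywhere in it:

```python
# Toy constraint satisfaction: assign each cable to a duct so that no
# duct carries two cables, by plain depth-first backtracking.
def solve(assignment, cables, ducts):
    if len(assignment) == len(cables):
        return assignment                      # every cable placed
    cable = cables[len(assignment)]
    for duct in ducts:
        if duct not in assignment.values():    # constraint check
            result = solve({**assignment, cable: duct}, cables, ducts)
            if result is not None:
                return result                  # propagate solution up
    return None                                # dead end: backtrack

print(solve({}, ["power", "data", "ground"], ["A", "B", "C"]))
# -> {'power': 'A', 'data': 'B', 'ground': 'C'}
```

A real cabling problem would have far richer constraints (lengths, interference, capacity), but solvers for those are classical search and optimization, not generative models.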
I asked Perplexity about the story and looked at its sources :P (people often ask me how I find sources; I just ask Perplexity and then look at its links and find one that fits)
https://asiatimes.com/2023/03/ai-warship-designer-accelerating-chinas-naval-lead/ they report here that a paper was published in a science journal, though Chinese-language.
I did find this paper: https://www.sciencedirect.com/science/article/abs/pii/S004579492400049X but it’s not from the same team and seems to be about a different problem, though still in ship design (hull specifically) and mentions neural networks.
This is sort of the issue with “AI” often just meaning “good software” rather than any specific technique.
From a quick read, the first one seems to refer to a knowledge-base or automated-CAD solution, which is fundamentally different from any methods related to LLMs.
The second one is some really impressive feature engineering used to solve an optimization problem with machine learning tools, which is much closer to a statistician using linear regressions and data mining than to somebody using an LLM or a GAN.
Importantly, neither method is as computationally intensive as LLMs, and the second one at least is a very involved process requiring a lot of domain knowledge, which is exactly the opposite of how GenAI markets itself.
I mean, software glitches all the time; some widespread software has long-standing bugs that its developers or even auditors can’t figure out, and people just learn to work around them
yeah my dad could kill a dozen people if something goes wrong at work. Yet they use Windows and proprietary shit.
If software isn’t secured it shouldn’t be used.
We can make software less prone to errors with proper guidelines and procedures to follow, as with anything. Just to add that it’s not solely on software devs to make it failproof.
I would make the full switch to Linux, but I need Windows for Photoshop and Premiere lol. And I never got Wine to work on Mint, but if I could, I would ditch Windows today. I think getting people acquainted with Linux is something AI can really help with, and it may help more people make the switch.
I never got Wine to work on Mint, but if I could, I would ditch Windows today.
apologies if this is annoying, but have you tried Lutris?
it’s designed for games, but i use it for everything that needs wine because it makes it easy to manage prefixes etc. with a nice gui

No worries, I haven’t tried it but I also don’t have my Mint install anymore lol (Windows likes to delete the dual boot file when it updates and I never bothered to get it working again). I might give it another try down the line but I’m not ready to ditch Adobe yet. I’ll keep it in mind for if I make the switch in the future.
yes. It’s a tool that can (and must) be seized and re-appropriated imo. But it’s not magic. Main issue is that capitalists are selling it as some kind of genius in a bottle.