

A very nuanced and level-headed response, thank you.




I do agree with your point that we need to educate people on how to use AI responsibly. You also mention the cautious approach taken by your kids' school, which sounds commendable.
As for the idea of preparing kids for an AI future in which employers might fire AI-illiterate staff, that sounds to me more like a problem of preparing people to enter the workforce, which is generally what college and vocational courses are meant to handle. I doubt many of us would have any issue if they had approached AI education that way. It's very different from the current move to include it broadly in virtually all classrooms without consistent guidelines.
(I believe I read the same post about the CEO, BTW. It sounds like the CEO’s claim may likely have been AI-washing, misrepresenting the actual reason for firing them.)
[Edit to emphasize that I believe any AI education intended to prepare people for employment should be approached as optional vocational education, confined to the specific relevant courses rather than broadly applied]


While there are some linked sources, the author fails to specify what kind of AI is being discussed or how it is being used in the classroom.
One of the important points is that there are no consistent standards or approaches toward AI in the classroom. There are almost as many variations as there are classrooms. It isn’t reasonable to expect a comprehensive list of all of them, and it’s neither the point nor the scope of the discussion.
I welcome specific and informed counterarguments to anything presented in this discussion, I believe many of us would. I frankly find it ironic how lacking in “nuance or level-headed discussion” your own comment seems.


I appreciated this comment; I think you made some excellent points. There is absolutely a broader, complex, and longstanding problem. To me, that makes it even more crucial to seriously consider what we introduce into such a vulnerable situation. A bad fix is often worse than no fix at all.
"AI is a crutch for a broken system. Kicking the crutch out doesn’t fix the system."
A crutch is a very simple and straightforward piece of tech. It can even just be a stick. What I’m concerned about is that AI is no stick, it’s the most complex technology we’ve yet developed. I’m reminded of that saying “the devil is in the details”. There are a great many details in AI.


This is also the kind of thing that scares me. I think people need to seriously consider that we’re bringing up the next wave of professionals who will be in all these critical roles. These are the stakes we’re gambling with.


I get where he’s coming from… I do… but it also sounds a lot like letting the dark side of the force win. The world is just better with more talent in open source. If only there were some recourse against letting LLM barons strip-mine open source for all it’s worth and leave behind only ruin.
Some open source contributors are basically saints. Not everyone can be, but it still makes things look more bleak when those fighting for what is decent and good in the digital world abandon it and pick up the red sabre.


Congrats! Gaming was the only thing holding me back before I switched over completely as well, though like you I had been using Linux for years. It’s like becoming cancer-free or something when you finally get there.


They need to stick the landing. America will threaten and bully. I’ve also heard some are afraid of the cost and complexity of doing something like this. Hopefully they do realize the necessity of it and stay the course despite all of that.


I share this concern.


One of Big Tech’s pitches about AI is the “great equalizer” idea. It reminds me of their pitch about social media being the “great democratizer”. Now we’ve got algorithms, disinformation, deepfakes, and people telling machines to think for them and potentially also their kids.


I see these as problems too. If you (as a teacher) put an answer machine in the hands of a student, it essentially tells that student that they’re supposed to use it. You can go out of your way to emphasize that they are expected to use it the “right way” (a strange thing to try to sell students on, since there aren’t consistent standards on how it should be used), but we’ve already seen that students (and adults) often take the quickest route to the goal, which tends to mean letting the AI do the heavy lifting.


Thank you. The American sources I referenced here seemed the best suited to the topic, largely because of how informative they were. But if anyone has good info from other countries (or America) to add to the discussion I’m more than happy to hear it.


Great to get the perspective of someone who was in education.
"Still, those students who WANT to learn will not be held back by AI."
I think that’s a valid point, but I’m afraid AI is making it harder to choose to learn the “old hard way,” and I’d imagine fewer students will make that choice.


Thank you for kicking this hornet’s nest. There is a lot of great info and enthusiasm here, all of which is sorely needed.
We have massive and widespread attention paid to every cause under the sun by social and traditional media, with movements and protests (deservedly) filling the streets. Yet this issue, which is as central and crucial to our freedoms as any rights currently being fought for (it intersects with each of them directly), continues to be sidelined and given the foil-hat treatment.
We can’t even adequately talk about things like disinformation, political extremism, or even mental health without addressing the role played by our technologies, which have been hijacked by bad actors: robber barons selling us ease, convenience, and promises of bright, shiny, Utopian futures while conning us out of our liberty.
With the state of society rapidly declining and technologies like AI rising and spreading dramatically, there has never been a more urgent need to act collectively against these invasive practices claiming every corner of our lives.
We need those of you who recognize this crisis for what it is; we need your voices in the discussions surrounding the many problems and challenges we face at this critical moment. We need public awareness to have any hope of changing this situation for the better.
As many of you have pointed out, the most immediate step we need to take is disengagement from the products and services that are surveilling, exploiting, and manipulating us. Look to alternatives, ask around, don’t be afraid to try something new. Deprive them of both your engagement and your data.
Keep going, keep resisting, do the small things you can do. As the saying goes, small things add up over time. Keep going.
[Edited slightly for clarity]


Hilarious. I bite my tongue so often around these kinds of situations it has permanent tooth imprints in it. But you’re right, someone needs to figure out how to get them to stop tolerating this horrific nonsense.


The more people who demand better of their employers (and services, governments, etc.), the better those things will get in the long run. When you surrender your rights, you worsen not only your own situation but everyone else’s, because you validate and contribute to the system that violates them. Capitulation is the single greatest reason we have these kinds of problems.
We need more people doing exactly as you did, simply saying no. Thank you for fighting, and thank you for sharing. Best wishes in your job hunt.


I do think you’re absolutely right. I know people doing exactly that — checking out — and it does seem like a common response. It is understandable, a lot of people just can’t deal with all that garbage being firehosed into their faces, and the level of crazy ratcheting up through the ceiling. And that reaction of checking out is one of the intended effects of the strategy of “flooding the zone”. Glad you pointed that out.


No secret, ML is Marxist-Leninist. They tend to have a similar focus and way of framing things as what I’m picking up from you.


Odd statement to cut and flip around out of all of that text. Reminds me a lot of ML.
If the problem you have is specifics, I could flip your tactic around and ask you to point to a specific “kind of AI [that] is being discussed or how it is being used” that supports your stance on why we shouldn’t be discussing this, which is what you’ve implied. But that’s playing games with you, like what you’re doing with us.
Your engagement on this issue is still clearly in bad faith, so instead I will point out that the burden of proof you’re demanding is weak within the context of this discussion. It reads like a common troll play in which they attempt to draw a mark down a rabbit hole. It shouldn’t be too difficult for you to do an internet search or tap your own personal experience, especially given how intensely passionate you are about this issue.
Understand that I don’t play these games. This is me leaving you to your checkerboard. Take care.