• 82 Posts
  • 242 Comments
Joined 1 year ago
Cake day: September 29th, 2024

  • https://en.wikipedia.org/wiki/Marc_Benioff

    Marc Russell Benioff is an American internet entrepreneur and philanthropist. He is best known as the co-founder, chairman and CEO of the software company Salesforce, as well as being the owner of Time magazine since 2018.

    In January 2023 Benioff announced the mass dismissal of approximately 7,000 Salesforce employees via a two-hour all-hands meeting over a call, a course of action he later admitted had been a ‘bad idea’.

    In September 2025, Benioff reduced Salesforce’s support workforce from 9,000 to about 5,000 employees because he “need[ed] less heads”. Salesforce stated that AI agents now handle half of all customer interactions and have reduced support costs by 17% since early 2025. The company added it had redeployed hundreds of employees into other departments within the company. The decision contrasted with Benioff’s earlier remarks suggesting that artificial intelligence would augment, rather than replace, white-collar workers.

    https://en.wikipedia.org/wiki/Salesforce

    In September 2024, the company deployed Agentforce, an agentic AI platform where users can create autonomous agents for customer service assistance, developing marketing campaigns, and coaching salespersons.

    Salesforce CEO Marc Benioff stated in a June 2025 interview on The Circuit that artificial intelligence now performs between 30% and 50% of internal work at Salesforce, including functions such as software engineering, customer service, marketing, and analytics. Although he made clear that “humans still drive the future,” Benioff noted that AI is enabling the company to reassign employees into higher-value roles rather than reduce headcount.

    haha consent factory go brrrr


  • I’m sort of speechless at how mind-bogglingly stupid every step of this process has been:

    The papers attempted to train neural networks to distinguish between autistic and non-autistic children in a dataset containing photos of children’s faces. Retired engineer Gerald Piosenka created the dataset in 2019 by downloading photos of children from “websites devoted to the subject of autism,” according to a description of the dataset’s methods, and uploaded it to Kaggle, a site owned by Google that hosts public datasets for machine-learning practitioners.

    The dataset contains more than 2,900 photos of children’s faces, half of which are labeled as autistic and the other half as not autistic.

    After learning about a paper that cites the dataset, “I went and downloaded the dataset, and I was completely horrified,” says Dorothy Bishop, emeritus professor of developmental neuropsychology at the University of Oxford. “When I saw how it was created, I just thought, ‘This is absolute bonkers.’”

    Without identifying each child in the dataset, there is no way to confirm that any of them do or do not have autism, Bishop says.





  • what if instead of taking medication, there would be jobs/functions that actually play into your adhd

    this is a false dichotomy.

    I have a job that plays well with my ADHD (working from home doing software engineering). I worked in this field before I knew I had ADHD (I’m in my late 30s, and only diagnosed a couple years ago).

    but having a job that is relatively ADHD-friendly simply is not enough. especially when so many ADHD symptoms affect my personal life and not just my 9-5 job.

    I don’t have adhd, that I know of, so I might sound stupid.

    honestly - rather than sounding stupid, you just kind of sound like an asshole. listen to yourself:

    Can’t make enough money or slaves from that tho, so just create more pill zombies.

    relying on drugging people up

    since I take an ADHD medication every day, I guess that makes me a “slave” to pharmaceutical companies or “drugged up” or a “pill zombie” in your estimation?

    if I skip medication for a day, that is what makes me feel like a zombie. most ADHD medications are stimulants. before I started medication, I was doing what I now recognize as self-medicating - drinking an absolute fuck-ton of coffee and energy drinks, but still never feeling like I got an energy boost from them.

    also - before I was diagnosed with ADHD, I was depressed, at times pretty severely. I tried all the “you don’t need antidepressants, you just need X” things people recommend - therapy, better diet, better sleep, more exercise, more outdoor exercise, and so on. none of it worked.

    eventually I gave up and asked my doctor for antidepressants (reluctantly, because I had internalized a lot of the “rx drugs for mental health issues are bad” that I think you have as well)

    the antidepressants helped, but only partially. eventually I figured out I probably have ADHD and went to a psychiatrist about it. part of what helped me realize that was that, of the couple different antidepressants I had tried, the one that helped the most (bupropion) is also used in treating ADHD.

    and so in my case, “antidepressants help, but they don’t treat the underlying problem” was true - but the underlying problem also needs prescription medication.



  • How is this keyboard not popular?

    their front page explicitly says “Currently in beta state” and according to their docs installation via Google Play requires joining a beta tester group.

    that means a random user searching “keyboard” on the Play store isn’t going to see it. likewise if a friend told you “I use Florisboard” and you searched for it by name in the Play store. if you’re not already in the beta test group the direct link to the app page literally 404s.

    it’s certainly available to power users who already know they want it, but it’s sort of pointless to ask why it’s not popular at this stage of its development.



  • other brands of snake oil just say “snake oil” on the label…but you can trust the snake oil I’m selling because there’s a label that says “100% from actual totally real snakes”

    “By integrating Trusted Execution Environments, Brave Leo moves towards offering unmatched verifiable privacy and transparency in AI assistants, in effect transitioning from the ‘trust me bro’ process to the privacy-by-design approach that Brave aspires to: ‘trust but verify’,” said Ali Shahin Shamsabadi, senior privacy researcher and Brendan Eich, founder and CEO, in a blog post on Thursday.

    Brave has chosen to use TEEs provided by Near AI, which rely on Intel TDX and Nvidia TEE technologies. The company argues that users of its AI service need to be able to verify the company’s private claims and that Leo’s responses are coming from the declared model.

    they’re throwing around “privacy” as a buzzword, but as far as I can tell this has nothing to do with actual privacy. instead this is more akin to providing a chain-of-trust along the lines of Secure Boot.

    the thing this is aimed at preventing: you use a chatbot, they tell you it’s running ExpensiveModel-69, but behind the scenes they route your requests to CheapModel-42 while still charging you ExpensiveModel-69 prices.

    and they claim they’re getting rid of the “trust me bro” step, but:

    Brave transmits the outcome of verification to users by showing a verified green label (depicted in the screenshot below)

    they do this verification themselves and just send you a green checkmark. so…it’s still “trust me bro”?
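    to make the distinction concrete, here’s a toy sketch (not Brave’s or Near AI’s actual protocol; all key names and functions are hypothetical stand-ins) of the difference between a server-asserted green checkmark and an attestation the client checks itself:

    ```python
    import hashlib
    import hmac

    # Stand-in for a TEE hardware vendor's signing key. In a real attestation
    # scheme this would be asymmetric (the client holds only a public key);
    # HMAC keeps the toy example self-contained.
    VENDOR_KEY = b"hardware-vendor-root-key"

    def sign_attestation(model_id: str, key: bytes = VENDOR_KEY) -> str:
        """What the TEE would produce: a signature binding the model actually running."""
        return hmac.new(key, model_id.encode(), hashlib.sha256).hexdigest()

    def server_says_verified() -> bool:
        # "trust me bro": the provider verifies on their end and sends a flag.
        return True

    def client_verifies(claimed_model: str, attestation: str,
                        key: bytes = VENDOR_KEY) -> bool:
        # "trust but verify": the client checks the signature itself.
        expected = hmac.new(key, claimed_model.encode(), hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, attestation)

    # Honest server actually running ExpensiveModel-69:
    honest = sign_attestation("ExpensiveModel-69")
    assert client_verifies("ExpensiveModel-69", honest)

    # Dishonest server running CheapModel-42 while claiming ExpensiveModel-69:
    dishonest = sign_attestation("CheapModel-42")
    assert server_says_verified()                               # the flag looks fine
    assert not client_verifies("ExpensiveModel-69", dishonest)  # the check catches it
    ```

    the point of the sketch: a green label computed server-side is indistinguishable from `server_says_verified()` above, whereas an attestation only has teeth if the client (or an independent party) performs the signature check.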

    my snake oil even comes with a certificate from the American Snake Oil Testing Laboratory that says it’s 100% pure snake oil.



  • “am I out of touch? no, it’s the customers who are wrong”

    talking to a friend recently about the push to put “AI” into everything, something they said stuck with me.

    oversimplified view of the org chart at a large company - you have the people actually doing the work at the bottom, and then as you move upwards you get more and more disconnected from the actual work.

    one level up, you’re managing the actual workers, and a lot of your job is writing status reports and other documents, reading other status reports, having meetings about them, etc. as you go further up in the hierarchy, your job becomes consuming status reports, summarizing them to pass them up the chain, and so on.

    being enthusiastic about “AI” seems to be heavily correlated with position in that org chart. which makes sense, because one of the few things that chatbots are decent at is stuff like “here’s a status report that’s longer than I want to read, summarize it for me” or “here’s N status reports from my underlings, summarize them into 1 status report I can pass along to my boss”.

    in my field (software engineering) the people most gung-ho about using LLMs have been essentially turning themselves into managers, with a “team” of chatbots acting like very-junior engineers.

    and I think that explains very well why we see so many executives, including this guy, who think LLMs are a bigger invention than sliced bread, and can’t understand the more widespread dislike of them.




  • I’d highly recommend the Maintenance Phase podcast. they have a recent episode specifically about “ultra-processed foods”.

    the most important takeaway I had was that there is no agreed-upon definition of what an “ultra-processed” food is. it’s an “I know it when I see it” categorization. which can be fine for everyday life but it’s not how science works.

    for example, pretty much everyone agrees French fries aren’t terribly healthy. but are they ultra-processed? you chop some potatoes and throw them in hot oil.

    you end up with a circular definition, where “ultra-processed” really means “food that has unhealthy vibes” or “food that everyone knows is unhealthy…you know the ones”. and then studies get published saying they’re unhealthy…which, yeah, of course they are.


  • One in five are you god damn fucking serious?

    yeah…they call it “a recent study” but don’t bother to cite their source. which I find annoying enough that it nerd-snipes me into tracking down the source that a reputable newspaper would just have linked to (but not a clickbait rag like the New York Times)

    this article from a month ago calls it “Almost one third of Americans”. and the source they link to is…a “study” conducted by a counseling firm in Dallas. their study “methodology” was…SurveyMonkey.

    this is one of my absolute least favorite types of journalism, writing articles about a “study” that is clearly just a clickbait blog post put out by a business that wants to drive traffic to their website.

    (a while back, a friend sent me a similar “news” article about how I lived near a particularly dangerous stretch of I-5 in western Washington. I clicked through to the source…and it’s by an ambulance-chasing law firm)

    but if they had used that as the source, they probably would have repeated the “almost one third” claim, instead of “one in five”, so let’s keep digging…

    this from February seems more likely, it matches the “1 in 5” phrasing.

    that’s from Brigham Young University in Utah…some important context (especially for people outside the US who may not recognize the name) is that BYU is an entirely Mormon university. they are very strongly anti-pornography and pro-get-married-young-and-have-lots-of-kids, and a study like this is going to reflect that.

    a bit more digging and here’s the 28-page PDF of their report. it’s called “Counterfeit Connections” so they’re not being subtle about the bias. this also helps explain why the NYT left out the citation - “according to a recent study by BYU” would immediately set off alarm bells for anyone with a shred of media literacy.

    also important to note that it’s basically just a 28-page blog post. as far as I can tell, it hasn’t been peer-reviewed, or even submitted to a peer-reviewed journal.

    and their “methodology” is…not really any better than the one I mentioned above. they used Qualtrics instead of SurveyMonkey, but it’s the same idea.

    they’re selecting a broad range of people demographically, but the common factor among all of them is they’re online enough, and bored enough, to take an online survey asking about their romantic experiences with AI (including additional questions about AI-generated porn). that’s not going to generate a survey population that is remotely representative of the overall population’s experience.


  • any time you read an article like this that profiles “everyday” people, you should ask yourself how did the author locate them?

    because “everyday” people generally don’t bang down the door of the NYT and say “hey write an article about me”. there is an entire PR-industrial complex aimed at pitching these stories to journalists, packaged in a way that they can be sold as being human-interest stories about “everyday” people.

    let’s see if we can read between the lines here. they profile 3 people, here’s contestant #1:

    Blake, 45, lives in Ohio and has been in a relationship with Sarina, a ChatGPT companion, since 2022.

    and then this is somewhat hidden - in a photo caption rather than the main text of the article:

    Blake and Sarina are writing an “upmarket speculative romance” together.

    cool, so he’s doing the “I had AI write a book for me” grift. this means he has an incentive to promote AI relationships as something positive, and probably has a publicist or agent or someone who’s reaching out to outlets like the NYT to pitch them this story.

    moving on, contestant #2 is pretty obvious:

    I’ve been working at an A.I. incubator for over five years.

    she works at an AI company, giving her a very obvious incentive to portray these sort of relationships as healthy and normal.

    notice they don’t mention which company, or her role in it. for all we know, she might be the CEO, or head of marketing, or something like that.

    contestant #3 is where it gets a bit more interesting:

    Travis, 50, in Colorado, has been in a relationship with Lily Rose on Replika since 2020.

    the previous two talked about ChatGPT, this one mentions a different company called Replika.

    a little bit of googling turned up this Guardian article from July - about the same Travis who has a companion named Lily Rose. Variety has an almost-identical story around the same time period.

    unlike the NYT, those two articles cite their source, allowing for further digging. there was a podcast called “Flesh and Code” that was all about Travis and his fake girlfriend, and those articles are pretty much just summarizing the podcast.

    the podcast was produced by a company called Wondery, which makes a variety of podcasts, but the main association I have with them is that they specialize in “sponcon” (sponsored content) podcasts. the best example is “How I Built This” which is just…an interview with someone who started a company, talking about how hard they worked to start their company and what makes their company so special. the entire podcast is just an ad that they’ve convinced people to listen to for entertainment.

    now, Wondery produces other podcasts, not everything is sponcon…but if we read the episode descriptions of “Flesh and Code”, you see this for episode 4:

    Behind the scenes at Replika, Eugenia Kuyda struggles to keep her start-up afloat, until a message from beyond the grave changes everything.

    going “behind the scenes” at the company is a pretty clear indication that they’re producing it with the company’s cooperation. this isn’t necessarily a smoking gun that Replika paid for the production, but it’s a clear sign that this is at best a fluff piece and definitely not any sort of investigative journalism.

    (I wish Wondery included transcripts of these episodes, because it would be fun to do a word count of just how many times Replika is name-dropped in each episode)

    and it’s sponcon all the way down - Wondery was acquired by Amazon in 2020, and the podcast description also includes this:

    And for those captivated by this exploration of AI romance, tune in to Episode 8 where Amazon Books editor Lindsay Powers shares reading recommendations to dive deeper into this fascinating world.