• 0 Posts
  • 25 Comments
Joined 3 years ago
Cake day: July 1st, 2023

  • The Epstein files obviously contain a lot of information about rape and trafficking, which is very understandably and rightly in the spotlight. But what the files also contain is very detailed information about exactly how our laws and financial systems are being actively exploited to maintain the power of a select few. That is something that is much harder to write a quick article about, by design, but we haven’t even seen some of these names mentioned in the media:

    • de Rothschild (with a very illustrative diagram in EFTA01114424)
    • Thiel
    • Rockefeller
    • Murdoch
    • von Habsburg

    And those are just individuals, not companies. We haven’t heard anything about JP Morgan Chase, Sotheby’s, Goldman Sachs… Or even the universities like Harvard.

    For these, you usually can’t pull a single short damning quote from an email, because it’s not as simple as the horror of one person raping children. But this material lays out the foundation of how that horror was allowed to continue at such a large scale, by so many people.


  • This is the theme of almost all of the “toppling”. Mostly they’ve just… resigned. They probably keep all the perks, and then take up a corporate advisor position once there’s less heat.

    Headlines like this make it sound like there’s been real impact beyond generating articles about a few of the more public figures. But reading the article, it’s really just a few politicians and bureaucrats resigning. Mandelson’s firing was already months ago. The investigation into a former Norwegian PM sounds like that’s as harsh as it’s got for politicians this time. And for private companies, nothing except one law firm board member resigning?

    They’re all getting away with it, and all the victims get is a hundred headlines about Musk being named in the files, and having their lives endangered by the terrible Don-centric redaction.




  • Perhaps something like this: https://lemmy.world/post/42528038/22000735

    Deferring responsibility for risk and compliance obligations to AI is bound to lead to some kind of tremendous problem, that’s a given. But the EU has lately been keen on embedding requirements into its newer digital-realm laws that give market regulators access to internal documents on request.

    That’s not to suggest an Enron-style collapse is anywhere close to a certainty, though. There is still the exceptionally large problem that company executives are part of the power structures which selectively decide whom to prosecute, if their lobbyists haven’t already succeeded in watering the legislation down to be functionally useless before it comes into force. So it will take a huge fuck-up and public pressure for any of the countries to make good on their own rules.

    Given that almost all the media heat from the Trump–Epstein files has been directed at easy-target individual public personalities, completely ignoring the obvious systemic corruption by multiple corporate entities, I don’t have high hopes for that part. But if the impending fuck-up and the scale of noncompliance are big enough, there’s a chance there will be audits and EU courts involved.



    I read both of these, and what struck me was how remarkably naive both studies felt. I found myself thinking: “there’s no way the authors have any background in the humanities.” Lo and behold, it turns out there are two authors, both with computer science degrees. That might explain why they seem somehow incredulous at the results: they’ve approached the problem as evaluating a system’s fitness in a vacuum.

    But it’s not a system in a vacuum. It’s a vacuum that has sucked up our social system, sold to bolster the social standing of the heads of a social construct.

    Had they looked at the context of how AI has been marketed, as an authoritative productivity booster, they might have had some idea why both disempowerment and reduced mastery could be occurring: The participants were told to work fast and consult the AI. What a shock that people took the responses seriously and didn’t have time to learn!

    I’d ask why Anthropic had computer scientists conducting sociological research, but I assume this part of the output has just been published to assuage criticism of their trust and safety practices. The final result will probably be adding another line of ‘if query includes medical term then print “always ask a doctor first”’ to the system prompt.

    This constant vacillation between “it’s a revolution and changes our entire reality!” and “you can’t trust it and you need to do your own research” from the AI companies is fucking tiresome. You can’t have it both ways.




    Actually, since the document is not well formatted, and since mentioning only the thanking part undersells the contradiction, here’s the exchange:

    Epstein: is there a 501 c3 that i could give the 50k to/?
    Goertzel: Yes: the Singularity Institute for AI (redacted).
    Epstein: please send details i will send 50k monday.
    Goertzel: many many thanks! … You won’t regret it ;) The AI we build will thank you one day! I am driving now and will send details when I get home


    Risks and limitations: the study is inherently risky. While it is highly likely that STIs that alter female sexual behavior exist in the wider mammalian order, whether or not they currently infect humans remains unclear. Challenges exist in successfully culturing newly identified STIs and adapting microbes to standardized lab models for testing. Finally, any new STIs will be relatively easy to test for efficacy in animals but costly and otherwise challenging to test in humans, and it is possible that success in animal models will not translate into human efficacy. Risks can be mitigated by simultaneously conducting animal and human studies, increasing the probability of identifying at least a single mammalian agent that modifies female sexual behavior.

    Fucking terrifying.


  • That was my take as well. It’s basically anyone in academia/tech who had a PR machine working for them at the time, and a couple of weird extras.

    How Gromov only landed the underwhelming summary of “American” is interesting; I assume the copy-pasted list was cut short and the next word was “mathematician”.

    If these people did all end up in the same location, it’s probably safe to assume it was a private, unpublicized event. Some of them seem to have been in and around Silicon Valley at the time, so maybe one of the tech fake-charity “foundation” events.