• 0 Posts
  • 50 Comments
Joined 7 months ago
Cake day: May 16th, 2025

  • Yeah, it’s not like reviewers can just write “This paper is utter trash. Score: 2” unless ML is somehow an even worse field than I previously thought.

    They referenced someone who had a paper get rejected from conferences six times, which to me is an indication that their idea just isn’t that good. I don’t mean this as a personal attack; everyone has bad ideas. It’s just that at some point, you just have to cut your losses with a bad idea and instead use your time to develop better ideas.

    So I am suspicious that when they say “constructive feedback”, they don’t mean “how do I make this idea good” but instead “what are the magic words that will get my paper accepted into a conference”. ML has become a cutthroat publish-or-perish field, after all. It certainly won’t help that LLMs are effectively trained to glaze the user at all times.


  • AI researchers are rapidly embracing AI reviews, with the new Stanford Agentic Reviewer. Surely nothing could possibly go wrong!

    Here’s the “tech overview” from their website.

    Our agentic reviewer provides rapid feedback to researchers on their work to help them to rapidly iterate and improve their research.

    The inspiration for this project was a conversation that one of us had with a student (not from Stanford) who had their research paper rejected 6 times over 3 years. They got a round of feedback roughly every 6 months from the peer review process, and this commentary formed the basis for their next round of revisions. The 6 month iteration cycle was painfully slow, and the noisy reviews — which were more focused on judging a paper’s worth than providing constructive feedback — gave only a weak signal for where to go next.

    How is it that, when people try to argue for the magical benefits of AI on a task, it always comes down to arguing “well actually, humans suck at the task too! Look, humans make mistakes!”? That seems to be the only way they can justify the fact that AI sucks. At least it spews garbage fast!

    (Also, this is a little mean, but if someone’s paper got rejected 6 times in a row, perhaps it’s time to throw in the towel, accept that the project was never that good in the first place, and try better ideas. Not every idea works out, especially in research.)

    When modified to output a 1-10 score by training to mimic ICLR 2025 reviews (which are public), we found that the Spearman correlation (higher is better) between one human reviewer and another is 0.41, whereas the correlation between AI and one human reviewer is 0.42. This suggests the agentic reviewer is approaching human-level performance.

    Actually, all my concerns are now completely gone. They found that one number is bigger than another number, so I take back all of my counterarguments. I now have full faith that this is going to work out.
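    For a sense of what that 0.41-vs-0.42 comparison actually measures, here is Spearman’s rank correlation computed by hand. The scores below are invented for illustration (not from their data); the point is that rank correlations on small, noisy sets of reviewer scores swing around far more than a 0.01 gap.

```python
# Spearman rank correlation, computed from scratch: rank both score lists
# (averaging ranks for ties), then take the Pearson correlation of the ranks.

def ranks(xs):
    # Assign 1-based ranks, giving tied values the average of their ranks.
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(xs):
        j = i
        while j + 1 < len(xs) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average rank of the tied block
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(a, b):
    ra, rb = ranks(a), ranks(b)
    n = len(a)
    ma, mb = sum(ra) / n, sum(rb) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(ra, rb))
    sa = sum((x - ma) ** 2 for x in ra) ** 0.5
    sb = sum((y - mb) ** 2 for y in rb) ** 0.5
    return cov / (sa * sb)

# Hypothetical 1-10 scores from two reviewers on the same six papers.
human_a = [3, 7, 5, 8, 2, 6]
human_b = [4, 6, 6, 7, 3, 5]
print(round(spearman(human_a, human_b), 2))  # prints 0.9
```

    Change a single score in either list and the coefficient jumps by far more than the 0.01 margin they are leaning on.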

    Reviews are AI generated, and may contain errors.

    We built this for researchers seeking feedback on their work. If you are a reviewer for a conference, we discourage using this in any way that violates the policies of that conference.

    Of course, we need the mandatory disclaimers that will definitely be enforced. No reviewer will ever be a lazy bum and use this AI for their actual conference reviews.



  • Referencing the telephone game does not prove anything here. The telephone game shows that humans are bad at copying something exactly without changes, a task computers are better at. But the question here is whether AI can achieve a deeper understanding of a work, which is what a good summary requires. That is something humans are far better at. The AI screws up the summary here in ways that no reasonable person who has watched the TV series (or played the games) ever would.




  • The most obvious indication of AI I can see is the countless paragraphs that start with a boldfaced “header” with a colon. I consider this to be terrible writing practice, even for technical/explanatory writing. When a writer does this, it feels as if they don’t even respect their own writing. Maybe their paragraphs are so incomprehensible that they need to spoonfeed the reader. Or, perhaps they have so little to say that the bullet points already get it across, and their writing is little more than extraneous fluff. Yeah, much larger things like sections or chapters should have titles, but putting a header on every single paragraph is, frankly, insulting the reader’s intelligence.

    I see AI output use this format very frequently though. Honestly, this goes to show how AI appeals to people who only care about shortcuts and bullshitting instead of thinking things through. Putting a bold header on every single paragraph really does appeal to that type.



  • If rationalists could benefit from just one piece of advice, it would be: actions speak louder than words. Right now, I don’t think they understand that, given their penchant for 10k word blog posts.

    One non-AI example of this is the most expensive fireworks show in history, I mean, the SpaceX Starship program. So far, they have had 11 or 12 test flights (I don’t care to count the exact number at this point), and not a single one of them has delivered anything into orbit. Fans generally tend to cling to a few parlor tricks like the “chopstick” stuff. They seem to have forgotten that their goal was to land people on the moon, a goal already accomplished over 50 years ago with the 11th flight of the Apollo program.

    I saw this coming from their very first Starship test flight. They destroyed the launchpad as soon as the rocket lifted off, with massive chunks of concrete flying hundreds of feet into the air. The rocket itself lost control and exploded 4 minutes later. But by far the most damning part was when the camera cut to the SpaceX employees wildly cheering. Later on there were countless spin articles about how this test flight was successful because they collected so much data.

    I chose to believe the evidence in front of my eyes over the talking points about how SpaceX was decades ahead of everyone else, SpaceX is a leader in cheap reusable spacecraft, iterative development is great, etc. Now, I choose to look at the actions of the AI companies, and I can easily see that they do not have any ethics. Meanwhile, the rationalists are hypnotized by the Anthropic critihype blog posts about how their AI is dangerous.




  • After the bubble collapses, I believe there is going to be a rule of thumb for whatever tiny niche use cases LLMs might have: “Never let an LLM have any decision-making power.” At most, LLMs will serve as a heuristic function for an algorithm that actually works.
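    A minimal sketch of that rule of thumb, with everything hypothetical: `propose` stands in for an LLM call, and the only thing it is allowed to do is suggest candidates. A deterministic check makes every actual decision, with a correct-by-construction fallback when the heuristic whiffs.

```python
# "Never let an LLM have any decision-making power": the untrusted proposer
# suggests candidate answers, a deterministic verifier accepts or rejects,
# and a slow-but-correct algorithm catches anything the heuristic misses.
# Toy task: find a nontrivial factor of n (or n itself if n is prime).

def propose(n):
    # Stand-in for an LLM call: cheap guesses, no guarantees whatsoever.
    return [2, 3, 5, 7, n - 1, n // 2]

def smallest_factor(n):
    # Deterministic fallback that is always correct (trial division).
    return next(d for d in range(2, n + 1) if n % d == 0)

def factor_with_heuristic(n):
    # A proposal is only used after it passes verification.
    for d in propose(n):
        if 1 < d < n and n % d == 0:  # the verifier decides, not the LLM
            return d
    return smallest_factor(n)

print(factor_with_heuristic(15))  # prints 3
print(factor_with_heuristic(13))  # prints 13 (prime; fallback answered)
```

    The output is correct whether the heuristic is brilliant or garbage; a good heuristic only makes it faster. That is the ceiling I mean by “heuristic function for an algorithm that actually works.”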

    Unlike the railroads of the First Gilded Age, I don’t think GenAI will have many long-term viable use cases. The problem is that it has two characteristics that do not go well together: unreliability and expense. Generally, it’s not worth spending lots of money on a task where you don’t need reliability.

    The sheer expense of GenAI has been subsidized by the massive amounts of money thrown at it by tech CEOs and venture capital. People do not realize how much hundreds of billions of dollars is. On a more concrete scale, people only see the fun little chat box when they open ChatGPT, and they do not see the millions of dollars worth of hardware needed to even run a single instance of ChatGPT. The unreliability of GenAI is much harder to hide completely, but it has been masked by some of the most aggressive marketing in history towards an audience that has already drunk the tech hype Kool-Aid. Who else would look at a tool that deletes their entire hard drive and still ever consider using it again?

    The unreliability is not really solvable (after hundreds of billions of dollars of trying), but the expense can be reduced at the cost of making the model even less reliable. I expect the true “use cases” to be mainly spam, and perhaps students cheating on homework.




  • Promptfans still can’t get over the Erdős problems. Thankfully, even r/singularity has somehow become resistant to the most overhyped claims. I don’t think I need to comment on this one.

    Link: https://www.reddit.com/r/singularity/comments/1pag5mp/aristotle_from_harmonicmath_just_proved_erdos/

    alt text (original claim)

    We are on the cusp of a profound change in the field of mathematics. Vibe proving is here.

    Aristotle from @HarmonicMath just proved Erdos Problem #124 in @leanprover, all by itself. This problem has been open for nearly 30 years since conjectured in the paper “Complete sequences of sets of integer powers” in the journal Acta Arithmetica.

    Boris Alexeev ran this problem using a beta version of Aristotle, recently updated to have stronger reasoning ability and a natural language interface.

    Mathematical superintelligence is getting closer by the minute, and I’m confident it will change and dramatically accelerate progress in mathematics and all dependent fields.


    alt text (comments)

    Gcd conditions removed, still great, but really hate the way people shill their stuff without any rigor to explaining the process. A lot of things become very easy when you remove a simple condition. Heck, the Riemann hypothesis is technically solved for function fields over finite fields. But nowadays in the age of hype, a tweet would probably say “Riemann hypothesis oneshotted by AI” even though that’s not true.

    Gcd conditions removed

    So they didn’t solve the actual problem?




  • We will secure energy dominance by dumping even more money and resources into a technology that is already straining our power grid. But don’t worry. The LLM will figure it all out by reciting the Wikipedia page for Fusion Power.

    AI is expected to make cutting-edge simulations run “10,000 to 100,000 times faster.”

    Turns out it’s not good to assume that literally every word that comes out of a tech billionaire’s mouth is true. Now everyone else thinks they can get away with just rattling off numbers where their source is they made it the fuck up. I still remember Elon Musk saying a decade ago that he could make rockets 1,000 times cheaper, and so many people just thought it was going to happen.

    We need scientists and engineers. We do not need Silicon Valley billionaire visionary innovator genius whizzes with big ideas who are pushing the frontiers of physics with ChatGPT.




  • In my experience most people just suck at learning new things, and vastly overestimate the depth of expertise required. It doesn’t take that long to learn how to do a thing. I have never written a song (without AI assistance) in my life, but I am sure I could learn within a week. I don’t know how to draw, but I know I could become adequate for any specific task I am trying to achieve within a week. I have never made a 3D prototype in CAD and then used a 3D printer to print it, but I am sure I could learn within a few days.

    This reminds me of another tech bro many years ago who also thought that expertise is overrated, and things really aren’t that hard, you know? That belief eventually led him to make a public challenge that he could beat Magnus Carlsen in chess after a month of practice. The WSJ picked up on this, and decided to sponsor an actual match with him and Carlsen. They wrote a fawning article about it, but it did little to stop his enormous public humiliation in the chess community. Here’s a reddit thread discussing that incident: https://www.reddit.com/r/HobbyDrama/comments/nb5b1k/chess_one_month_to_beat_magnus_how_an_obsessive/

    As a sidenote, I found it really funny that he thought his best strategy was literally to train a neural network and … memorize all the weights and run inference with mental calculations during the game. Of course, on the day of the match, the strategy was not successful because his algorithm “ran out of time calculating”. How are so many techbros not even good at tech? Come on, that’s the one thing you’re supposed to know!