• flavia@lemmy.blahaj.zone
    2 months ago

    The paper is so bad…

    the agent’s policy π … the environment ε

    What is up with AI papers using fancy symbols to notate abstract concepts when there isn't a single other instance of the concept to be referred to?

    They offer a bunch of tables of numbers in a metric that isn't explained, showing that the scores are exactly the same for the "random" and "agent" policies; in other words, inputs don't actually matter! And they say they want to use these metrics for training future versions. Good luck.

    For the sample size they are using, 60% seems like a statistically significant rate, and they only tested at most 3 seconds beyond the real gameplay footage.
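
    For reference, whether 60% actually clears a 50/50 coin-flip baseline at a given sample size is a one-line binomial test. The sample size below is made up purely to illustrate the calculation; it is not the paper's actual count:

```python
# Hedged sketch: is a 60% rate distinguishable from 50% chance?
# n is a hypothetical sample size for illustration, NOT the paper's number.
from scipy.stats import binomtest

n = 100                      # hypothetical number of judgments
successes = round(0.60 * n)  # 60% of them
result = binomtest(successes, n, p=0.5, alternative="greater")
print(result.pvalue)         # ~0.028 for n=100; shrinks fast as n grows
```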

    Sidenote: auto-regressive models for much shorter periods are really useful when audio is cutting out. Those use really simple math; they aren't burning any rainforests.
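
    To make that concrete, here's a minimal sketch of the idea: fit a low-order linear AR predictor to the samples right before a dropout and roll it forward to fill the gap. The function and parameters are mine, just for illustration; real packet-loss concealment adds more machinery, but the core math is this:

```python
# Minimal sketch: conceal an audio dropout with a low-order linear AR model.
# Everything here is a toy illustration, not any particular codec's algorithm.
import numpy as np

def ar_fill(history, gap_len, order=16):
    """Fit AR coefficients to `history` by least squares, then extrapolate."""
    x = np.asarray(history, dtype=float)
    # Each sample is modeled as a weighted sum of the `order` samples before it.
    rows = np.stack([x[i:i + order] for i in range(len(x) - order)])
    targets = x[order:]
    coeffs, *_ = np.linalg.lstsq(rows, targets, rcond=None)

    # Roll the predictor forward to synthesize the missing stretch.
    buf = list(x[-order:])
    out = []
    for _ in range(gap_len):
        nxt = float(np.dot(coeffs, buf[-order:]))
        out.append(nxt)
        buf.append(nxt)
    return np.array(out)

# Example: 20 ms of a decaying 440 Hz tone at 16 kHz, then fill a 5 ms gap.
t = np.arange(320) / 16000
tone = np.exp(-5.0 * t) * np.sin(2 * np.pi * 440 * t)
gap = ar_fill(tone, gap_len=80)
```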

    I’m willing to retract my statement that these guys don’t have any ulterior motives.

    • YourNetworkIsHaunted@awful.systems
      2 months ago

      There are serious problems with how easy it is to adopt the aesthetic of serious academic work without adopting the substance. Just throw in a bunch of meaningless graphs and equations, pretend some of the things you're talking about are represented by Greek letters, and it's close enough for even journalists who should really know better (to say nothing of the VCs who hold the purse strings) to take you seriously and adopt the "it doesn't make sense because I'm missing something" attitude.