• 0 Posts
  • 15 Comments
Joined 23 hours ago
Cake day: August 25th, 2025




  • Interesting connection I just made:
    I was wondering about the Hamburg reference and if there was a connection to the Beatles (from Liverpool) mania of the 60s that started in Hamburg…
    Turns out I was right: the linked page basically says that the pic was part of a Beatles photo report by Stern (a famous German magazine from Hamburg) depicting the English youth culture of that time.
    The other pics in the series are also worth a look!




  • Yeah, Spotify sucks for many reasons, the ones mentioned here not least among them.

    A while ago I returned to classic MP3 radio streams, grey-area streaming workarounds and occasionally sailing the high seas again.
    And instead of throwing my money at Spotify (which then doesn’t give it to the artists I listen to), I make sure to support the bands directly.
    Go to concerts, buy merch, support promising projects on e.g. Startnext.

    Win-win-win scenario:
    I get my music, artists get more money, and the greedy corporate music structures around Spotify no longer get my support.





  • Multiplexer@discuss.tchncs.de to Technology@lemmy.zip · AI 2027 · 17 hours ago

    You are probably quite right, which is a good thing, but the authors take that into account themselves:

    “Our team’s median timelines range from 2028 to 2032. AI progress may slow down in the 2030s if we don’t have AGI by then.”

    They are citing an essay on this topic, which elaborates on the things you just mentioned:
    https://www.lesswrong.com/posts/XiMRyQcEyKCryST8T/slowdown-after-2028-compute-rlvr-uncertainty-moe-data-wall

    I will open a champagne bottle if there is no breakthrough in the next few years, because then the pace will slow down significantly.
    But it will still not stop, and that is the thing.
    I myself might not be around anymore if AGI arrives in 2077 instead of 2027, but my children will be, so I am taking the possibility seriously.

    And pre-2030 is also not completely out of the question. Everyone has been quite surprised by how well LLMs work.
    There might be similar surprises in store for the other missing components, like world models and continuous learning, which is a somewhat scary prospect.

    And alignment is already a major concern even now; let’s just say “Mecha-Hitler”, crazy fake videos and bot armies pushing someone questionable’s agenda…
    So it seems like a good idea to press for control and regulation, even if the more extreme scenarios are likely to happen decades in the future, if at all…



  • Multiplexer@discuss.tchncs.de to Technology@lemmy.zip · AI 2027 · 18 hours ago

    I think the point is not that it is really going to happen at that pace, but to show that it very well might happen within our lifetime. Also, the authors have adjusted the earliest possible point of a hard-to-stop runaway scenario to 2028, afaik.

    Kind of like the atomic doomsday clock, which has oscillated between a quarter to twelve and a minute before twelve over the last decades, depending on active nukes and current politics. It helps to illustrate an abstract but nonetheless real risk with maximum possible impact (the annihilation of mankind; not fond of the idea…)

    Even if it looks like AI has hit some walls for now (which I am glad about) and is overhyped, it might not stay that way. So although AGI seems unlikely at the moment, taking the possibility into account, and perhaps slowing down to make sure we are not recklessly risking our own destruction, is still a good idea, which is exactly the authors’ point.

    Kind of like scanning the sky with telescopes and doing DART-style asteroid research missions is still a good idea, even though the probability for an extinction level meteorite event is low.