In the early 1990s, internetworking wonks realized the world was not many years away from running out of Internet Protocol version 4 (IPv4) addresses, the numbers needed to identify any device connected to the public internet. With interest in the internet booming, the networking community went looking for ways to head off an address shortage that many feared would hamper technology adoption and, with it, the global economy.

A possible fix arrived in December 1995 in the form of RFC 1883, the first definition of IPv6, the planned successor to IPv4.

The most important change from IPv4 to IPv6 was moving from 32-bit to 128-bit addresses, a decision that increased the available pool of IP addresses from around 4.3 billion to over 340 undecillion – a 39-digit number. IPv6 was therefore thought to have future-proofed the internet, because nobody could imagine humanity would ever need more than a handful of undecillion IP addresses, never mind the entire range available under IPv6.
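The size of that jump is easy to check for yourself; the "340 undecillion" figure is just 2^128 written out:

```python
ipv4_total = 2 ** 32    # 32-bit addresses
ipv6_total = 2 ** 128   # 128-bit addresses

print(f"IPv4: {ipv4_total:,}")                 # 4,294,967,296 — about 4.3 billion
print(f"IPv6: {ipv6_total:,}")
print(f"IPv6 digit count: {len(str(ipv6_total))}")  # 39
```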

  • Max@lemmy.world · 6 days ago

    Isn’t the recommended strategy to delegate a larger prefix to the gateway and then make smaller subnetworks from that for each interface? Then you don’t have to deal with separate prefixes.

    • osaerisxero@kbin.melroy.org · 6 days ago

      It is. The way AT&T handles it is that they hand out only one /64 of the delegated /60 per explicit IA-PD request, rather than the full /60 they allocate by default (which, note, is not on a nibble boundary like it’s supposed to be, and you only get half of it as usable).

      But on every gateway device I’ve used, even if you get a full /56 prefix, you still have to explicitly assign each /64 to sub-interfaces. Really, IPv6 is a bunch of great ideas which were ruined by shitty implementations everywhere.

      • Max@lemmy.world · 6 days ago

        Wow, that’s extremely annoying.

        On OpenWrt, you just tell the interface to grab a /64 from any other interface that tags its delegation as shareable. And on the source interface you can specify the priority with which those /64s are given out.
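For reference, the OpenWrt behavior described above is driven by a couple of netifd options in `/etc/config/network`. This is a hedged sketch, not a complete config; interface and device names are placeholders:

```
# Upstream: request a delegated prefix from the ISP over DHCPv6
config interface 'wan6'
        option proto 'dhcpv6'
        option reqprefix 'auto'   # ask for whatever size the ISP will delegate

# Downstream: carve a /64 out of any delegated (shareable) prefix
config interface 'iot'
        option proto 'static'
        option ip6assign '64'     # take one /64 from the delegated pool
        option ip6hint '3'        # prefer the ...3::/64 slice if it's free
```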

        • osaerisxero@kbin.melroy.org · 6 days ago

          That seems reasonable to me as far as implementations go. The ones that auto-assign always just overload PD index 0, which is worse than doing nothing imo lmao

    • dan@upvote.au · 6 days ago

      Exactly. Most good ISPs will give you a /56 or /60 range if your router asks for it, and then you can subnet it into multiple /64 ranges (16 /64 networks for a /60, or 256 networks for a /56).
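The arithmetic above is easy to verify with Python's standard `ipaddress` module (the 2001:db8::/32 documentation prefix stands in for a real ISP delegation):

```python
import ipaddress

# A /60 yields 16 /64 networks; a /56 yields 256.
for delegated in ("2001:db8:1200::/60", "2001:db8:1200::/56"):
    net = ipaddress.IPv6Network(delegated)
    lans = list(net.subnets(new_prefix=64))
    print(f"{net} -> {len(lans)} /64 networks, from {lans[0]} to {lans[-1]}")
```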

      I have three VLANs with internet access (main, guest, and IoT), and each one gets its own /64 range.

      Note that you shouldn’t use subnets smaller than a /64, as several features (such as SLAAC and privacy extensions) rely on it.
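The /64 floor exists because SLAAC hosts fill in the low 64 bits of the address themselves, classically with a modified EUI-64 identifier derived from the interface's MAC (per RFC 4291). A rough sketch of that derivation; function names here are my own, not a standard API:

```python
import ipaddress

def eui64_interface_id(mac: str) -> int:
    """Build the 64-bit modified EUI-64 identifier from a 48-bit MAC."""
    b = bytearray(int(octet, 16) for octet in mac.split(":"))
    b[0] ^= 0x02  # flip the universal/local bit
    eui = bytes(b[:3]) + b"\xff\xfe" + bytes(b[3:])  # insert ff:fe in the middle
    return int.from_bytes(eui, "big")

def slaac_address(prefix: str, mac: str) -> ipaddress.IPv6Address:
    net = ipaddress.IPv6Network(prefix)
    # The interface ID occupies the low 64 bits, so the prefix must be a /64.
    assert net.prefixlen == 64, "SLAAC requires a /64 prefix"
    return net[eui64_interface_id(mac)]

print(slaac_address("2001:db8:0:1::/64", "00:11:22:33:44:55"))
# → 2001:db8:0:1:211:22ff:fe33:4455
```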

      • WhyJiffie@sh.itjust.works · 6 days ago

        Note that you shouldn’t use subnets smaller than a /64, as several features (such as SLAAC and privacy extensions) rely on it.

        It seems like such a silly oversight in IPv6. Sometimes you just can’t have multiple /64 subnets, because the ISP only gives you a single /64.