There are lots of legacy fields in the TCP/IP headers. One of them could carry an extra octet.
When legacy IPv4 traffic flies around, that octet will be null or 0. The entire internet could route just fine, especially if you put the extra octet at the end: 1.1.1.1 gets an extra 1.1.1.1.newoctet.
So every existing IP gets a bonus 255 new IPs, and for now, routing of those is hardlocked to that IP, and it works with all legacy gear.
In 30 years or something, we can care about the mobility of those new IPs.
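The scheme described above is hypothetical, not any real protocol, but a minimal sketch of it might look like this: a fifth octet appended to the dotted quad, with 0 reserved to mean "the legacy IPv4 host itself", and all 256 sub-addresses routed to the same legacy address:

```python
def parse_extended(addr: str) -> tuple:
    """Parse a hypothetical 5-octet address like '1.1.1.1.7'.

    A plain 4-octet address is treated as having a fifth octet of 0,
    which in this sketch means 'the legacy IPv4 host itself'.
    """
    parts = [int(p) for p in addr.split(".")]
    if not all(0 <= p <= 255 for p in parts):
        raise ValueError("octet out of range")
    if len(parts) == 4:
        return tuple(parts), 0              # legacy address, implicit .0
    if len(parts) == 5:
        return tuple(parts[:4]), parts[4]
    raise ValueError("expected 4 or 5 octets")

def legacy_route_target(addr: str) -> str:
    """Per the 'hardlocked routing' idea: every sub-address routes
    to the same 32-bit legacy IPv4 address."""
    base, _sub = parse_extended(addr)
    return ".".join(str(o) for o in base)
```

So `legacy_route_target("1.1.1.1.9")` gives back plain `1.1.1.1`, which is exactly why legacy gear keeps working: it never sees the fifth octet at all.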
You're at the very beginning, baby steps stage of inventing IPv6 there.
You aren't the first person to come up with the idea of adding extra bits to IP addresses to make them longer. The problem isn't finding somewhere to stash the extra bits in the packet format; that part is trivial (you can simply set the next-protocol field to a special value and then put the bits at the start of the payload). The problem is getting all software to use those extra bits, and getting that to work requires doing everything v6 does: a new address family, a new sockaddr struct, new DNS records, dual stack/translation/tunnels, etc etc.
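The "trivial" part can be sketched in a few lines. This is an illustration only: the shim layout is made up, and protocol number 253 is just a placeholder (253/254 are reserved for experimentation per RFC 3692, not assigned to any such scheme):

```python
import struct

EXPERIMENTAL_PROTO = 253  # experimentation range (RFC 3692); placeholder only

def wrap_payload(real_proto: int, extra_src: int, extra_dst: int,
                 payload: bytes) -> bytes:
    """Prepend a hypothetical 3-byte shim carrying the real next-protocol
    value plus one extra address octet each for source and destination.

    The outer IPv4 header would carry EXPERIMENTAL_PROTO in its protocol
    field; legacy routers never look at the payload and forward on the
    32-bit addresses as usual. Upgraded hosts peel this shim off.
    """
    return struct.pack("!BBB", real_proto, extra_src, extra_dst) + payload

def unwrap_payload(data: bytes) -> tuple:
    """Recover the real protocol, the extra octets, and the inner payload."""
    real_proto, extra_src, extra_dst = struct.unpack("!BBB", data[:3])
    return real_proto, extra_src, extra_dst, data[3:]
```

The packet-format half really is that easy; the hard half (every socket API, resolver, and application understanding the extra octets) is the part the comment is about.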
Please consider that maybe the people working on v6 weren't actually complete imbeciles and did in fact think things through.
It is possible for the world to change, and for designs and plans and viewpoints 30+ years ago to be less correct today.
This world is not that world. That world had massive concerns about the processing cost of NAT; that was one reason for IPv6. It also had different ideas about where the net would go. We now know that the "internet of things", "having your fridge online", and "5G in everything so people can't firewall it off" are just insane and malign.
We also know that tying an IP address to a person (compared to an ISP using NAT) reduces privacy, and that devious and devilish actors abound.
Even though they thought these things might be neat, many of them aren't.
None of that has anything to do with what you said in the post I replied to. "Add an extra octet to v4 addresses" has hard technical barriers to deal with if you want it to work, regardless of what the world looks like or what you're designing for.
> We now know that the "internet of things" and "having your fridge online", as well as "5G in everything so people can't firewall it off" is just insane and malign
None of this is really relevant either. IP's job is to handle the addressing used when sending data over the Internet, and it should do this job well regardless of what people end up doing with it.
> We also know that tying an IP address to a person (compared to an ISP using NAT) reduces privacy
We don't tie IP addresses to people. PI allocations might sort of count, but regular users don't get those.
None of that has anything to do with what you said in the post I replied to.
Of course not; why would it? I quoted what I was replying to, and all of my comments made perfect sense in that context. In that context, I was discussing the original design considerations of IPv6 (the design that won), and yes, "IPs for everything" was one of them, hence my talking about it.
I intended the quoted part to mean something like "they did consider adding extra octets to v4 addresses and setting those octets to zero to mean v4".
It's not like they weren't able to come up with that idea. It's just that if you follow that train of thought through to its conclusion, you'll either decide it can't work or you'll make enough changes to end up with something that works basically the same way v6 does.
But yes, having enough IPs for everything was obviously a design goal. It would be excessively silly to go through all the work to increase the address size and not increase it by enough to handle whatever people ended up wanting to do with it.
> That world had massive concerns about the processing cost of NAT
The processing cost of NAT is still a problem. There's that classic post by a Native American tribal ISP where it was cheaper for them to pay to replace their clients' IPv4-only Roku devices with IPv6-capable Apple TVs than to upgrade their CGNAT appliance to handle the video traffic.
The concerns about the "processing cost of NAT" were edge concerns: companies, homes, edge devices with 100 or 1,000 RFC 1918-addressed devices behind them. When IPv6 was created, NAT wasn't a thing, as the processing power just wasn't there.
And it was thought the processing power would never be there.
Yet now everyone has NAT in little devices at home, so the need to route 100 IPs into every person's home isn't a thing. Which is in line with my comment about how the world looked different 30 years ago, and how the concept of "IPs for everything" is the reverse of what people even want now.
We have that variant of IPv8: it's what CGNAT gives you, especially if you run MAP-E or MAP-T (which are technically not quite NAT, but kinda are; it's… complicated). You take some bits from the port number and essentially repurpose them into part of the address.
It's a nice band-aid technology, no less and no more.
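The port-bit trick is easy to sketch. This is a simplified illustration, not the actual RFC 7597 algorithm (the real MAP port layout excludes the well-known port range and interleaves the PSID bits); here the top bits of the 16-bit port simply act as extra "address" bits picking out one subscriber behind a shared IPv4 address:

```python
def ports_for_psid(psid: int, psid_bits: int = 4) -> range:
    """Simplified MAP-style port partitioning: the top `psid_bits` of
    the 16-bit port number identify one subscriber sharing the IPv4
    address. With 4 bits, 16 subscribers get 4096 ports each.
    """
    if not 0 <= psid < (1 << psid_bits):
        raise ValueError("psid out of range")
    span = 1 << (16 - psid_bits)        # ports per subscriber
    return range(psid * span, (psid + 1) * span)

def psid_for_port(port: int, psid_bits: int = 4) -> int:
    """Recover which subscriber a given external port belongs to."""
    return port >> (16 - psid_bits)
```

That's also why it's a band-aid: every PSID bit you borrow halves the number of ports each subscriber has left.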
Have that be the invisible bottom layer. Come up with a list of 256 common words, one per byte, and have that be the human-visible IP address. Mentally reading a string of words, however nonsensical, is way easier than a soup of undifferentiated hex digits.
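The encoding itself is trivial; a sketch, with a placeholder word list standing in for the 256 real common words (something like the PGP word list would be the obvious choice):

```python
# Hypothetical 256-entry word list. A real deployment would use 256
# common, phonetically distinct words; placeholders keep this sketch
# self-contained.
WORDS = [f"word{i:03d}" for i in range(256)]
WORD_TO_BYTE = {w: i for i, w in enumerate(WORDS)}

def ip_to_words(addr: str) -> str:
    """Render a dotted-quad IPv4 address as four human-readable words."""
    return "-".join(WORDS[int(octet)] for octet in addr.split("."))

def words_to_ip(words: str) -> str:
    """Invert the mapping back to dotted-quad form."""
    return ".".join(str(WORD_TO_BYTE[w]) for w in words.split("-"))
```

The hard part isn't the code, it's agreeing on one word list, which is exactly the localisation objection raised in the reply below.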
That would cause worse confusion when working with teams in different locales, not to mention the complexity of now adding localisation to the address parser.
Awesome. If you think that is stopping anyone, here's a challenge for you:
GNU Bash is GPL. You can run Bash (and many other Linux commands) in Windows through Windows Subsystem for Linux. In fact, WSL is a nice example of Microsoft doing embrace & extend.
The challenge: find Microsoft's published code for Bash.
WSL does not include Bash. When you use Bash from within WSL, you are using the version of Bash that was included in the upstream Linux distribution you installed. If you are using a Debian-based image, you can get the source code with `apt source bash` (with deb-src sources enabled).
My point exactly (notice I didn't say MS distributes bash - it doesn't, as you pointed out).
Bash being GPL doesn't stop MS from benefiting from it by providing it to WSL users, which makes WSL more valuable to them. It also (as we discussed in the other comment) doesn't prevent Amazon from running a database and charging people for it.
So what's this great advantage of GPL that it would make it worthwhile to keep the entire copyright system just so we could still have GPL?
If you dig around in its origins, the GPL was conceived as a tool to "fight the system from within the system". If there's no system, you don't have to fight the system.
Then why did you ask for something if you knew it didn't exist???
Overall, I think you are mistaken about the purpose of the GPL. It does not prohibit commercial activities, nor was it ever intended to. RMS and the FSF have been pretty clear about this for many decades; in fact, they are against the idea of licenses that prohibit commercial use.
The reason that large, successful projects like Linux are so capable is not that they have a price tag of zero (and they often don't), but the feedback loop created by the viral nature of the software license.
The vast majority of Linux is not a volunteer project -- but software developed by commercial software engineers who are being paid by a company to write software. Before copyleft, the idea that they would voluntarily share source code was laughable. The only reason they do is because they are legally required to do so.
This viral nature of copyleft creates a positive feedback loop:
1. A company uses the software because it is free and solves a problem
2. They need a modification, so they make it
3. They contribute it back to the project because the copyright license requires it
4. The project becomes more valuable at solving more problems that other companies have
5. Go to step 1
Breaking this feedback loop would put companies back to their natural state of not sharing. The result is that the software landscape would start to look a lot like the 80s and 90s again.
Without copyright, copyleft would not exist. And without copyleft, Linux would have been a hobby OS that died out in the early 90s. We'd be using things like Windows Server, Unix, etc. And to protect their business in the absence of copyright, they'd have heavy DRM schemes, obfuscation, cryptographic licensing, etc.
This entire comment is completely backwards. Linux gained momentum first, then it was adopted by the wider industry.
It's much easier to upstream your desired changes than to maintain a separate fork (closed or otherwise) long-term. Additionally, many of the contributors have been using it for their own servers, not because they were required to contribute back.
Things like NVidia and other closed drivers show you can bolt a non-open part onto the GPL code if you try hard enough.
> And without copyleft, Linux would have been a hobby OS that died out in the early 90s. We'd be using things like Windows Server, Unix, etc.
This ignores the entire existence of FreeBSD, NetBSD, OpenBSD.
> they'd have heavy DRM schemes, obfuscation, cryptographic licensing
This ignores the existence of heavy DRM schemes, obfuscation, kernel-level anticheat spyware, criminalisation of copyright-circumvention schemes, etc.
At this point, I think you're just trolling, so I'll stop here.
> It's much easier to upstream your desired changes than maintain a separate fork (closed or otherwise) long-term. Additionally, many of the contributors have been using it for own servers, not required to contribute back.
Then why is ~75% of the kernel from corporate commits today? Do you think large tech companies just coincidentally became generous with the advent of Linux?
> This ignores the entire existence of FreeBSD, NetBSD, OpenBSD.
The BSDs are quite niche in install base and rely heavily on GPL'd ports from Linux.
And by far the most popular OS in the BSD family tree is macOS, which is primarily closed source.
> This ignores the existence of heavy DRM schemes, obfuscation, kernel-level anticheat spyware, criminalisation of copyright-circumvention schemes, etc.
I'm not ignoring it; I'm telling you that it would be more common if you removed all of the other mechanisms a company could choose from. Without any legal controls whatsoever, the only way to control the use of a company's software would be through technical means. Removing the other options would incentivize this.
What’s funny is that it can answer that correctly, but it fails on "A plane crashes right on the border between Austria and Switzerland. Where do you bury the dead?"
For me, when I asked this (but with respect to the border between Austria and Spain), Claude still thought I was asking the survivors riddle, and ChatGPT thought I was asking about the logistics. Only Gemini caught the impossibility, since there’s no shared border.