Archive for May, 2011

Another reason ipv6 is stupid

Monday, May 30th, 2011

I recently heard a talk about the demise of the internet as a result of the exhaustion of ip addresses with ipv4.

I always figured, ‘aahhhh, what a load of crap, just NAT the shit out of everything.’ But the speaker pointed out that you run into the problem of port exhaustion on the internet-facing machine. Okay, point taken, I concede, the NAT-forever thing won’t work. Although it certainly could last a long, long time if they bothered to organize a little better. But I’ll let that go; we really are running out of addresses.

Still, the sky is far from falling. I have one really simple thought that makes all of ipv6 look pointless: a terribly complicated exercise in wasting everybody’s time.

ipv4 has its share of problems, but the biggest one is that we’re running out of addresses. Or rather, ran out: in February, the IANA actually handed out the last batch. That’s it, no more.

ipv6 was designed starting 15 years ago or so, back when nobody had lifted a finger to fix something that wasn’t broken. But in all that time, like c++ and everything else, they had grand plans, and they added features. ipv6 was going to streamline all sorts of byte-wasting excess in the packet headers, it was going to enable ipsec at the ip layer (or something like that, I forget the details), and they were going to add this useful feature and that useful feature, and so on and so forth, for all 15 years that everybody was ignoring them and not implementing it.

But fast forward to now, and it turns out the only problem we ACTUALLY have to solve is that we’re running out of addresses.

ipv6 offers a 128 bit source and destination address, and the current rollout of ipv6 as it is being adopted is pretty much doing absolutely nothing other than solving the problem of running out of addresses. All that ipsec and all that other grand vision feature stuff is all gone. People are implementing ipv6 because they need more addresses and that’s it.

ipv6 was supposed to be many things to many people, but as it turned out, we only really needed the bigger address space.

Well, if you look at the ipv4 header, there’s got to be 3-5 bytes of shit that nobody ever uses for anything (like the fragment stuff), that just go to waste and could have been repurposed for an extra byte or two of source and dest addresses. It may not get you 128 bits of address, but it would push out the address exhaustion problem a few centuries. It would have taken 1 guy maybe 2 days to hack it into the linux kernel (and you could even swipe a bit from the version field to say whether or not this is a new-address-style packet, so it could be backward compatible). Microsoft would wait 2 years, then add support and say they invented it and are responsible for saving the world from the collapse of the internet.

But no. Instead everybody and their mother had to implement ipv6 which does nothing but add address space.

You almost can’t blame all those fucking morons. If they had just set out to solve the problem that needed solving, they could have implemented the hacked-ipv4 solution YEARS ago and there would never have been a problem; people would have had plenty of time to implement it before we started using the ‘extra’ address space.

But no, they had to design the next great thing which was going to solve all the problems of networking in one fell swoop. And because they’re fucking morons, they’re too dim to see that every other fucking process in the world falls apart the exact same way, and therefore could not have predicted what actually happened: that ipv6 would be pared down to its one useful feature.

No, you can’t blame them, because they’re too fucking dumb.

A theory about easy programming.

Monday, May 9th, 2011

I do most of my programming in java and I use eclipse which just makes everything real easy. I can refactor, flip between source files, look up every instance of something with a few keystrokes and in a matter of seconds.

This past week I had to work on fixing a C program I wrote a few years ago. My dev environment for C programs isn’t quite as snappy as my java environment, and I had to do a bunch of things the old school way. A text editor, a compiler that I actually have to invoke rather than having it run automatically every time I hit save, and so on.

It was a much smaller program than anything I usually work on but because of the environment involved, everything went a lot slower. I had to wait longer between writing something and testing, and between saving and compiling, and between debugging a line and seeing the watch variables get updated. Every little step of the entire process is slower.

And it makes the whole thing take longer as a result, but interestingly, it keeps my attention. While I’m in the thick of programming, getting distracted is the kiss of death, so I have to maintain my concentration for much longer periods of time. That gives me more time to think in the context in which I’m working, and gives me time to think through some of the things I’m doing, so that I do the right thing the first time instead of doing it wrong and then doing it again, which in the fast environment is so quick it doesn’t matter.

And the time just whipped right by, and I’m wondering if that’s not where my love of programming went. Killed by the rapid application development environment.

Making programming brainlessly easy (java combined with eclipse) takes the fun out of it. When was the last time I had to worry about deallocating memory to avoid a memory leak? Years ago. Have I had a need to make something go so fast that it could only be done in a non-java language? Not in a long, long time. 1991, I think. They just keep making hardware faster and faster.

I’m a dying breed I guess.