The makers

February 15th, 2012

In the long term, our society will either have to change drastically or collapse. As technology progresses, more can be accomplished by fewer people in less time. This means less work to do for an ever-growing population.

It is true that new 'things to do' will come up, but as we apply technological progress to them, again there will be fewer people needed to do them and more people on the planet. This is called progress. (Likewise, by the way, if we don't get off the planet before we use this one up, we're doomed too, and that seems more likely to happen first.)

The reason I bring this up is because of the makers. You may or may not have heard about this wonderful technology called 3D printing. The basic idea is that instead of molding plastic or cutting wood, you print the thing you want or the part you need on a 3D printer. Sounds like death to manufacturers to me. Until now these things have been big and expensive, but recently there have been printers that cost less than $2000. There are plenty of limits, don't you worry. At the moment they can only make things out of certain kinds of plastic, in limited sizes (I think they said a breadbox is the largest thing you can make right now), and it's really slow.

But. That will end quickly. They're working on high-temperature plastics, and even metals, for printing. So the problem quickly becomes that you no longer need to buy anything; you only have to buy the plans and print it yourself. With your expensive personal printer. Except that the printers at this point can almost make all the parts they need to reproduce themselves. (Humans are what 3D printers use to reproduce themselves.)

So the only commodities left on the planet are fuels for energy and feedstock for the printers. Everybody else is out of a job.
Now of course there will be plenty of work in assembling parts, and there's endless office work to do. People can push meaningless paper around forever, so that will occupy some people. But at some point the printers will be able to print robots that can do the parts assembly. Right now they can make all the parts of the printer, but they can't make the circuitry or the programming, so this is still a long way off; that's still really high-tech stuff. But you can see a black cloud forming.

Now forget all that, here's the real problem: the people who make the plans that tell the printer what to print are going to be the powermasters for a little while. They hold the keys to the kingdom. Except that, like music and movies, actual physical STUFF has now been turned into digital media that can easily be copied and pirated. There are free software people who will spend endless hours designing plans for a camshaft for a 1968 corvette, for free, which will rob chevy of any future sales of corvette camshafts.
Of course there are the business people who (will) expect to make a business out of selling plans for parts for everything you would ever otherwise buy an end product of. for. of. Never end a sentence with a preposition. But as soon as one person buys the plans, he can netflix it over to his best buddy, and then quickly there's no market for plans either.
So how will anything ever get done if there's no value in any work or any products? Nobody will be able to work for money and nobody will have any money to spend on materials. Remember, the only thing you have to buy anymore is ink for your printer; everything else is free or stealable. So things will have to change. One way or another, something is going to happen. Probably not in my lifetime, but it will happen; you can't unmake these printers. You can make them illegal, but that will just foster a black market or a regime change.

But like I say, we’ll probably burn up the planet first.

Who knows, maybe land will become scarce, since you can’t print that.
Buy land, they ain’t makin’ any more.

 

A solution to the wikipedia problem.

December 17th, 2011

I just came up with a solution to the wikipedia problem. Every year wikipedia goes on about the millions of dollars they need to keep running. Wikipedia is a volunteer effort just like the local volunteer fire department. The labor is free, but the equipment and resources are what cost the money. Most people don’t have fire equipment to donate to the local fire department, but they do have computers that are idle most of the time.

Wikipedia is the perfect system to run as a world-wide-distributed application. People volunteer content, and they can volunteer cpu and disk too. How big could all the wikipedia data possibly be? It’s mostly text. You know there’ll be some geeks out there more than happy to have copies of the entire thing, and everybody else who contributes disk and cpu (just by running a little application on their pc) would host caches of sections of the whole database. Not outrageous to imagine, and given the state of peer systems nowadays, not that hard to do. If wikipedia started building that system and transferring all their current data to it, they’d never have to ask for money again.
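
Just to make the "sections of the whole database" part concrete, here's a rough sketch of how articles could be assigned to volunteer machines. This is purely illustrative (the node names are invented, and it's not anything wikipedia actually runs): rendezvous hashing picks the same few hosts for a given article every time, and the assignments barely move when volunteers join or leave.

    import hashlib

    def score(node_id: str, article_title: str) -> int:
        """Deterministic per-(node, article) score; the highest scores win."""
        key = f"{node_id}|{article_title}".encode("utf-8")
        return int(hashlib.sha256(key).hexdigest(), 16)

    def hosts_for(article_title: str, nodes: list[str], replicas: int = 3) -> list[str]:
        """Pick the volunteer nodes that should cache this article."""
        ranked = sorted(nodes, key=lambda n: score(n, article_title), reverse=True)
        return ranked[:replicas]

    volunteers = ["alice-pc", "bobs-laptop", "carol-server", "dave-nas", "eds-basement-box"]
    print(hosts_for("3D printing", volunteers))   # the same three machines every time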

 

— later comments —

 

Light O’Matic  –  Well, my first thought is that they would have to have some way of protecting the content from just anyone being able to make their own version of it.. for example, javascript which does a checksum of the page against a per-page hash that is either fetched from a trusted server, or calculated cryptographically with a master key from a trusted server. Second thought was that they’d have to either make the whole wiki editing system work distributed.. or they’d have to keep editing centralized. Then I realized there are actually a lot of systems out there already that at least partly solve these problems and maybe one of them totally solves it…

Stu M  –  Well, firstly realize that there wouldn't be much point to putting up fake copies of your section of the database, because… you can just edit the real thing. The effect is the same. But yeah, you could make it easier with trusted servers. What happens now? There are people who scour the change-history list and just go and edit and validate and remove and stop flamewars. The same thing would happen, but the changes would have to propagate around instead of all being in one place. Not trivial, but I think in the case of wikipedia, it's a lot easier than, say, bank records.

Light O'Matic  –  They could distribute it with git… But maybe it would be simpler to just distribute reads and keep writes centralized. More of a caching scenario. The problem with people being able to modify their copies of pages is that I'm assuming any given page can be served from a lot of different places, so if one or some of them have tainted versions, it might take a while to even notice it. Then you'd have to have a system to do something about removing that bad data. Whereas now, if you edit a page, everyone sees it; it's very clear what happened. If I can serve any data I want and pretend it's from wikipedia, I could serve a worm or virus in otherwise totally legit-looking pages. So, there has to be protection.

Stu M  –  I suppose you could go with the 'signed by one of the trusted authorities' type of thing, which would mean certificate-like data included with all changes, but the trusted part would come from a top-down delegated authority, so the root 'certificate' would be signed by mr wikipedia himself and everybody in the chain would be trusted by him or the guy in the chain above him.
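
For what it's worth, here's what that chain of trust would look like in code. This is only a sketch using the third-party python "cryptography" package and ed25519 keys; the "root" and "editor" names are made up for illustration, and none of this is how wikipedia actually signs anything.

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives import serialization
    from cryptography.hazmat.primitives.asymmetric.ed25519 import (
        Ed25519PrivateKey,
        Ed25519PublicKey,
    )

    def raw_bytes(pub):
        """Raw 32-byte form of an ed25519 public key."""
        return pub.public_bytes(serialization.Encoding.Raw, serialization.PublicFormat.Raw)

    # "mr wikipedia" holds the root key; an editor's key is signed by the root.
    root_key = Ed25519PrivateKey.generate()
    editor_key = Ed25519PrivateKey.generate()
    editor_pub = raw_bytes(editor_key.public_key())
    delegation_sig = root_key.sign(editor_pub)        # root vouches for this editor

    page = b"== 3D printing ==\nThe basic idea is..."
    page_sig = editor_key.sign(page)                  # editor signs the page content

    def page_is_trusted(page, page_sig, editor_pub, delegation_sig, root_pub):
        """What a cache node checks: root signed the editor key, editor signed the page."""
        try:
            root_pub.verify(delegation_sig, editor_pub)
            Ed25519PublicKey.from_public_bytes(editor_pub).verify(page_sig, page)
            return True
        except InvalidSignature:
            return False

    print(page_is_trusted(page, page_sig, editor_pub, delegation_sig, root_key.public_key()))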

 

I have invented the fastest computer in the world.

December 1st, 2011

The super zippy multi core crazy fast microprocessor in your computer spends well over 99% of its lifetime doing absolutely nothing.

On the rare occasion when you can manage to keep it a little busy, you might hear the fan in your PC or laptop spin a little faster, but by and large your processor is idle most of the time.

What a waste. Most of the time the computer is waiting for you to read a web page or your email while it sits there and hums and waits for you to click the next button.

The problem, though, is that when you DO click something, you want it to respond quickly. So you have this incredible amount of processing capacity at your fingertips so that it can dance like crazy for you once every few minutes for a few fractions of a second and sit there useless the rest of the time.

But I have a solution. “What’s the problem?” you’re probably asking yourself…

I have designed a processor that takes all that idle processing capacity and stores it up, and then blasts through it when you want the computer to do something. In this way you can actually buy a lower-capacity processor that functions much better than the current top-of-the-line screamer. So it can be had for a lot less money, and it can be added to, to store more idle capacity, for a lot less than the cost of a new processor or new computer.

If your processor fills up its processor capacity cache, you can sell the excess to big company server farms, who are always in want of more capacity, or even "push" it over to your iphone or android machine. The market for this cache trade will be astronomical in size as more and more systems come online and intel and amd become less capable of enacting more and more of moore's law.

You read it here first.

 

Linux vs Windows

September 2nd, 2011

It's starting to sound to me like the cost of a windows license is cheaper than the cost of a lawyer to figure out whether any and all software you're going to be writing for/against/with will conflict with the zillions of linux-related licenses.

I never thought of it before, but it sounds like the free software people are shooting themselves in the foot by having so many
different incompatible licenses. Actually I don’t know if they’re incompatible or not, but I’m certainly not going to pay a lawyer to
find out.

Now that's just a cost-of-business kinda thing. I fully support anybody who wants to write any software and put as many or as few licenses on it, having to do with static linking or non-distribution or sale, etc… But you gotta figure, the end user (a software development company) is going to take a short soft look at "buy a windows license or figure out what we can and can't easily use in the free software world" and they're going to see that the windows license is an easier deal.

I tell ya, I’m a unix guy through and through, but at this point after hearing about all these different licenses, I’d lean towards
going with windows.

Print going away.

August 22nd, 2011

Is it just me, or does anybody else also think that any publication that's online-only isn't as serious as something in print?

I’m sorry but there’s so much free shit and other pay shit on the net, why would I take your piece of shit any more seriously than just a list of links posted on facebook?

What makes it a magazine, and not just a bunch of pages that link to each other on a website?

It just seems lame and pathetic. Not cohesive at all.

If it was 'an experience' of some kind, rather than just linking from one article to another and being so easily pulled away by an errant ad placed here and there, I might be more inclined. But every magazine on the web is like every other magazine on the web. Just a bunch of free-floating content to be found by google.

Which I guess makes the point: A print magazine is physically cohesive. You can’t accidentally look at an ad and end up reading a different magazine. You can’t find an interesting phrase and easily look up the phrase in the search bar and get drawn away by the wikipedia article on the subject. A magazine is a lot more than just a paper collection of articles. It’s a grouped pile of related information that is logically and physically tied together.

That exclusivity of grouping and physical attachment is what makes a magazine attractive over jumping from link to search box to link to search box.

Magazines that go online only do so because they can’t afford to print paper given that most of their readers are giving up paper for randomly flitting about … well, let’s face it… facebook. And it’s a dying art form and get used to it yada yada yada.

But I bet you won’t see the economist or the new york times going online-only until well after my generation is dead.

The end of swap.

August 9th, 2011

The other day I wanted to check out the new gnome 3 desktop for linux that everybody has been saying sucks so bad.
So I fired up a 4th vbox vm on my machine and installed it. Asking for another gig of memory for the vm, I finally used up all 8 gigs of ram on my machine, and the most interesting thing happened…
It started using swap. I've had this machine for a year or two now, I think, and I got 8 gigs because I found swap annoying, and now I have proof. The problem is that disk is getting bigger and bigger, and programs are getting bigger and bigger, and memory is getting bigger and bigger, but the speed at which you can swap memory into and out of disk hasn't really changed much, certainly not in line with the memory and disk sizes. So what ended up happening was the machine would just freeze and the disk would spin for 15 seconds or so while a gig or two was swapped in or out of memory.

This made me realize that I think we've finally seen the end of swap. There's no point. Memory is so cheap, you might as well just buy more memory and keep everything in it. Now I realize that using huge memory-sucking vms is pretty much the worst-case scenario, and there are probably lots of small things that could be swapped out to disk due to lack of use, but when you start opening firefox (2 gigs resident at the moment) and chrome (another gig or two resident), you run into the same swap problem; the VM just makes it worse faster.
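
If you're curious whether your own linux box is quietly dipping into swap, a few lines of python reading /proc/meminfo will tell you (linux-only; the field names are the standard kernel ones):

    def read_meminfo():
        """Parse /proc/meminfo into a dict of kB values (linux only)."""
        values = {}
        with open("/proc/meminfo") as f:
            for line in f:
                name, rest = line.split(":", 1)
                values[name] = int(rest.split()[0])   # first token is the size, in kB
        return values

    mem = read_meminfo()
    swap_used_kb = mem["SwapTotal"] - mem["SwapFree"]
    print(f"RAM : {mem['MemTotal'] / 1048576:.1f} GiB total, "
          f"{mem['MemFree'] / 1048576:.1f} GiB free")
    print(f"Swap: {swap_used_kb / 1024:.0f} MiB in use")
    if swap_used_kb:
        print("The disk is doing memory's job again.")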

Anyway, so along that stream, since I’ve decided never to use swap again, SSDs become a lot more interesting because you don’t have to worry about burning them out because of swap… So I found this.

http://www.newegg.com/Product/Product.aspx?Item=N82E16820227515

Another reason ipv6 is stupid

May 30th, 2011

I recently heard a talk about the demise of the internet as a result of the exhaustion of ip addresses with ipv4.

I always figured, ‘aahhhh, what a load of crap, just NAT the shit out of everything.’ But the speaker pointed out you run into the problem of port exhaustion on the internet-facing machine. Okay, point taken, I concede, the NAT forever thing won’t work. Although it certainly could last a long long time if they bothered to organize a little better, but I’ll let that go, we really are running out of addresses.

Still, the sky is far from falling. I have one really simple thought that makes all of ipv6 seem really pointless and a terribly complicated exercise in wasting everybody's time.

ipv4 has its share of problems, but the biggest one is that we're running out of addresses; or rather, in February the IANA actually handed out the last batch. That's it, no more.

ipv6 was designed starting 15 years ago or so, and nobody lifted a finger to adopt it, because it was fixing something that wasn't broken. But in all that time, like c++ and everything else, they had grand plans, and they added features. ipv6 was going to streamline all sorts of byte-wasting excessive packet size, it was going to enable ipsec at the ip layer (or something like that, I forget the details), and they were going to add this useful feature and that useful feature, and so on and so forth, for all 15 years that everybody was ignoring them and not implementing it.

But fast forward to now, and it turns out the only problem we ACTUALLY have to solve is that we’re running out of addresses.

ipv6 offers 128-bit source and destination addresses, and the current rollout of ipv6, as it is being adopted, is doing pretty much nothing other than solving the problem of running out of addresses. All that ipsec and all that other grand-vision feature stuff is gone. People are implementing ipv6 because they need more addresses, and that's it.

ipv6 was supposed to be many things to many people, but as it turned out, we only really needed the bigger address space.

Well, if you look at the ipv4 header, there have got to be 3-5 bytes of shit that nobody ever uses for anything (like the fragment stuff) that just go to waste and could have been repurposed for an extra byte or two of source and dest addresses. It may not get you 128 bits of address, but it would push out the address exhaustion problem a few centuries. It would have taken one guy maybe two days to hack it into the linux kernel (and you could even swipe a bit from the version field to say whether or not this is a new-address-style packet, so it could be backward compatible). Microsoft would wait two years, then add support and say they invented it and are responsible for saving the world from the collapse of the internet.
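
For reference, here's the fixed 20-byte ipv4 header pulled apart with python's struct module. The identification, flags, and fragment-offset fields (4 bytes between them), plus the old type-of-service byte, are the "nobody ever uses these" candidates this rant has in mind; which of them you could really get away with stealing is debatable, but the layout itself is straight out of RFC 791.

    import struct

    def parse_ipv4_header(raw: bytes):
        """Unpack the fixed 20-byte ipv4 header (RFC 791 layout, no options)."""
        (ver_ihl, tos, total_len, ident, flags_frag,
         ttl, proto, checksum, src, dst) = struct.unpack("!BBHHHBBH4s4s", raw[:20])
        return {
            "version":        ver_ihl >> 4,         # 4 bits; a spare value here could flag a new format
            "header_len":     (ver_ihl & 0xF) * 4,  # in bytes
            "tos":            tos,                  # 1 byte, rarely meaningful end to end
            "total_length":   total_len,
            "identification": ident,                # 2 bytes, only matters for fragmentation
            "flags":          flags_frag >> 13,     # 3 bits
            "frag_offset":    flags_frag & 0x1FFF,  # 13 bits; fragmentation is rare in practice
            "ttl":            ttl,
            "protocol":       proto,
            "checksum":       checksum,
            "src":            ".".join(str(b) for b in src),
            "dst":            ".".join(str(b) for b in dst),
        }

    # a hand-built example header: 192.0.2.1 -> 198.51.100.7, protocol 6 (tcp)
    example = struct.pack("!BBHHHBBH4s4s", 0x45, 0, 40, 0, 0, 64, 6, 0,
                          bytes([192, 0, 2, 1]), bytes([198, 51, 100, 7]))
    print(parse_ipv4_header(example))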

But no. Instead everybody and their mother had to implement ipv6 which does nothing but add address space.

You almost can’t blame all those fucking morons. If they had just set out to solve the problem that needed solving, they could have implemented the hack ipv4 solution YEARS ago and there would never have been a problem, people would have had plenty of time to implement it before we started using the ‘extra’ address space.

But no, they had to design the next great thing, which was going to solve all the problems of networking in one fell swoop. And because they're fucking morons, they're too dim to see that every other fucking process in the world falls apart the exact same way, and therefore they could not have predicted what actually happened: that ipv6 would be pared down to its one useful feature.

No, you can’t blame them, because they’re too fucking dumb.

A theory about easy programming.

May 9th, 2011

I do most of my programming in java and I use eclipse which just makes everything real easy. I can refactor, flip between source files, look up every instance of something with a few keystrokes and in a matter of seconds.

This past week I had to work on fixing a C program I wrote a few years ago. My dev environment for C programs isn't quite as snappy as my java environment, and I had to do a bunch of things the old-school way. A text editor, a compiler that I actually have to invoke rather than it being run automatically every time I hit save, and so on.

It was a much smaller program than anything I usually work on but because of the environment involved, everything went a lot slower. I had to wait longer between writing something and testing, and between saving and compiling, and between debugging a line and seeing the watch variables get updated. Every little step of the entire process is slower.

And it makes the whole thing take longer as a result, but interestingly, it keeps my attention. While you're in the thick of programming, getting distracted is the kiss of death, so I have to maintain my concentration for much longer periods of time, which gives me more time to think in the context in which I'm working, and gives me time to think through some of the things I'm doing, so that I do the right thing the first time instead of doing it wrong and then doing it again (which, in the fast environment, is so quick it doesn't matter).

And the time just whipped right by, and I’m wondering if that’s not where my love of programming went. Killed by the rapid application development environment.

Making programming brainlessly easy (java combined with eclipse) takes the fun out of it. When was the last time I had to worry about deallocating memory to avoid a memory leak? Years. Have I had a need to make something go so fast that it could only be done in a non-java language? Not in a long long time. 1991, I think. They just keep making hardware faster and faster.

I’m a dying breed I guess.

nazis and closures

April 1st, 2011

It has been long known that the NA in nazi stands for network administrator, but it wasn’t until last week that a friend of mine enlightened me to the full meaning of the acronym: Network Administrator Zero Internet.

In other news, having recently learned about the details of how closures are implemented (in javascript anyway), it seems to me that 'closure' isn't a terribly good word to describe the effect, and instead I think the technique should be called "Flying Scope."
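
For the record, here's the effect in question, sketched in python rather than javascript (the mechanism is the same): the inner function keeps the outer function's local variable alive long after the outer function has returned, which is exactly that scope-flying-off-with-the-function feel.

    def make_counter():
        count = 0                 # local to make_counter...

        def bump():
            nonlocal count        # ...but captured by the inner function
            count += 1
            return count

        return bump               # make_counter returns; its scope "flies off" with bump

    counter = make_counter()
    print(counter())   # 1
    print(counter())   # 2 -- count survived between calls, long after make_counter exited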

How to save an iphone

March 28th, 2011

I just had the best idea for an iphone application. Go ahead steal my idea but give me credit, and some money.
A friend of mine just lost his iphone to the washing machine.
First of all, I can't imagine why they're not waterproof. They have no moving parts and there's no externally accessible battery compartment, so there's no good reason for them not to be.
But anyway. Here’s my idea: an app that uses the motion sensor (and maybe even the camera) to detect when it’s being bounced around (as in a washing machine) and sends you an email saying “Help! I think I’m in the washing machine! Save me!”
If you have any amount of home automation set up, the iphone could power off the washing machine itself and save itself.
Just an idea there.
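
Since somebody will ask what the detection would actually look like, here's a rough sketch. read_acceleration() and send_help_email() are made-up placeholders (the real thing would sit on the phone's motion-sensor and mail APIs); the point is just that tumbling shows up as big, sustained swings in total acceleration.

    import math
    import time

    def read_acceleration():
        """Placeholder: return (x, y, z) in m/s^2 from the phone's accelerometer."""
        raise NotImplementedError("hook up the real motion sensor here")

    def send_help_email():
        """Placeholder for whatever mail/notification hook you have."""
        print("Help! I think I'm in the washing machine! Save me!")

    def looks_like_washing_machine(samples, gravity=9.81, threshold=6.0):
        """Tumbling = most recent readings swinging far away from plain 1 g."""
        swings = [abs(math.sqrt(x * x + y * y + z * z) - gravity) for x, y, z in samples]
        return sum(s > threshold for s in swings) > 0.8 * len(swings)

    def watch(window_seconds=10, hz=20):
        samples = []
        while True:
            samples.append(read_acceleration())
            samples = samples[-window_seconds * hz:]          # sliding window of recent readings
            if len(samples) == window_seconds * hz and looks_like_washing_machine(samples):
                send_help_email()
                return
            time.sleep(1 / hz)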