Today I realized we write our numbers backwards.

December 26th, 2015

We write our letters from left to right, but we write our numbers from right to left.

When we write the number 100, we write the digits 1, 0, 0 from left to right, but that’s not what I mean.

When we add up a series of numbers, we right-align them so that all the digits with the same place value line up in a column. To make that work, we effectively write the numbers from right to left.

Wouldn’t it make more sense to write the least significant digit first, so 100 would be written 001? Then we could write from left to right and the numbers would still line up correctly. Adding 100 and 2036, for example…

001
6302
------
6312
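The same idea can be sketched in a few lines of shell (a toy, assuming the util-linux `rev` utility is available): reverse the little-endian numerals into the usual order, add, and reverse the sum back.

```shell
#!/bin/sh
# Add two numerals written least-significant-digit first
# ("001" means one hundred): reverse to conventional order,
# add, and reverse the sum back. The sed strips leading zeros
# so the shell doesn't read them as octal.
le_add() {
  a=$(printf '%s\n' "$1" | rev | sed 's/^0*//')
  b=$(printf '%s\n' "$2" | rev | sed 's/^0*//')
  printf '%s\n' "$(( ${a:-0} + ${b:-0} ))" | rev
}

le_add 001 6302   # prints 6312, i.e. 100 + 2036 = 2136
```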

THEY FIXED IT!

December 24th, 2015

A few months ago I bought a netgear R6250 router. It does 802.11ac, and once I had something that did ac, it was wicked fast.

Overall I was less impressed with it than the Actiontec (or whatever it was) router that verizon gave me with my FIOS, but the fios router didn’t even do 802.11n, so it was time to upgrade.

It’s good enough: it works, mostly, and doesn’t need to be rebooted all that often. But I found one extremely annoying problem with the interface that drove me bonkers.

It has a port forwarding feature like I expect most routers do nowadays, which allows me to connect to the various machines I have on my network when I am away from home. I’m that rare breed that does system administration from my phone.

But the port forwarding feature had a weird limitation. You couldn’t forward to the same numbered port on two different machines.

So if I had 2 machines running ssh on port 22, I couldn’t assign two different external ports to be forwarded (say 22 and 23) to forward to port 22 on two different machines. It just wouldn’t let me do it.
There’s no technical reason you can’t do it, my fios router let me do it, but the UI would not allow the reuse of the number.
I fiddled with the javascript where the restriction is enforced and I saw it was just an arbitrary rule in the UI.
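Under the hood this is just NAT; on a linux box the equivalent setup would be two DNAT rules like these (a sketch for illustration only: the interface name and internal addresses are placeholders, not anything from my actual network):

```shell
# Forward two different external ports to port 22 on two
# different internal machines -- exactly what the router UI forbade.
# eth0 and the 192.168.1.x addresses are made-up placeholders.
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 22 \
  -j DNAT --to-destination 192.168.1.10:22
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 23 \
  -j DNAT --to-destination 192.168.1.11:22
```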

I didn’t care enough to craft a request that would bypass the UI and submit what I wanted to the router directly, because it was easier just to have my machines listen for ssh connections on different ports, but it always annoyed me.
I wrote a letter to tech support and I think they basically said “suck on it.”

Well here I am a few months later logging into my router’s admin page and it says there’s a firmware upgrade. So I do it and of course the first thing I try and do is map two ports to the same port on different machines, and voila! It worked!

They fixed it. They actually fixed it.

Yay netgear.

For reference, the old firmware version was V1.0.3.6_10.1.3 and it upgraded to V1.0.4.2_10.1.10.


Finished languages

November 7th, 2015

Why can’t they leave well enough alone? The C++ people won’t be happy until they’ve turned C into java and python and erlang and lisp.

I don’t get it, sometimes languages actually get finished. And sometimes people feel the need to make things “better” (in the progress sense) rather than make something new.

C itself was finished. Things like binutils are finished. Nobody feels the need to change ls all the time.
Why is it that some things can be left alone and some have to be ego-ed to death?

A friend of mine told me this story about how he was interviewing somebody for a C++ job and they didn’t know what a pointer was.

He thought this outlandish, but to me it makes sense. They have added so much crap to C++ that they’re trying to turn it into a new language.

If you read the internoise, you’ll see people say native C arrays are bad in C++. It took me a long time to figure out what they meant by that.

There’s nothing actually wrong with native C arrays; they’re just ‘dangerous’ for people who only know the ‘new’ C++, where you don’t have to know what pointers are.

Why is this a problem? You could say “Well, smarty pants, just don’t use the new features you don’t like.” And I’m all for that, that’s exactly what I do. I use the good early parts and leave the line noise alone.

But alas, maybe I work with other people who suffer from that ancient programmer’s ailment of having to play with all the cool new toys as soon as they’re discovered, and then you find yourself working on line noise.

So what languages have been finished?

Well, assembly language changes as they add features to processors. I suppose there’s no getting around that.

Fortran I think is done.

Cobol is done.

C is done.

Bash is done.

C++ can’t be left alone.

PHP can’t be left alone, there’s some great rants about that.

And python seems to also be in update hell, so there are python and python3 binaries…

What else…

Software gone wrong.

October 4th, 2015

Dkms tries to solve the problem of updating 3rd party kernel modules when a new kernel is installed.
It might work but it seems on redhat and centos installations there’s also another script called weak-modules.
The purpose of weak-modules is to detect if an existing kernel module built for one kernel is compatible with a newly installed kernel and won’t bother recompiling the module if it thinks the existing build is compatible with the new kernel. It makes a symlink from the new module’s location to the old built copy of the module.
The problem is that weak-modules doesn’t work very well.

What it does is compare the symbol table of the two kernels as they relate to the symbols used by the kernel module. If there are no changes to any of the symbols that the kernel module uses, it assumes they’re compatible and makes a symlink and dkms won’t try and build it against the new kernel headers.

Depending on the kernel module, the result can be that you upgrade your kernel, and when you reboot, you get a kernel panic, with the name of the module as the problem. The module didn’t do anything wrong, the weak-modules script did.

So if you upgrade your kernel and you’re unlucky enough to have a kernel module loaded that doesn’t use any of the symbols that changed in the new kernel, but is incompatible with it in some way the symbol check can’t see, you could end up with a computer that won’t boot anymore.
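The check itself can be sketched in a few lines (a simplified model, not the actual weak-modules script): compare the CRCs of the symbols the module imports, roughly what `modprobe --dump-modversions foo.ko` prints, against the new kernel’s exported symbol versions, and call it compatible only if nothing the module uses changed. Which is exactly the hole: “nothing it uses changed” is not the same as “it will work.”

```shell
#!/bin/sh
# Sketch of the weak-modules compatibility test.
#   $1: file of "crc symbol" pairs the module was built against
#   $2: file of "crc symbol" pairs the new kernel exports
#       (the first two columns of its symvers file)
weak_check() {
  sort "$1" > /tmp/wc_mod.syms
  sort "$2" > /tmp/wc_kern.syms
  # comm -23 prints lines present in the module's list but
  # missing from the kernel's, i.e. symbols whose CRC changed
  if comm -23 /tmp/wc_mod.syms /tmp/wc_kern.syms | grep -q .; then
    echo "rebuild needed"
  else
    echo "compatible"   # weak-modules would symlink the old .ko
  fi
}
```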


Software gone wrong.

October 4th, 2015

Acidrip is installed by xubuntu by default. I think it’s the default dvd ripper.
Somebody changed one of the underlying tools that acidrip uses so that it can no longer enumerate the list of chapters on a dvd. So you can’t get a chapter list, so you can’t rip a dvd with acidrip. Acidrip, the default dvd ripper, is now useless.
I checked this out and it seems there’s an extra newline after each line of response from the chapter enumerator. Whatever the reason, it breaks acidrip and you can’t rip dvds.

Software gone wrong.

October 4th, 2015

Recently I found that xubuntu on my laptop would not show me the desktop after waking up.
On the second or third try it would work. It seems this is a known problem, and the solution is to uninstall lightlocker and install xscreensaver.
So somebody broke lightlocker and now it’s almost useless. It doesn’t get fixed and it doesn’t get removed as the default screen lock handler; you just end up with a laptop that won’t reliably wake up.
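For the record, the workaround amounts to two commands (assuming the ubuntu package names are light-locker and xscreensaver, which is my recollection):

```shell
# swap the broken locker for xscreensaver
sudo apt-get remove light-locker
sudo apt-get install xscreensaver
```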

Save buttons and backups.

September 17th, 2015

It seems to me that it’s gotten to the point where we don’t need a save button anymore.
Actually I think this happened a long time ago, but we are all so hung up on our save buttons.
In windows programs you’ll see (although I don’t use windows much so maybe this isn’t true anymore) that the icon for the save button is a picture of a 3.5 inch floppy disk, an item so far out of use that amazon charges $1.50 for one of them, after showcasing all of the discontinued listings.
But what is the save button for? It takes the data in memory and puts it on disk.
Why can’t the computer do this by itself? Why can’t it permanently record every bit of work you do, so you won’t lose it?
Well, the classic argument is that if you screw up what’s in memory you can fall back to the last saved version.
A valuable technique for sure, but you only get one version of history and it is easily wiped away if you accidentally hit save. It’s hard to argue that the save button is a serious archival data system.

My point is that computers have gotten complex enough and fast enough and disk is so cheap and voluminous that there’s no reason the computer can’t keep track of all of your history since the dawn of time.
Some applications provide this functionality, I think ms office products do, and I know eclipse has a built in local history feature, but these are limited to specific applications.

But it turns out there’s a way to provide this history for everything you do, it’s called zfs snapshots.
Alas I don’t think they’ve ported zfs to windows, and a little poking around says they’re not going to implement btrfs either, but what you can do is set up a virtual machine running some form of linux that exposes a samba share attached to a zfs drive.
And just have the drive take snapshots every minute.
It only stores the deltas, so it will take you quite a while to fill up a $50 terabyte drive.
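A minimal version of “snapshot every minute” is just a cron job (a sketch assuming a pool named zhome; the script name and the week-long retention number are arbitrary choices, not anything sacred):

```shell
#!/bin/sh
# Run from cron once a minute:  * * * * *  /usr/local/bin/snap-minute.sh
# Take a timestamped snapshot, then prune everything older than a
# week (10080 minutes) so the auto snapshots don't pile up forever.
zfs snapshot "zhome@auto-$(date +%Y%m%d-%H%M%S)"
zfs list -H -t snapshot -o name -s creation \
  | grep '^zhome@auto-' \
  | head -n -10080 \
  | xargs -r -n1 zfs destroy
```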

I’m a linux guy so I use it for everything. I’ve gotten into the habit of storing everything in my home directory, so I made my home directory a zfs filesystem and I run these scripts to snapshot every minute of use, and I never lose anything…
https://github.com/nixomose/scripts/tree/master/zfs
https://github.com/nixomose/zfs-scripts/tree/master/scripts
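And when you do clobber a file, getting it back is just a copy out of the hidden .zfs directory at the top of the filesystem (a sketch; the snapshot name and file paths here are made up):

```shell
# every snapshot shows up read-only under .zfs/snapshot
# at the filesystem's mountpoint (here, my home directory)
ls ~/.zfs/snapshot/
# pull yesterday's copy of a file back (names are illustrative)
cp ~/.zfs/snapshot/auto-20150916-2359/projects/notes.txt ~/projects/
```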

Can’t say enough good things about zfs.

I had to take some notes for the process of converting my home directory to be a zfs filesystem.

There’s docs on how to make your root filesystem a zfs filesystem, but it’s a real complicated hassle, and does it really matter if /var/ and /run and /usr are on zfs? That stuff doesn’t change much, and you can reinstall it. The valuable stuff you want to backup and archive is usually in your home directory.

So firstly, google and mozilla make a big mess of your home directory, via the .cache and .mozilla directories. You’re better off moving those outside of your home directory and making symlinks to them.

So here’s a cheatsheet list of things I did to make a zfs volume out of my home directory:

(presuming you’re using all of your disk space, and need to make room for the zfs pool)
boot live cd
shrink /
make new partition, leave blank

reboot to your machine again, make sure everything is sane.

I use ubuntu, your mileage will vary with distro…

add-apt-repository ppa:zfs-native/stable
apt-get update
apt-get install ubuntu-zfs
modprobe zfs
cd /home  (don’t tie up your home dir by having it as your current directory)

sudo su

mv username oldusername

(assuming /dev/sda5 is your freed-up partition space)

zpool create -m /home/username zhome /dev/sda5

(this alone is worth the price of admission…)
zfs set compression=lz4 zhome
cd oldusername
mv * .[!.]* ../username/  (the .[!.]* pattern skips . and .. ; a plain .* would try to move those too)
chown -R username:username ../username
reboot just to make sure zfs automounts it.

and now you can add the backupnotidle script here for added fun.

https://github.com/nixomose/scripts/tree/master/zfs


Yet another note on star wars

September 7th, 2015

There’s that scene where obi-wan has to sit down after alderaan was blown up by the death star.
He’s deeply hurt because of a disturbance in the force.
Well this just occurred to me: how fast is the force? How long does it take to travel interstellar distances?

The first thing we learn about star wars is that it happens in a different galaxy a long long time ago, and very far away. It’s the absolute first thing we find out.
So if we have our galaxy, and we know there’s another galaxy where star wars lives, it’s not too much of a stretch to say that there are probably other galaxies, and some of them will also have the force.

If the galaxy is really big, I mean really really big (you-may-think-it’s-a-long-way-down-to-the-chemist big) then there are probably millions upon millions of galaxies that are force enabled. Probably more.

Some of them will have empire like things, and some of them will have darth vaders who will go blowing up planets full of people.

But there is only one universe.

So it seems to me given the number of likely force-enabled planets being blown up all the time, obi-wan would never be able to walk.

Patents

August 15th, 2015

So there is this long-running debate about the value of patents which stretches back hundreds of years.

If I understand it correctly, some people think that inventors should be rewarded for their ideas and be given an exclusive use period of time for selling their invention as the reward.

Other people think that patents stifle innovation and should be done away with.

So let’s think about that for a minute. What would happen if there were no more patents?

People who invented things for profit made by taking advantage of the patent system would stop inventing things.

Does this mean nothing would ever be invented ever again? Probably not. What would happen is that the same type of people who write open source software would invent things and make the invention publicly available for the greater good and expect no financial gain in return.

Maybe they’d get some popularity out of it which might get them a job or funding for their project, but they wouldn’t expect to make any money directly from their invention.

The other side of the coin is that not all inventions are cheap and easy to create. Like drugs from pharmaceutical companies, sometimes a lot of R&D goes into inventing something, and this could not be produced by a handful of volunteers willing to give their ideas away.

So what we’d end up with is something different that would work better in some cases, and worse in others. Simpler ideas would be picked up by quick manufacturers and brought to market more quickly. And complex and difficult ideas just wouldn’t happen, or would take a lot longer as the ideas were slowly grown over long periods of people standing on the shoulders of their predecessors.

There is also the argument that people who find no value in patents aren’t creative and have never had an idea worth protecting and therefore would only gain by getting rid of the patent system.


Gotta Get Git

June 8th, 2015

I think I finally get git. I did my first rebase today and I understood what it was doing, and it did what I wanted.

Feeling confident in my understanding of git, and my slight understanding of github, I feel the need to point out a funny little thing I noticed about the brave new world of source control management we all seem to have entered into.

In the good old days, you had version control like cvs or svn.
Everybody checked out, made changes, and checked in; if somebody got their changes in before you, you had to merge their updates and commit again. If there was a merge conflict, it was your problem: you had to fix it before you committed.

This spread the responsibility of doing the merging around to everybody and I guess it was unfair in that if you were slower, or committed less often, you had to do more merging. But that’s how it was, erm… still is to some people.

In short, you checked out, you did work and you checked in. Everybody played nice, you tried not to break anything and the build ran off HEAD in the repository.

Git is actually very cool, but I think it is overkill. Mercurial, which I also found out about, is a happy medium: it gives you all the cool parts of git without having gone overboard the way git does. But that’s just my goldilocks opinion.

Git is the cool source control management system of the decade, and ‘github’ is the new way of working.

So this is what I noticed:

svn/cvs/perforce/etc:
1) check out
2) make changes
3) check in.

git + github:
1) fork project
2) clone your copy of the project
3) check out from your local repository to your work directory
4) make changes
5) add changed files to list of things to commit
6) commit changes to your local repository (admittedly this can be done in one step, but it is actually 2 steps)
7) push changes from your local repository to your forked repository on github.
8) open a pull request to the owner/project manager of the original project you forked, where they will then pull the changes from your forked repository to the main repository.
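In command form, steps 2 through 7 come out to something like this (the repository and branch names are made up for illustration; steps 1 and 8 happen in the github web UI):

```shell
# 2) clone your fork of the project
git clone git@github.com:you/project.git
cd project
# 3) your clone is a full repository; make a branch to work in
git checkout -b my-fix
# 4) ... make changes ...
# 5) stage the changed files
git add src/thing.c
# 6) commit to your local repository (add + commit: the 2 steps)
git commit -m "fix the thing"
# 7) push the branch to your fork on github
git push origin my-fix
```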

I think we can safely call this ‘progress’.

I think the oddest thing about this new process is that svn used to tell you, when you tried to commit, that you had to update and merge changes first.
The github way, github will mention whether your changes will or won’t merge cleanly, but that doesn’t stop you from submitting a pull request.
So the job of rejecting your pull request becomes the problem of a human (the owner of the project who got the pull request), whereas it used to be a computer that did it.
I don’t know github that well; maybe it’s configurable to disallow pull requests when the merge won’t be clean, but if not, making a human part of the process seems like a step backwards.

I realize that github now gives the project owner the flexibility of not accepting changes they don’t want, but, I dunno, where I come from, we’re all supposed to be working on the same team.

Maybe that’s the core difference between the open source environment and the corporate environment: you can ignore the work of open sourcers, after all, you’re not paying them anyway.