This is awesome, and github is not.

March 25th, 2016

http://sethrobertson.github.io/GitFixUm/fixup.html

 

There should be more fix-it-yourself tools like this.

Although if nothing else, it speaks to how overly (and seemingly unnecessarily) complicated git is.

 

Did I rant about this facet of github? I finally hit upon the single most poignant problem with github. This is really a git problem, but the problem is forced on you more strongly with github.

Because it’s a distributed system, there are bits and copies of a repository all over the place, and the defining bit of information that is missing is Which One Is Important.

You can figure out that this fork came from that repo and that repo came from that fork, but that doesn’t mean the most recent fork is the correct one.

Once I make a repo somewhere it’s stuck there forever, because if I try to move it, by forking and using the new one… now I have two, and there’s no obvious way to mark one repository as being the “right” one. Maybe I forked something and made some changes, but the whole thing was a bad idea and I just want to use the original. But I go away, come back a month later, and I have no idea which one I should do builds off of.

With a central repository, there is really no question where you go to do a build.

You can kind of solve the problem with git by making a bare repository and having just one, and then it’s a little more obvious that THAT’S the one. The bare-ness of it is a flag saying this one is more important than all the others; this is the one where I put the live stuff.
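
For example (the paths here are made up), the old-school convention is one bare repo that everybody agrees is the build source, and everything else is just a working copy:

git init --bare /srv/git/project.git
git clone /srv/git/project.git
(work in the clone, push back to the bare one, and only ever build from the bare one)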

But github doesn’t have that. Everything is a repo exactly like every other repo, forked or not.

They just need a flag saying “important”, with the rule that you can’t flag more than one repo in a fork network that way. Something like that.

 

 

root, not root and back again.

March 25th, 2016

I used to be of the mind that it’s my machine, and I’m going to log in as root because I don’t need to be hindered from doing what I want on my machine.

Over the years I gradually saw the wisdom of doing most things as a non-root user: it protects me from making dangerous mistakes, it lets me set up my system so I can share it with other people, and there’s more and more software that complains when it’s run as root.

But now I work at a job where I do a lot of things that involve being root a lot of the time. What, you ask? Mostly reading raw data from block devices, and updating system configuration that needs to change a lot. And I find typing sudo su plus my password all the time to be really annoying. The computer is supposed to be a tool for the human, not the other way around.
Also, with the advent of virtual machines and containers, there’s almost no need to ever share a machine with another person; you can always give them their own vm or container.

So I’m starting to lean back toward the logging-in-as-root side. Again, it really is my machine and I can do what I want with it, and having it stop me all the time so I can type “sudo su” plus my password is annoying, since I’m just going to run the same command again anyway. And if the command was a mistake, the prompt doesn’t prevent it; it just takes that much longer to happen before I can go fix my mistake.
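
There is a middle ground, for what it’s worth: tell sudo to stop asking for the password. This is standard sudoers syntax (add it with visudo; “username” here stands in for whatever your login is):

username ALL=(ALL) NOPASSWD: ALL

Then sudo -i gets you a root shell with no password prompt, without actually logging in as root.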

 

Today I realized we write our numbers backwards.

December 26th, 2015

We write our letters from left to right, but we write our numbers from right to left.

When we write the number 100, we write the digits 1, 0, 0 from left to right, but that’s not what I mean.

When we add up a series of numbers, we right-align them so that all the positions with the same value are lined up in a column. In order to make that work, we effectively write the numbers from right to left.

Wouldn’t it make more sense to write the least significant digit first? Then 100 would be written 001, we could write from left to right, and the numbers would still line up correctly. Adding 100 and 2036, for example…

001
6302
------
6312
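
Just to play with the idea: the unix rev command reverses a string, so you can fake backwards arithmetic in the shell (same numbers as above):

echo $(( $(echo 001 | rev) + $(echo 6302 | rev) )) | rev

rev turns 001 back into 100 and 6302 into 2036, the shell adds them to get 2136, and rev flips the sum back out as 6312.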

THEY FIXED IT!

December 24th, 2015

A few months ago I bought a netgear R6250 router. It does 802.11ac, and once I had something on the other end that did ac, it was wicked fast.

Overall I was less impressed with it than with the actiontec (or whatever it was) router that verizon gave me with my FIOS, but the fios router didn’t even do 802.11n, so it was time to upgrade.

It’s good enough; it mostly works and doesn’t need to be rebooted all that often, but I found one extremely annoying problem with the interface that drove me bonkers.

It has a port forwarding feature like I expect most routers do nowadays, which allows me to connect to the various machines I have on my network when I am away from home. I’m that rare breed that does system administration from my phone.

But the port forwarding feature had a weird limitation. You couldn’t forward to the same numbered port on two different machines.

So if I had 2 machines running ssh on port 22, I couldn’t assign two different external ports (say 22 and 23) to be forwarded to port 22 on those two machines. It just wouldn’t let me do it.
There’s no technical reason you can’t do it; my fios router let me do it. The UI simply would not allow the reuse of the number.
I fiddled with the javascript where the restriction is enforced and saw it was just an arbitrary rule in the UI.
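
Just to show there’s nothing hard about it, here’s the same mapping as plain iptables NAT rules, which is roughly what these routers are doing under the hood anyway (the internal addresses are made up):

iptables -t nat -A PREROUTING -p tcp --dport 22 -j DNAT --to-destination 192.168.1.10:22
iptables -t nat -A PREROUTING -p tcp --dport 23 -j DNAT --to-destination 192.168.1.11:22

Two external ports, one internal port number, two machines.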

I didn’t bother writing a request that would go past the UI and submit what I wanted to the router directly, because it was easier just to have my machines listen for ssh connections on different ports, but it always annoyed me.
I wrote a letter to tech support and I think they basically said “suck on it.”

Well here I am a few months later logging into my router’s admin page, and it says there’s a firmware upgrade. So I do it, and of course the first thing I try is mapping two external ports to the same port on different machines, and voila! It worked!

They fixed it. They actually fixed it.

Yay netgear.

For reference, the old firmware version was V1.0.3.6_10.1.3 and it upgraded to V1.0.4.2_10.1.10

 

Finished languages

November 7th, 2015

Why can’t they leave well enough alone? The C++ people won’t be happy until they’ve turned C++ into java and python and erlang and lisp.

I don’t get it, sometimes languages actually get finished. And sometimes people feel the need to make things “better” (in the progress sense) rather than make something new.

C itself was finished. Things like binutils are finished. Nobody feels the need to change ls all the time.
Why is it that some things can be left alone and some have to be ego-ed to death?

A friend of mine told me this story about how he was interviewing somebody for a C++ job and they didn’t know what a pointer was.

He thought this outlandish, but to me it makes sense. They have added so much crap to C++ that they’re trying to turn it into a new language.

If you read the internoise, you’ll see people say native C arrays are bad in C++. It took me a long time to figure out what they meant by that.

There’s nothing actually wrong with native C arrays; they’re just ‘dangerous’ for people who only know the ‘new’ C++ where you don’t have to know what pointers are.

Why is this a problem? You could say “Well, smarty pants, just don’t use the new features you don’t like.” And I’m all for that, that’s exactly what I do. I use the good early parts and leave the line noise alone.

But alas, maybe I work with other people who suffer from that ancient programmer’s ailment of having to play with all the cool new toys as soon as they’re discovered, and then you find yourself working on line noise.

So what languages have been finished?

Well, assembly language changes as they add features to processors. I suppose there’s no getting around that.

Fortran I think is done.

Cobol is done.

C is done.

Bash is done.

C++ can’t be left alone.

PHP can’t be left alone, there’s some great rants about that.

And python seems to also be in update hell, so there’s python and python3 binaries…

What else…

Software gone wrong.

October 4th, 2015

Dkms tries to solve the problem of updating 3rd party kernel modules when a new kernel is installed.
It might work, but it seems on redhat and centos installations there’s also another script called weak-modules.
The purpose of weak-modules is to detect whether an existing kernel module built for one kernel is compatible with a newly installed kernel; if it decides the existing build is compatible, it doesn’t bother recompiling the module, it just makes a symlink from the new kernel’s module location to the old built copy of the module.
The problem is that weak-modules doesn’t work very well.

What it does is compare the symbol tables of the two kernels, as they relate to the symbols used by the kernel module. If there are no changes to any of the symbols the module uses, it assumes the module is compatible and makes the symlink, and dkms won’t try to build it against the new kernel headers.
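
A rough sketch of that compatibility test (this is the idea, not the actual script; OLD, NEW and the module path are placeholders):

modprobe --dump-modversions /lib/modules/OLD/extra/mymod.ko | awk '{print $1, $2}' | sort > /tmp/mod.crcs
zcat /boot/symvers-NEW.gz | awk '{print $1, $2}' | sort > /tmp/kernel.crcs
comm -23 /tmp/mod.crcs /tmp/kernel.crcs

If that prints nothing, every symbol the module imports has the same CRC in the new kernel, so weak-modules calls it compatible and symlinks the old .ko instead of rebuilding.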

Depending on the kernel module, the result can be that you upgrade your kernel, and when you reboot you get a kernel panic with the name of the module as the problem. The module didn’t do anything wrong; the weak-modules script did.

So if you upgrade your kernel and you’re unlucky enough to have a module loaded whose symbols all survived unchanged in the new kernel (so weak-modules links in the old build) even though the module isn’t actually compatible, you can end up with a computer that won’t boot anymore.

 

Software gone wrong.

October 4th, 2015

Acidrip is installed by xubuntu by default. I think it’s the default dvd ripper.
Somebody changed one of the underlying tools that acidrip uses so that it can no longer enumerate the list of chapters on a dvd. No chapter list means no ripping, so acidrip, the default dvd ripper, is now useless.
I checked this out and it seems there’s an extra newline after each line of response from the chapter enumerator. Whatever, it doesn’t matter; it breaks acidrip and you can’t rip dvds.
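
For the record, it’s the sort of breakage a one-liner works around. If the chapter enumerator (lsdvd, assuming that’s still the tool acidrip calls) now emits a blank line after every real line, stripping them is trivial:

lsdvd -c /dev/dvd | sed '/^$/d'

Somebody just has to care enough to do it.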

Software gone wrong.

October 4th, 2015

Recently I found that xubuntu on my laptop would not show me the desktop after waking up.
On the second or third try it would work. It seems this is a known problem, and the solution is to uninstall light-locker and install xscreensaver.
So somebody broke light-locker and now it’s almost useless. It doesn’t get fixed and it doesn’t get removed as the default screen lock handler; you just end up with a laptop that won’t reliably wake up.
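
The workaround, for anybody else stuck with this (standard package names on ubuntu, assuming xubuntu still ships light-locker):

sudo apt-get remove light-locker
sudo apt-get install xscreensaver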

Save buttons and backups.

September 17th, 2015

It seems to me that it’s gotten to the point where we don’t need a save button anymore.
Actually I think this happened a long time ago, but we are all so hung up on our save buttons.
In windows programs you’ll see (although I don’t use windows much, so maybe this isn’t true anymore) that the icon for the save button is a picture of a 3.5 inch floppy disk: an item so far out of use that amazon charges $1.50 for one of them, after showcasing all of the discontinued listings.
But what is the save button for? It takes the data in memory and puts it on disk.
Why can’t the computer do this by itself? Why can’t it permanently record every bit of work you do, so you won’t lose it?
Well, the classic argument is that if you screw up what’s in memory you can fall back to the last saved version.
A valuable technique for sure, but you only get one version of history, and it’s easily wiped away if you accidentally hit save. It’s hard to argue that the save button is a serious archival data system.

My point is that computers have gotten complex enough and fast enough and disk is so cheap and voluminous that there’s no reason the computer can’t keep track of all of your history since the dawn of time.
Some applications provide this functionality (I think ms office products do, and I know eclipse has a built-in local history feature), but these are limited to specific applications.

But it turns out there’s a way to provide this history for everything you do, it’s called zfs snapshots.
Alas, I don’t think they’ve ported zfs to windows, and a little poking around says they’re not going to implement btrfs either, but what you can do is set up a virtual machine running some form of linux that exposes a samba share backed by a zfs drive.
And just have the drive take snapshots every minute.
It only stores the deltas, so it will take you quite a while to fill up a $50 terabyte drive.
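
The every-minute snapshot part can be as small as one root crontab line (a stripped-down sketch; the real scripts linked below do more):

* * * * * /sbin/zfs snapshot zhome@auto-$(date +\%Y-\%m-\%d_\%H:\%M)

And getting a file back is just a copy out of the hidden snapshot directory:

cp ~/.zfs/snapshot/auto-2016-03-25_10:15/notes.txt ~/notes.txt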

I’m a linux guy so I use it for everything. I’ve gotten into the habit of storing everything in my home directory, so I made my home directory a zfs filesystem and I run these scripts to snapshot every minute of use, and I never lose anything…
https://github.com/nixomose/scripts/tree/master/zfs
https://github.com/nixomose/zfs-scripts/tree/master/scripts

Can’t say enough good things about zfs.

I had to take some notes for the process of converting my home directory to be a zfs filesystem.

There’s docs on how to make your root filesystem a zfs filesystem, but it’s a real complicated hassle, and does it really matter if /var and /run and /usr are on zfs? That stuff doesn’t change much, and you can reinstall it. The valuable stuff you want to back up and archive is usually in your home directory.

So firstly, google and mozilla make a big mess of your home directory, via the .cache and .mozilla directories. You’re better off moving those outside of your home directory and making symlinks to them.

So here’s a cheatsheet list of things I did to make a zfs volume out of my home directory:

(presuming you’re using all of your disk space, and need to make room for the zfs pool)
boot live cd
shrink /
make new partition, leave blank

reboot to your machine again, make sure everything is sane.

I use ubuntu, your mileage will vary with distro…

add-apt-repository ppa:zfs-native/stable
apt-get update
apt-get install ubuntu-zfs
modprobe zfs
cd /home  (don’t tie up your home dir by having it as your current directory)

sudo su

mv username oldusername

(assuming /dev/sda5 is your freed-up partition space)

zpool create -m /home/username zhome /dev/sda5

(this alone is worth the price of admission…)
zfs set compression=lz4 zhome
cd ../oldusername
mv .* * ../username/   (mv will complain that it can’t move . and ..; that’s fine, everything else moves)
chown -R username:username /home/username
reboot just to make sure zfs automounts it.
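
(a couple of sanity checks after the reboot, zhome being the pool from above)
zpool status zhome
zfs get compression,mounted zhome
df -h /home/username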

and now you can add the backupnotidle script here for added fun.

https://github.com/nixomose/scripts/tree/master/zfs

 

Yet another note on star wars

September 7th, 2015

There’s that scene where obi wan has to sit down after alderaan was blown up by the death star.
He’s deeply hurt because of a disturbance in the force.
Well this just occurred to me: how fast is the force? How long does it take to travel interstellar distances?

The first thing we learn about star wars is that it happens in a different galaxy a long long time ago, and very far away. It’s the absolute first thing we find out.
So if we have our galaxy, and we know there’s another galaxy where star wars lives, it’s not too much of a stretch to say that there are probably other galaxies, and some of them will also have the force.

If the universe is really big, I mean really really big (you-may-think-it’s-a-long-way-down-to-the-chemist big), then there are probably millions upon millions of galaxies that are force-enabled. Probably more.

Some of them will have empire-like things, and some of them will have darth vaders who will go around blowing up planets full of people.

But there is only one universe.

So it seems to me, given the number of force-enabled planets likely being blown up at any given moment, obi-wan would never be able to walk.