Archive for the ‘Notes’ Category

How the brain works, addendum: why you can’t live forever.

Wednesday, August 24th, 2016

Before the internet was anybody’s bad idea, I wrote this:

http://deadpelican.com/howthebrainworks.html

And I’ve had a lot of time to think about it more over the years, and I have a few thoughts to add…

You can’t live forever. ‘You’ being the important word here.

There’s been a lot of talk lately about the science of getting older and how to put it off as long as possible.

There may be a magical point in time when life expectancy starts to increase at more than 1 year per year.

Anybody alive at that time (and able to afford whatever treatments are required) can conceivably live forever. I mean eventually you’ll get hit by a bus or taken out by some nasty disease, but the dying-of-old-age problem will have been solved.

Firstly, if you think about it for a few minutes, you probably don’t want to live forever anyway; I imagine you can come up with reasons besides the ones I’m about to offer.

Consider the idea of ‘you’. You are a unique individual person made up of your education and experiences and DNA, yada yada yada. I’m sure lots of philosophers have gone over this endlessly before me. The simple version of the problem is that ‘what defines you’ changes. If you’ve ever looked back at your teenage years and thought “boy, that time I did yada yada was really stupid,” you might realize that given the same circumstances, you would now decide differently, because you’ve learned from your mistakes and worked out a better decision-making process for that scenario. Is that the same you? How many experiences would you have to have, and how many different decisions would you have to make, before you started to think you were not the same person you were when you were 18?

But that’s a philosophical argument against living forever; here’s a more concrete one.

Quite simply: your brain will run out.

When you are born you are a randomly firing set of neurons with just enough firmware to keep you breathing and eating. Watch any newborn; this is pretty obvious.

As you grow you learn to see and to hear and to perform the most amazing of human skills: recognizing patterns.

Then you learn to remember.

After that, it’s just a matter of gathering more and more patterns and experiences until somebody hires you.

After that you can pretty much coast for a few decades.

But then what? The human brain is probably not a fixed size, but it does have to fit in your skull. Maybe some really smart people co-opt some space from their throat or something, but at some point the brain can only get so big.

Unless there’s some weird hyperspatial transference going on (and I have it on good authority there isn’t) there is a finite amount of information you can store. There are a finite number of neurons that can fire to enable you to think, absorb information, tie it to an existing pattern and commit it to long-term memory. If you learn something new, at some point, something’s got to go.

I imagine if anybody could figure it out, we’d find that the human brain has evolved quite an impressive compression algorithm. One that, much like common picture compression, is lossy. It ties patterns together and makes a few distinctions between them, but certainly does not store a perfect representation of all of your memories, each separate and complete.
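To make ‘lossy’ concrete, here’s a toy JavaScript sketch (assuming nothing about how the brain actually does it): quantize the data to save space, and the original detail is gone for good.

// Store each value only to the nearest multiple of 8: cheaper to keep, but lossy.
const memory = [107, 109, 104, 231, 229, 228];
const stored = memory.map(v => Math.round(v / 8) * 8);

console.log(stored); // [104, 112, 104, 232, 232, 232]
// Nothing in 'stored' can tell you the original was 107 rather than 104.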

You don’t know what you don’t know, and you don’t even know what you once knew but forgot. And since you don’t know what you forgot, you can’t even attempt to reproduce it. The error correction is limited.

People tend to misremember things, some worse than others, but nobody’s perfect, and as it turns out, the laws of physics do not require people to have perfect memories. People make mistakes; not everything works out correctly all the time. And that’s okay; the world keeps spinning. But I digress, because I like to.

So here it is: you can’t live forever, simply because on some level your brain will stop working well. I have this vague idea that as you get older, some of the thinking neurons are sacrificed for memory neurons, but I don’t have any good evidence of that. But even without that, at some point your brain will fill up, and you either won’t remember new things as well as you would have before, or you’ll start losing old memories, or most likely, both.

Somebody once explained to me that a great way to tease out the possible solutions to a problem is to make the numbers in the problem really large; then it becomes obvious where the solution lies.

Given the decrepit state that your brain will be in after a few thousand years of live-forever treatment, it’s hard to imagine that you will still be you, or even able to think correctly.

No, you can’t live forever.


Kirk is a bad deal in the long run.

Sunday, July 31st, 2016

Just about every time Kirk lofts himself into space, he manages to lose some crew members.

We are of course totally impressed with all the cool and wonderful things he does, and how he gets himself out of the craziest near-death situations, but if you think about it for a minute, he may be a bad bet overall.

Every time he survives one of his misbegotten plans, he loses a few crew members. We are of course happy that most of them survive, but let us not forget to count the actual numbers.

If Kirk keeps living he will go on to go on (ha ha) another mission where he will lop a few more off the headcount.

If he had simply failed completely and miserably the first time he went out, blown up his ship and everybody in it including himself, that would surely have been a horrible loss of Starfleet membership. But it would end there; it wouldn’t go on forever and ever like it has been.
We’ve got to be up to like 10 Kirk movies by now. That’s a lot of dead crew that might have been saved.


Regulating the internet is inevitable, I think.

Thursday, July 28th, 2016

Today I heard on the radio that there’s some locale on Long Island that is complaining miserably because they’ve had 20 power outages in the past month. Mostly due to the excessive heat, apparently.
And it got me thinking. I’ve been making the point to my compatriots at work lately, that they don’t know a world without internet, and they expect it to always be there.
So I make the comparison that when I was born, we had hot and cold running water and electric power 24/7 and I never knew a world without it and I expect it to always be there.
Nobody regulates the internet yet. But they regulate water and they regulate electricity. So much so that part of the reason Long Island has blackouts is that Con Ed can only charge so much regardless of the cost to generate the electricity. Oh boo hoo, I feel so bad for them.
The internet is not there yet, but it’s hard to imagine it won’t come to that sooner rather than later, simply because it’s become one of those ubiquitous life services we’ve all come to expect and can’t do without for very long, at least not conveniently.

I’ll never understand bad user interfaces.

Sunday, July 10th, 2016

Specifically I’m talking about web pages.
I’ll be the first to admit that I am not the main target audience of any newly designed web page. I don’t react to fancy doo-dads, things that animate and flash all over the place.
Generally I want to go to a web page to get information or supply information.

But nowadays I find that webpages are more interested in making themselves hard to use and navigate than in simply supplying the information they claim to offer.

One design trait that seems prevalent as of late is the ever-shifting webpage.

In order to put a big fat ad at the top of the page, the rest of the page has to slide down to accommodate it. If you’re reading some text, you’ll find all of a sudden that your text is sliding down to make room for a bigger ad, or sliding up to fill the space when it goes away.
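Mechanically, it’s nothing fancier than this (a minimal JavaScript sketch; the element and size are made up):

// Insert an ad box at the top of the page after the content has loaded.
const ad = document.createElement('div');
ad.style.height = '250px'; // hypothetical banner height
document.body.prepend(ad); // everything below it instantly slides down

Remove the element again, and everything snaps back up.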

You can’t predict when this is going to happen, and the ad isn’t even visible on the page most of the time, so all you get is this bizarre effect of the page spazzing out while you’re trying to read it.

Another little peeve of mine is invisible UI features. WordPress was the first place I noticed this, years ago, but now I see Google and plenty of other sites do it too.

I’m guessing they’re shooting for a clean and tidy page, but in making features of the web page invisible, they make it impossible to know what features are available. Unless you already know a feature is there, it might as well not be there, because you can’t see it.

If you’re lucky, or spastic enough with your mouse that it flips all over the page and happens to unlock some secret UI feature, you are among the lucky few who can take full advantage of the website.

But the rest of us, with no gift for clairvoyance, have to suffer trying to figure out “how am I supposed to select this comment to mark it as spam?”

And let’s say I was part of the in-crowd and savvy enough to know the feature was there; I still have to move my mouse to the button to click it.

It is impossible to aim a mouse accurately at something you can’t see, so first you have to mouse over the area to make the invisible feature visible, and only then can you start to aim at the particular button you want.
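For reference, the pattern boils down to something like this (a minimal JavaScript sketch with hypothetical class names; sites often get the same effect in pure CSS with a :hover rule):

// Action links that are invisible until the mouse wanders over their row.
const row = document.querySelector('.comment-row');
const actions = row.querySelector('.row-actions');

actions.style.visibility = 'hidden'; // there, but unseeable and unaimable

row.addEventListener('mouseenter', () => {
  actions.style.visibility = 'visible'; // only now can you start aiming
});
row.addEventListener('mouseleave', () => {
  actions.style.visibility = 'hidden';
});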

In what way is this a better UI? I know I’m a dinosaur, but when things are harder, if not impossible, to use unless you are blessed with preordained knowledge of what the web designer was thinking that day, I can’t see any way this can be perceived as better.

I try to imagine what the web designer was thinking. They’re sitting there with their HTML editor and their CSS editor, and they say to themselves, “Okay, I’ve got it all working, but let me hide some of these elements so they don’t clutter up the page. When I test it, I know they’re there so I can still click on them. Yeah, that works.”

So these UI people can’t really put themselves in the position of the user they’re designing for, because they already know about the invisible features, and generally where they are, because they put them there.

Maybe web designers should test out each other’s work so they can for a brief moment glimpse into the shitty world they are making for the rest of us.


Washing your hands is the same as taking antibiotics.

Sunday, June 12th, 2016

The difference is you’re not supposed to take antibiotics if you don’t absolutely have to, whereas you’re supposed to wash your hands all the time, especially before you touch food.

Scientists are well aware that the use of antibiotics spells their own doom. What I have read says that using antibiotics on bacteria exerts an evolutionary pressure on the bacteria to mutate into a form that is resistant to that antibiotic.

It is true, it is going to happen, it is only a matter of time. The argument for not using antibiotics all the time is that it will lengthen the time that the antibiotics are effective, giving us humans more time to come up with replacements when they do finally fail.

Washing your hands is important because dirty hands enable the transmission of disease especially through food, but also through touching other people or things other people will touch.

It seems to me, though, that this will exert the same evolutionary pressure on the germs you are removing from your hands by washing them. Certainly you’re not going to wash away all the germs every time you wash your hands…

There is a distinction here, though: antibiotics kill bacteria, while washing your hands merely moves the bacteria and other germs off your hands and into the sink. Perhaps since you’re not killing the germs, you’re not creating an evolutionary pressure to work around the problem. I’m not sure. It seems to me the germs will try to expand and grow as far and wide as they can; by washing, you are removing this particular avenue of transmission and giving them a reason to grow soap-and-water resistant.

Wait long enough and we’ll be washing our hands with antibiotics. So now we’re back to the first problem.

Here’s another angle: Why does medical science exist?

People were evolving just fine before somebody got the bright idea to “bang the rocks together, guys.” But people got sick a lot, and got hurt a lot, and it seems there was this evolutionary pressure on humans to survive and spread despite the germ-infested and dangerous world around them.

So they used their intelligence to figure out ways to heal the human body where it couldn’t by itself.

But that’s not how evolution works. Evolution works by mutating the child a little from the parent, not by making the parent live longer. The goal should be to run through more generations, not to prolong them.

So medical science exists to increase the quality of life of the living, but not to help the species as a whole, in the big picture. Perhaps Logan’s Run was right: ixnay everybody at 30. By letting people live and breed who evolutionarily should have been taken out of the gene pool, medical science is actually making things worse for humans in the long run.

This sounds mean, and it is, but it is also true. But that’s not my point.

The point of medical science is not to help the species as a whole (we just proved that), so the point must be to increase the quality of life of those living.

But that’s not true either, because of the hold-back-on-the-antibiotics thing.

Actually, I missed an important point: antibiotics kill bacterial infections, not viruses like the cold or the flu. Nobody questions that, everybody knows it, it is unquestionably true. But does that mean there is no point to taking antibiotics if you have a cold and are trying to increase your quality of life? I can’t find much on the web about it, but a little empirical evidence has shown me a number of times that things do seem to get markedly better after starting to take antibiotics. There is so much noise on Google about how bad it is that antibiotics are over-prescribed that it is impossible to find anybody who’s done a study saying whether there is really zero value in them for treating a cold.

Maybe the net result is that the doctors are right, and the rapidly increasing resistance to antibiotics is worse than whatever gain is had when they’re taken when not actually required. But I can’t find any information on that.

Either way, something is wrong, the dots don’t line up for me.

What is the purpose of medical science?


Flying Scope and The Great Dinosaur Divide.

Monday, May 23rd, 2016

I’ve recently had occasion to do some work in a JavaScript-like language, and it reminded me of something I thought of years ago.

JavaScript, and this JavaScript-like thing I’m using, support closures.
Closures seem to me like one of those things that’s handy for the programmer and really hard on the computer, or at least on the compiler/interpreter.

I never understood why they were called closures. I’ve heard the phrase “close around the variables” or something like that, but it seems to me the concept would better be described as “Flying Scope.” The scope of local variables exists here, then goes away, then magically reappears at a later point in time, as if that code had flown away and come back. But that’s me.
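Here’s the kind of thing I mean, as a minimal JavaScript example:

// makeCounter's local variable should die when the call returns,
// but the returned function still carries that scope around with it.
function makeCounter() {
  let count = 0;       // local scope created here...
  return function () { // ...and captured by this inner function
    count += 1;
    return count;
  };
}

const counter = makeCounter(); // makeCounter's frame is long gone...
console.log(counter()); // 1   ...yet its scope flies back in
console.log(counter()); // 2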

At some point I learned how that stuff was implemented and I remember it being not as bad as I thought it had to be, but still, the very concept rubs me the wrong way a bit.
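As I recall, the trick is essentially that the captured variables stop living on the stack. Morally, it’s something like this hand-desugared sketch (not what any particular engine literally does):

// The 'scope' becomes an ordinary heap object that outlives the call.
function makeCounterDesugared() {
  const env = { count: 0 }; // the captured environment, heap-allocated
  return function () {
    env.count += 1;
    return env.count;
  };
}

The environment object lives as long as something still references the inner function, which is why it’s not as expensive as it first sounds.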

I am a dinosaur. I come from the land of the 6502, where there were 56 instructions and 13 addressing modes, and that was it. Everything in the universe had to be implemented through some combination of those 56 instructions. And when you realize that NOP is one of them, and 7 of them are for setting and clearing particular bits of the flags register, you start to realize how little the dinosaurs had to work with.

So when I see things like flying scope, I feel grief trying to imagine what the poor little computer has to do to make it work. But when I start using closures, because in some cases frameworks force you to use them, I start to see why they appeal to people.

And this is where the great dinosaur divide begins.

I am a very good dinosaur. I understand all the layers of programming, from transistors to high-level languages. And while that allows me to truly appreciate what’s going on, and offers me a good understanding of bit-twiddling, I think it holds me back in some cases.

People who have no idea of the pain the CPU has to suffer to make closures work can only possibly see the upside. It’s a handy way to write localized stateful code. And it’s very useful. So people use it. It makes sense and it is good.

And here’s the divide.

We can’t go on forever writing software the way we always have. I take into consideration what the compiler comes up with when I write something, because I know what it’s doing. (I’m not saying I understand the GNU C++ optimizer, but the more basic stuff I get.) And this might encumber me when considering a more abstract design. But the reality is, we’re never going to be able to write really smart software if we concern ourselves with what the computer has to do to make it go.

It is up to the generation after me, who would have to go way out of their way to learn what I know, to leave all the low-level baggage behind and dream up far crazier constructs than flying scope, to enable them to write really, really high-level abstracted software that can do even more abstract problem solving than we do today.

And the generation after them won’t be encumbered by flying scope or move semantics or any other goofy constructs that will be invented in the near future, because they will have something even better.

The only problem is, you still need somebody to write the compilers, and you still need somebody who understands the video hardware so they can write device drivers for it. So not all the dinosaur technology will be lost forever.

But as somebody pointed out to me not too long ago: all the people who have any real first-hand experience at landing people on the moon are retired or dead. When they’re gone, that will be it. Experience that cannot be replaced by reading a book or watching a video will be gone forever.

And I figure I can always get into the high-frequency trading game. You just can’t do that in JavaScript.


This is awesome, and GitHub is not.

Friday, March 25th, 2016

http://sethrobertson.github.io/GitFixUm/fixup.html


There should be more fix-it-yourself tools like this.

Although if nothing else, it speaks to how overly (and seemingly unnecessarily) complicated git is.


Did I rant about this facet of GitHub? I finally hit upon the single most poignant problem with it. This is really a git problem, but the problem is forced on you more strongly with GitHub.

Because git repositories are distributed, there are bits and copies of any given repo all over the place, and the defining bit of information that is missing is Which One Is Important.

You can figure out that this fork came from that repo and that repo came from that fork, but that doesn’t mean the most recent fork is the correct one.

Once I make a repo somewhere, it’s stuck there forever, because if I try to move it by forking and using the new one… now I have two, and there’s no obvious way that I can tell to mark one repository as being the “right” one. Maybe I forked something and made some changes, but the whole thing was a bad idea and I just want to use the original. But I go away and come back a month later, and I have no idea which one I should do builds off of.

With a central repository, there is really no question where you go to do a build.

You can kind of solve the problem a little with git by making a bare repository and having just one, and then it’s a little more obvious that THAT’S the one. The bare-ness of it is a flag saying this one is more important than all the others; this is the one where I put the live stuff.
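Something like this, say (the paths here are hypothetical):

# One bare repository, by convention the authoritative one.
git init --bare /srv/repos/project.git

# Everybody clones from it and pushes back to it:
git clone ssh://myserver/srv/repos/project.git
cd project
git push origin master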

But GitHub doesn’t have that. Everything is a repo exactly like every other repo, forked or not.

They just need a flag saying “important”, where you can’t flag more than one repo that way. Something like that.


root, not root and back again.

Friday, March 25th, 2016

I used to be of the mind that it’s my machine, and I’m going to log in as root because I don’t need to be hindered from doing what I want on my machine.

Over the years I gradually saw the wisdom of doing most things as a non-root user: it protects me from making dangerous mistakes, it enables me to set up my system so I can share it with other people, and there’s more and more software that complains it doesn’t want to run as root.

But now I work at a job where I do a lot of things that involve being root a lot of the time. What, you ask? Reading raw data from block devices mostly, and updating system configuration that needs to change a lot. And I find typing sudo su plus my password all the time to be really annoying. The computer is supposed to be a tool for the human, not the other way around.
Also, with the advent of virtual machines and containers, there’s almost no need to ever share a machine with another person; you can always make them their own VM or container.

So I’m starting to lean back towards the logging-in-as-root side. Because again, it really is my machine and I can do what I want with it, and having it stop me all the time so I can type sudo su plus my password is annoying, since I’m just going to run the same command again anyway. And if the command was a mistake, the extra typing just delays the moment I can go fix it.


Today I realized we write our numbers backwards.

Saturday, December 26th, 2015

We write our letters from left to right, but we write our numbers from right to left.

When we write the number 100, we write the digits 1, 0, 0 from left to right, but that’s not what I mean.

When we add up a series of numbers, we align them right-shifted so that all the positions with the same place value are lined up in a column. In order to make that work, we effectively write the numbers from right to left.

Wouldn’t it make more sense to write the least significant digit first, so 100 would be written 001, so that we could write from left to right and the numbers would still line up correctly? Adding 100 and 2036, for example…

001
6302
------
6312
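And here’s the same idea as a toy JavaScript function (my own sketch): digits stored least-significant-first, added left to right, carries and all.

// Add two numbers whose digits are written least significant first.
function addReversed(a, b) {
  const out = [];
  let carry = 0;
  for (let i = 0; i < Math.max(a.length, b.length) || carry > 0; i++) {
    const sum = (Number(a[i]) || 0) + (Number(b[i]) || 0) + carry;
    out.push(sum % 10);
    carry = Math.floor(sum / 10);
  }
  return out.join('');
}

console.log(addReversed('001', '6302')); // '6312', i.e. 100 + 2036 = 2136

Incidentally, arbitrary-precision arithmetic libraries tend to store their digits least-significant-first internally, for exactly this reason.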

THEY FIXED IT!

Thursday, December 24th, 2015

A few months ago I bought a Netgear R6250 router. It does 802.11ac, and once I had something on the other end that did ac too, it was wicked fast.

Overall I was less impressed with it than with the Actiontec or whatever-it-was router that Verizon gave me with my FiOS, but the FiOS router didn’t even do 802.11n, so it was time to upgrade.

It’s good enough: it mostly works and doesn’t need to be rebooted all that often, but I found one extremely annoying problem with the interface that drove me bonkers.

It has a port forwarding feature like I expect most routers do nowadays, which allows me to connect to the various machines I have on my network when I am away from home. I’m that rare breed that does system administration from my phone.

But the port forwarding feature had a weird limitation. You couldn’t forward to the same numbered port on two different machines.

So if I had two machines running ssh on port 22, I couldn’t assign two different external ports (say 22 and 23) to forward to port 22 on the two different machines. It just wouldn’t let me do it.
There’s no technical reason you can’t do it (my FiOS router let me do it), but the UI would not allow the reuse of the number.
I fiddled with the JavaScript where the restriction was enforced and saw it was just an arbitrary rule in the UI.
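Reconstructed from memory, it was something in this spirit (the function and field names are made up; this is just the shape of the check):

// Reject any new rule whose internal port matches an existing rule's,
// even when the two rules point at different machines.
function validateForwardingRule(newRule, existingRules) {
  // e.g. newRule = { externalPort: 23, internalIp: '192.168.1.11', internalPort: 22 }
  for (const rule of existingRules) {
    if (rule.internalPort === newRule.internalPort) {
      alert('Port ' + newRule.internalPort + ' is already in use.');
      return false; // rejected purely by the UI, not by the router
    }
  }
  return true;
}

Nothing about port forwarding requires that rule, which is presumably why a later firmware could simply drop it.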

I didn’t care enough to craft a request that would bypass the UI and submit what I wanted to the router itself, because it was easier just to have my machines listen for ssh connections on different ports. But it always annoyed me.
I wrote a letter to tech support and I think they basically said “suck on it.”

Well, here I am a few months later, logging into my router’s admin page, and it says there’s a firmware upgrade. So I do it, and of course the first thing I try to do is map two external ports to the same port on different machines, and voila! It worked!

They fixed it. They actually fixed it.

Yay Netgear.

For reference, the old firmware version was V1.0.3.6_10.1.3 and it upgraded to V1.0.4.2_10.1.10