Archive for the ‘Notes’ Category

An interesting idea for why time travel doesn’t exist.

Sunday, April 30th, 2023

I don’t usually forward videos, because I don’t like being inundated with “oh you HAVE to watch this!” and I don’t want to do it to other people. But if you’re interested in the moon landing hoax, this one is worth a watch.

I love the moon landing hoax. I think it’s a great testament to how people think and the conspiracies they’re willing to adopt. It’s great amusement.

But of all the anti-hoaxers and their arguments, this one is my favorite. It has nothing to do with any of the other arguments people make to explain why the moon landing wasn’t a hoax. This one is technical, and was made by an apparent film geek (I can appreciate geeks because I am one too, just not a film geek).

In short, he explains how the moon landings can’t be fake simply because the film footage that came back couldn’t possibly have been produced with the technology of the time anywhere but in space. The only way the video could have been made was from the moon. It’s quite interesting.

The reason I bring it up is that I thought of a way to explain the lack of any apparent progress in the field of time travel, and it’s similar in thinking to the above video.

I do believe in time travel, but I think you can only go forward in time, and it has to do with perception more than anything else. You experience time travel every time you go to sleep.

Anyway, I am a storage geek. I like disk. And one day it occurred to me that if travel backwards in time were possible, that would mean that somewhere, somehow, the state of every atom in the universe would have to be stored for every instant in time. That way somebody would be able to replay that stored state in a way that could be observed or interacted with. And that would mean you’d have to build some machine that could retrieve that stored state and pull each instant’s state (remember, we’re talking every atom in the universe) into your viewer/replayer to be observed.

That’s a lot of bandwidth. That’s a lot of disk.
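
To put rough numbers on it, here’s a back-of-envelope sketch (the figures are ballpark assumptions I’m making, nothing rigorous): something like 10^80 atoms in the observable universe, against all the storage humanity has ever built, which is estimated somewhere around a zettabyte.

    #include <cstdio>

    int main() {
        // Back-of-envelope only: all figures are rough ballpark
        // assumptions, not measurements.
        double atoms_in_universe = 1e80; // observable universe estimate
        double bytes_per_atom    = 1.0;  // absurdly generous: 1 byte of state per atom
        double all_human_storage = 1e21; // ~1 zettabyte, order of everything ever built

        double one_instant = atoms_in_universe * bytes_per_atom;
        printf("one instant of universe state: %g bytes\n", one_instant);
        printf("shortfall vs. all storage ever built: %g times over\n",
               one_instant / all_human_storage);
        return 0;
    }

And that’s a single instant, at one byte per atom, before we’ve even talked about moving any of it anywhere.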

For the same reason that the moon landing video could only have been produced on the moon (the limits of the technology), I think we will never be able to produce time travel, because even if the state of all the atoms in the universe is stored somewhere, we’d never be able to retrieve it in any usable fashion. Even if you only wanted to grab a one-foot-square block of it, that’s an insane amount of data to move over any kind of link. And remember, there’s that whole speed-of-light thing to deal with.

So, no time travel, sorry.

Now, you’ll notice I’m talking about bringing the history data, wherever it is, to us in some machine, whereas most time travel stories have to do with taking the person to the information, not bringing the information to the person. Well, as hard as it is to move a lot of data around in real time nowadays, we don’t have ANYTHING that remotely hints at being able to bring a person to the data.

So again, I’m not seeing it.

Doctors and programmers

Saturday, June 11th, 2022

I went to the doctor yesterday. He’s a good guy: smart, very knowledgeable, calm, very robot-like in his questioning to diagnose whatever ailments you might have. I’d say he’s a very good doctor.

But his job is to file paperwork. To be a doctor nowadays, there’s so much regulation and so much fighting for pennies with the insurance company that you spend 5 minutes with a patient and hours dealing with paperwork.

This is a sad situation, you might think.

But reflect on the software programmer’s lot in life. They have the same problem: they write software for 5 minutes, then fight with tools, build systems, broken libraries, pending patches, security vulnerabilities, and all the other stupid annoying shit nobody wants to do, which has very little to do with actually designing or writing software.

The difference is: the doctor has this crap foisted on him by external parties, whereas programmers do it to themselves.

SREs.

Saturday, September 4th, 2021

So I was thinking about this recently. You’ve heard of SREs? Site Reliability Engineers.

I like to joke: “SREs exist because we (developers) are SO BAD AT OUR JOBS that even an entire team of QA people isn’t enough to stop us from releasing shitty broken code, so now there’s an entire NEW layer of people to protect the end users from developers.”

I joke, but it’s not entirely untrue. But I thought about it some more, and what I realized was this: when Charles Babbage invented the first computer (had he actually been able to build it), he was the inventor, designer, programmer, tester, and user. All in one guy.

Then as time went on, we split out the work so that the hardware guys designed and built the hardware, and the software guys wrote and ran the software.

Then there were different people who started using the software other than the people who wrote it.

Then there were QA people to separate the guys who designed and wrote the software from the guys who tested it. Then there were architects who designed the big-picture project (as systems got larger and larger) and handed the bits of coding work down to developers. And then there came sysadmins who managed the running of the computers, so the software guys just wrote the software and the sysadmins ran it.

And what I realized is that this is just the industry maturing. Same thing with cars. The first car was designed and built by one guy; now it’s farms of teams each doing a little bit. Same thing with the hardware. Babbage designed the whole thing soup to nuts; now there are teams of people designing the machine in the fab that prints the silicon.

And the SRE role is just a newly split-out part of the bigger picture of the software development life cycle. The process has gotten bigger and more complicated, and there’s a gap between QA and sysadmin, so now there are SREs to fill that gap.

So it’s not exactly what I joke about, but it’s interesting to see that the field is still growing. And I’m sure it’s not done growing yet.

The genius of the Itanium

Friday, August 13th, 2021

The final shipment of Itanium chips went out a few weeks ago now, and that got me thinking about it again.

So I’ve spent most of the past 20 years championing the Itanium, because it is truly brilliant. The basic design idea, taking the try-to-optimize-the-parallelism-of-the-code work that used to live in the processor and putting it in the compiler, was brilliant. No question. As for why it didn’t take off: sure, the compilers were hard to write, but so was Monopoly at some point; that would eventually have gotten worked out. Really, I think it was because the first Itaniums didn’t run Windows very well. Oh well, too bad, that’s history now.

But recently, when I started thinking about it again, now knowing how the future turned out, I realized the Itanium, though brilliant at the time, was really just a stopgap.

They eventually would have squeezed all the performance possible out of the super-wide instructions on the Itanium, and maybe they’d have found ways to expand the wide instructions and make them extra-super-wide or mega-wide, but at the end of the day they’d still be left with the 4 GHz problem. One core can only go so fast.

Now that we can see the future, we know the world moved to parallel processing, which any processor can do, and all the added complexity of the Itanium would just have been a big technical burden to carry forward forever. So maybe the lack of adoption was for the best after all. Sure, there are crazy optimizations on the x86 chips, which are now also biting us in the ass (Spectre, etc.), so maybe it would have turned out the same way in either case.

But my point is, I spent years marvelling at the wonders of this novel chip design and in the end, it wouldn’t really have bought us much, because like a lot of intended futures, things actually end up going off in a wildly different direction than anybody could have anticipated.

Same thing with ZFS. I love ZFS, it’s amazing, but it was designed in a time when we thought we’d keep adding more and more disk to a single machine. The future didn’t quite turn out that way, did it? So now ZFS is amazing for what it is, but it just can’t compete with an entire cloud of storage.

The internet is the digital version of globalization.

Saturday, March 2nd, 2019

It used to be that you could buy a computer, run software on it, and it worked, and it stayed working forever.
I saw a guy at a Hertz rental place in the early 2000s running the truck rental service on an IBM PC XT with a green screen and an Epson dot matrix printer and everything, and it all just worked.

But now everything is on the internet.

So you can no longer be sure that any software you have will continue to work as long as your computer does because it has to talk to other computers that might be changing in some way.

This happened to me (again) today.

I have a machine that has a dynamic IP address, and it uses inadyn to update the DDNS server at afraid.org so that I can find my machine when I want to.
But the dynamic IP changed at some point and I could no longer get to the machine. I eventually got access to it and found that inadyn was no longer able to talk to the DDNS server:
“The response of DYNDNS svr was an error! Aborting.”
Like all good software, it told me what the problem was in detail and how to fix it.

After some rummaging around, I found that the server inadyn was talking to no longer supported the old version of inadyn I was using; there was a new protocol, and my old, old version didn’t speak it.
Because I’m attached to the global supply chain, I can no longer expect things to remain working, because some other part of the global supply chain might change, and I’ll have to change as well to keep things working.
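
The maddening part is how small the moving part actually is. Conceptually, a dynamic DNS client is just a periodic HTTP request to an update endpoint; something like this libcurl sketch, with a made-up URL (afraid.org defines its own real endpoint and token scheme, and this is not inadyn’s actual code):

    #include <curl/curl.h>

    // Sketch of the core of a DDNS client: hit the provider's update
    // URL so it records this machine's current IP. The URL passed in
    // is hypothetical; every provider has its own endpoint/token format.
    bool update_ddns(const char *update_url) {
        CURL *curl = curl_easy_init();
        if (!curl)
            return false;
        curl_easy_setopt(curl, CURLOPT_URL, update_url);
        curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 1L);
        CURLcode rc = curl_easy_perform(curl);
        curl_easy_cleanup(curl);
        return rc == CURLE_OK;
    }

    // e.g. update_ddns("https://example-ddns.invalid/update?token=...");

Run that from cron every few minutes and you have the essence of it. Everything else is protocol details, and the protocol details are what broke.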

So I said, okay, I will fix it.

I downloaded the latest version of inadyn and tried to build it.
inadyn requires libconfuse.
So I download that and try to build it, but it requires gettext.
So I build that. That works.
Then I go to build libconfuse again and it fails with some lexer problem.

I download a version that’s a bit older than the newest and build that; it builds.
Then I go back to inadyn, and it builds too.
I install it and run it, and it says… it requires a newer version of gnutls than I have.

So I download gnutls and try to build it.
It says it requires nettle 3.4. So I download that and build it. It builds.
I try to build gnutls again and it says it still requires nettle 3.4.

I google, and there are a few answers on Stack Overflow, but none solve my problem.

At this point I stop and I wonder what the purpose of all this is.
Somewhere at the bottom of this chain of rabbit holes I expect there will be a circular dependency making it impossible to get working.

At this point some of you are wondering, “what kind of machine is this that you can’t just use the package manager to get the latest package?” It doesn’t matter; that’s not the point. It’s all broken. It’s a pile of houses of cards stacked on top of each other.

One of the more amusing points was when I noticed libconfuse titles itself thusly: “Small configuration file parser library for C.”

I am libconfused as to why it takes so long to build something small, and why there are so many source files just to parse a config file. Or maybe what they’re saying is that it can only handle small configuration files, and large configuration files are beyond it. Either way, it shouldn’t take that long to build. I have a small C++ class I use to read config files. It’s about 100 lines long. I can compile it faster than I can hit the period at the end of this sentence.
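
For the record, the kind of thing I mean is roughly this; a minimal sketch of a key=value reader, not my actual class:

    #include <fstream>
    #include <map>
    #include <string>

    // Minimal config reader sketch: skip blank lines and '#' comments,
    // split each remaining line on the first '='.
    std::map<std::string, std::string> read_config(const std::string &path) {
        std::map<std::string, std::string> cfg;
        std::ifstream in(path);
        std::string line;
        while (std::getline(in, line)) {
            if (line.empty() || line[0] == '#')
                continue;
            std::string::size_type eq = line.find('=');
            if (eq == std::string::npos)
                continue;
            cfg[line.substr(0, eq)] = line.substr(eq + 1);
        }
        return cfg;
    }

That’s the whole job. No lexer, no build system, no dependencies.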

If it’s 2019 and we can’t make this a simpler process, then maybe it’s not worth doing at all.
But it doesn’t matter whether I like it or not, or whether it works or not, because we are all part of this global interoperable supply chain that now requires you to keep up to date, with no promise that things will continue to work.

For really important things (i.e., systems where money is traded) there’s apparently some notification system to alert you to impending breaking changes, but for anything that isn’t about transferring money, you’d better just keep up all the time, or suffer unexpected compatibility failures when somebody else decides to break something you set up years ago and left running because it worked.

SFINAEIBP

Tuesday, February 5th, 2019

“Substitution failure is not an error” is bad programming, in my opinion.

It seems to me that if you are making special cases for different classes in a template, then you’ve clearly missed the point of templates and are using them incorrectly. Templates are supposed to apply a concept or algorithm uniformly to a type. A vector, or a hash, works on objects of any type, uniformly.

If you’re SFINAEing, then what you really want to be doing is making a base class and deriving other classes from it, each having traits specific to that class. That’s the very definition of what object-oriented programming is for.
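
To make the contrast concrete, here’s a toy sketch of my own (not from any real codebase): the SFINAE way of special-casing types, next to the plain virtual-function way.

    #include <iostream>
    #include <type_traits>

    // The SFINAE way: two templates, and substitution failure silently
    // knocks out whichever one doesn't apply to T.
    template <typename T>
    typename std::enable_if<std::is_integral<T>::value>::type
    describe(T) { std::cout << "integral\n"; }

    template <typename T>
    typename std::enable_if<std::is_floating_point<T>::value>::type
    describe(T) { std::cout << "floating point\n"; }

    // The OO way: the per-type behavior lives in the classes themselves.
    struct Number {
        virtual void describe() const = 0;
        virtual ~Number() {}
    };
    struct Integer : Number {
        void describe() const override { std::cout << "integral\n"; }
    };
    struct Real : Number {
        void describe() const override { std::cout << "floating point\n"; }
    };

    int main() {
        describe(42);   // SFINAE picks the integral overload
        describe(3.14); // SFINAE picks the floating point one
        Integer i;
        Real r;
        i.describe();   // virtual dispatch: no template tricks needed
        r.describe();
        return 0;
    }

Same behavior, but in the second version the special cases are spelled out where they belong: in the classes.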

By taking advantage of a hack that covers a language flaw, one that serves no purpose but to supply entries for ‘the most heinous error message to come out of a C++ compiler’ contest, you’re being cool, but you’re not being a good programmer.

Google Shark Jumping

Monday, December 17th, 2018

In 2003, Google says: “Seth Godin Says Google Has Officially Jumped the Shark.”

I think that’s kind of a personal decision.

I think Google only recently jumped the shark for me.

Google, having amassed vast amounts of information about every individual (or at least lots of them), can be said to jump the shark for different people at different times, depending on the amount and type of data they have on a particular person, how they use it, and the results that use has had.

For me, Google just jumped the shark.

A few weeks ago, probably months ago now, I forget when it was, Google stopped updating the news headlines in their “Google News and Weather” app.

This is kinda funny because I remember them doing that once before as well, forcing me to abandon my favorite news app for something ‘better’.

Well, this is the second time they’ve done that; maybe third time’s the charm.

But it wasn’t. It was the time they jumped the shark.

I replaced the “Google News and Weather” app with the “Google News” app like a good little sheep, just like they told me to. Funny how removing weather from the app was somehow supposed to make it better.

Anyway.

A friend of mine just asked me about something related to politics, and I pointed out that I don’t read much about politics, but it made me realize that this new app shows me lots more in the way of news articles, and most of them are political.

There’s the “just for you” page and the “latest” page, which are nearly identical and filled with lots of the latest political hoo-ha. I’ll admit to reading some of it, but not very much.

But I realize I don’t read many non-political articles, because it just doesn’t show me very many.

I have to look a number of pages in to get an article that is just a current news story about something that isn’t politics.

Then I realized that this app never shows me sports news. That’s fine: I never follow sports and never click on sports articles, so Google got that one right.

But then I thought about it some more and realized that I do read a relatively large percentage of ‘technology’ articles, relative to all the news I read. And I realized that lately, most of what I’ve seen show up in the technology section of the news app is about games.

I’m not a gamer. I really don’t care about Fortnite, or why I should click A-B-B-A, or this or that game. I have clicked through many more technology articles than political articles, and none of them were about games, yet that’s all Google shows me now.

And I realized… Google has jumped the shark. They have so fine-tuned their understanding of my interests in news articles that they can no longer show me news articles I’m actually interested in.

Congratulations, Google: you’ve peaked, you’ve passed the maximum, you’re on the downside of the hill.

I can’t wait to see who’s going to replace them with a small shell script.

Things that are hard to google for (1).

Wednesday, December 12th, 2018

Try finding information with Google about problems building gdb.

It’s impossible.

And it’s not because nobody ever has problems building gdb.

I found this amusing.

Sunday, December 9th, 2018

When ngate (http://n-gate.com/) refers to joe user, he calls them “An Internet.”

When ngate refers to a web developer, he calls them “A webshit.”

But when ngate refers to Richard Stallman, he calls him “Some fuckwad.”

That made me laugh, so I thought I’d share.

A slightly better internet

Saturday, December 1st, 2018

Since the dawn of Google, you’ve found stuff on the web by searching with keywords.

Yahoo did this organized thing where they grouped the internet into categories. The internet was much smaller then.

AltaVista did… I don’t remember what AltaVista did, but it didn’t work as well as Google.

But Google does us all one big disservice. It presents links to websites with ads.

Wouldn’t it be neat if there were a search engine that did the same thing Google does, but only showed you sites with no ads? Or maybe at least no ads that pop up at you, distracting you from the content you’re trying to read.

So how hard would that be? Make a webpage with a search box that hits Google’s servers to do the search (probably against some terms of use of theirs) and then filters out results based on a blacklist of sites with annoying ads.

Where would that information come from? Well, from the helpful user, of course.

Each link could be presented in an iframe with a little bar at the top with a button that says “click this if you see an ad.” I suppose you could automate it by doing whatever an adblocker does to block ads, only using it to detect them instead, and if you detect one, flag the page as annoying so it never shows up in search responses again.
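
As a sketch of the filtering half (toy code with a made-up result type; the part that hits Google’s servers is left out, since that’s the part that violates their terms anyway):

    #include <set>
    #include <string>
    #include <vector>

    struct Result {
        std::string url;
        std::string title;
    };

    // Drop any result whose host is on the user-reported blacklist
    // of sites with annoying ads.
    std::vector<Result> filter_results(const std::vector<Result> &raw,
                                       const std::set<std::string> &blacklist) {
        std::vector<Result> clean;
        for (const Result &r : raw) {
            // crude host extraction: text between "://" and the next "/"
            std::string::size_type start = r.url.find("://");
            start = (start == std::string::npos) ? 0 : start + 3;
            std::string::size_type end = r.url.find('/', start);
            std::string host = r.url.substr(start, end - start);
            if (blacklist.count(host) == 0)
                clean.push_back(r);
        }
        return clean;
    }

The blacklist just grows as users click the “I saw an ad” button.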

Just an idea.

What’s the business model, you ask? I don’t really care. I just find being jarred away from reading something by an annoying popup ad… well… annoying.