Archive for the ‘Notes’ Category

I just thought of a neat way to avoid git conflicts

Sunday, January 14th, 2024

I think git is a technological marvel. It suits the problem it was designed to solve very well: lots of random people contributing random junk and trying to get it into the linux kernel. For that, it is great.

But everybody else uses git too, including private companies, which never quite made sense to me, because there everybody is [supposed to be] on the same team.

You’re not going to reject a coworker’s pull request because they’re trying to sneak insecure code into your codebase, or for whatever other reasons PRs against the linux kernel get rejected. You work for the same company, probably on the same team, trying to make the product better. Why would you compete? You don’t; that’s not how companies work.

It seems to me that git is an overengineered, heavy-handed way to do source control among a group of people who are all striving for roughly the same goal.

Yet it seems to be the popular hotness, so everybody uses it. It is what it is.

One of my larger beefs with git is the merge conflict problem it creates.

Now, I honestly can’t remember how we dealt with this problem, which must have existed, back in the days of cvs and subversion. Somehow we managed, and I don’t remember being as frustrated with cvs as I am with git conflicts.

Firstly, all git conflicts could be avoided with a little application of intelligent logistics. If you just order the work so that two people don’t work on the same piece of code at the same time, there will be no conflict. A conflict is nothing but a waste of time caused by poor time management. I’ve never heard of anybody who enjoys resolving merge conflicts, and they’re easy to avoid, but… nobody avoids them.

But this morning I had an idea: a stupid-easy way to avoid git conflicts and help with the time management problem at the same time.

Now, the technology to make the idea viable didn’t exist until fairly recently, so it wouldn’t have been workable before. But it exists now, and it’s easy to use, and nobody’s going to do it, because nobody likes change, people have been using git forever, git is good and right, and nobody really wants to make things better. If they did, they’d follow this easy solution: use live shared editing.

Google docs lets two people edit the same file at the same time, and I believe vscode live share does something similar, so the technology exists.

If two people needed to work on the same code at the same time, they could. Forking/branching your own copy is what causes the conflict problem, so… don’t do it. Everybody works on the live repo in main.

There are other things that could be done to make the experience better, like warning all parties when people start working too close to each other in the same code, and you could still use branches to segment units of work. I’m sure other tools could be invented to make this style of editing more useful, but the basic concept is that everybody actually edits the same file at the same time, sees what the other editors are doing, and naturally stays away from where somebody else is working. When you walk down the hallway and see somebody coming right at you, do you keep walking right at them? No, you make way. And so do they. Real simple.

As a result, one person may very well back off and go work on something else until the first person is done, thus avoiding conflicting code in the first place. Look at that: self-driven logistics and time management.

Yes, it would make writing tests a little harder, because you’d have to wait for the other person to finish before you could run your test, but… again, time management: find something else to do so there’s no conflict. As annoying as that may seem, it’s way less annoying than always having to deal with conflicts, or fearing the git pull because you don’t know how much work you’re in for just to make your code work again.

The shiny new editor tools could put up little marker notifications annotating the editor, saying things like: “so-and-so touched this a few minutes ago, they might not be done.”
Or the programmer could mark a region complete when they go off to work on tests or move on to something else, indicating to the next person that they’re free to work on this stuff.

Will it solve all the problems? No, but it will solve the git conflict problem, which is unquestionably a waste of resources and a time suck for all involved. There has to be a better way. This is one possible option.

The earth is a giant battery. With one charge.

Sunday, December 3rd, 2023

Before the humans showed up, the earth was here soaking up sunlight. For hundreds of millions of years, the sun sent its energy to the earth where it turned into things like plants and trees and eventually animals.

All the trees and plants died and mushed into the ground and this went on for an absurdly long amount of time.

And then the people showed up, and for the past 150 years or so we’ve been digging up all the oil, which stored that energy from the sun, like a battery.

We are consuming the energy stored by the planet at a fairly quick rate, and the noise about “peak oil” has been around for a while. But there’s a much simpler way of thinking about energy consumption from the great earth battery.

If we are to go all in on green and get all of our daily energy needs from solar (directly from the sun), wind (driven by weather changes caused by the sun), and hydroelectric (where energy from the sun evaporates water from the oceans and ground and carries it up to the clouds so we can draw power from the falling water; noticing a theme here?), we have to accept that on any given day, our draw of energy from these sources has to be less than what the sun provides in that day.

My point is, we’ve built up cities and cars and infrastructure by consuming the energy stored in the planet, and while there is still charge in the battery, that will continue to work. But for the system to function long term, it’s not a matter of where we draw the energy from; it’s that the source of the energy (the sun, in most cases) has to provide more energy per day, on average, than we draw down in that same day.

If we draw down more, we will be consuming the stored energy of the great earthen battery, and by plain and simple math, that is not sustainable. The battery will eventually run out.

So the question is: how much energy is provided by solar, wind, oxford comma, and hydroelectric in a day, and how much do all the people consume in that same day? And if consumption is more, long term, we’re in trouble.
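
For a rough sense of scale, here’s a back-of-envelope sketch in C++, using commonly cited round numbers (the solar constant, the earth’s radius, and roughly 18 TW for world primary energy consumption; treat all of these as ballpark assumptions, not measurements):

    #include <cstdio>

    int main() {
        // Round numbers, just to get a sense of scale.
        const double pi             = 3.14159265358979;
        const double solar_constant = 1361.0;   // W/m^2 at the top of the atmosphere
        const double earth_radius   = 6.371e6;  // meters
        // The earth intercepts sunlight over its cross-sectional disc, pi*r^2.
        const double intercepted = solar_constant * pi * earth_radius * earth_radius;

        // World primary energy consumption, commonly estimated around 18 TW.
        const double human_draw = 18e12;        // watts

        printf("sunlight hitting earth: %.2e W\n", intercepted);
        printf("human consumption:      %.2e W\n", human_draw);
        printf("ratio: roughly %.0fx more coming in than we draw\n",
               intercepted / human_draw);
        return 0;
    }

By those rough numbers, the sun delivers on the order of ten thousand times more power than humanity draws, so the daily budget works on paper; the trick is capturing enough of it instead of spending down the battery.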

An interesting idea for why time travel doesn’t exist.

Sunday, April 30th, 2023

I don’t usually forward videos, because I don’t like being inundated with “oh you HAVE to watch this!”, and I don’t want to do that to other people. But if you’re interested in the moon landing hoax, this one is worth a watch.

I love the moon landing hoax. I think it’s a great testament to how people think and the conspiracies they’re willing to adopt. It’s great amusement.

But of all the anti-hoaxers and their arguments, this one is my favorite. It has nothing to do with any of the other arguments people make to explain why the moon landing wasn’t a hoax. This one is technical, and it was made by an apparent film geek (I can appreciate geeks because I am one too, just not a film geek).

In short, he explains that the moon landings can’t be fake, simply because the film footage that came back couldn’t possibly have been produced with the technology of the time anywhere but in space. The only way the video could have been made was from the moon. It’s quite interesting.

The reason I bring it up is because I thought of a way to explain the lack of any apparent progress in the field of time travel, and it’s similar in thinking to the above video.

I do believe in time travel, but I think you can only go forward in time, and I think it has to do with perception more than anything else. You experience time travel every time you go to sleep.

Anyway, I am a storage geek. I like disk. And one day it occurred to me that if traveling backwards in time were possible, that would mean that somewhere, somehow, the state of every atom in the universe would have to be stored for every instant in time. That way somebody could replay the stored state in a way that could be observed or interacted with. Which means you’d have to build some machine that could retrieve that stored state and pull each instant’s state (remember, we’re talking every atom in the universe) into your viewer/replayer to be observed.

That’s a lot of bandwidth. That’s a lot of disk.
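
How much disk? Here’s a back-of-envelope sketch, using the commonly cited estimate of about 10^80 atoms in the observable universe and a deliberately generous one byte per atom (both numbers are assumptions for illustration):

    #include <cstdio>

    int main() {
        // Commonly cited estimate: ~1e80 atoms in the observable universe.
        const double atoms = 1e80;
        // Generously assume one byte records an atom's entire state.
        const double bytes_per_snapshot = atoms * 1.0;
        // A zettabyte (1e21 bytes) is roughly the scale of a year of
        // worldwide storage production.
        const double world_storage_per_year = 1e21;

        printf("bytes for ONE instant: %.0e\n", bytes_per_snapshot);
        printf("years of global storage output per instant: %.0e\n",
               bytes_per_snapshot / world_storage_per_year);
        return 0;
    }

One byte per atom is absurdly optimistic, and that still comes out to about 10^59 years of the entire world’s storage production for a single frame of history, before you even start talking about bandwidth.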

For the same reason that the moon landing video could only have been produced on the moon (the limit of technology), I think we will never be able to produce time travel, because even if the state of all the atoms in the universe is stored somewhere, we’d never be able to retrieve it in any usable fashion. Even if you only wanted to grab a one-foot-square block of it, that’s an insane amount of data to move over any kind of information transfer. And remember, there’s that whole speed of light thing to deal with.

So, no time travel, sorry.

You’ll notice I’m talking about bringing the history data, wherever it lives, to us, in some machine, while most time travel stories take the person to the information rather than bringing the information to the person. Well, as hard as it is to move a lot of data around in real time nowadays, we don’t have ANYTHING that remotely hints at being able to bring a person to the data.

So again, I’m not seeing it.

Doctors and programmers

Saturday, June 11th, 2022

I went to the doctor yesterday, and he’s a good guy: smart, very knowledgeable, calm, very robot-like in his questioning as he diagnoses whatever ailments you might have. I’d say he’s a very good doctor.

But his job is to file paperwork. To be a doctor nowadays, there’s so much regulation and so much fighting for pennies with the insurance company that you spend 5 minutes with a patient and hours dealing with paperwork.

This is a sad situation, you might think.

But reflect on the programmer’s lot in life. They have the same problem: they write software for 5 minutes and then fight with tools, and build systems, and broken libraries, and patch updates, and security vulnerabilities, and all this other stupid annoying shit nobody wants to do, that has very little to do with actually designing or writing software.

The difference is: the doctor has this crap foisted on him by external parties, whereas programmers do it to themselves.

SREs.

Saturday, September 4th, 2021

So I was thinking about this recently. You’ve heard of SREs? Site Reliability Engineers.

I like to joke: “SREs exist because we (developers) are SO BAD AT OUR JOBS that even an entire team of QA people isn’t enough to stop us from releasing shitty broken code, so now there’s an entire NEW layer of people to protect the end users from developers.”

I joke, but it’s not entirely untrue. But I thought about it some more, and what I realized was this: when charles babbage invented the first computer (had he actually been able to build it), he was the inventor, designer, programmer, tester, and user. All in one guy.

Then as time went on, we split out the work so that the hardware guys designed and built the hardware, and the software guys wrote and ran the software.

Then there were different people who started using the software other than the people who wrote it.

Then there were QA people, to separate the guys who designed and wrote the software from the guys who tested it. Then there were architects, who designed the big-picture project (as systems got larger and larger) and handed down the bits of coding work to developers. And then came sysadmins, who managed the running of the computers; the software guys just wrote the software, and the sysadmins ran it.

And what I realized is that this is just the industry maturing. Same thing with cars. The first car was designed and built by one guy; now it’s farms of teams, each doing a little bit. Same thing with the hardware: babbage designed the whole thing soup to nuts, and now there are teams of people designing the machine in the fab that prints the silicon.

And the SRE role is just a newly split-out part of the bigger picture of the software development life cycle. The process has gotten bigger and more complicated, and there’s a gap between QA and sysadmin, so now there are SREs to fill that gap.
So it’s not exactly what I joke about, but it’s interesting to see that the field is still growing. And I’m sure it’s not done growing yet.

The genius of the itanium

Friday, August 13th, 2021

The final shipment of itanium chips went out a few weeks ago now, and that got me thinking about it again.

So I’ve spent most of the past 20 years championing the Itanium, because it is truly brilliant. The basic design idea, taking the try-to-optimize-the-parallelism-of-the-code work that lived in the processor and putting it in the compiler, was brilliant. No question. As for why it didn’t take off: sure, the compilers were hard to write, but so was monopoly at some point; that would eventually have gotten worked out. Really, I think it was because the first itaniums didn’t run windows very well. Oh well, too bad; that’s history now.

But recently, when I started thinking about it again, now knowing how the future turned out, I realized that the Itanium, though brilliant at the time, was really just a stopgap.

They eventually would have squeezed all the performance possible out of the super-wide instructions on the itanium, and maybe they’d have found ways to expand the wide instructions and make them extra-super-wide or mega-wide, but at the end of the day, they’d still be left with the 4GHz problem. One core can only go so fast.

Now that we can see the future, we know that the world moved to parallel processing, which any processor can do, and all the added complexity of the Itanium would just have been a big technical burden to carry forever. So maybe the lack of adoption was for the best after all. Sure, there are crazy optimizations on the x86 chips, which are now also biting us in the ass (spectre, etc.), so maybe it would have turned out the same way in either case.

But my point is, I spent years marvelling at the wonders of this novel chip design, and in the end it wouldn’t really have bought us much, because like a lot of intended futures, things actually ended up going off in a wildly different direction than anybody could have anticipated.

Same thing with ZFS. I love zfs, it’s amazing, but it was designed in a time when we thought we’d keep adding more and more disk to a single machine. The future didn’t quite turn out that way, did it? So now zfs is amazing for what it is, but it just can’t compete with an entire cloud of storage.

The internet is the digital version of globalization.

Saturday, March 2nd, 2019

It used to be that you could buy a computer, run software on it, and it worked, and it stayed working forever.
I saw a guy at a hertz rental place in the early 2000s running the truck rental service on an ibm pc xt, with a green screen and an epson dot matrix printer and everything, and it all just worked.

But now everything is on the internet.

So you can no longer be sure that any software you have will continue to work as long as your computer does, because it has to talk to other computers that might be changing in some way.

This happened to me (again) today.

I have a machine with a dynamic ip address, and it uses inadyn to update the ddns server at afraid.org so that I can find the machine when I want to.
But the dynamic ip changed at some point and I could no longer get to it. When I eventually got access, I found that inadyn was no longer able to talk to the ddns server:
“The response of DYNDNS svr was an error! Aborting.”
Like all good software, it told me exactly what the problem was and how to fix it.

After some rummaging around, I found that the server inadyn was talking to no longer supported the old version of inadyn I was running; there was a new protocol, and my old, old version didn’t speak it.
Because I’m attached to the global supply chain, I can no longer expect things to keep working, because some other part of the global supply chain might change, and then I have to change as well to keep things working.

So I said, okay, I will fix it.

I downloaded the latest version of inadyn and tried to build it.
inadyn requires libconfuse.
So I download that and try to build it, but it requires gettext.
So I build that. That works.
Then I go to build libconfuse again, and it fails with some lexer problem.

So I download a version of libconfuse that’s a bit older than the newest and build that. It builds.
Then I go back to inadyn, and it builds too.
I install it, and run it, and it says… it requires a newer version of gnutls than I have.

So I download gnutls and try to build it.
It says it requires nettle 3.4. So I download that and build it. It builds.
I try to build gnutls again, and it says it still requires nettle 3.4.

I google, and there are a few answers on stack overflow, but none of them solve my problem.

At this point I stop and wonder what the purpose of all this is.
Somewhere at the bottom of this chain of rabbit holes, I expect there’s a circular dependency making the whole thing impossible to get working.

At this point some of you are wondering, “what kind of machine is this, that you can’t just use the package manager to get the latest package?” It doesn’t matter; that’s not the point. It’s all broken. It’s a pile of houses of cards stacked on top of each other.

One of the more amusing points was when I noticed libconfuse titles itself thusly: “Small configuration file parser library for C.”

I am libconfused as to why it takes so long to build something small, and why there are so many source files just to parse a config file. Or maybe what they’re saying is that it can only handle small configuration files, and large configuration files are beyond it. Either way, it shouldn’t take that long to build. I have a small c++ class I use to read config files. It’s about 100 lines long. I can compile it faster than I can hit the period at the end of this sentence.
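
For the curious, here’s a minimal sketch along those lines. This is not my exact class, just the same spirit: a self-contained key=value reader with comments, no dependencies, and no build system to fight:

    #include <fstream>
    #include <map>
    #include <string>

    // A minimal "key = value" config reader. Lines starting with '#'
    // are comments; everything else is split on the first '='.
    class Config {
    public:
        bool load(const std::string &path) {
            std::ifstream in(path.c_str());
            if (!in) return false;
            std::string line;
            while (std::getline(in, line)) {
                if (line.empty() || line[0] == '#') continue;
                std::string::size_type eq = line.find('=');
                if (eq == std::string::npos) continue;
                values_[trim(line.substr(0, eq))] = trim(line.substr(eq + 1));
            }
            return true;
        }
        std::string get(const std::string &key, const std::string &def = "") const {
            std::map<std::string, std::string>::const_iterator it = values_.find(key);
            return it == values_.end() ? def : it->second;
        }
    private:
        static std::string trim(const std::string &s) {
            std::string::size_type b = s.find_first_not_of(" \t\r");
            std::string::size_type e = s.find_last_not_of(" \t\r");
            return b == std::string::npos ? "" : s.substr(b, e - b + 1);
        }
        std::map<std::string, std::string> values_;
    };

One file, compiles in a blink, and if it ever breaks, the whole thing fits on one screen.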

If it’s 2019 and we can’t make this a simpler process, then maybe it’s not worth doing at all.
But it doesn’t matter whether I like it or not, or whether it works or not, because we are all part of this global interoperable supply chain that now requires you to keep up to date, or there are no promises that anything will continue to work.

For really important things (i.e., systems where money is traded), apparently there’s some notification system to alert you to impending breaking changes, but for anything that isn’t about transferring money, you’d just better keep up all the time, or suffer unexpected compatibility failures when somebody else decides to break something you set up years ago and left running because it worked.

SFINAEIBP

Tuesday, February 5th, 2019

“Substitution failure is not an error” is bad programming, in my opinion.

It seems to me that if you are making special cases for different classes in a template, then you’ve clearly missed the point of templates and are using them incorrectly. Templates are supposed to apply a concept or algorithm uniformly to a type. A vector or a hash works on objects of any type, uniformly.

If you’re SFINAEing, then what you really want to be doing is making a base class and deriving other classes from it, each carrying the traits specific to that class. That’s the very definition of what object-oriented programming is for.
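
To make the contrast concrete, here’s a quick illustrative sketch (hypothetical types, not from any real codebase): the SFINAE way of special-casing on top, and plain old inheritance below.

    #include <iostream>
    #include <type_traits>

    // The SFINAE style: two overloads with the same name, and substitution
    // failure silently removes whichever one doesn't apply.
    template <typename T>
    typename std::enable_if<std::is_integral<T>::value>::type
    describe(T) { std::cout << "integral\n"; }

    template <typename T>
    typename std::enable_if<std::is_floating_point<T>::value>::type
    describe(T) { std::cout << "floating point\n"; }

    // The OO alternative: the per-type behavior lives in the types themselves.
    struct Shape {
        virtual void describe() const = 0;
        virtual ~Shape() {}
    };
    struct Circle : Shape {
        void describe() const { std::cout << "circle\n"; }
    };
    struct Square : Shape {
        void describe() const { std::cout << "square\n"; }
    };

    int main() {
        describe(42);    // integral
        describe(3.14);  // floating point
        Circle c;
        c.describe();    // circle -- no template machinery, no heinous errors
        return 0;
    }

Pass the wrong type to the template version and you get the famous wall of compiler output; call a virtual function that doesn’t exist and you get one readable line.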

By taking advantage of a hack that covers for a language flaw, and that serves no purpose but to supply entries for the “most heinous error message to come out of a c++ compiler” contest, you’re being cool, but you’re not being a good programmer.

Google Shark Jumping

Monday, December 17th, 2018

Back in 2003, a headline declared: “Seth Godin Says Google Has Officially Jumped the Shark.”

I think that’s kind of a personal decision.

I think google only recently jumped the shark for me.

Google, having amassed vast amounts of information about everyone (or at least lots of individuals), can be said to jump the shark for different people at different times, depending on the amount and type of data they have on a particular person, how they use it, and what gathering that information has actually produced for that person.

For me, google just jumped the shark.

A few weeks ago, probably months ago now, I forget when it was, google stopped updating the news headlines in their “google news and weather” app.

This is kinda funny because I remember them doing that once before as well, forcing me to abandon my favorite news app for something ‘better’.

Well, this is the second time they’ve done that; maybe the third time’s the charm.

But it wasn’t. It was the time they jumped the shark.

I replaced the “google news and weather” app with the “google news” app like a good little sheep, just like they told me to. Funny how removing weather from the app was somehow supposed to make it better.

Anyway.

A friend of mine just asked me about something related to politics, and I pointed out that I don’t read much about politics. But it made me realize that this new app shows me lots more in the way of news articles, and most of them are political.

There’s the “just for you” page and the “latest” page, which are nearly identical and filled with lots of the latest political hoo-ha. I will admit to reading some of it, but not very much.

But I realize I don’t read many non-political articles, because it just doesn’t show me very many.

I have to look a number of pages in to find an article that’s just a current news story about something that isn’t politics.

Then I realized that this app never shows me sports news. That’s fine: I never follow sports and I never click on sports articles, so google got that one right.

But then I thought about it some more and realized that I do read a relatively large percentage of “technology” articles, relative to all the news I read. And lately, most of what shows up in the technology section of the news app is about games.

I’m not a gamer. I really don’t care about fortnite, or why I should click A-B-B-A, or this or that game. I have clicked through many more technology articles than political articles, and none of them were about games, yet games are all google shows me now.

And I realized… google has jumped the shark. They have so fine-tuned their understanding of my interests that they can no longer show me news articles I’m actually interested in.

Congratulations, google, you’ve peaked. You’ve passed the maximum; you’re on the downside of the hill.

I can’t wait to see who’s going to replace them with a small shell script.

Things that are hard to google for (1).

Wednesday, December 12th, 2018

Try finding information with google about problems building gdb.

It’s impossible.

And it’s not because nobody ever has problems building gdb.