I just came up with a solution to the Wikipedia problem. Every year Wikipedia goes on about the millions of dollars it needs to keep running. Wikipedia is a volunteer effort, just like the local volunteer fire department: the labor is free, but the equipment and resources are what cost the money. Most people don't have fire equipment to donate to the local fire department, but they do have computers that sit idle most of the time.
Wikipedia is the perfect system to run as a worldwide distributed application. People volunteer content, and they can volunteer CPU and disk too. How big could all the Wikipedia data possibly be? It's mostly text. You know there'll be some geeks out there more than happy to host copies of the entire thing, and everybody else who contributes disk and CPU (just by running a little application on their PC) would host caches of sections of the whole database. Not outrageous to imagine, and given the state of peer-to-peer systems nowadays, not that hard to do. If Wikipedia started building that system and migrating all its current data onto it, it would never have to ask for money again.
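The tricky bit in a system like that is deciding which volunteer machines cache which articles without running a central directory (which would just be another server farm to fund). Consistent hashing is one standard way peer systems solve this, so here's a minimal sketch of the idea. To be clear, this is my own illustration, not anything Wikipedia actually runs; the peer names, the `vnodes` count, and the replica count are all made up for the example:

```python
import bisect
import hashlib

class ArticleRing:
    """Consistent-hash ring: each volunteer peer caches only the slice of
    articles that hashes to it, and a peer joining or leaving only remaps
    a small fraction of the keys instead of reshuffling everything."""

    def __init__(self, replicas=3):
        self.replicas = replicas   # how many peers hold each article
        self.ring = []             # sorted (hash, peer) points on the ring

    def _hash(self, value):
        return int(hashlib.sha1(value.encode("utf-8")).hexdigest(), 16)

    def add_peer(self, peer, vnodes=64):
        # Virtual nodes scatter each peer around the ring for even load.
        for i in range(vnodes):
            bisect.insort(self.ring, (self._hash(f"{peer}#{i}"), peer))

    def peers_for(self, title):
        # Walk clockwise from the article's hash, collecting distinct peers.
        if not self.ring:
            return []
        want = min(self.replicas, len({p for _, p in self.ring}))
        i = bisect.bisect(self.ring, (self._hash(title), ""))
        owners = []
        while len(owners) < want:
            peer = self.ring[i % len(self.ring)][1]
            if peer not in owners:
                owners.append(peer)
            i += 1
        return owners

ring = ArticleRing(replicas=3)
for peer in ("alice.example", "bob.example", "carol.example", "dave.example"):
    ring.add_peer(peer)

# Any peer can compute, with no central server, who caches a given article.
print(ring.peers_for("Volunteer fire department"))
```

With three replicas per article, any single PC going offline still leaves two live copies, and every peer can compute from the ring alone where to fetch an article or where to re-replicate it.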