Thursday, August 30, 2007

Optimization

Once upon a time I applied for a job at a big consultancy shop here in Santiago.

They were looking for a J2EE expert. After several interviews it was something like 23:00 at night. 8-O

I was in this interview with a technical manager and he was asking me how to optimize J2EE applications. Do you know how to optimize the application?

He explained that they had built a huge application that needed to do something like 2 transactions per second, and they were unable to do so. Do you have any ideas on how to optimize this?

First, I find it very off-putting to be asked questions like this. I mean, of course I can answer that, but you will have to pay the big bucks BEFORE I tell you how.

So, what would you have replied in that case?

I remembered other projects I saw failing at other companies in which people struggled for months in order to write the code.

Then they struggled for months in order to compile the code.

I hope you are thinking the same as I'm thinking: these dudes had no idea what they were doing. They should have compiled before doing any check-in, but of course they didn't even know CVS or SVN existed.

The good thing about having people like this in the market is that there is a large labor supply (in the economic sense: a lot of people looking for jobs) and therefore the market price goes down; but on the other hand all these failed projects make the market price go up. Letting them fail works like a charm when asking for a raise.

Probably now you think I'm totally wrong on this, but it is a market, and if there is no scarcity of the product or service, it is impossible to make a profit. And failed projects taint the company, not the individual workers, who go job hopping and therefore can't be blamed for anything.

Even the most knowledgeable hiring manager wouldn't know about every project in a city, and since today with globalization most projects are subcontinent-wide, it really is impossible to know the outcome of every project.

So failure and success is irrelevant. At least for individual workers.

But most projects fail not because people don't have the skills, but because they don't know how to apply those skills and, even more important, when to apply them.

In Search Of Stupidity

People are stupid, yeah, sometimes. Mostly all the time. That's why interfaces are designed to be foolproof.

People can't concentrate on the big picture if they are solving the small picture all the time, so you need to solve all those irrelevant details for them, as if people were stupid.

When you do that, people feel more comfortable because they can concentrate on the big picture.

Unless they learned Unix and vi, in which case they think they are smarter because they know all those arcane details. But Unix and especially vi people will go the way of the VAX. They are completely negligible.

The people we should care about are the people who understand that the best interface is the one that doesn't need a manual. Unix people are so smart that you can even type "man man" and the computer shows a manual on the "man" command. Useless, of course, but complete.

Or completely useless.

Optimization

Optimization has several techniques that have been known for decades. It is a sign that "the inmates are running the asylum" that you get interviewed on how to optimize. It is the same as being hired to solve an equation.

What? Are you kidding me?

But some people lack basic training, and at least here in Chile, the number of people studying at the universities is going to double by 2012.

This means that if I already consider it very hard to hire acceptable engineers now, it will be next to impossible by 2012.

The first problem these people have is that they are told to optimize from the very beginning. I don't understand the rationale, but I certainly remember from calculus that you can't optimize the same function both locally and globally at different places. You simply get impossible solutions.

You either optimize locally or you optimize globally. And of course it is better to optimize globally, because that way you find the global optimum instead of a local optimum, which is never better and usually worse.

Techniques

The techniques are:

1. Caches.
2. Buffers.
3. Copy on write.
4. Dirty bit.
5. Change number.
6. Lock-free / wait-free / non-blocking algorithms.
7. Cache-oblivious algorithms.
8. Read-write locks.
9. Optimize at the end.
10. Measure, optimize, measure again.

The most important thing is to optimize at the end.
And then not to optimize unless you know for sure (you measured) that the response time is too slow.
And you make sure the code is clear, and the automated tests are run and they pass.
And there is no repeated code.
And then you still find that it is too slow.
Then you optimize the library, and make sure the library is not broken by running the tests.
If the optimization is not at least 5 times better, remove it and add a better one.
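The first technique on the list, caches, can be sketched in a few lines of Java. This is a minimal memoizing cache; `slowSquare` is a made-up stand-in for some genuinely expensive calculation:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class CacheDemo {
    private static final Map<Integer, Integer> cache = new ConcurrentHashMap<>();

    // A made-up stand-in for some expensive calculation.
    static int slowSquare(int n) {
        return n * n;
    }

    // The cache technique: compute each input at most once, then reuse it.
    static int cachedSquare(int n) {
        return cache.computeIfAbsent(n, CacheDemo::slowSquare);
    }

    public static void main(String[] args) {
        System.out.println(cachedSquare(7)); // computed: prints 49
        System.out.println(cachedSquare(7)); // served from the cache: prints 49
    }
}
```

Note that this only pays off after you have measured and found the calculation to be the bottleneck, which is exactly the point of items 9 and 10.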

Tuesday, August 28, 2007

About Closure

A closure is when a function (or any code snippet) has all its variables bound to values. The concept comes from functional programming, where you can say "this context is a closure"; if it is not a closure because some variables are still free, you can assign them values and then they are not free anymore.
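A minimal Java illustration of this sense of closure: the returned lambda captures the variable `n`, so inside it `n` is no longer free (the name `makeAdder` is just for illustration):

```java
import java.util.function.IntUnaryOperator;

public class ClosureDemo {
    // The returned lambda is a closure: its variable "n" is bound
    // to whatever value was passed in, so no variable is left free.
    static IntUnaryOperator makeAdder(int n) {
        return x -> x + n;
    }

    public static void main(String[] args) {
        IntUnaryOperator addFive = makeAdder(5);
        System.out.println(addFive.applyAsInt(10)); // prints 15
    }
}
```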

Closure is also when you finish something. For example, in English you can say "think through", a concept for which I really think there is no equivalent in Spanish, meaning that you think of all the possible consequences and come up with a unique solution (or a set of possible solutions).

So when it comes to computing: several years ago I was thinking about this idea of client/server computing, which is the basis of almost everything we do now. I was wondering why we store data in databases, or disks for that matter, since that complicates design. We could instead just have a messaging paradigm (like sockets) and solve everything using messages, so that it wouldn't matter if the connection was local or remote; the algorithms would be the same.

And it was like an epiphany: I realized that the server must be up, so everything was messaging, and the case of storing on disk is just a trivial case solved on one particular machine; the communication is always from program to program through messaging.

Seems like nothing, doesn't it?

It means that if you program everything using sockets, you are done. That's closure!

Well, then I thought about how databases are described using functional dependencies, which are really like hashmaps. Functional programming is all about defining functions, which in turn could be defined as hashmaps. Object oriented programming, when applied thoroughly in a program, ends up feeling like functional programming. And finally, caches are really just hashmaps.

And hashmaps are really functions (inputs and outputs). And when you define the inputs and the outputs correctly, you can define any function or any program correctly. That was one of the first suggestions I received when learning how to program. It is really humbling that you spend 20 years discovering new stuff, just to arrive at a supposedly revolutionary idea that was passed to you as a suggestion when you were learning, 20 years ago.
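The "hashmaps are really functions" idea can be shown in a tiny Java sketch: tabulate a function into a map, then "apply" it by lookup (`tabulateSquare` is an illustrative name):

```java
import java.util.HashMap;
import java.util.Map;

public class FunctionAsMap {
    // Tabulate the square function on a small domain as a hashmap:
    // the key set is the inputs, the values are the outputs.
    static Map<Integer, Integer> tabulateSquare(int upTo) {
        Map<Integer, Integer> square = new HashMap<>();
        for (int i = 0; i <= upTo; i++) {
            square.put(i, i * i);
        }
        return square;
    }

    public static void main(String[] args) {
        Map<Integer, Integer> square = tabulateSquare(5);
        // Looking up a key is applying the function to an input.
        System.out.println(square.get(3)); // prints 9
    }
}
```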

And well, it also applies to modelling, because you can draw a context diagram to specify all inputs and outputs for your module. That's closure!

A Critique on Agile

I'm almost certainly a supporter ("partidario", as we say in Spanish) of Agile Methods.

But since this is a blog about rants, I'm going to criticize, hopefully with reasonable arguments, the ideas behind agilism.

What I find most troublesome is that both Scrum and XP require sprints (or iterations) not to change and not to accept other endeavors in the meantime. The whole point of being agile is being able to change, but somehow both methods seem to imply that not all change is good, especially when you are working toward the sprint goal (or the iteration goal, if you're doing XP).

I agree. Most projects fail because during the week the objectives are changed, and this change happens almost daily and sometimes several times a day, and at the end of the project, you are asked "why did we fail?".

"So we did have an objective?" - you ask perplexed

As you can imagine the situation is not very pleasant, and you not only lose your job or your contract, but you also lose something much more important: your credibility.

Most professionals do not have any credibility, so you have to work hard to obtain some in a credibility-starved market.

Losing your credibility is comparable to opening a bottle of your favorite cola and finding water with soil in it. You take it back to the local store and they give you an identical one. You would stop buying cola.

But you do buy cola, don't you? Setting aside the health risks, you buy cola because cola has what marketers call "mental space". When people hire you as a consultant or as a knowledge worker, you are being hired because of the same mental game: "when I need something like this, I just call this guy...". Information asymmetry. You wouldn't hire yourself, because you already know exactly the same as the person you would hire.

Nevertheless, people who hire you learn quickly (they are not morons), therefore you have to keep learning in order to be needed in the future.

So both Scrum and XP are right to require that you not change objectives during a sprint, but they are contradicting themselves. They are trying to spare you the pain of having nothing to show at the end of the iteration, but on the other hand they are prescribing just the opposite of their original idea.

Is there a better way?

I think there is. Simply say: Ok, but we are not going to achieve the goal on time.

It works like magic, because it allows the marketing people to make an informed decision while, at the same time, you have saved face. It is important because you don't want to lose your market mind share, while also avoiding a conflict you are probably going to lose, because they are the clients. They probably know what they want; they just need to know the consequences of their decisions, and it is your job to communicate the risks.

(Note: I'm not recommending this particular project management course, although I do think that project managers should take courses like this. I'm referring to it because an integral part of project management is communicating risks, and not only project managers need to communicate risks; all developers should know about this, because risks happen everywhere in the project, so if only the project manager knows he needs to communicate risks, the project is in jeopardy.)

OK, so now to the second critique. I already mentioned this critique of XP before, but I'm going to repeat it here: XP insists that you shouldn't use software to do XP.

The rationale is that once you set it up in software, it is as if it were carved in stone.

This is the proverbial "In the house of the blacksmith, the knives are made of wood" (I don't know if this is known in English, but at least in Spanish it is "En casa de herrero, cuchillo de palo").

Software is not stone. You can change it. If you can't change your software, it is probably because you are not using the right tools. I think the whole industry is too attached to static compilers and the like. But there is a tendency in the patterns movement to make software more dynamic.

I think we should turn all programming problems into configuration problems.

And configuration should be done on the fly. Not in property files.

Then the XP fear of software will disappear.
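A minimal sketch in Java of configuration that can change on the fly, with no restart and no property files; the class name and the `color` key are made up for illustration:

```java
import java.util.Map;
import java.util.concurrent.atomic.AtomicReference;

public class LiveConfig {
    // The current configuration; readers always see a consistent snapshot.
    private final AtomicReference<Map<String, String>> current =
        new AtomicReference<>(Map.of());

    String get(String key, String fallback) {
        return current.get().getOrDefault(key, fallback);
    }

    // Swap in a new configuration on the fly, without a restart.
    void reload(Map<String, String> fresh) {
        current.set(Map.copyOf(fresh));
    }

    public static void main(String[] args) {
        LiveConfig cfg = new LiveConfig();
        System.out.println(cfg.get("color", "red"));  // prints red
        cfg.reload(Map.of("color", "blue"));
        System.out.println(cfg.get("color", "red"));  // prints blue
    }
}
```

Because the whole map is swapped atomically, tests can push a new configuration mid-run without any file I/O, which is exactly what makes on-the-fly configuration easier to test.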

Monday, August 20, 2007

How to calculate a square root

I was thinking about how to calculate a square root.

Usually you are told "use this formula", and although most people can follow simple formulas, I always get that feeling of "why are we doing this?". You can always use a calculator, but even its answers will often be wrong; they are just approximations...

OK, approximation is the name of the lesson today. Next week we will calculate square roots up to any number of decimals we want, or infinite precision.

How can we do those approximations?

If we are doing approximations, I can always give you the integer just below or just above the real answer, and you could go home happy with it.

Now let us suppose that those numbers are a and b respectively. Given x, we can easily find numbers a and b such that: a <= sqrt(x) <= b. That is almost trivial.

Now we know for sure that sqrt(x) * sqrt(x) = x.

Let us suppose we take a and do the following:

c = x / a => a * c = x

We don't have sqrt(x), but we have a and c which are close approximations.

And we also know:

a * a < x
b * b > x
c * c > x
a < c < b

What we need now is intuition.

We know sqrt(x) is between a and c.

What would happen if we take a1= (a+c)/2?

If a1 * a1 == x, then sqrt(x) = a1
If a1 * a1 < x, then a = a1, c = x/a, repeat the algorithm
If a1 * a1 > x, then c = a1, a = x/c, repeat the algorithm

So you can easily approximate sqrt(x) just by doing divisions. Note that halving the interval on its own only gains you about one binary digit per step; it is the c = x/a update, which is essentially Newton's method, that makes the number of correct digits grow quickly.
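The iteration above can be sketched in Java, following the text's two updates (take the midpoint, then refresh the other bound with a division). This assumes x > 0 and a tolerance that is coarse relative to double precision:

```java
public class SqrtApprox {
    // Approximate sqrt(x) for x > 0, keeping a below the root and
    // c = x / a above it, and taking the midpoint as described above.
    // eps should be coarse relative to double precision.
    static double sqrtApprox(double x, double eps) {
        double a = 1.0;
        while (a * a > x) a /= 2;  // make sure we start below the root
        double c = x / a;          // then c is above the root
        while (c - a > eps) {
            double a1 = (a + c) / 2;
            if (a1 * a1 < x) {
                a = a1;
                c = x / a;         // keeps c just above the root
            } else {
                c = a1;
                a = x / c;         // keeps a just below the root
            }
        }
        return (a + c) / 2;
    }

    public static void main(String[] args) {
        System.out.printf("%.5f%n", sqrtApprox(10.0, 1e-9)); // prints 3.16228
    }
}
```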

The problem is: how close do we get?

When you are calculating sqrt(1), a = c, so this question is irrelevant.

When you are calculating sqrt(10), a = 3, b = 4, c = 10 / 3 = 3.33333..., so a1 = 3.166665; the actual sqrt(10) = 3.16227..., pretty close for a first approximation, but a1 > sqrt(x) even on the first iteration. The problem here is that 10 is too small a number.

When you are calculating sqrt(10000000), a = 3162, c = 3162.55534471853257432005..., so a1 = 3162.277672359266287160025; the actual sqrt = 3162.2776601683793319988935, pretty close for a first approximation, and still the approximation is higher... (the trailing digits are the ones we didn't get right).

So calculating the integer part now seems to be the hardest part, but it is actually the simplest.

First remove the boundary conditions like negative numbers, zero, and one; all of that can be handled without doing any math.

Then calculate a number a so that a * a <= x and (a+1) * (a+1) > x. To do this first start with a = x / 2 and iterate using binary search.

For example if x = 2, start with a = 1. So a = 1, x = 2, a * a = 1, (a+1) * (a+1) = 4 => Found.

If x = 19, start with a = 9 (the integer part of x/2).

So a = 9, x = 19, a * a = 81, too big => new a = a /2 = 4 (the integer part of a /2)

So a = 4, x = 19, a * a = 16, (a+1) * (a+1) = 25 => Found.

Simple, isn't it?
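The integer part calculation can be sketched as a binary search in Java, in the spirit of the steps above:

```java
public class IntSqrt {
    // Integer square root by binary search: returns a such that
    // a * a <= x and (a + 1) * (a + 1) > x.
    static long isqrt(long x) {
        if (x < 2) return x;       // border cases 0 and 1 (negatives handled earlier)
        long lo = 1, hi = x / 2;   // for x >= 2 the answer is at most x / 2
        while (lo < hi) {
            long mid = (lo + hi + 1) / 2;
            if (mid * mid <= x) {  // (for very large x, mid * mid would need overflow care)
                lo = mid;          // mid is still a valid candidate
            } else {
                hi = mid - 1;      // mid is too big
            }
        }
        return lo;
    }

    public static void main(String[] args) {
        System.out.println(isqrt(19)); // prints 4
        System.out.println(isqrt(2));  // prints 1
    }
}
```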

Friday, August 17, 2007

On Merit

Merit is very important in the software world, as in the real world.

In the real world, if you show no merit (if you don't go to school, for example) you will probably never find a job and will starve to death. There are some mechanisms in societies that prevent this, but most of the time you will be taken out of existence. You can live off the money of your rich aunt, but it certainly won't last forever.

I know gross generalizations are mostly wrong, most of the time, including this one. But bear with me, this story has a moral.

With computer languages it is the same. At the end of the 80's and the beginning of the 90's, object technology was going to win. Or so we thought.

Windowing environments like the Mac were the models others would follow, and those windowing environments could not be successfully duplicated without object technology: object oriented languages were going to replace the arcane C, COBOL, FORTRAN, LISP, etc.

The replacement was called Smalltalk, but it was not available on every desktop. It was a very expensive product that ran on very expensive hardware, so older languages were retrofitted to support object oriented concepts like encapsulation, inheritance and polymorphism.

Therefore C++, ADD 1 TO COBOL GIVING COBOL, Object Oriented Fortran and CLOS were invented in order to improve the rather lousy older languages before Smalltalk took over.

Ok, ADD 1 TO COBOL doesn't exist, but Object Oriented COBOL is in the works:
http://en.wikipedia.org/wiki/COBOL#COBOL_2002_and_object-oriented_COBOL

Object Oriented Fortran does exist:
http://en.wikipedia.org/wiki/Object-Oriented_Fortran

Somehow Smalltalk, which was too big to run on our computers in 1990, was too small for our computers by 1996, compared with the amount of RAM and hard disk required to compile C++.

It was not really a hardware problem; that was just the explanation given. The real problem was that the people who liked C were afraid to learn something new like Smalltalk. At least new for them...

Why? The more I think of it, the more I think it has to do with the fact that if we all change languages, then who knows more? The smarter ones will learn faster (or at least we tend to think smart guys learn quickly), therefore people who invested 10 or 20 years in one language would find themselves asking newbies how to do things... So they need to say, with a straight face, something about Smalltalk, like:

- It is too hard to learn.
- It is too expensive.
- It is too slow.
- It consumes too much resources.

They need to say something to convince others not to look into it. They also need an explanation for themselves. Self-delusion.

If you feel comfortable about a thought, it might be that it is not true, but that you prefer to think that way. Comfortable. Not true, but sure it feels good.

C++ went belly up, the way of the dinosaur. I worked many years in C++ and I developed long lists of "do's and don'ts", similar to design patterns but full of technical details. There were around 30 of those recommendations, and they were arcane (in the sense that they were hard to understand) and all interrelated, so if you wanted to understand one of them deeply, you had to understand almost all of them, which made them very hard to explain to newbies.

It was like a miracle for job stability.

But C++ projects were failing. Not my projects, mind you, but projects all over the place.

The problem was that I had a technique, a solution, that wouldn't scale.

I know very smart people who can write code I can barely understand, but in 99% of the cases the opposite is true. If you have worked in C++, I know for sure this has been your experience too. Maybe you never had the luck to see C++ code you couldn't understand, but given the amount of open source code out there, I doubt that.

Did C++ have any merit?

I mean, C++ is now being retired as a language and totally replaced by Java. Java has existed since 1991 under the name Oak, but it became widely known as Java in 1996.

http://ei.cs.vt.edu/book/chap1/java_hist.html

I really think C++ will not be remembered as a good language 50 years from now. If you think about how many failed projects it produced, how many late projects it produced, you will see that it will be taught at universities as an example of how not to design languages.

Bjarne Stroustrup of course wouldn't like that, but it really doesn't matter. Nobody could have guessed at that point that the language was going to be such a disaster, and the fact that he believed he could create such a language, actually did it, and finally gave it away for free shows how noble his intentions were. The results of course were terrible, but his intentions were noble.

I seriously think we should form a kind of alliance or professional body to limit such endeavors. Ok, probably you now think I lost my mind. But let us explore this idea as a thought experiment and see what could happen.

First, everyone should be free to try whatever they want, but standards should be approved by a committee, as already happens with the internet RFC standards and the JCP community process.

They would need to submit a spec and a working prototype for everyone to review. After a few months or a few years, everyone has had something to say and finally they vote. The process is simple because you can always know what state it is in; it is cumbersome if you want to subvert it, which is a rather good characteristic. It may take a lot of time, but since you always know what state it is in, at least from the outside, it is simple.

The internet runs 24x7 and is never taken down for service (a clear indication IBM and Microsoft had nothing to do with it). Do you see any other system with that characteristic?

Now imagine IBM and Microsoft decide to replace the internet with a new standard invented by them. Would you trust them?

The fact is that both IBM and Microsoft do not have good technical people working for them, or if they have them, they are not the ones making the decisions. Whether that is good or bad for those companies would be interesting to discuss, but it would take too long and is irrelevant to the topic of merit I'm trying to put on the table.

Would there be any merit on such a standard?

Was there any merit on C++?

I think that all software has to have some merit in order to be considered.

And that takes me to the current technologies: Web Services, SOA, ESB. Who needs them?

Have you ever seen J2EE merits?

J2EE's merits were already debunked by Rod Johnson: http://www.thespringexperience.com/speaker_view.jsp?speakerId=149

All that is left of J2EE is transaction processing, something databases did before J2EE joined the party.

So, if we already have J2EE, why do we need web services?

One reason could be to make .NET and J2EE interoperable. But web services are not transaction aware, so how could they replace EJBs?

Someone has already thought about this: http://www-128.ibm.com/developerworks/xml/library/x-transws/

Morals:
  • Before sinking into new technologies you should first understand the need for them.
  • Usually there is no need for a new technology, and you play it safer using the old technology to solve the new problem.
  • Whenever you make a technical choice, make sure there is a way to back out.

Wednesday, August 8, 2007

Wikis are the Future

In the postmortem of the project we just finished I wrote that we needed to integrate our bug tracking software (Insecticida) with our Wiki (VQWiki).

So I implemented a Wiki in 2 hours, just implementing 2 pages: one to view a topic and another to edit it. I'm rather slow in PHP, and I know I could have done it in Java (I know Java way better), but I'm lazy, and I really think Smalltalk is better than Java.

Anyway, I was amazed at how easy it was to create a wiki, with versions, embedded pages, embedded photos. AMAZING!!!!

Tuesday, August 7, 2007

Multiverses

If it is possible to travel to parallel universes: http://www.bbc.co.uk/science/horizon/2001/paralleluni.shtml

Then we would go to underdeveloped universes and exploit their resources, while at the same time going to more developed universes and stealing their technologies.

Anyway, how could they know a person came from another universe?

It would convert us into mediocre and lazy people.

If we work harder, then we would get that technology earlier, so it is inevitable to become mediocre and lazy. The mediocre and the lazy are going to win!

Update: 2007-08-20

This is not about having no consideration for others, but a conclusion about what history shows of the behavior of people and human groups.

For example, it is always shown on TV how the Spanish came to America and arrived at an island called "La Española" (Hispaniola; it should really be called "the Spanish one", but then English terms are very hard to translate into Spanish too). The natives greeted and treated the Spanish very well, but the Spanish enslaved them and eventually killed them all. This is the same thing the Romans did with the villages that surrounded them, and the same can be said of the Greeks. The main difference is that the Romans made their empire grow a lot larger than the Greeks' because they eventually realized that freedom produces more than slavery, so they allowed people to stay free if they paid their taxes. That way there was an incentive to produce more: not becoming a slave.

In fact we still follow the same procedure: state officials do not pay taxes, or they pay the minimum, while normal people who do not pay their taxes go to jail, which is similar to being a slave in the sense that you lose your freedom.

See: http://www.history.com/tdih.do?id=5224&action=tdihArticleCategory. It seems the natives defended themselves, so I guess we can't blame the Spanish, can we?

Now think what would happen if people could travel to parallel universes. The amount of oil in this universe would never end, because we could always travel to a different universe and, through a wormhole, extract their petrol without them knowing.

Also, in different universes people would have developed some technologies more than others, so we would end up learning from them and using our advanced technology to subjugate them. Not a pretty picture, but better we do it to them than they to us.

If parallel universes exist, it is just a matter of time. But on the other hand, if resources are so abundant, would it make sense to go there and steal?

Maybe it would, if it were cheaper to steal than to produce here.

If you look at the Spanish settlements in America, they would only settle where they could find other people producing, because their model of wealth included other people working for them. The same was not true in the States, since they had a revolution, while in South America we didn't have revolutions; we just became independent, and the people with privileges remained in power.

Why did that happen in the US and not here in the south?

I guess it has to do with the fact that all the northern languages are of Germanic descent. I mean, English, Swedish, Danish, German itself, are all Germanic languages, and therefore their speakers are descendants of Germanic tribes, the same Germanic tribes that were enslaved by the Romans, that fought the Romans and that eventually defeated the Romans.

So in a sense, in American culture and in all Germanic cultures there is a tendency to break free from oppression, while in the Latin cultures there is a tendency for one group to make the others work for them. Americans think Latin Americans are lazy, and it is true, so profoundly lazy you could never imagine, especially mentally lazy. If that weren't true, the Romans would have never been defeated by the Germanic tribes.

Yet our cultures have mixtures of both ideas. Americans do not live in tribes anymore. Nor do we dress in togas. There are Americans who would prefer not to work, just as there are Latin Americans who do nothing but work.

But Americans who do not work do not find people to do their chores for them. And if they do, it is because they are smart enough to know how to arrange it; probably they have wealth and they share the wealth. But it doesn't last more than 3 generations.

In the case of Latin America, people who live to work are slaves and never see a return on their effort. It may be that our countries are poor because of that, but I think the problem is self-reinforcing: since people who work more do not see any benefits, people who see this become convinced they should not work as hard, and eventually the people who work a lot get tired of pulling the chariot all by themselves.

Eventually this means that countries where people think of the group perform better.

Let me give an example: Microsoft. Microsoft is an excellent company. People are very cooperative there. Some may show examples where this is not true, but I know for sure that people are given stock options for working there. Name one company in Latin America where this is also true.

How many companies as successful as Microsoft do you see in Latin America?

The problem is that people in Latin America are greedy; there is so much poverty that they think they need to gather all resources and keep them to themselves. This means companies are poor and employees are poor, so companies sell less because the market is smaller. It is a vicious circle.

I know how to fix this: eliminate the IVA (VAT: value added tax). Why? Because companies that produce can't produce too much (the IVA takes away 20% of their added value), and this goes on through the whole value added chain, so products end up 100% more expensive.

By reducing the VAT to 0%, our countries would consume the same as the US, proportionally.

There are other things to fix: free education for the smart ones, social security and a working patent system:

  • Education should be free for all, and at least in Chile we could have that because of the surplus generated by copper.
  • Social security is standard in the EU and the US. What are we trying to prove? That we are better than them? I can tell you we are not. We are just in an analysis-paralysis stage, where we have the resources and do nothing with them.
  • A working patent system is important because a lot of ideas are simply lost. If people write their ideas down and publish them in the patent system, eventually those patents become free. Why would anyone publish their patents? Because they are greedy and they think they can get something in return for almost nothing. It is a matter of starting one patent system and the rest is automatic, because people are so greedy. It is a money making machine that runs on free energy. ;-)


Monday, August 6, 2007

New Pattern Catalog

These are some "new" patterns which I have been using and not yet documented:

1. Separate methods into I/O methods and calculation methods.
2. Once those methods are separated, separate classes into I/O classes and calculation classes.
3. Make transformations so that no information is lost. That is, be able to undo operations at any time, because the information can be reconstructed.
4. A corollary of the above is not doing operations at all, but doing as little as possible: conserve the old information and add the new information as requested. Whenever you need to show the information in a certain way, do the calculation on the fly. This could also be called "lazy calculation".
5. Avoid storing repeated information. It may become inconsistent and it is a waste of time and space.
6. Optimize at the end, when all functionality has been implemented and tested.
7. Only introduce significant optimizations (10x, 5x, etc.). If an optimization only brings a 2x performance improvement, remove it and try a new one.
8. Move repeated code into libraries. Code always has bugs, and one way to reduce the bug count is to test the code thoroughly. When code is in libraries it is tested more often, and therefore tends to contain fewer bugs. Besides, repeated code is a maintenance nightmare.
9. Move optimizations into libraries to make sure they are not introducing subtle bugs and to make sure those optimizations can be reused.
10. Convert programming problems into configuration problems, so that they are easier to handle and test, and so that they can be changed without breaking the old functionality.
11. Make sure configuration may be changed on the fly, so that testing becomes easier.
12. Introduce interceptors for logging, permissions, validations, EJBs, SQL queries, etc.
13. Make sure you are working on real requirements and not something that nobody will test or nobody will use.
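Pattern 1 (separating I/O methods from calculation methods) can be illustrated with a toy Java example; the report and its numbers are made up. The pure calculation is trivial to test, while the printing is quarantined in one place:

```java
public class ReportDemo {
    // Calculation method: pure logic, no I/O, trivial to test.
    static int total(int[] amounts) {
        int sum = 0;
        for (int a : amounts) {
            sum += a;
        }
        return sum;
    }

    // I/O method: all printing lives here, no business logic.
    static void printTotal(int[] amounts) {
        System.out.println("Total: " + total(amounts));
    }

    public static void main(String[] args) {
        printTotal(new int[] {10, 20, 12}); // prints "Total: 42"
    }
}
```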

Thursday, August 2, 2007

Manufacturing Myths

According to Laura D'Andrea's Ideas Viewpoint article in the December 12, 2005 issue of Business Week, even though the US economy had been recovering from the 2001 recession for almost five years, manufacturing employment was still down 10%, or 1.6 million jobs, which would mean there were 16 million manufacturing jobs in 2001 and only 14.4 million manufacturing jobs in 2005.

She argues that the job loss is not due to the long-believed offshoring of goods and jobs to low wage countries, but to a 50-year tendency to reduce manufacturing jobs, from 35% of total jobs in 1950 to 13% today. This means machines are taking the role of humans, which is also referred to as "productivity growth". Cool! More automation means more software development jobs.

Or another possibility is that people are moving into sales.

She also argues that exporting more goods would not necessarily mean more jobs. She compares the US with Germany. While the US is the biggest producer of manufactured goods, Germany is the biggest exporter. (How did they measure this? By counting the goods or by adding up their prices? If it is by price, I bet it has to do with how expensive a Mercedes is compared to, say, a Chevrolet Corsa.)

Anyway, she goes on to argue that Germany has higher unemployment than the US, even though Germany is a high-wage country that sells high-priced products (calling them "high value added" sounds better, but the price is what we can measure, not the added value).

Therefore the conclusion, at least to me, is that the more industrialized a country is, the more automated its jobs are, and therefore the higher the unemployment you can expect. This means people should be free to study and return to higher-paying jobs, improving the economy and themselves. The question is: how can a person study while holding a job? They have no time to study. Now imagine the same person without a job: they have the time, but no money.

Therefore a very good way to motivate people to study would be for the government to pay for their education while they hold a job: they would simply take some "education leave" and be paid their full salary by the government for, say, one or two months each year. The government would have to pay the full tuition too.

Also for the unemployed, the government could pay up to six months of education every year. The rest of the time people should actually try to find a job or start a company.

It may seem bizarre to some people, but having lived in Chile, the US and France, the most striking difference is that in developed countries there are no people living on the streets. You may think I didn't search enough. But my point is that in Chile you don't have to search: we are all surrounded by people living on the street.

Even people who have jobs are very reluctant to do them, because they know they are being underpaid. They are underpaid because there are so many people looking for jobs, people who do not have the necessary qualifications. Yet most companies are packed with these people. It is as if it were too expensive to filter people. Or as if the actual result of the job didn't matter.

Ok, now you probably think I'm describing your office, but there are orders of magnitude of difference. I mean, I had to recruit 15 developers last year, and I interviewed three to ten people a week over three months. That's at least 36 and at most 120 different interviews; I probably did 90 or so. They were supposed to be engineers with at least 6 months of hands-on Java experience.

10 of them, realizing that I was going to ask them to code, remembered they had a sick relative to look after, or found another compelling reason to leave. I'm glad they decided to leave and didn't make me lose more time on them. That would leave 80.

Half of them didn't know how to reverse a String. That would leave 40.

A fourth of the rest didn't know how to reverse a String recursively. That would leave 30.

A third of the rest didn't know how to search a binary tree. That would leave 20.

A fourth of the rest didn't know how to use a HashMap. That would leave 15.
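For reference, none of those screening questions needs more than a few lines. Here is a sketch in modern Java; the class and method names are mine, not anything the candidates were given:

```java
import java.util.HashMap;
import java.util.Map;

public class InterviewBasics {

    // Iterative reversal: StringBuilder does the work.
    static String reverse(String s) {
        return new StringBuilder(s).reverse().toString();
    }

    // Recursive reversal: reverse the tail, then append the first char.
    static String reverseRec(String s) {
        if (s.length() <= 1) return s;
        return reverseRec(s.substring(1)) + s.charAt(0);
    }

    // Minimal binary search tree node.
    static class Node {
        int value;
        Node left, right;
        Node(int value) { this.value = value; }
    }

    // Ordered lookup: go left for smaller keys, right for larger ones.
    static boolean contains(Node root, int key) {
        if (root == null) return false;
        if (key == root.value) return true;
        return key < root.value ? contains(root.left, key)
                                : contains(root.right, key);
    }

    public static void main(String[] args) {
        System.out.println(reverse("Java"));     // avaJ
        System.out.println(reverseRec("Java"));  // avaJ

        Node root = new Node(5);
        root.left = new Node(3);
        root.right = new Node(8);
        System.out.println(contains(root, 8));   // true

        // Counting word occurrences is the classic HashMap exercise.
        Map<String, Integer> counts = new HashMap<>();
        for (String w : new String[]{"a", "b", "a"}) {
            counts.merge(w, 1, Integer::sum);
        }
        System.out.println(counts.get("a"));     // 2
    }
}
```

That's the level of question we are talking about: not puzzles, just evidence that the candidate has written any Java at all.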

And with just 15 people we implemented this whole huge system with 250 use cases and 400 different screens (combined together they form a lot more; we just haven't counted all those possible combinations), in just 6 months.

What we are doing is industrializing software production. We are still at a stage in which most of the knowledge is art. But eventually we will have created all the meaningful code. Eventually the system will just be configurable and able to do whatever the user expects, without developer intervention.