Thursday, October 25, 2007

Automation: Most people dislike it.

Automation: Most people dislike it. Even computer scientists.

Why is that?

Because you can't automate the smart stuff, only the stupid stuff. And people like to concentrate on the smart stuff, therefore they avoid the stupid stuff because it is boring, and nobody ever automates it.

Successful companies automate the boring stuff, so that no one has to do it. Then people have more time to do the hard stuff (by definition non-boring), and since it is challenging, they decompose the hard stuff into less complex stuff. They can repeat this process until most of the hard stuff has been decomposed into trivial (and therefore boring) stuff. So companies that detect deficiencies in other companies (for example, that they are not efficient) can compete in the same markets and beat them (as long as clients have no power).

Suppliers in markets where the customer has too much power can't be efficient, since any efficiencies are absorbed by the customer, making it impossible for them to reap the results of their own efficiency.

In the long run, market economics dictate that people who automate more are more efficient, but if all the efficiency goes into the pocket of the client, it hurts the suppliers. So suppliers end up either without a job or without a contract and end up doing something else. At the same time, clients that benefit from this efficiency could probably improve their market share. But reality is different. Most companies spend as little as 1% or 2% on IT, not because they try to spend less, but simply because they are in markets that are so profitable.

IT certainly isn't profitable because there is no barrier to entry. Almost anyone can study Visual Basic or Python and be a developer in 6 months. Which means that in order to be a contractor in this market you have to be crazy, or not like yourself, or be incompetent, or all of the above.

Sunday, September 23, 2007

Impossible to create products in JEE

It is impossible to create products in JEE.

First, JEE is not about creating off the shelf products, but about delivering integration solutions (hence the "Enterprise" in its name).

Second, JEE is not portable across vendors. If you build for WebLogic, you can't deploy on WebSphere, and vice versa.

Third, with all the mergers and acquisitions, there will always be plenty of room for JEE developers, since all those disparate proprietary systems will need integration. And even between JEE vendors, integration is not something to take for granted; it is usually harder because of all the APIs and descriptors needed.

Spring is like a breath of fresh air in this respect. It is far easier to integrate Spring services than to integrate JEE vendors. Spring and Hibernate are so simple that JEE 5 (including EJB 3) was modeled after them.
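
To give an idea of just how simple: here is a minimal sketch of what a Spring-style service looks like (InvoiceService and InvoiceDao are names invented for the example). It is a plain Java object wired through a setter, with no home or remote interfaces and no vendor-specific descriptors; the wiring itself goes in a short XML file.

    // Hypothetical example: a Spring "service" is just a POJO wired through a setter.
    // No home/remote interfaces, no ejb-jar.xml, no vendor-specific deployment descriptors.
    interface InvoiceDao {
        void save(String invoice);
    }

    public class InvoiceService {
        private InvoiceDao dao;                  // injected by the container (or by a test)

        public void setDao(InvoiceDao dao) {     // plain setter injection
            this.dao = dao;
        }

        public void bill(String customer) {
            dao.save("invoice for " + customer);
        }
    }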

Fourth, web services are a mess. There are many standards and they do not interoperate. They introduce more trouble than they solve, but that is good. Companies and developers will move away from web services, and therefore only the developers who really understand the technology will keep using it, until it becomes simpler for other people to use. Usually this means they will find and embrace the right abstractions, and then code monkeys will be able to leverage them from there.

There is a very strong need in the market for an abstraction layer that will permit different application servers to work together. But the right abstraction must first be found. It is not that developers are not trying to find it (granted: 80% of developers are simply drones and couldn't possibly think of creating abstractions themselves, although they use abstractions all the time), nor is it that they can't come up with a good abstraction because they lack a brain. Maybe they lack the ability to decide what a good abstraction is, or in other words, the criteria for deciding whether an abstraction is ok or not. The real problem, as in most software development efforts, is finding the correct requirements.

Usually developers settle for the wrong abstraction and for the incorrect requirements. It is as if they knew they had a hammer and everything looked like a nail. The problem with this reasoning is that it leads to wasted time and money. Wasted money is no problem, because if you lose some money you can always recover it, but if you lose time, you can't get it back.

The Right Requirements for a JEE abstraction layer

In order to understand how to create an abstraction layer for X, you first need to understand X. No secrets here. But you don't need to understand X's mechanisms, only how X relates to its environment. In other words, you need to model the interactions of X with its surrounding environment.

Most frameworks already have an abstraction that they present to you. Like it or not, they present an abstraction. Sometimes they try to force an abstraction on you, the programmer. Other times they try not to enforce an abstraction, offering completely transparent code where you can modify everything underneath and have direct access to the internals.

Most of the time the abstraction is lacking; other times there simply is no abstraction and you can modify everything.

And sometimes, as with TCP/IP, the abstraction is great and you can do wonders with it. Even if you don't like TCP because it is too high level, you can use UDP, which is IP plus a very thin layer on top.

In other words, you don't need to worry that you are going to obscure the underlying mechanisms, because you can always go back to UDP mode, but guess what?... TCP is good enough as an abstraction layer 99% of the time.
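
A quick sketch in Java of what that looks like in practice (example.org and the port numbers are just placeholders): the TCP Socket gives you the reliable stream, and the UDP DatagramSocket is barely more than raw IP if you ever need to drop down.

    import java.net.DatagramPacket;
    import java.net.DatagramSocket;
    import java.net.InetAddress;
    import java.net.Socket;

    public class TcpVsUdp {
        public static void main(String[] args) throws Exception {
            // TCP: the stack handles retransmission, ordering and flow control for you.
            Socket tcp = new Socket("example.org", 80);
            tcp.getOutputStream().write("GET / HTTP/1.0\r\n\r\n".getBytes());
            tcp.close();

            // UDP: a very thin layer over IP; you send raw datagrams and handle loss yourself.
            byte[] payload = "ping".getBytes();
            DatagramSocket udp = new DatagramSocket();
            udp.send(new DatagramPacket(payload, payload.length,
                                        InetAddress.getByName("example.org"), 7));
            udp.close();
        }
    }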

Tuesday, September 18, 2007

Sun always makes the wrong moves

It seems that Sun always makes the wrong moves.

In 1996 I used SunOS, and Sun workstations had 4 CPUs per machine, considerably faster than the Intel CPUs of the time. The OS ran on one of the chips and the other 3 chips were used for user space.

It was really smooth. But Sun had a different idea named Solaris (which must mean "sunny" in Latin, I guess): the operating system was going to be distributed and therefore it would be faster.

That goes against my intuition, against the separation of concerns, against the division of labor, etc. Solaris was a very bad idea from the beginning.

Fast forward to 2007

Now Sun delivers an 8-core CPU with embedded ethernet... An 8-core CPU would be something amazing, if it weren't for Azul's 768 cores in one machine.

But having ethernet embedded on the CPU? Maybe I'm missing something. Is the CPU going to communicate with main memory using ethernet? Maybe it is, since disks are supposed to get faster by using iSCSI (SCSI over IP), so getting ethernet inside the CPU is not crazy after all.

I thought the primary drive for iSCSI was to have disks outside any computer and therefore reduce costs. Of course you could also use iSCSI internally, but I always thought the CPU would never speak IP. If it now speaks ethernet in hardware, IP is just a thin layer over ethernet, so 90% of the job is done; the other 10% can trivially be done in software.

But I think this tendency to convert software into hardware will only accelerate. I don't know if people realize how fast new protocols are invented and crammed into chips. Today you can run an emulator in an applet, so a full computer can run in your browser.

If IP can be crammed into a chip, Java can be crammed into a chip, and therefore computers with 1024 CPUs (or cores) are just around the corner. If each of those cores can run Java, there will be a lot of unused computing power.

I know companies are underserviced; most of them spend as much as half a million dollars per month just to run their datacenters. And that's cheap compared with the amount of money that would be spent if things were run manually (not counting all the data loss, which also has a price).

So companies are paying really very little for running these datacenters, but in 10 years they will be able to run their whole datacenter in an applet, if things continue this way. And I know they will.

4K must be enough for everyone

When I was 14 or so I knew a guy who was the CIO of a bank in Ecuador. He would tell me that when he started working, a computer with 4KB of RAM was enough to perform everything at the bank. He didn't say the processes ran all night, but of course it couldn't have been any different.

Now things have changed. Computers are commodities and developers are expensive, so people are not expected to work at night. And yet you can run processes at night, like performance testing, and see the next day what happened. But even doing that is a waste, because if something doesn't work, you need to retest, and if you wait for the process to run at night again, you lose a full day.

It is much better to buy a machine for running the tests, since if you use Java, a normal PC will do, and they cost less than $1,000. This is important because developers typically cost between $2,000 and $6,000 per month depending on their experience. If they can perform faster (by having computing resources available), you are saving the big bucks. Saving on computer hardware is rarely a saving.

So maybe Sun is on the right track this time, making it easier for hardware manufacturers to use iSCSI inside computers. But IMHO moving everything from software to hardware doesn't make things 20x faster, only 10% faster, and a lot harder to change.

As I already said, good optimizations are the ones that make things at least 5x faster, even 10x faster. Any optimization that is not at least 2x faster should be disregarded and eliminated. I think we are seeing that kind of optimization now.

The best ideas always win in the long run

One nice thing about not understanding is that it really doesn't matter. If you don't understand, the market will take care of it. The market always takes the best technologies and makes them blossom, while leaving the bad technologies behind.

Now you think I'm wrong because you know counterexamples. For example, I know of Smalltalk, which was clearly superior to C++ and Java, but its main problem was its price. It was too expensive at the time, so Java occupied its place. Smalltalk's good ideas were copied one by one until C++ and Java leveraged its potential. First C++, now Java, which is mostly free.

So in the end it is the right technology mix that wins. Even if Smalltalk did not win, its technology is all over the place in Java, so in a sense Smalltalk won, through Java. For all those Smalltalk lovers and Java haters: I know Smalltalk is still better, I know there are many things in Smalltalk that have not been copied (yet), but it is just a matter of time and a new language that leverages what Smalltalk still has left to offer.

Why not Smalltalk directly? I now think Smalltalk has trust issues. I mean, the more I work in collaborative environments (development projects), the more I see people can't be trusted. They perform rather ok at the beginning, but eventually they do tricks and try to benefit themselves in the short run (while damaging themselves in the long run, and at the same time damaging the projects). I have no recipe for solving this issue, but certainly using a language that is less restrictive (Smalltalk) doesn't seem like a solution.

Trust issues

Maybe it is. I mean, maybe being in an environment where you are trusted from day one makes you worth the trust, while being in an environment where you are not trusted makes you rebel and be less trustworthy. At least that is certainly true for children: you need to trust them for them to act maturely (according to their age, of course).

In the case of programmers, I've read it is the same, but I feel tempted to do the opposite, since I have read their code and I know they did tricks, so a natural solution would be to build an even more restrictive cage. Psychologically, I suppose the only reason they did tricks was that they needed to solve the problem and they thought the controls were unnecessary obstacles and even offensive. "If you hire professionals, why would you not trust them?"

On the other hand, if they didn't have anything to hide, why did they do these dirty tricks? You can imagine that they thought they were saying "I'm smarter than you, because I did this and you didn't notice". Or they simply needed those tricks in order to perform appropriately, or so they thought at the time.

I'm sorry, but I have to let them know that that is not the purpose of control. The purpose is to find out who is doing tricks, apply some current (only psychologically, just in case you wonder) and then go back to normal. The idea is not to put a cage on the mind of each developer, but to unlock their potential without making them step on each other's toes.

What do I mean by that?

Not all developers think the same, and you need to build a collective mind. This means that each individual has his own ideas, but the ideas of one developer can be understood by the rest, leveraging the potential of others to improve their own. This means you can start with a very lousy team, but you improve each of its members until they understand each other. This means they need to communicate, to pair program, or if they lack the basic instinct of cooperation, to do the poor man's pair programming, which is a code review before each check-in.

Governance is a very interesting topic, because you can govern like the Stasi or the Nazi regime, or you can govern as if you were in Switzerland or Sweden. People act better when the government is light and actively encourages people to help the government with ideas and independent action. People feel part of a community and have a positive view of the government, because they think "we are the government, because we do as we feel like it". The government just forbids the most aberrant behavior, but people feel everything is permitted and the sky is the limit. You can certainly feel that way in the US, where children are encouraged to do as they wish and not to limit themselves.

In Chile, our education was radically different. You had to ask for permission and nothing was allowed. Even if you tried to motivate people, people would look at you suspiciously, because who would allow that? Yes, it sounds silly and funny, but it is not so funny when every day was like that. I mean, if you are not trusted, it means you are a bad person, so it is ok to do bad things as long as nobody sees, because otherwise you would be grounded. See? The logic is perfect, and all it took to start it was that you were not trusted in the first place.

Is this still true for grown ups? Yep!

Chile has a very special way of treating companies like robbers. For example, if you start a company, you are allowed to stamp only 3 billing notes. Unstamped billing notes are not valid, and you can stamp new billing notes only when the other billing notes have been returned to the Chilean IRS equivalent (already paid). So nothing really works, because no company could survive that long, and people have to do things under the table in order to operate.

See: you are not trusted, therefore you need to subvert the system, so if you look closely into what companies do, you realize they really can't be trusted, and it is all just pose and looks. Welcome to Latin America, where nothing works as it is supposed to.

Monday, September 17, 2007

Being an Idiot

Don't you feel surrounded by idiots?

Judging by the famous message on the "I see dumb people" t-shirt, I suppose most IT geeks feel the same way, and some even want to let others know.

Apparently in the Unix culture people are called idiots if they ask questions that are answered in the manual or FAQ. This is consistent with the RTFM expression so common on Usenet.

But being an idiot is good if you are learning, because you are asking the right questions (somebody wrote the question in a FAQ, so it must be common enough). Besides, how are you going to find out there is a FAQ if you can't ask?

But there is something not right about redirecting people to read the manual. In Windows it is assumed people do not read manuals, while in Unix there is a manual even for man, the manual reader.

In the Windows culture, if the program does not work as expected it is the program's fault, not the user's fault. There are no manuals, and the manuals that exist are mostly useless anyway. Most people prefer Windows because it is simpler and it doesn't treat you like you are an idiot. Also, the user interface is consistent across programs because programmers know users do not read manuals. Besides, Microsoft encourages software developers to write programs that are consistent with the Windows UI look and feel.

Assuming you don't know

Most jobs prefer people who don't know, because they can pay them less. And people will learn anyway, right?

But then those same people are asked not to ask any stupid questions. And most questions are stupid anyway; I mean, even asking what the objective of a team is, the objective of a company, its mission, its vision, etc. Everything is considered stupid, mostly because people don't know.

Even when they do know, they feel they can't disclose that privileged information, which almost always means you are replaceable and your function will be eliminated in the following months.

So, by all means, ask. Even if the question sounds stupid. Even if the reply is that you should know the answer, or that it is none of your business. Simply ask. And if they reply that you should know, tell them that you do, but you want to know what kind of professionals you are working with: people who hide vital information or people you can trust.

If they tell you that it is none of your business, tell them that you won't accept being limited by people who can't answer simple questions.

Write the answers down

Probably the answers they gave were prerecorded answers for situations like this. People work like this: they memorize lines and repeat them like parrots. Thinking requires time and space, so they go the easy way.

But write the answers down. They could be very important down the road.

One of the things I learned the hard way is that doing a post mortem for every project is probably one of the most important parts of every project. Doing a post mortem is very easy: you simply ask what went right and what went wrong and you document it, proposing different solutions for the problems found.

It really gives you insight into what you did wrong.

The Cathedral vs. the Bazaar

It seems people bring this up over and over: why don't small teams of programmers in a garage replace the huge teams we see today?

It seems that the explanation is that their respective products are in a different category.

But I doubt it.

Most big projects are just massive copy and paste, and they could be replaced with the right prototypes and abstraction layers (with the right abstractions, of course), but it seems managers prefer to fail: they prefer the predictable and long development cycle, because when you fail it is already too late, massive amounts of money have gone down the drain, and therefore managers are in a better position (they have already been paid for wasting time and money) to negotiate even better pay.

When people are confronted with 2 alternatives, one that can produce better results but leaves you no scapegoat if it fails, and another that will eventually fail but gives you a scapegoat, people prefer the one that will fail but has a scapegoat. The reason is that people get hired to avoid uncertainty, and I'm sure people in Latin America prefer this situation all the time.

There are many examples and you have probably been in many: it is very common for projects to avoid prototypes, because prototypes can show which design decisions work and which probably can't work. Whether they work or not can be blamed on developers who lack the required knowledge or on bad design decisions. It doesn't really matter, since design decisions must be ones that developers can implement.

But developers tend to avoid writing prototypes because they could be accused of not knowing how to build these little examples, and designers do not like them for the same reason: the code can be correct, but the design can be shown not to do what was expected. Therefore, people prefer the non-accountability of delivering a mess.

And the big problem with code that is a big ball of mud is that no one can fix it, because no one can understand it.

The different category of Wikis

It is amazing how people think that people in a garage can't produce the complications coming out of their little brains, and yet a computer can be built in a garage.

I mean, come on, a computer is a lot more complex than the mumblings of a business analyst or a user. Usually the businesses are plain simple and hide behind a curtain of poorly defined words. There is usually no more than that. A little script in Excel can usually replace the biggest experts.

When it comes to Wikis, I already mentioned that they are great and usually people don't use them, for the wrong reasons. I think the world would be a lot better if the Wiki had been invented just a few minutes before the Web. The Web language is just so complex and lacking even a minimal amount of thinking ahead.

Almost everything on the web is designed for today's business, and if you need something else, you need to extend the web protocol or the web language and you end up with incompatible browsers. I first heard this rant from Alan Kay, one of the inventors of Smalltalk, and he proposed a solution: a little language that would explain itself to the browser.

So you could easily change HTML and the new HTML tags would explain themselves to the browsers. I can already imagine this, because tag libraries work like that, although the browser doesn't have a clue about them.

Most systems can be implemented as a Wiki. As I already mentioned, the access restrictions of the Wiki can be introduced using AOP or dynamic proxies.
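
For what it's worth, here is a minimal sketch of the dynamic-proxy part of that idea, with made-up names (WikiPage, restrict): the proxy intercepts every call to the page and applies the access check before delegating, so the page code itself stays permission-free. An AOP framework would hook in at the same point.

    import java.lang.reflect.InvocationHandler;
    import java.lang.reflect.Method;
    import java.lang.reflect.Proxy;

    // Hypothetical wiki page interface; a real system would have richer operations.
    interface WikiPage {
        String read();
        void write(String content);
    }

    public class AccessControlExample {
        // Wraps any WikiPage so that every method call goes through a permission check first.
        static WikiPage restrict(final WikiPage target, final String user) {
            return (WikiPage) Proxy.newProxyInstance(
                    WikiPage.class.getClassLoader(),
                    new Class<?>[] { WikiPage.class },
                    new InvocationHandler() {
                        public Object invoke(Object proxy, Method method, Object[] args)
                                throws Throwable {
                            if (method.getName().equals("write") && !"admin".equals(user)) {
                                throw new SecurityException(user + " may not edit this page");
                            }
                            return method.invoke(target, args);  // allowed: delegate to the real page
                        }
                    });
        }
    }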

The Generalist Specialist

Some people are specialists (most people) and some people are generalists (very few). Industrialization has brought us a lot of job specialization, and some people think it is good to be a generalist.

Being a generalist means you must be able to talk to a big crowd of different specialists and manage not to say anything stupid or anything that will make them feel anger, fear, etc. One way to do this is to study politics and learn how to do double talk: each group agrees with what you are saying because they understand different things.

Another politically correct way to manage a crowd is to find agreement on something and talk endlessly about something trivial and uncontroversial. It produces no real results on their understanding, but aren't we having a fine time!

Finally, there is a way to convince people in non-confrontational ways, allowing them to save face. But first you need to find configurations in which people are not against each other, at least from the point of view of their convenience and their interests. A good way to say this is "we are all in the same boat, so either we all arrive peacefully at a safe port or we all sink together". Ok, the choice of words probably has to be a lot safer and more welcoming than that, but the main idea is to avoid confrontation.

Why?

When you have different specialists, you have different views, and inevitably people manifest these conflicting views. People are afraid that their work will be meaningless or disregarded, therefore they try to impose their views at all costs.

If they were such good professionals they wouldn't be so scared, because they would have succeeded at other places, so they would like to see if other professionals have the same skills and can do things differently. But not all people have the luck to have a brain and have it working at the same time.

I know it sounds politically incorrect, but the brain is not turned on all day, sorry. I have not actually measured it, but the brain works by memorization, by deduction and by pattern matching, so when it is working by memorization and pattern matching it is not calculating the logical consequences of the possibilities, and therefore, from the software development point of view, it is useless.

Maybe in medicine and law professionals can work fully in pattern matching and memory mode and the results will be better than trying to deduce, because in order to deduce you need to know a lot about the subject at hand, and doctors can probably only diagnose according to the known symptoms, and since symptoms are always different, there is a small probability of delivering the wrong diagnosis, and therefore they tend to say "we need to observe the evolution of the disease", meaning they are not sure what is going on with you.

Eating healthy food (meaning fresh food) and doing exercise every day is a better way to remain healthy, by the way.

In software development we have the same problem when projects are built organically, that is, all software is thrown at the project, it is well shaken, and when the results are not what is expected, you are supposed to debug endlessly to diagnose the symptoms and apply microsurgery. Then they find out your surgery had unexpected side effects (collateral damage), or in my vocabulary, you introduced new bugs.

The project is always 99% finished, no matter how much money, how many developers or how much unpaid overtime you throw at it.

Developers say the code is a mess.

Managers say they need to hire more specialists, because the ones that are working now are unable to finish and are already burnt out.

The problem is the generalists

Why would managers take any blame for how developers behave? Developers wrote the mess, didn't they?

When left alone developers will always write unreadable code.

Those who specialize in generic skills have to know the details of every single skill. If they can't learn, they are useless and drive their companies down with them. Since they generally are the bosses, they tend to be pretty aggressive with insubordinate subordinates.

The best defense is to attack first. And the only way to avoid confrontation is to win in a manner that doesn't allow the opponent to retaliate. I'm not advocating turning the office into a battleground, but if you risk being fired you are obliged to fire your opponents at the office.

The main advantage is that if it doesn't turn out as expected and you are fired instead of them, it is usually better to move on to other opportunities anyway.

Sunday, September 16, 2007

Unix and Linux design flaws

Like it or not, all software has design decisions built into it. Those design choices may be good choices or bad choices, in which case we call them design flaws.

Sometimes the design is not clearly good or bad; only after 20 years may you realize that one of those design decisions turned out to be a flaw.

Usually the worst ones are those that are hard to change, maintain or improve. Also the ones that take time from unsuspecting users (or customers), since the whole point of using computers is that you can save time by using them.

Unix design flaws

Unix design flaws are shared with Linux, since Linux is a Unix clone. I already wrote that Unix is a Linux clone because Linux is a lot more popular than Unix. But Unix has really old design choices made in the 70's, when computers were expensive and resources in computers were scarce. No one ever thought you could possibly have more than 1 GB of RAM, so a lot of the design decisions of C and Unix are really dated (C and Unix were originally developed together).

The reason Unix and C were so popular was that they were almost open source. BSD was freely available to some universities, and to this day it is not clear which company has the rights to Unix.

Linux is simply changing the scenario by making a Unix clone free and therefore the price is zero, but the design flaws are still there. I know I'm not going to win any friends with this post, but what the heck.

File descriptors are integers and everything in Unix is a file, so you operate on files using file descriptors, and integers in C are machine dependent, which means that different Unices have different integer sizes, and therefore you write some code and it is machine dependent.

So Linux runs one way on one machine and fails on another. GNU was the inventor of the "GNU's Not Unix" moniker, and what they did was reimplement Unix from scratch (the APIs can't be copyrighted, according to US courts) and also make GNU code portable across different hardware.

How did they do this?

./configure
make
make install

And that's about it!

Therefore GNU, Linux and Unix are portable at the source level when using the GNU mechanism for building software (basically configure and make).

But Windows is portable at the binary level, and this is a superior alternative.

Going crazy

Probably now you think I'm crazy. How could Windoze possibly be superior to the almighty Linux?

I agree that Linux, and especially Ubuntu, has gone a long way toward making Linux a viable alternative to Windows. You can install Windows binaries from a CD without having to recompile, and the same can be done with Ubuntu. But the parallel stops there.

I installed Ubuntu on one of my computers, and then, instead of cabling my house, I tried to use a USB-based wireless network. The only problem is that the USB stick didn't come with Linux drivers. Small frustration: I downloaded them from the Internet and then I had to compile them. But Ubuntu doesn't come with the compiler. Should I download it? The driver comes in source form, since if it were compiled, it would not be guaranteed to run on my computer.

What can I do now?

It is really a bad decision to be compatible at the source level. The Windows design choice was clearly superior.

Java is superior

But I think it is even better in the case of Java, because you can write once and run everywhere, so it is compatible at the binary level.

Special relativity is a hoax

As I already explained, I think special relativity may be wrong because the Michelson-Morley experiment can't measure a different speed of light. The rationale goes like this: since light is a particle, let's assume those particles are ping pong balls. You are inside a car going at 100 km/h. You throw one ball forward and then you throw another ball to the left. You measure the speed of both, and since they are the same, you conclude you are not moving.

The main idea of the Michelson-Morley experiment was that the speed would be different, since the earth is rotating and the ether can't be rotating with the earth... Why would that be true?

Also, light could be a wave that propagates through the ether, something that can't be seen or touched. That is consistent with the measured speed of light, which is the same as the speed of an electric current. Light can diffract, so diffraction would occur in the ether.

Let us suppose light is a wave that propagates in the ether. We already know that sound propagates in air, so it moves at a fixed speed relative to the air itself, which is what makes the Doppler effect possible.

The main problem with the Michelson-Morley experiment is that light propagates in one direction, is reflected off a mirror, and returns in the opposite direction. If it really was affected by a Doppler effect in one direction, it would be nulled by the effect in the opposite direction, therefore the result will always be zero, even if the same experiment is done with waves on water.

Here is a different take: the Michelson-Morley experiment gives 30 km/s (108,000 km/h), and therefore the ether is real.

Saturday, September 15, 2007

Stain removers

I found this. I haven't tried them though.

What I've tried is removing sweat stains from shirts using hand soap. It really works, although you need to scrub for a few minutes.

Homeopathy and the memory of water

The nice thing about having a blog is that you get to talk about many things, even if you don't really know the subject. 8-D

Homeopathy is a pseudo-science that, given a patient with symptoms, finds an herb that produces the same symptoms, infuses water with the plant, and dilutes it well beyond the point where any molecule of the plant is left in the water.

The most interesting thing is that the article depicts homeopathy as something that can't possibly work because it goes against science: Water doesn't have memory.

Does water really have memory? I hope not, because people drink recycled water.

Do you know that the first locomotives were invented for trains, and only after they had been well understood and working for years was the theory of heat developed?

In order for science to advance, you first need to find the phenomena and then you need to find the explanations. And the explanations are just the explanations, the important thing is the phenomena.

In the case of homeopathy, the article claims that it is a placebo effect. Come on, if the placebo effect were true you would have a market for placebos, and there is no market in sight. And don't tell me homeopathy is a placebo, because placebos are supposed to be water and sugar, so if people with diseases can feel better drinking sugared water, wouldn't it be that they are "sugar high"?

If placebo really works we are spending too much money on real medicine. People who claim that placebo works should get a brain scan, because it can't possibly work.

Let us suppose that placebo works: We don't need real medicine, let us use placebos.

Let us suppose placebos don't work: What is Homeopathy then?

Planes can't fly

How do you know if you have achieved real innovation? When intelligent people are claiming you are wasting your time because your goal can't be achieved.

For example, when the Wright brothers were building their first plane (and failing miserably), big and renowned physicists were publishing papers explaining why machines heavier than air couldn't physically fly.

There were many groups trying to fly, but only the Wright brothers were trying to make a kite fly. A kite? Yes! At first it was an unmanned plane that flew as a kite and was controlled from the ground. Once they could control the kite from the ground and turn left, right, up and down as they wished, they tried to fly it.

Other teams tried to fly their planes from the very beginning and since there was not a market for pilots, the inventors died.

So the real lesson for invention:

1. Do not listen to people who claim pigs can't fly, even if they have PhDs on the subject.
2. Take baby steps and make sure the last step worked before taking the next one.

Getting rid of the reset button

I do not use the reset button a lot.

I hear people complain that they need to reboot their computers at least daily, but I run Windows for weeks and even months without a reset. How is that possible?

Software is interdependent. That is a consequence of how software is built. Everything depends on everything, so a little change here and the system explodes. Therefore I always install stable software and avoid getting new upgrades. I know what you are thinking now: that I'm not protected from malicious attacks.

That's true. Per se, if you don't upgrade, you are prone to documented attacks. If you upgrade, you are only prone to undocumented attacks, so it is a false sense of security. Installing a firewall and an antivirus is more effective.

Not upgrading automatically is a way to prevent unpredicted collateral effects.

Microkernel

Andrew Tanenbaum is one of the main proponents of secure and reliable systems by means of microkernel technology.

The main idea of microkernels is that software device drivers run in user space rather than in kernel space. User space is controlled, so that if the program is badly written and tries to write to a memory address it shouldn't, it simply core dumps and the system can continue to work.

In monolithic systems, the opposite of microkernels, the kernel is huge and contains all the drivers, so the drivers run in kernel space. Since OS writers do not get to write all device drivers, their software can be corrupted by third parties.

One solution would be to write the operating system in Java.

Chapter 11 in SCO's book

Did you have any doubts this would happen to all Unix vendors?

Isn't it obvious that they can't make money on a heavily commoditized market?

Windows and Mac OS X can probably stand longer since they are continually improving their products, although on the Windows side it seems they are not innovating enough. New computers with Vista preinstalled are not selling fast, because the new user interface is not considered a great advance for users (actually it is a great advance, but it takes time to get used to it). That's the innovator's dilemma: it takes time for people to understand your innovation, so innovations are not best sellers at first. Therefore the really good companies fail financially.

I guess the real problem for Windows is the language (C++) and the lack of a layered architecture. It also has the baggage of underprime users who got used to inferior technology, and Microsoft is trying to educate the users like Unix tried in the past. It is very quixotic and good for the economy, but the company will see no ROI. Linux will copy the new user interface as it has for the past several years.

Steve Ballmer thinks the pirates are stealing Windows (cracking it, in pirate jargon) and therefore people buy computers without an operating system and then install pirated Vista. Don't you think it would be better to install Ubuntu?

Microsoft is even reducing the price of Vista. The right price is zero, but it will take some time for Microsoft to forget about making money in a commoditized market and move on into better (READ: UNEXPLORED) markets.

Besides, pirates help the company they are pirating. I mean, you can pirate at home, but you can't pirate at the office, and we all know that in the office you use the most expensive products. And it takes months, if not years, to know some products really well, so if you pirate at home, you are helping the pirated company by learning to use their products in a free course you take at home on your own time. If you install, say, OpenOffice instead, you are helping yourself and at the same time hurting Microsoft. The sooner Microsoft understands that the office productivity market it created has been commoditized, the sooner it will be able to move on and invest in new products that people really care about and are willing to pay for.

Chapter 11

Companies selling Unix are really companies selling pirated copies of Linux clones. Linux is now mainstream, and Unix is like the ugly experiment that was once meant to be Linux, but never got the chance to get there.

According to Richard Stallman, Linux should be called GNU/Linux instead. Of course he is right: Linux is just a kernel program that boots, but 99% of the code is GNU (which means GNU's Not Unix). Nevertheless, everybody says Linux, and it is easier to pronounce, thank you. I guess Stallman should study a little bit of marketing.

If there is a free Unix, no one can make any money in the market of Unix clones. SCO, owner of SCO Unix, which once was a Microsoft product, did not realize this and sued itself out of existence. Maybe you think it can recover, but no knowledgeable investor would invest in a company with no product, or at least not in a company with a commoditized product whose market price is zero.

Sun seemed to understand this by open sourcing Solaris. I personally think Solaris was a substandard product. SunOS was much better, in my not so humble opinion. But Solaris had some interesting ideas that can now be leveraged and included in Linux, so the investment will not be totally lost. Also, the company will probably not invest in trying to market and sell Solaris, since marketing usually costs several times more than development. Those savings can be used to build other technologies on top of the commoditized ones.

Companies that build products on top of Linux will be the successful companies in the future.

Troubleshooters vs. Troublepreventers

I just read this about troubleshooters and trouble preventers.

Companies that hire troubleshooters obtain a rapid return on investment, but in the long run they are in more trouble than at the beginning, because troubleshooters think in the short term, so they hire even more troubleshooters. Which means that these companies are not viable in the long run.

Companies that hire trouble preventers do not obtain a rapid ROI, because trouble preventers seem to spend a lot more time getting things right. They also tend to follow strict processes like:

  1. Documenting in a wiki.
  2. Performing code and design reviews.
  3. Running automated unit testing (see the sketch after this list).
  4. Performing automated compilation.
  5. Running automated functional testing.
  6. Performing automated bug finding using FindBugs or PMD.
  7. Performing automated code copy and paste detection using CPD.
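
Just to make item 3 concrete, this is roughly what one automated unit test looks like in JUnit 4 (Money and its add method are invented for the example); the point is that the build can run hundreds of these without anyone watching.

    import static org.junit.Assert.assertEquals;
    import org.junit.Test;

    public class MoneyTest {
        // A trivial class under test, invented for the example.
        static class Money {
            private final int cents;
            Money(int cents) { this.cents = cents; }
            Money add(Money other) { return new Money(cents + other.cents); }
            int cents() { return cents; }
        }

        @Test
        public void addingTwoAmountsSumsTheirCents() {
            Money total = new Money(150).add(new Money(250));
            assertEquals(400, total.cents());
        }
    }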

Optimizing in the short term and in the long term are mutually exclusive. It is impossible to do both at the same time.

How to migrate from one to the other?

Simply work in troubleshooting mode, fixing things trivially at first, and then gradually increase the amount of planning and bug prevention by introducing design patterns.

Thursday, September 13, 2007

We can't measure productivity?

Some people argue that measuring productivity is the wrong approach because people always find ways to subvert the system, that is, even if you measure everything and you know for sure what the objective is, people will find ways to have good scores while doing a lousy job.

Other people argue that measuring performance is impossible. I bet they would like it to be impossible, but it certainly is possible. Maybe not 100% accurate, but that is not my problem. Nothing is 100% accurate. Not even the distance between 2 objects if you use lasers to measure it, because errors are inherent to every measurement. Sorry about that.

The point I'm trying to make is that given the option of measuring and not measuring, you should measure, automatically, using tools, not humans. So that you can instantly know how people are doing. It sounds bizarre. Numbers can't tell you why people are having a problem, but they can tell you they are having a problem. And you can step in and help. That's what good managers are for.

About subverting the system

If people subvert the system, call them in private and tell them you found out and you expect them not to do that again. If they lie, fire them.

If they do that again, fire them or, if they are in the middle of something important, wait for them to finish and fire them. Keeping people with a bad attitude only destroys the morale of the team.

Doing exactly the opposite

Some companies do the exact opposite. They keep the bad apples and fire the good apples.

Why? Why would anyone in their right mind do something like that?

Some companies are full of incompetent people. When somebody who is competent arrives, he usually finishes his duties first. He is then assigned to help others, and the rest just hate him. People say he is arrogant and he gets fired. Sometimes it is the other way around, but it doesn't matter, since he is just one guy. The incompetent guys are all over the place, remember?

What is the benefit for the manager? If the manager sees that the project is late, he may ask people to work until 11 pm every day, schedule meetings at 12:20 am, harass people who arrive late, ask people to work on weekends, etc. Sometimes all of this at the same time.

The benefit is that the manager puts all this in an Excel spreadsheet and then tells his manager that he saved the company $XXX because people worked for free, so he deserves a bonus...

So a manager who creates a sense of lack, a sense of "we need people to work more and I'm delivering it", obtains more income, even if people organized in a different way would produce more with less effort, while managers who are effective obtain nothing.

Martin Fowler is right. If you measure, you can show that you produce 10 use cases per day while the industry average is 1.5 use cases per week. But it doesn't matter, because where one use case is needed, analysts will produce 10 just to subvert the system, in which case the developers will do exactly the same amount of work, except that they had to read very boring and repetitive documents to understand something that could have been said in just one use case.

How to tell

How can you tell if the company you are working at has that little problem? Do they hire the worst people they can find so that projects take longer than expected?

1. Do they give you an exam to assess how much you know before hiring you?
2. Do they provide courses?
3. Do they pay for your certifications?
4. Do they do review meetings or walk throughs over design documents and code?
5. Do they demo the products at the end of the development cycle?
6. Do they test the code thoroughly?

Serious companies devote half their time to the above activities. Ok, the exam is probably a one-time chance to avoid hiring you, but the rest should be a standard part of your daily job. Yet some companies want you to simply "code", as if that were a magic word that makes the world revolve.

Success

Do you want to be successful?

First, resolve meaningful problems.
Second, do no work unless you know exactly what is being requested. If nothing is clear, a meeting may help. Other times, you need to read documents. Other times you need to build screen mockups, and other times you need to build prototypes. No matter what, do not begin to "code", because code = cost. What you need is to deliver functionality with the least possible code.
Third, ask for real requirements. Countless times I've found requirements that were just made up. You always need to analyze the requirements to realize which are real and which are not possible or useless. Once this is done, 80% of the problem is solved.
Fourth, number the requirements and sort them according to business value.
Fifth, imagine design decisions for every requirement. Fewer design decisions solving more requirements is the best design. Write down which requirements you think might change. This is really important; you will not get it right from the beginning, but you will improve with time.
Sixth, implement those design decisions as prototypes, to make sure they are implementable and there are no strange border cases you didn't know beforehand.
Seventh, work with business use cases instead of user use cases. Business use cases do not consider the internals of your company, only the clients. Your company is just a solid block. This should give you ideas for letting clients serve themselves on your site. Once this is done, divide them into user use cases.
Eighth, draw context diagrams to check that every operation enters and leaves the system.
Ninth, integrate the prototypes.
Tenth, avoid repetition both in the design and in the code.

I could go on and on. Use SVN, LuntBuild, ant, JUnit, Selenium, etc. Use abstraction layers. Do not allow information to be lost. Permissions and performance are customized at the end. Etc.

That has nothing to do with success

The only way to be successful is to promote what you are doing. It doesn't matter if your project is a piece of fecal matter. You need to explain why your project is so hot and so cool, and this apparent contradiction means your project rocks. Better than anything!

Trust me on this one... Most managers have no idea about technology. They fundamentally do not understand, they just live in a world of Excel spreadsheets, where information is delayed at least 3 weeks, but might be more. It could be 3 years for example, especially if they follow RUP.

So rule #1: promote yourself and your product internally. Explain why it is technically cool and why it is economically cool.

And rule #2: PPTs are so nice, but the real numbers are in Excel. You need to affect those numbers. In order to do that you have 2 options:

1. Option number 1 is to lie and say everything is so expensive that 1 use case per month is really fast, so beat it. I can't recommend this option; it is too risky and not very well rewarded. In the end you know you will fail miserably, for too many reasons to mention, but imagine when people find out you lied.

2. Option number 2 is to not lie but tell the truth. Put the pedal to the metal and let managers know exactly what you are doing, whether the project compiles, etc. EVERYTHING! If you show just one number, they will be tempted to ask you to improve it, but if they see more than 10 numbers, they won't know what to ask. Show them all the data so that they can't make any decision, so that you will be safe. This is important because later you need to defend what you are doing now.

In the end you will have all the information and you will be able to make informed decisions on how to work. You will detect causes. That is important, much more important than the small risk that managers will get in the way.

On Quantum Physics (2)

When it comes to physics, most explanations are just theories, and the problem is that arranging several explanations gives almost the same equations as using the right explanation (right being the simpler one), which means that you can build very complex theories, and every time something can't be explained, you simply make the equations more complex and the problem is solved.

Physics right now is building progressively more complex equations, giving progressively more complex explanations.

Occam's Razor is one possible way to decide which theories we find better. Does inertia exist, or do objects tend to stop? A priori you can't know, but when applying equations, inertia is a better model, because it can explain more phenomena and therefore our limited brains can reach better conclusions.

It doesn't matter if one theory is the right one or the incorrect one. The important matter is that you can predict accurately what's going to happen under different configurations. If one theory requires you to think for 2 minutes and another requires you to think for 2 years, the correct one is the one that requires less time.

Now in the software business you sometimes end up without a job if you solve the problem in minutes, since developers are paid by the time they work, not by the complexity of what they are fixing. If the problem is not complex enough you can always make it harder by writing the same code over and over using copy and paste.

In the case of physics, if you can do a quantum entanglement, you can build better computers and a better internet (always connected, even when outside the solar system). Can you imagine how much you could ask for a 100% uptime internet?

Orbits

Planets have orbits and electrons have orbits. Why don't we have a theory explaining why planets and electrons are alike?

The universe expands?

Isn't it getting too easy to put one idea over the other to explain what can't be explained? Maybe the problem is not that we have too many explanations but that we don't have enough data to disregard the wrong theories.

The big bang created matter in an instant, so it was a black hole. If nothing can escape a black hole, how did we escape the black hole?

The solution? The universe expanded faster than light. Does that mean that there is a fabric of the universe? Isn't that the original ether that was proved incorrect?

I doubt that it is incorrect. In fact, I think the ether exists, because the experiment meant to show that the speed of light is the same in all directions is wrong, IMHO.

But it really doesn't matter. Physicists have been rejecting the idea of the ether, and they keep coming up with the ether again and again, disguised in different ways. It is obvious, at least to me, that the ether exists. The problem is how to measure it, to know its real properties.

Gravitational Waves

So far gravitational waves have not been measured. I already mentioned that if gravitation traveled, it would travel no faster than light, and therefore it could not escape black holes, which is nonsense. All the attempts to measure gravitational waves will show that gravitation does not travel, or that we don't have equipment precise enough.

Wednesday, September 12, 2007

On Quantum Physics (1)

I'm no expert on quantum physics, so whatever I say is just my uninformed view on a very complex subject.

Once upon a time people thought the sun and the planets orbited around the earth. To match the experience, some objects did not move in circles around the earth, but moved in spirals around the earth.

Then a much simpler explanation was given: all objects orbit around the Sun, following Newtonian physics.

It turned out to explain everything easily, and the more complex models, for which mechanical apparatuses had been built, were dismantled as "crazy".

Fast forward to 2007. Quantum physics is this complex notion that a particle may be both a particle and a wave, and the particle might be in several places at the same time, but when you make it interact with some other particle, it is certainly somewhere definite, and then you can measure position or velocity, but not both at the same time; and there is the tunnel effect, which means a particle can travel through space in zero time, and quantum entanglement, which means that 2 particles may communicate across the whole universe in zero time.

The predictions of quantum physics are awesome and some of them have already been confirmed in experiments, for example quantum entanglement. Needless to say, this means you could build a much better internet using entanglement, and I have the intuition, although I haven't pictured exactly how, that you could build a faster computer using entanglement, since the parts inside chips communicate, so if you reduced all that communication, you would end up with much simpler and faster chips, which means one chip could replace 100 or 1,000 CPUs.

Also, the communication inside a computer could be done using entanglement, meaning fewer cables and a smaller computer.

You could also make a backup or connect to the internet without using cables. The advantage is that nothing would travel in the air.

Quantum Physics is Flawed

I read once that a guy read about Newton and his "theory" of gravitation, basically that F = GmM/d², and he concluded that Newton was wrong, because he was standing on the surface of the earth and the gravitational force was not infinite (the distance to the earth is zero). Little did this guy know about how to measure that "distance": if he measured from one center of gravity to the other center of gravity, the formula would hold.

In the same vein, I think quantum physics is flawed and maybe I don't know how to measure the distance.

Although quantum physics has made great advances and has been proved correct many times, I think the main idea is flawed. Proving a theory correct just by showing a prediction and saying "see? this was predicted by the theory" is not a real proof. Think of the complex machines used to demonstrate that the Sun revolved around the earth. That machine would have predicted where each planet should be, but it doesn't make the theory true (although it might be a very good approximation in case you really needed to know where the planet should be).

What's wrong with quantum physics?

It all started when the charge of an electron could not be measured just anywhere on a continuum. A drop of liquid either had 1 extra electron or it didn't, which meant that the electric charge was quantized. (Is that a word?)

Then the electron orbits were quantized. How do we know? Because given a certain energy, an electron could be knocked out of a material. See? We take concepts from the mechanical world and apply them to these small objects we can't see, and we conclude that the laws of physics still apply, and since each orbit has a certain energy, we know for sure that each atom has certain permitted orbits. Cool!

Then the electron orbits were given probabilities, but they were all over the place. This meant that although the permitted orbits were more probable, the non-permitted orbits also had electrons, just with very low probabilities... What? I hear you say.

We started by saying that electrons had a definite mass and a definite charge, and now we say that the electron might be all over the place.

Do you know how fast an electron can revolve around a nucleus?

Very fast, almost as fast as 10% of the speed of light. How do we know that? Because the electron is very light, but it doesn't fall into the nucleus, therefore there must be some force (the centrifugal force) keeping the electron from falling.

This means that when you send a particle to collide to an electron, the electron might revolve several times around the nucleus, and if you measure with which electron (which orbit) it interacted, you will see several orbits at once, even if the atom had only one electron. At least that makes sense to me.

The whole point about quantum physics is that there are no smaller particles (like we have light in the macro world) so that we can see objects without touching them. We instead have to find a bus by sending another bus and making them collide. If instead of particles we talk about buses, we will see that this is not state of the art.

Gravitons and the Speed of Light

Black holes do not let light escape because the escape velocity of a black hole is beyond the speed of light. Since you would need infinite energy to reach the speed of light, you simply can't escape a black hole.

But if gravity is transmitted through gravitons, and supposedly those particles travel at the speed of light, black holes would not be very strong, since the gravitons themselves could not escape. Therefore gravitons do not exist and gravity is transmitted instantly.

But what about light? What is light really?

Why does an electric current travel at the same speed?

Think of a storm and a lightning bolt. The individual electrons travel very slowly, but the current travels at the speed of light. The magic is that the particles in the air, atom by atom, receive an electron and give up an electron almost at the same time, so the lightning can travel at the speed of light.

Now think about light. Light travels at the same speed but in a straight line. Also, it is continuous rather than spasmodic, but that's because the source of energy is continuous. If you had a continuous current you would get the same effect.

What if there is a medium in space that allows light to travel? What if there are particles we can't see but that react to light (which is very contradictory, because if what I say is true, they would be the very reason we see anything at all)?

This would explain why light behaves as a particle (it travels in a straight line) and as a wave (it diffracts when it goes through a hole).

martes, 11 de septiembre de 2007

Office 2007 is Weird

It feels weird because the menus disappeared, the icons are in different places, etc.

But after a while it becomes easier to use.

Why?

Because things (menus and icons) are grouped by category, so your memory has fewer places to remember.

It is interesting to note that Microsoft has very good usability experts, and historically the market follows their advice, but two years later, when it is already too late.

lunes, 10 de septiembre de 2007

On setting prices (2)

I talked endlessly about setting prices here.

At the very end I mentioned that you needed to change the market price. That of course means that you can flood the market with a superior product at an inferior price, while at the same time avoiding imitators by having an inimitable product.

Think of La Pieta for $5.

No one can sincerely think that could work. The whole point of buying La Pieta would be to have the real one (or one very close to the real one) for a price not many can pay, say $1 million. In the end you are not buying a piece of hardware, you are trying to say "I'm better than you", even if you are just doing it by throwing money at the problem.

The main reason people buy expensive products is that they are expensive in the first place, so people think of them as "exclusive". At least that's what Daimler-Benz thought of their Mercedes in the 80s, so the price went up until only a very few could buy one, which made more people want it, and therefore more people bought it.

Asking for more

I'm going to defend the strategy of asking for more of the pie. I mean, what Daimler-Benz did was understand their customers: if you buy our cars, not only do you get more prestige, you are also making an investment, because our cars get more expensive every year.

Daimler-Benz's strategy worked fine for them, but it wouldn't have worked for the Fiat 500, would it?

Why not?

I guess the mentality of the customer is different. He just needs the car to perform a function, and a higher price would make the car prone to being replaced by a larger car at a similar price. In other words, there are replacements for a Fiat 500, but there are not many replacements for a Mercedes-Benz.

So thinking long and hard, I bet Daimler-Benz realized that every year cars of other makes try to imitate the more expensive cars. Eventually the Mercedes would have many replacements that were cheaper but looked similar to the real one, so the brand would have less value, and eventually there would be no difference.

So the perception of the Mercedes being better worked to their advantage, because they knew they could charge more and people would still buy from them. That financial advantage could be used to improve the car even more, and therefore they could charge more every year. It would get harder and harder for other makes to imitate the leader.

Eventually Daimler-Benz bought Chrysler and the company became known as DaimlerChrysler.

So having a financial advantage can be used to buy the competition.

Minimum price

You can use another strategy: the Japanese strategy. Bid low. Ask for less money for a cheap product and people will buy. Actually, lots of people in underdeveloped countries do not have cars, so the potential market for a very cheap car is enormous.

The strategy worked so well that now Toyota has the Lexus, which rivals the Mercedes. As far as I know the Lexus sells even better than the Mercedes, but if you ask me, I'm more of a traditional guy, so I prefer the Mercedes.

So selling at the very low end gives an even bigger advantage in market penetration.

This is the same strategy used by Microsoft. Their products are at the very cheap end of the spectrum, although lately they are rather expensive compared with their prices of 10 or 20 years ago.

But the main problem with low prices is low margins. Some companies fail to make a profit because they underestimate their costs. Others overestimate them and let their competition grab the whole market.

So the real problem is how to determine the real cost. And then, of course, how to make a profit while not losing the market to your competition.

Fiat 500

The Fiat 500 is the little brother of the Fiat 600. Both cars were older than the Japanese cars, and they couldn't compete with them.

The reason? The Japanese cars were in another league.

If you look at the Fiat 500, it looks like a VW Bug slimmed down until it is small enough to fit in your living room or sit on your dinner table. The problem is that its looks say "I'm from the 1950s", while the Japanese cars were modelled after new cars and even looked newer than their contemporaries. So the Japanese cars were saying "I'm the future", while the Fiats said "I'm the past".

The interesting thing is that the cost of building them should have been about the same, but the prices were very different.

Thought experiment

Let us suppose you have a store and sell computers. You have several clerks who earn a commission on every sale they make. Let us suppose it is 10%. That is much more than the real 2%, but it makes the numbers easier to calculate.

You don't really know what your customers will want, so you stock inexpensive computers, from $100 up to $1,000.

Your clerks will prefer to "upsell" your customers to the most expensive computers instead of selling the bargain $100 ones, because on the top of the line they make $100 per computer instead of the $10 they make on the cheap ones.

How about you? Do you really care? You could happily go along with them selling anything, but you also make more on the more expensive ones, so you prefer that they upsell as much as they can.

Now a new kind of computer comes along and your clerks refuse to sell it. You ask them why. They say the new computer costs $10 and they only make $1 on each. If you calculate how much you make selling this new machine, you realize you won't be able to pay the rent on your store if you sell inexpensive stuff like that.
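Here is the same arithmetic as a toy sketch (the 50% gross margin and the class names are made up for illustration; only the 10% commission comes from the thought experiment above):

// Toy model of the store's and the clerks' incentives.
public class StoreIncentives {
    static final double COMMISSION = 0.10;   // clerk's cut of the sale price
    static final double GROSS_MARGIN = 0.50; // assumed store margin, invented number

    static void report(double price) {
        double clerkCommission = price * COMMISSION;
        double storeGross = price * GROSS_MARGIN - clerkCommission;
        System.out.printf("$%,.0f computer -> clerk earns $%,.2f, store keeps $%,.2f%n",
                price, clerkCommission, storeGross);
    }

    public static void main(String[] args) {
        report(10);    // the new cheap machine: $1 for the clerk
        report(100);   // the bargain machine: $10 for the clerk
        report(1000);  // top of the line: $100 for the clerk
    }
}

With numbers like these it is obvious why nobody in the store wants to touch the $10 machine.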

Therefore you have 2 options:

1. Selling it at a premium price.
2. Do not sell it.

Option 2 is the correct one. Selling it at a higher price means stocking a product you won't be able to sell (at least not until every other product is sold, which is unlikely to happen). SO IN THE END YOU ARE GOING TO STICK TO OPTION NUMBER 2: do not stock it and do not offer it.

That happens all the time. Try to buy an Amiga, an Atari, a Spectrum or a PC XT. They are considered garbage by computer stores, so they do not offer them.

Once upon a time a local computer store tried to sell used computers by letting you offer your used computer on the second floor of their store. Needless to say, that didn't work, although I did sell my old computer there. It was a fantastic service (at least for me), but it didn't work.

The main reason it didn't work was that they charged 10% of the price, and the retail floor space my computer occupied during that week was worth a lot more than the exiguous 10% they charged the buyer (not me).

The Lesson

So, the real lesson: Do not charge less than the value you need for continuous operation of your store.

When it comes to IT professionals: does your employer pay for your courses and your certifications? If he doesn't, quit immediately!

Ok, maybe you should get certified first and then quit. Or maybe you should find a better job and then quit. Or you should study for the certification on company time using http://www.javablackbelt.com/.

Who knows, really?

When it comes to retaliation, the most effective attack is to take the customer away from them, because the customer feeds them, and without the customer they starve.

Some employees prefer not to do their jobs. I find that tactic too obvious and counterproductive in the long run, although in the short term it can be very productive if your boss thinks he really needs your effort (paid overtime is not so rare). The only problem is that you become overworked and make more mistakes per second, which means you are in a vicious cycle... they see the results and ask for more of your time, which means you end up being paid more for working longer hours, and so on...

Therefore being incompetent is not such a bad move if you come up with something that works in the end.

For something to be appreciated, you have to go through a period of hardship first. That also applies to your clients and your employers.

GUI patterns

There are standard GUI patterns. For example: do not overlap windows. Why? Because information gets hidden (assuming the information is in a window because it is useful and meaningful in the first place). Yet both Windows and the Macintosh break this rule because screens are so small.

Maybe in 10 years screens will be 48 inches, but right now, having overlapping windows is still useful.

Another GUI pattern, a corollary of the first one, is: do not scroll. This is a very important pattern on the web, and most websites seem to follow it, except for the point of entry to the internet: Google. So I guess they think it is not that important. I mean, if you are searching for a document it will probably be in the first 3 links, so you won't scroll anyway.

A much more important GUI pattern, in my humble opinion, is direct manipulation, that is, being able to modify (or manipulate) directly what you are seeing on the screen (provided you have the right privileges). This reminds me of 2001, when I was arguing with a bunch of XPers about XP and how you couldn't keep track of bugs using a Wiki (because I assumed that since the Wiki was so open to direct manipulation, you couldn't control what people put in it). I also assumed Wikis did not store any history, so you could not rescue an old page, or fire an alarm every time a page was modified. Needless to say, I was wrong! Wrong! WRONG!

First of all, adding permissions to a page (or to any resource) is just a matter of adding a proxy. Adding a proxy in Java is like 3 minutes of work if you are lazy AND dumb. If you are not lazy but you are dumb, it is like 2 minutes. If you are lazy but not dumb, it is like 30 seconds of configuration. And if you are neither lazy nor dumb, it has already been done.
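For the record, here is roughly what those "3 minutes of work" look like with a plain JDK dynamic proxy (the Page interface and the canEdit flag are invented just for this sketch):

import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

// Hypothetical resource interface, just for illustration.
interface Page {
    String read();
    void write(String content);
}

public class PermissionProxy {
    // Wraps any Page so that write() is rejected unless the user may edit.
    static Page protect(final Page target, final boolean canEdit) {
        InvocationHandler handler = new InvocationHandler() {
            public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
                if (method.getName().equals("write") && !canEdit) {
                    throw new SecurityException("You are not allowed to edit this page");
                }
                return method.invoke(target, args);
            }
        };
        return (Page) Proxy.newProxyInstance(
                Page.class.getClassLoader(), new Class<?>[] { Page.class }, handler);
    }
}

Then Page page = PermissionProxy.protect(realPage, userCanEdit); and every write goes through the check, while read passes straight through.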

So direct manipulation is a very important pattern, especially when it goes hand in hand with WYSIWYG (What You See Is What You Get).

Most sites can be implemented as Wikis and I see no need to hire expensive web developers.

Most projects I've worked on have required half the people to work on the presentation layer, and the presentation layer is never finished quite right. The customer is never satisfied with how the site looks, but he doesn't care at all about SOA, ESB, EJB, Spring, Hibernate, SQL, etc. All your technical details are just meaningless words to him. All he cares about is how the site looks and what the site does when you press a button or do something else with the user interface.

So the user interface is an open problem to him, while we IT professionals couldn't care less. We concentrate on the hard stuff, that is, the internals. And I guess that is correct from our point of view, or the screen would just be a mockup.

But guess what. After 15 years of dealing with the internals, they become repetitive and simple. Eventually there are no secrets to discover, no problems to solve, and we begin to realize that the only reason a project can be late is that we didn't understand what the customer wanted. So we need to concentrate on the user interface. At least from my point of view, once you get the user interface right, the rest is a piece of cake (I'm not saying it will be done in a minute, an hour, or a day; I'm just saying it is already well defined), just in case...

On Abstraction

I'm going to defend abstractions and I'm going to explain a way to build abstractions that seems to be counterintuitive to some.

Programming is building abstractions. If you are not building abstractions, you are just copying and pasting code. It is very nice to be paid for something a machine could do; I mean, most managers do not understand the abstractions you could create, and therefore they think copying and pasting code is fine.

And no, abstractions can't be created automatically by a machine, at least not that I know of. Maybe using pattern matching they could. See for example condenser.

What are abstractions?

To abstract is to rescue some elements from a set, while discarding the other elements as irrelevant or technical details.

Hopefully you have rescued the important or essential elements, that is the "what" as opposed to the "how". In other words, the intent rather than the mechanism.

Why is intent more important than the mechanism?

Because the mechanism can change, but the intent is usually a lot more stable.

How to build abstractions?

One way to build abstractions is to gather the universe into a concept (for example "universe") and then divide that concept according to characteristics. You can choose the characteristics you want to divide the universe by, for example big and small, happy and unhappy, fast and slow. As you can see there is no happyometer, so there is a blurry line dividing the two sets.

But you could also be precise and say "everything that takes more than a second is slow" and thereby divide all the services you have implemented into fast and slow. But that one second used as a dividing line is totally arbitrary. You could use 2 seconds because that is usually the attention span of most computer users, but I bet that number will be down to 1 second by 2017. How do I know this? Because 10 years ago 3 seconds were acceptable, and some even required user interfaces to wait for at least 3 seconds or the user "would get too excited". That's totally contrary to what you would expect in the US, where users expect to get excited by products, but please bear in mind that I live in Chile, and here people have very low expectations about just about anything (so users getting excited for the first time in their lives is something unexpected that could possibly have deteriorating consequences on society ;-).
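A trivial sketch of that kind of arbitrary dividing line (the one-second threshold is just the example from the paragraph above; the names are mine):

// A toy "abstraction by dividing": anything over the threshold is SLOW.
// The 1000 ms value is the arbitrary line from the text, nothing more.
enum Speed { FAST, SLOW }

class SpeedClassifier {
    static final long THRESHOLD_MILLIS = 1000;

    static Speed classify(long responseMillis) {
        return responseMillis > THRESHOLD_MILLIS ? Speed.SLOW : Speed.FAST;
    }
}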

Another way to build abstractions is to forget about the universe, look at concrete things like a car, then look at a bicycle, and come up with a totally new abstraction: transportation.
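In code that bottom-up move looks something like this (the names are invented for the example; the point is that the interface captures the intent, moving somewhere, not the mechanism):

// The abstraction: intent (move to a destination), not mechanism.
interface Transportation {
    void moveTo(String destination);
}

// Two concrete things we started from; each keeps its own mechanism.
class Car implements Transportation {
    public void moveTo(String destination) {
        System.out.println("Driving to " + destination);
    }
}

class Bicycle implements Transportation {
    public void moveTo(String destination) {
        System.out.println("Pedalling to " + destination);
    }
}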

Useful Abstractions

Are abstractions useful? Ask Larry Wall, the creator of Perl.

Abstractions are what make computers possible. Without abstractions it would be impossible to create a program, impossible to display characters on the screen (computers just have numbers inside), and all numbers would be binary.

Ok, but there is a saying "In theory, practice and theory are the same. In practice, they are different". I don't know where these people studied the theory of knowledge, but they should ask for their money back.

First, a theory is just something you think, so if you think x and y are the same (in theory) then they are the same, and if you think they are different, they are different (again, in theory). So a self-contradictory theory in which x and y are both the same and different is just a contradiction, and stating it doesn't make it true. It is not even smart. It is just an irrational belief.

What abstractions are useful then?

First, for an abstraction to be useful:

1. It must not be self contradictory.
2. It must be tested in a laboratory environment and be proved correct every time it is tested.

An example: Galileo predicted that without air a feather and a stone would fall at the same speed. There is a video recorded on the moon showing that experiment being performed by the astronauts on one of their landings (or should I say moondings?). Guess what. Galileo was correct.

Another way to test a theory is to derive a logical conclusion from it and prove that correct. For example, the main reason astronauts got to the moon, centuries after Galileo predicted the existence of inertia, was that physicists believed in inertia and could keep developing physics until Galileo was proved correct.

But the interesting thing was that Galileo was proved correct even before that, by Newton. Newton built a theory based...

sábado, 8 de septiembre de 2007

Six Sigma

Before talking about 6 Sigma, I want to make a point about business theory: If something can be done by anyone and everyone, you don't want to do that.

The theory is that if the market has too many producers, the market is perfect and therefore the market price does not allow anyone to make a profit. It is worse in practice, because if you start a bakery and anyone can begin to sell bread on the street, it means you won't be able to sell yours, and the commodities needed to make bread will get really expensive because of all the demand.

After a while companies will begin to go bankrupt, but in the meantime they will try to gain market share by lowering their prices below production cost in order to make the other companies file for Chapter 11.

Until they run out of money.

Therefore, you should try to enter markets with a very high barrier to entry. That is what explains why people study: they lose some earning years while studying, but later in life they can perform jobs that other people can't, meaning they earn higher wages, and sometimes their wages are so high that they are paid by the hour of effective work.

Six Sigma

Six Sigma assumes that you want to improve the quality of your products by reducing the "variation", assuming that you produce repetitive stuff like fridges, cars, etc.

The variation is measured using the standard deviation, called sigma, of a Gaussian bell curve, and sigma...
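Since that thought trails off, here is a minimal sketch of what "measuring the variation" usually means in practice (the sample data and the spec limit are invented; this is just the textbook sample standard deviation, not any official Six Sigma tooling):

// Minimal sketch: sample mean and standard deviation (sigma) of some
// measurements, and how many sigmas away a spec limit sits from the mean.
// The data and the 10.5 limit are made up for illustration.
public class SigmaSketch {
    public static void main(String[] args) {
        double[] measurements = { 10.1, 9.9, 10.2, 10.0, 9.8, 10.1, 10.0 };
        double upperSpecLimit = 10.5;

        double mean = 0;
        for (double m : measurements) mean += m;
        mean /= measurements.length;

        double sumSq = 0;
        for (double m : measurements) sumSq += (m - mean) * (m - mean);
        double sigma = Math.sqrt(sumSq / (measurements.length - 1));

        double sigmaLevel = (upperSpecLimit - mean) / sigma;
        System.out.printf("mean=%.3f sigma=%.3f spec limit is %.1f sigmas away%n",
                mean, sigma, sigmaLevel);
    }
}

The "six" in Six Sigma is the goal of having the spec limits at least six of those sigmas away from the mean.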

The Problem with Six Sigma

The problem with Six Sigma is that it assumes you are always doing the same thing, but in software, if you are doing that, you are a moron.

Writing software is similar to theory building.

Not only that, but companies that try Six Sigma see no improvement and fail miserably in the market.

Why is that?

They are focusing on the wrong stuff.

Toyota

Toyota learned what was needed even before Six Sigma was invented. While Six Sigma lets failures happen and then measures them and puts them in a chart, Toyota, and lean management in general, stops the production line and finds out why the problem happened in the first place, in order to avoid such problems.

This eliminates waste and, more importantly, keeps people from hiding their mistakes in order to pass some meaningless objective defined in a chart. The key factor of lean management is that you, the worker on the assembly line, can stop the system!!!

It empowers you because now you are in control.

That's why I think building software is like building cars, although to be more precise it is more like building a prototype, or several evolutionary prototypes, until you get it right.

viernes, 7 de septiembre de 2007

10 Ways To Insure Project Failure

Take a look at the excellent 10 Ways To Insure Project Failure

1. Set Unrealistic Goals
2. Staff Up Quickly
3. The More Documentation The Better
4. You Can Always Make Up a Schedule Slip Later in the Project
5. Relax Your Standards To Shorten the Schedule
6. Micromanage
7. Call Daily Project Status Meetings
8. Threaten Team Members to Motivate Them
9. Bring In More Programmers
10. Set Your Plan in Stone

Amazingly well written, and these mistakes are extraordinarily common; I lost count of the projects I've seen making several of them at once. Some even managed to make all of them at once!

The most terrible project had 150 analysts and outsourced the design and the coding to Argentina.

Back to the doomed project:

  • Groups of up to 5 analysts drawing UML instead of writing use cases. That should clearly be a sign of trouble.
  • The screen mockups and the database model are not embedded in the use cases.
  • The coding and the design are not iterative, which means designers wait for all the analysis to be finished before sending the design back.
  • Standards? Who needs that?

Another doomed project:

  • Daily status meetings taking a full 5 hours for just 11 developers. That leaves you 4 hours to do your work, and the next day you get to explain that you didn't have the time because of these meetings.
  • Crappy computers for everyone. Tomcat takes 15 minutes to start up.
  • New hires don't go through a screening process that includes a test of their actual Java skills. People who can't reverse a String (see the sketch after this list) are hired.
  • No source control.
  • No continuous build.
  • No machine to test the system.
  • Unclear goals.
  • Having 2 bosses with different projects asking you to work on different stuff, dividing your day into 5-minute increments for different projects.
  • Fixing stuff from other projects you had never even heard of, at least 3 different ones each day.
  • One of the bosses goes on vacation, so you think you are finally going to be productive, but then his 2 bosses come asking for stuff directly from you, so now you have 3 bosses at once (courtesy of matrix management).
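For the record, the string-reversal screening question mentioned above is as basic as it sounds; a plain-Java answer looks something like this (one of several acceptable ways to do it):

// The classic screening question: reverse a String by hand.
public class ReverseString {
    static String reverse(String s) {
        char[] chars = s.toCharArray();
        for (int i = 0, j = chars.length - 1; i < j; i++, j--) {
            char tmp = chars[i];
            chars[i] = chars[j];
            chars[j] = tmp;
        }
        return new String(chars);
    }

    public static void main(String[] args) {
        System.out.println(reverse("doomed project")); // prints "tcejorp demood"
    }
}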

Another doomed project:

  • The company is CMMI certified, which means you have to write a lot of documents before making any change to the code. Some people take a look at the documents and correct only their looks, since they don't understand what the project is about.
  • The overall quality of the project is actually much worse than before it was certified, since the company is concentrating on the wrong goals, like passing the next certification and having pretty documents.
  • The manager is changed because the project is late and the new manager begins to replace everyone under his supervision.
  • Some modules of the project are working, but the new manager requires that all modules be rewritten from scratch.

C++ is crap

According to Linus Torvalds, C++ is crap.

He says "C++ is a horrible language. It's made more horrible by the fact that a lot of substandard programmers use it, to the point where it's much much easier to generate total and utter crap with it." He maybe has seen a tendency of C++ programmers to write sloppy code. I have seen that in many languages, I wouldn't say C++ programmers are lousier than other programmers, but a C++ programmer with 3 years of experience writes code equivalent to a Java programmer with 6 months of experience.

Since most people "retire" from programming between 3 to 9 years since they start programming and move into management, it means that C++ programmers never surpass the ability of a Java coder who has been programming 1.5 years.

He says "In other words, the only way to do good, efficient, and system-level and portable C++ ends up to limit yourself to all the things that are basically available in C." It is very interesting that this is exactly what happened in a project I worked in 1999 at Microsoft: No one was allowed to use C++ features like user defined operators, templates, etc. We were programming in C, but using a C++ compiler. We even didn't use exceptions.

The result was not easy to understand, to stabilize and didn't perform very well, but I think there were other reasons for this.

It was also very interesting that I managed to add "<<" operators to reduce clutter in the logging facility. At first they didn't like it because it was dog slow. Then I used references (&) and it became faster than the original code. My boss, who originally fought the idea, eventually congratulated me on this one. To him it was totally unexpected that C++ could do stuff like this, and in my opinion that's pretty lame, because he had been working at Microsoft for 14 years by then.

I mean, don't they leave time for developers to learn new tricks? I only worked there for a year and realized that not only do you not have time to learn, you don't have time for anything else. You are not encouraged to learn, and I find that amazing in a company that is supposed to hire only knowledge workers.

How do you think I got that job in the first place? I studied: I really read a lot of C++ books, design patterns and even C++ papers. The C++ papers were the most helpful here, because they show the result of some research, which may or may not have had good results, and may or may not have been integrated into the compiler you are using. So they make you think of alternatives. This helps a lot in the recruiting process, and I would say the recruiting process is a piece of cake compared with reading C++ papers and trying to implement those ideas.

Where did I find the time to do that? First, you need to solve each problem once and for all, so that you have time to do your own stuff. Next, you need to read a lot. Find the material and read it. Then classify the ideas and implement some of them. Develop prototypes. Rewrite programs using these new techniques so that with fewer lines of code you can achieve the same functionality.

Next, create abstraction layers that hide the mechanisms. Let only intent be the interface.

Being an effective C++ programmer takes time. Before working at Microsoft I developed a set of about 30 rules that you needed to learn before touching the code. It simply reduced the number of C++ developers who could work in our group to 3, then 2, then just me. Not very nice. Then I learned about Smalltalk and how all my ideas had already been discovered and fixed in the language itself. And the humbling thought was that this happened in 1980, 12 years before I even started to figure out these rules in 1992...

The lesson here is that I think C++ really is crap, but you can still manage to produce simple, compact and fast code if you follow 30+ simple rules. I'm not giving them to you here, because I can manage to make some money from them from time to time, and there is a tendency in me not to let go of information that was very hard to conquer and that I consider key for every C++ project, but which, since I wrote it down and never looked at it again, I have forgotten. Not completely, but I have forgotten. In my defense I can say that I haven't programmed in C++ in 7 years and I feel very good about it, but I'm sure I would rediscover the same information if I programmed in C++ again (which I expect not to happen).

On the other hand, you can use Smalltalk or Java and you will get those advantages for free in the language itself.

The material

Linus says "C++ leads to really really bad design choices. You invariably start using the "nice" library features of the language like STL and Boost and other total and utter crap, that may "help" you program, but causes:
- infinite amounts of pain when they don't work (and anybody who tells me that STL and especially Boost are stable and portable is just so full of BS that it's not even funny)
- inefficient abstracted programming models where two years down the road you notice that some abstraction wasn't very efficient, but now all your code depends on all the nice object models around it, and you cannot fix it without rewriting your app."

The problem, as explained very concisely by Linus, is that you can write code very quickly, but when it doesn't compile, or when it doesn't do what it is expected to do, you spend hours if not days trying to find out what went wrong. It is a big waste of time and I've seen projects fail because of it.

He also mentions that the code is coupled. No shit! Of course it is coupled, because C and C++ assume that you want to #include your code. Have you ever seen those #ifdef ... #endif guards that keep a .h from being included twice?

Well, that is a language mistake being fixed by the programmer. Now picture a .h with a class. Inside a class declaration you can put a struct (which is a class) and some inline methods.

Now change an inline method. Oops! You need to recompile a lot of stuff. And compiling C++ takes 10 times longer than compiling equivalent Java code. Oops!

Make a mistake with pointers here and you will see your program blow up somewhere else. Not in the same place every time, mind you. Oops!

Eventually your code and your project become unmanageable.

jueves, 6 de septiembre de 2007

Fast Forward MBA in Project Management in just 24 hours

Why do people study MBAs?

I mean, really, do you need a master's degree in how to administer a business? If you sell hamburgers, do you really need a guy who knows economics, finance, labor law, etc.?

In my book, selling hamburgers is pretty simple, but you still need an MBA to figure out how to beat the local competition, so MBAs will thrive.

Do you need an MBA in project management?

I hope we can both agree that project management, especially in the high-tech sector, is pretty confusing, compared to flipping burgers, of course.

Read a resume or the average job posting on monster.com and you will see SOA, ESB, XML, XSLT, CSS, HTML, JSP, Java, JavaScript, J2EE, Entity Beans, Session Beans, HTTP, JNDI, Servlets, Struts, Ajax, DWR, Hibernate, Spring, Axis, the list just goes on and on.

Most people just put acronyms on their resumes, without even understanding why they are needed. The same goes for the recruiting specialists. Even the people making the choices are not aware of why they need them.

It is just something mentioned by vendor XXX. Vendors do have an agenda when recommending technologies: separate customers from their money. And there is no better way to do that than to recommend, for a perceived problem, some solution they don't yet have. The solution presented by the vendor must be bought and must be "integrated", which means hiring a developer who seems to know the technology (or so his resume said) and slapping him in the face with the product until he delivers a solution. Have you ever wondered why your boss says 3 weeks and you look at him in disbelief?

A vendor recommended that. Vendors are in a position that attracts many customers, even 100 a day, asking for recommendations. What can they do but schedule a meeting and let them know they have a solution for their troubles?

Have you ever wondered why vendors buy other companies? Because they are being asked for solutions they do not have, and they know how much people would pay for them, so they know exactly how much they will make if they buy them. Therefore starting a company and getting bought by BEA, IBM or Oracle is a sure way to get rich quick. But not without a brain, and not without a lot of effort.

So in the end there will be lots of project managers with MBAs trying to integrate vendor solutions that cost a fortune. Vendors only sell what is already in the market; for example, BEA now sells a lot, but at the beginning all companies start small.

And that takes me to the most important recipe on project management: Start small.

Few people, few requirements, short iteration. Do prototypes. Integrate them. Succeeded? If the answer is yes, grow a little bit. If not, shrink the prototypes until they are manageable.

See?

Manageable. That is the whole point of project management.

The results are what matter. I mean, if I say "jump off a building" you wouldn't jump, because you can imagine the results. My point now is: what is the result of reducing the size of the experiments?

Successful experiments!

Think about this. If you keep reducing the size of the experiments, eventually the experiments are about putting an extra comma in your code. Hardly difficult, therefore you must succeed at putting in the comma.

And now you imagine your whole project will be late because you are handling irrelevant stuff. Not so. Once you get your commas right, you can take on more difficult endeavors. Knowledge has an interesting property: the more you know, the easier it is to get more.

Therefore you begin to increase the size of your experiments and you begin to tackle more difficult problems every time. Until you succeed.

It is amazing because it is exponential. One day you have half of your project ready and the next day you have your whole project ready.

What is the idea behind this?

The idea is simple: industrialization. Industrialization is just division of labor. And division of labor means that a single product goes through the hands of as many people as possible, each of them doing a particular task to the item.

Fast forward to the 1980s and every worker is able to stop the production line if he finds a defect. The defect is traced back to its source in order to reduce waste. This idea was courtesy of Toyota.

People do not have long lists of to-do items, but instead have a long list of "done" items. Whenever a worker has depleted his list of to-do items, he pulls a small batch of "done" items from the previous worker (remember, this is a production line) and continues working. This method is called Lean Production or Lean Manufacturing and is credited with being more productive than traditional factories.

XP, Scrum and all the agilists base their ideas on lean manufacturing. The reasoning is that if the Japanese could improve their manufacturing by applying these ideas, the same could be done in the software sector.

The main problem is that workers outside Japan will goof off. I mean, in Japan you can't work more than your coworkers, nor can you work less; everyone has to work the same, because in their culture you can't succeed unless everyone succeeds. In the rest of the world it seems to be the opposite, so if someone goofs off, everyone goofs off. Therefore management has to give incentives to the ones who do their work and punish the ones who are slacking.

But how can you successfully manage people if the tasks are so small? I mean, you would spend a whole day supervising 2 or 3 developers while the rest were effectively slacking.

So instead of going over everyone every day, compare the amount of work done every month. For example, count the number of lines checked in, but subtract the number of lines copied and pasted. Count the number of issues resolved, but subtract the number of bugs introduced.
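A minimal sketch of that kind of monthly scorecard (the field names and the plain subtraction are my own invention; the post doesn't define an exact formula):

// Toy monthly scorecard along the lines described above.
// The fields and the plain subtractions are assumptions: the text only
// says "count X but subtract Y", not how to weight anything.
class MonthlyScore {
    final String developer;
    final int linesCheckedIn;
    final int linesCopyPasted;
    final int issuesResolved;
    final int bugsIntroduced;

    MonthlyScore(String developer, int linesCheckedIn, int linesCopyPasted,
                 int issuesResolved, int bugsIntroduced) {
        this.developer = developer;
        this.linesCheckedIn = linesCheckedIn;
        this.linesCopyPasted = linesCopyPasted;
        this.issuesResolved = issuesResolved;
        this.bugsIntroduced = bugsIntroduced;
    }

    int netLines()  { return linesCheckedIn - linesCopyPasted; }
    int netIssues() { return issuesResolved - bugsIntroduced; }
}

Compare netLines() and netIssues() month by month and per developer, instead of hovering over everyone every day.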

At the end of the month you will have a clear picture of what is going on and how each person is doing.