Automation: Most people dislike it. Even computer scientists.
Why is that?
Because you can't automate the smart stuff, only the stupid stuff. And people like to concentrate on the smart stuff, so they avoid the stupid stuff because it is boring, and nobody ever automates it.
Successful companies automate the boring stuff so that no one has to do it. Then people have more time for the hard stuff (by definition non-boring), and since it is challenging, they can decompose the hard stuff into less complex stuff. They can repeat this process until most of the hard stuff has been decomposed into trivial (and therefore boring) stuff. So companies that detect deficiencies in other companies (for example, that they are inefficient) can compete in the same markets and beat them (as long as clients have no power).
Suppliers in markets where the customer has too much power can't be efficient, since any efficiency is absorbed by the customer, making it impossible for them to reap the rewards of their own efficiency.
In the long run, market economics dictates that people who automate more are more efficient, but if all the efficiency goes into the pocket of the client, it hurts the suppliers. So suppliers end up either without a job or without a contract, and go do something else. At the same time, clients that benefit from this efficiency could probably improve their market share. But reality is different. Most companies spend as little as 1% or 2% on IT, not because they try to spend less, but simply because the markets they are in are so profitable.
IT certainly isn't profitable, because there is no barrier to entry. Almost anyone can study Visual Basic or Python and be a developer in 6 months. Which means that to be a contractor in this market you have to be crazy, or not like yourself, or be incompetent, or all of the above.
Thursday, October 25, 2007
Sunday, September 23, 2007
Impossible to create products in JEE
It is impossible to create products in JEE.
First, JEE is not about creating off-the-shelf products, but about delivering integration solutions (hence the "Enterprise" in its name).
Second, JEE is not portable across vendors. If you build for WebLogic, you can't deploy on WebSphere, and vice versa.
Third, with all the mergers and acquisitions, there will always be plenty of room for JEE developers, since all those disparate proprietary systems will need integration. And even between JEE vendors, integration is not something to take for granted; it is usually harder because of all the APIs and descriptors involved.
Spring is like fresh air in this respect. It is far easier to integrate Spring services than to integrate JEE vendors. Spring and Hibernate are so simple that JEE 5 (including EJB 3) was modelled after them.
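To make that concrete, here is a minimal sketch of why Spring services integrate so easily (the service and all its names are hypothetical, invented for this example): the service is a plain Java object with an ordinary setter, so any container, or even a plain main method, can wire it; no vendor APIs, no deployment descriptors.

    // A plain Java service: no JEE interfaces, no vendor descriptors.
    public class InvoiceService {
        private InvoiceStore store;

        // Ordinary setter injection: this is all a container like Spring
        // needs in order to wire the dependency.
        public void setStore(InvoiceStore store) { this.store = store; }

        public void bill(String customerId, double amount) {
            store.save(customerId, amount);
        }
    }

    // The collaborator is just an interface, so any implementation
    // (JDBC, Hibernate, an in-memory stub for tests) can be plugged in.
    interface InvoiceStore {
        void save(String customerId, double amount);
    }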
Fourth, web services are a mess. There are many standards and they do not interoperate. They introduce more trouble than they solve, but that is good. Companies and developers will move away from web services, and therefore only the developers that really understand the technology will keep using it, until it becomes simpler for other people to use. Usually this means they will find and embrace the right abstractions, and then the code monkeys will be able to leverage them from there.
There is a very strong need in the market for an abstraction layer that will permit different application servers to work together. But the right abstraction must first be found. It is not that developers are not trying to find it (granted: 80% of developers are simply drones and couldn't possibly think of creating abstractions themselves, although they are using abstractions all the time), nor is it that they can't come up with a good abstraction because they lack a brain. Maybe they lack the ability to decide what a good abstraction is, in other words, the criteria for deciding whether an abstraction is OK or not. The real problem, as in most software development efforts, is finding the correct requirements.
Usually developers settle for the wrong abstraction and for incorrect requirements. It is as if they knew they had a hammer, so everything looked like a nail. The problem with this reasoning is that it leads to wasted time and money. Wasted money is no problem, because if you lose some money you can always recover it; but if you lose time, you can't get it back.
The Right Requirements for a JEE abstraction layer
In order to understand how to create an abstraction layer for X, you first need to understand X. No secrets here. But you don't need to understand X's internal mechanisms so much as how X relates to its environment. In other words, you need to model the interactions of X with its surrounding environment.
Most frameworks already have an abstraction that they present to you. Like it or not, they present an abstraction. Sometimes they try to force an abstraction on you, the programmer. Other times they try not to enforce any abstraction at all, offering completely transparent code where you can modify everything underneath and have direct access to the internals.
Most of the time the abstraction is lacking; other times there simply is no abstraction and you can modify everything.
And sometimes, as with TCP/IP, the abstraction is great and you can do wonders with it. Even if you don't like TCP because it is too high level, you can use UDP, which is IP with a very thin layer on top.
In other words, you don't need to worry about obscuring the underlying mechanisms, because you can always go back to UDP mode. But guess what? TCP is good enough as an abstraction layer 99% of the time.
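Here is a minimal sketch of how the two layers look from Java (the host and ports are placeholders, and error handling is omitted): TCP hands you an ordered, reliable byte stream, while UDP exposes the raw datagrams one level down.

    import java.io.OutputStream;
    import java.net.DatagramPacket;
    import java.net.DatagramSocket;
    import java.net.InetAddress;
    import java.net.Socket;

    public class TcpVsUdp {
        public static void main(String[] args) throws Exception {
            // TCP: the high-level abstraction. A connected, reliable, ordered
            // byte stream; retransmission and sequencing are handled for you.
            try (Socket tcp = new Socket("example.com", 80)) {
                OutputStream out = tcp.getOutputStream();
                out.write("GET / HTTP/1.0\r\n\r\n".getBytes());
                out.flush();
            }

            // UDP: one thin layer above IP. You hand over raw datagrams and
            // get no delivery or ordering guarantees; that is the price of
            // dropping down a level.
            try (DatagramSocket udp = new DatagramSocket()) {
                byte[] payload = "ping".getBytes();
                InetAddress addr = InetAddress.getByName("example.com");
                udp.send(new DatagramPacket(payload, payload.length, addr, 9));
            }
        }
    }

The same program shows why TCP is usually good enough: everything the UDP path leaves to you (acknowledgements, retries, ordering) is already solved one layer up.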
Tuesday, September 18, 2007
Sun always makes the wrong moves
It seems that Sun always makes the wrong moves.
In 1996 I used SunOS, and Sun workstations had 4 CPUs per machine, considerably faster than the Intel CPUs of the time. The OS ran on one of the chips and the other 3 were used for user space.
It was really smooth. But Sun had a different idea named Solaris (which must mean Sunny in Latin, I guess): the operating system was going to be distributed, and therefore it would be faster.
That goes against my intuition, against separation of concerns, against the division of labor, and so on. Solaris was a very bad idea from the beginning.
Fast forward to 2007
Now Sun delivers an 8-core CPU with embedded Ethernet... An 8-core chip would be something amazing, if it weren't for Azul's 768 cores in one machine.
But having Ethernet embedded in the CPU? Maybe I'm missing something. Is the CPU going to communicate with main memory using Ethernet? Maybe it is, since disks are supposed to get faster by using iSCSI (SCSI over IP), so putting Ethernet inside the CPU is not crazy after all.
I thought the primary driver for iSCSI was to have disks outside any particular computer and thereby reduce costs. Of course you could also use iSCSI internally, but I always thought the CPU would never speak IP. If it now speaks Ethernet in hardware, IP is just a thin layer over Ethernet, so 90% of the job is done; the other 10% can trivially be done in software.
But I think this tendency to move software into hardware will only accelerate. I don't know if people realize how fast new protocols are invented and crammed into chips. Today you can run an emulator in an applet, so a full computer can run in your browser.
If IP can be crammed into a chip, Java can be crammed into a chip, and therefore computers with 1024 CPUs (or cores) are just around the corner. If each of those cores can run Java, there will be a lot of unused computing power.
I know companies are underserved; most of them spend as much as half a million dollars per month just to run their datacenters. And that's cheap compared with the amount of money that would be spent if everything were run manually (not counting the data loss, which also has a price).
So companies are paying really very little for running these datacenters, but in 10 years they will be able to run their whole datacenter in an applet, if things continue this way. And I know they will.
4K must be enough for everyone
When I was 14 or so I knew a guy who was the CIO of a bank in Ecuador. He would tell me that when he started working, a computer with 4KB of RAM was enough to run everything at the bank. He didn't mention that the processes ran all night, but of course it couldn't have been any other way.
Now things have changed. Computers are commodities and developers are expensive, so people are not expected to work at night. You can still run processes at night, like performance testing, and see the next day what happened. But even that is a waste, because if something doesn't work you need to retest, and if you wait for the process to run again the next night, you lose a full day.
It is much better to buy a machine just for running the tests; if you use Java, a normal PC will do, and they cost less than $1,000. This is important because developers typically cost between $2,000 and $6,000 per month, depending on their experience. If they can perform faster (by having computing resources available), you are saving the big bucks. Saving on computer hardware is rarely a saving.
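To put numbers on it (using the mid-range figures above, and assuming 20 working days per month): a $4,000/month developer costs about $200 per working day, so a $1,000 test machine pays for itself after only 5 saved developer-days. If the dedicated box saves the developer even one day a week, it breaks even in five weeks.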
So maybe Sun is on the right track this time, making it easier for hardware manufacturers to use iSCSI inside computers. But IMHO, moving everything from software to hardware doesn't make things 20x faster, only 10% faster, and a lot harder to change.
As I already said, good optimizations are the ones that make things at least 5x faster, or even 10x faster. Any optimization that is not at least 2x should be disregarded and eliminated. I think we are seeing that kind of optimization now.
The best ideas always win in the long run
One nice thing about not understanding is that it really doesn't matter. If you don't understand, the market will take care of it. The market always takes the best technologies and makes them blossom, while leaving the bad technologies behind.
Now you may think I'm wrong because you know counterexamples. I know of Smalltalk, which was clearly superior to C++ and Java, but its main problem was its price. It was too expensive at the time, so Java took its place. Smalltalk's good ideas were copied one by one, first by C++, then by Java, which is mostly free.
So in the end it is the right technology mix that wins. Even if Smalltalk did not win, its technology is all over the place in Java, so in a sense Smalltalk won, through Java. For all those Smalltalk lovers and Java haters: I know Smalltalk is still better, I know there are many things in Smalltalk that have not been copied (yet), but it is just a matter of time and of a new language that leverages the Smalltalk potential that is still left.
Why not Smalltalk directly? I now think Smalltalk has trust issues. I mean, the more I work in collaborative environments (development projects), the more I see that people can't be trusted. They perform rather OK at the beginning, but eventually they do tricks and try to benefit themselves in the short run (while damaging themselves in the long run, and at the same time damaging the projects). I have no recipe for solving this issue, but certainly using a less restrictive language (Smalltalk) doesn't seem like a solution.
Trust issues
Maybe it is. I mean, maybe being in an environment where you are trusted from day one makes you worthy of the trust, while being in an environment where you are not trusted makes you rebel and become less trustworthy. At least that is certainly true for children: you need to trust them for them to act maturely (according to their age, of course).
In the case of programmers, I've read it is the same, but I feel tempted to do the opposite, since I have read their code and I know they did tricks, so a natural solution would be to build an even more restrictive cage. Psychologically, I suppose the only reason they did tricks was that they needed to solve the problem and thought the controls were unnecessary obstacles, even offensive ones. "If you hire professionals, why would you not trust them?"
On the other hand, if they didn't have anything to hide, why did they do these dirty tricks? You can imagine they were trying to say "I'm smarter than you, because I did this and you didn't notice". Or they simply needed those tricks in order to perform appropriately, or so they thought at the time.
I'm sorry, but I have to let them know that that is not the purpose of control. The purpose is to find out who is doing tricks, apply some current (only psychologically, just in case you wonder), and then go back to normal. The idea is not to put a cage on the mind of each developer, but to unlock their potential without making them step on each other's toes.
What do I mean by that?
Developers do not all think the same, and you need to build a collective mind. This means that each individual has his own ideas, but the ideas of one developer can be understood by the rest, each leveraging the potential of the others to improve his own. This means you can start with a very lousy team and improve each of its members until they understand each other. It means they need to communicate, to pair program, or, if they lack the basic instinct of cooperation, to do the poor man's pair programming, which is code review before check-in.
Governance is a very interesting topic, because you can govern like the Stasi or the Nazi regime, or you can govern as if you were in Switzerland or Sweden. People act better when the government is light and actively encourages them to help it with ideas and independent action. People feel part of a community and have a positive view of the government, because they think "we are the government, because we do as we see fit". The government just forbids the most aberrant behavior, but people feel everything is permitted and the sky is the limit. You can certainly feel that way in the US, where children are encouraged to do as they wish and not to limit themselves.
In Chile, our education was radically different. You had to ask for permission, and nothing was allowed. Even if you tried to motivate people, they would look at you suspiciously: who would allow that? Yes, it sounds silly and funny, but it is not so funny when every day is like that. I mean, if you are not trusted, it means you are a bad person; then it is OK to do bad things as long as nobody sees, because otherwise you would be grounded. See? The logic is perfect, and all it took to start it was that you were not trusted in the first place.
Is this still true for grown-ups? Yep!
Chile has a very special way of treating companies like robbers. For example, if you start a company, you are allowed to stamp only 3 billing notes. Unstamped billing notes are not valid, and you can stamp new billing notes only when the previous ones have been returned to the Chilean equivalent of the IRS (already paid). So nothing really works, because no company could survive that long, and people have to do things under the table in order to operate.
See: you are not trusted, therefore you need to subvert the system. So if you look closely at what companies do, you realize they really can't be trusted, and it is all just pose and looks. Welcome to Latin America, where nothing works as it is supposed to.
Monday, September 17, 2007
Being an Idiot
Don't you feel surrounded by idiots?
Judging by the famous message on the "I see dumb people" t-shirt, I suppose most IT geeks feel the same way, and some even want to let others know.
Apparently, in the Unix culture people are called idiots if they ask questions that are answered in the manual or the FAQ. This is consistent with the RTFM expression so common on Usenet.
But being an idiot is good if you are learning, because you are asking the right questions (somebody wrote the question into a FAQ, so it must be common enough). Besides, how are you going to find out there is a FAQ if you can't ask?
But there is something not right about redirecting people to read the manual. In Windows it is assumed that people do not read manuals, while in Unix there is a manual page even for man, the manual reader.
In the Windows culture, if the program does not work as expected, it is the program's fault, not the user's. There are no manuals, and the manuals that exist are mostly useless anyway. Most people prefer Windows because it is simpler and it doesn't treat you like an idiot. Also, the user interface is consistent across programs, because programmers know users do not read manuals. Besides, Microsoft encourages software developers to write programs that are consistent with the Windows look and feel.
Assuming you don't know
Most employers prefer people who don't know, because they can pay them less. And people will learn anyway, right?
But then the same people are asked not to ask any stupid questions. And most questions are considered stupid anyway, even if you ask what the objective of the team is, or the objective of the company, its mission, its vision, and so on. Everything is considered stupid, mostly because people don't know the answers.
Even when they do know, they feel they can't disclose that privileged information, which almost always means you are replaceable and your function will be eliminated in the following months.
So, by all means, ask. Even if the question sounds stupid. Even if the reply is that you should know the answer, or that it is none of your business. Simply ask. And if they reply that you should know, tell them that you do, but that you want to know what kind of professionals you are working with: people who hide vital information, or people you can trust.
If they tell you it is none of your business, tell them that you refuse to be limited by people who can't answer simple questions.
Write the answers down
Probably the answers they gave were prerecorded answers for situations like this. People work like this: they memorize lines and repeat them like parrots. Thinking requires time and space, so they take the easy way.
But write the answers down. They could be very important down the road.
One of the things I learned the hard way is that doing a post mortem for every project is probably one of the most important parts of the project. Doing a post mortem is very easy: you simply ask what went right and what went wrong, you document it, and you propose solutions for the problems found.
It really gives you insight into what you did wrong.
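A minimal template for the write-up (my own sketch; adapt the headings as needed):

    Post mortem: <project name>, <date>
    1. What went right?   (one item per line, with the evidence)
    2. What went wrong?   (facts, not blame)
    3. Proposed solutions (one concrete action per problem, with an owner)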
The Cathedral vs. the Bazaar
It seems people bring this up over and over: why don't small teams of programmers in a garage replace the huge teams we see today?
It seems that the explanation is that their respective products are in a different category.
But I doubt it.
Most big projects are just massive copy and paste, and they could be replaced with the right prototypes and abstraction layers (with the right abstractions, of course). But it seems managers prefer to fail: they prefer the predictable, long development cycle, because by the time it fails it is already too late, massive amounts of money have gone down the drain, and the managers are in a better position (having already been paid for wasting time and money) to negotiate even better pay.
When people are confronted with two alternatives, one that can produce better results but leaves no scapegoat if it fails, and another that will eventually fail but comes with a scapegoat, they prefer the one that will fail but has a scapegoat. The reason is that people get hired to avoid uncertainty, and I'm sure people in Latin America prefer it that way every time.
There are many examples, and you have probably been in many: it is very common for projects to avoid prototypes, because prototypes show which design decisions work and which probably can't. Whether they work or not can be blamed on developers who lack the required knowledge, or on bad design decisions. It doesn't really matter which, since the design decisions must be ones that the developers can actually implement.
But developers tend to avoid writing prototypes because they could be accused of not knowing how to build these little examples, and designers dislike them for the same reason: the code can be correct and yet the design can be shown not to do what was expected. Therefore, people prefer the non-accountability of delivering a mess.
And the big problem with code that is a big ball of mud is that no one can fix it, because no one can understand it.
The different category of Wikis
It is amazing how people think that a team in a garage can't reproduce the complications coming out of their little brains, yet a computer can be built in a garage.
I mean, come on, a computer is a lot more complex than the mumblings of a business analyst or a user. Usually the business is plain simple and hides behind a curtain of poorly defined words. There is usually no more to it than that. A little script in Excel can usually replace the biggest experts.
When it comes to Wikis, I already mentioned that they are great and that people usually don't use them, for the wrong reasons. I think the world would be a lot better if the Wiki had been invented just a few minutes before the Web. The Web's language is just so complex, and it lacks even a minimal amount of thinking ahead.
Almost everything on the web is designed for today's business, and if you need something else, you need to extend the web protocol or the web language, and you end up with incompatible browsers. I first heard this rant from Alan Kay, one of the inventors of Smalltalk, and he proposed a solution: a little language that would explain itself to the browser.
So you could easily extend HTML, and the new HTML tags would explain themselves to the browsers. I can already imagine this, because tag libraries work like that, although the browser doesn't have a clue about them.
Most systems can be implemented as a Wiki. As I already mentioned, the access restrictions of the Wiki can be introduced using AOP or dynamic proxies.
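Here is a minimal sketch of the dynamic proxy version (the WikiPage interface and the read-only flag are hypothetical, invented for this example; a real wiki would look permissions up per user):

    import java.lang.reflect.InvocationHandler;
    import java.lang.reflect.Proxy;

    public class WikiAccessControl {

        // A hypothetical wiki page contract.
        interface WikiPage {
            String read();
            void write(String content);
        }

        static class Page implements WikiPage {
            private String content = "";
            public String read() { return content; }
            public void write(String content) { this.content = content; }
        }

        // Wraps any WikiPage so that writes are rejected for read-only users;
        // everything else is forwarded untouched to the real page.
        static WikiPage restrict(WikiPage target, boolean canWrite) {
            InvocationHandler handler = (proxy, method, args) -> {
                if (method.getName().equals("write") && !canWrite) {
                    throw new SecurityException("read-only user");
                }
                return method.invoke(target, args);
            };
            return (WikiPage) Proxy.newProxyInstance(
                    WikiPage.class.getClassLoader(),
                    new Class<?>[] { WikiPage.class },
                    handler);
        }

        public static void main(String[] args) {
            WikiPage page = restrict(new Page(), false);
            System.out.println(page.read()); // allowed
            page.write("hello");             // throws SecurityException
        }
    }

The nice property is exactly the one the post is after: the page itself knows nothing about security, and the restriction is layered on from the outside.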
The Generalist Specialist
Some people are specialists (most people) and some are generalists (very few). Industrialization has brought us a lot of job specialization, and some people think it is good to be a generalist.
Being a generalist means you must be able to talk to a big crowd of different specialists without saying anything stupid, or anything that will make them feel anger, fear, and so on. One way to do this is to study politics and learn how to double-talk: each group agrees with what you are saying because each understands something different.
Another politically correct way to manage a crowd is to find agreement on something trivial and uncontroversial and talk about it endlessly. It produces no real results in their understanding, but aren't we having a fine time!
Finally, there is a way to convince people in non-confrontational ways, allowing them to save face. But first you need to find configurations in which people are not against each other, at least from the point of view of their convenience and their interests. A good way to say this is "we are all in the same boat, so either we all arrive peacefully at a safe port or we all sink together". OK, the choice of words probably has to be a lot softer and more welcoming than that, but the main idea is to avoid confrontation.
Why?
When you have different specialists, you have different views, and inevitably people manifest these conflicting views. People are afraid that their work will be meaningless or disregarded, so they try to impose their views at all costs.
If they were such good professionals, they wouldn't be so scared, because they would have succeeded elsewhere, and they would want to see whether other professionals have the same skills and can do things differently. But not everyone has the luck to have a brain and have it working at the same time.
I know it sounds politically incorrect, but the brain is not turned on all day, sorry. I have not actually measured it, but the brain works by memorization, by deduction, and by pattern matching; when it is working by memorization and pattern matching, it is not calculating the logical consequences of the possibilities, and therefore, from the point of view of software development tasks, it is useless.
Maybe in medicine and law professionals can work in full pattern-matching-and-memory mode and get better results than by trying to deduce, because in order to deduce you need to know a lot about the subject at hand. Doctors can probably only diagnose according to the known symptoms, and since symptoms are always different, there is a real chance of delivering the wrong diagnosis; that is why they tend to say "we need to observe the evolution of the disease", meaning they are not sure what is going on with you.
Eating healthy food (meaning fresh food) and doing exercise every day is a better way to remain healthy, by the way.
In software development we have the same problem when projects are built organically: all the software is thrown at the project and shaken well, and when the results are not what was expected, you are supposed to debug endlessly to diagnose the symptoms and apply microsurgery. Then they find out your surgery had unexpected side effects (collateral damage), or in my vocabulary, you introduced new bugs.
The project is always 99% finished, no matter how much money, how many developers, or how much unpaid overtime you throw at it.
Developers say the code is a mess.
Managers say they need to hire more specialists, because the ones working now are unable to finish and are already burnt out.
The problem is the generalists
Why would managers take any blame for how developers behave? Developers wrote the mess, didn't they?
When left alone, developers will always write unreadable code.
Those who specialize in generic skills have to know the details of every single skill. If they can't learn, they are useless and drive their companies down with them. Since they generally are the bosses, they tend to be pretty aggressive with insubordinate subordinates.
The best defense is to attack first. And the only way to avoid confrontation is to win in a manner that doesn't allow the opponent to retaliate. I'm not advocating turning the office into a battleground, but if you risk being fired, you are obliged to fire your opponents at the office first.
The main advantage is that if it doesn't turn out as expected and you are fired instead of them, it is usually better to move on to other opportunities anyway.
Sunday, September 16, 2007
Unix and Linux design flaws
Like it or not, all software has design decisions built into it. Those design choices may be good or bad; in the latter case we call them design flaws.
Sometimes the design is not clearly good or bad; only after 20 years may you realize that one of those design decisions turned out to be a flaw.
Usually the worst ones are those that are hard to change, maintain, or improve. Also the ones that take time from unsuspecting users (or customers), since the whole point of using computers is that they save you time.
Unix design flaws
Unix design flaws are shared with Linux, since Linux is a Unix clone. (I already wrote that Unix is a Linux clone, because Linux is a lot more popular than Unix.) But Unix carries really old design choices, made in the 70's when computers were expensive and resources were scarce. No one ever thought you could possibly have more than 1 GB of RAM, so a lot of the design decisions of C and Unix are really dated (C and Unix were originally developed together).
The reason Unix and C were so popular was that they were almost open source. BSD was freely available to some universities, and to this day it is not clear which company owns the rights to Unix.
Linux simply changes the scenario by making a Unix clone free, so the price is zero, but the design flaws are still there. I know I'm not going to win any friends with this post, but what the heck.
File descriptors are integers, and everything in Unix is a file, so you operate on everything through file descriptors. But integers in C are machine dependent, which means different Unices have different integer sizes; you write some code and it turns out to be machine dependent.
So Linux runs one way on one machine and fails on another. GNU invented the "GNU is Not Unix" moniker; what they did was reimplement Unix from scratch (APIs can't be copyrighted, according to US courts) and also make GNU code portable across different hardware.
How did they do this?
./configure    # inspect this machine and generate a matching Makefile
make           # compile the sources on (and for) this machine
make install   # copy the resulting binaries into place
And that's about it!
Therefore GNU, Linux, and Unix are portable at the source level when using the GNU mechanism for building software (basically configure and make).
But Windows is portable at the binary level, and that is a superior alternative.
Going crazy
Probably now you think I'm crazy. How could Windoze possibly be superior to the almighty Linux?
I agree that Linux, and especially Ubuntu, has gone a long way toward making Linux a viable alternative to Windows. You can install Windows binaries from a CD without having to recompile, and the same can be done with Ubuntu. But the parallel stops there.
I installed Ubuntu on one of my computers, and instead of cabling my house I tried to use a USB wireless network adapter. The only problem was that the USB stick didn't come with Linux drivers. Small frustration: I downloaded them from the Internet, and then I had to compile them. But Ubuntu doesn't come with the compiler. Should I download it? It comes in source form, since if it were precompiled, it would not be guaranteed to run on my computer.
What can I do now?
Being compatible at the source level is a really bad decision. The Windows design choice was clearly superior.
Java is superior
But I think Java is even better, because you can write once and run anywhere, so it is compatible at the binary level.
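In concrete terms (the file names are placeholders), the contrast with the configure/make dance above is that you compile once, anywhere, and ship the same bytecode to every OS that has a JVM:

    javac Hello.java    # produces Hello.class: machine-independent bytecode
    java Hello          # the same .class runs on Windows, Linux, or Solaris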