Author: Henri Azibert
In many respects, engineering is the principal tool used to increase productivity. Most often it is a better machine, one that can do more: increase process output, require fewer resources to operate, go faster, last longer… be so much more effective than the previous machine. And let’s not forget that the machine being replaced was itself a phenomenal improvement over the one it had replaced. There is also the engineering of a process: how to reduce the footprint, eliminate unnecessary steps, reduce power consumption, or increase output from less raw material. Everywhere, engineering is used to increase efficiency and productivity. However, is there such a thing as engineering productivity?
Although not always measured or even examined, there are ways to scrutinize engineering productivity. The basic engineering process is to analyze a problem, formulate solutions, verify or validate those solutions through testing, and then implement the chosen solution. It is a straightforward process, well understood and accepted. It is rational, simple, and effective; ‘fool-proof’ might even be a safe characterization of this time-honored process. Yet there are many ways it can be misapplied, misunderstood, and corrupted. And this article is, from the very start, an effort to analyze that problem.
The first step is to address the right problem, not just the symptoms.
The following example illustrates this point: a rather banal assessment and remediation of a fastener failure in a pressure-retaining part. A socket head cap screw ruptures where the screw enters the threads. Could it be that the screw is too small? The obvious solution would be to use the next size up. However, this might mean a major design change and problems retrofitting existing parts in service, so prudence is in order. Furthermore, a quick analysis shows that under the maximum pressure conditions, the fastener is highly stressed but well within the material limits. Moreover, the design has been used for over two decades without this type of malfunction.
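The quick analysis mentioned above can be sketched as a simple static stress check. All numbers below are illustrative assumptions (a hypothetical joint, not the article's actual hardware); they are chosen only to reproduce the situation described: a fastener that is highly stressed yet still within material limits.

```python
# Hedged sketch of a static stress check on a socket head cap screw in a
# pressure-retaining joint. Every number here is an assumed, illustrative
# value -- not data from the failure described in the article.
import math

bore_diameter_mm = 50.0         # assumed pressurized bore the screws retain
max_pressure_mpa = 30.0         # assumed maximum rated pressure
n_screws = 4                    # assumed number of screws sharing the load
tensile_stress_area_mm2 = 36.6  # e.g. an M8 screw (from thread tables)
yield_strength_mpa = 640.0      # e.g. a property-class 8.8 fastener

# Total separating force = pressure x pressurized area (MPa * mm^2 = N)
pressure_area_mm2 = math.pi * (bore_diameter_mm / 2) ** 2
total_force_n = max_pressure_mpa * pressure_area_mm2

# Stress carried by each screw, and the resulting safety factor
stress_mpa = total_force_n / (n_screws * tensile_stress_area_mm2)
safety_factor = yield_strength_mpa / stress_mpa

print(f"Screw stress: {stress_mpa:.0f} MPa, safety factor: {safety_factor:.2f}")
```

With these assumed values the screw sees roughly 400 MPa against a 640 MPa yield strength: highly stressed, but within limits, which is exactly why the obvious "bigger bolt" answer deserves suspicion.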
In order to avert the design change, a failure analysis is ordered from an outside consulting firm. After substantial expense and time delay, the conclusion comes in: it is a classic fatigue failure. Quite an unusual finding, since this is not a cyclic application. Maybe we will have to use a bigger bolt! To further focus on the fastener, a full metallurgical analysis is ordered. After more expense and delay, the results come in: the fastener does not meet the material composition specifications. As we are ready to vindicate the design and blame procurement and quality control, someone notices a small footnote in the report: the yield and ultimate strength of the material are actually higher than specified. Now what?
The fastener failure is the symptom, not the problem. First clue: the fastener was highly stressed under maximum operating conditions but still within allowable limits, and the user assures that normal operating conditions are below the maximum. Second clue: the first failure analysis shows fatigue in a non-cyclic, constant-pressure service. Pressure fluctuations would be required to create a fatigue failure. The reported conditions do not match the failure mode. Using a larger fastener would most likely have moved the breakdown to the next weakest component. Finally, searching for the anomaly, and after recording instrumentation is eventually installed, the culprit turns out to be a fast-cycling solenoid valve that creates repeated water hammer and pressure spikes. (Of course, this never happened when someone was present to observe it.) The problem was not the fastener, although it certainly had to be ruled out.
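The second clue can be made concrete with a minimal fatigue check. The sketch below uses a modified-Goodman criterion as a stand-in for whatever criterion a real analysis would apply, and every number is an illustrative assumption: it only shows how a load that is safe when steady becomes unsafe once water-hammer spikes add an alternating component.

```python
# Hedged sketch: why "fatigue in a constant-pressure service" is a red flag.
# A steady pressure gives zero alternating stress; water-hammer spikes from
# a fast-cycling solenoid valve add a large stress amplitude. All values
# are assumed for illustration, not taken from the article's case.

def goodman_safe(mean_mpa, alt_mpa, endurance_mpa, ultimate_mpa):
    """True if the (mean, alternating) stress pair falls under the
    modified-Goodman line: alt/endurance + mean/ultimate < 1."""
    return alt_mpa / endurance_mpa + mean_mpa / ultimate_mpa < 1.0

endurance_mpa = 250.0  # assumed endurance limit of the fastener material
ultimate_mpa = 800.0   # assumed ultimate tensile strength

# Case 1: steady pressure, as reported by the user -- no alternating stress
steady = goodman_safe(mean_mpa=400.0, alt_mpa=0.0,
                      endurance_mpa=endurance_mpa, ultimate_mpa=ultimate_mpa)

# Case 2: same mean stress, plus assumed water-hammer spikes of +/-150 MPa
with_spikes = goodman_safe(mean_mpa=400.0, alt_mpa=150.0,
                           endurance_mpa=endurance_mpa, ultimate_mpa=ultimate_mpa)

print(f"Steady load safe in fatigue:      {steady}")
print(f"With pressure spikes, still safe: {with_spikes}")
```

Under these assumptions the steady case sits comfortably below the Goodman line while the spiking case crosses it, which is why the observed fatigue fracture pointed away from the fastener and toward an unreported cyclic load.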
Another typical example we are likely to face is a particular machine failure that seems intractable. Explanations and various fixes from the vendor do not improve the failing device. The temptation is to go to another vendor. But if the supplier is reputable, the problem is not likely to be the product. Reputable vendors all have good products that operate reliably, but only when they are used properly. Another vendor could stumble on a solution, or set up a different system that is not sensitive to the specific condition, but this is highly improbable. More likely, the new vendor will have to go through the same learning curve all over again. And that is rather unproductive.
These are just a couple of examples of inefficient engineering approaches. There are many other ways engineers can waste time and resources, while at the same time appearing authoritative and convincing. In no specific order, and certainly not an exhaustive list, here are a few other common instances:
- Applying a great solution to the wrong problem.
- Applying the wrong solution to the right problem.
- Applying the latest, superb, trendy, complicated, and lengthy analytical method to a simple, basic, ‘you should know better’ problem.
- Being in love with a fantastic solution to a non-existent problem.
- Spending hours, or weeks, or months to obtain a 0.05% performance improvement.
- Going for a 2% improvement when you are 100% off.
- Going for a 100% improvement when you are 2% off.
- Never knowing when to stop.
- Stopping when the solution is millimeters away.
So even though the entire raison d’être (purpose) of engineering is to improve efficiency, there are many ways the process itself can be inefficient.