
Monday, August 22, 2005

What is Constrained Optimization?


Constrained optimization models are based on a set of underlying assumptions. The main assumption is that most, if not all, of the various constraints in the model are static. The general idea is to find the optimum solution given a set of static constraints. Most of the models in accounting, finance, management, economics and quantitative methods fall into this category and can be criticized from the continuous improvement perspective (e.g., Deming's theory, JIT, TOC, ABM, etc.). Some of these models include the following:
1. The theoretical microeconomic non-linear cost-volume-profit model. This is perhaps one of the earliest constrained optimization models. In this model the capacity of the firm and the resulting fixed costs are constant or static, but unit sales prices and unit variable costs are allowed to vary. Sales prices reflect the law of demand, i.e., consumers are willing and able to buy more at lower prices than at higher prices. Unit variable costs change as a result of changes in productivity, i.e., output per input. Optimum profit is obtained at the level of production and sales where marginal revenue equals marginal cost, which produces the greatest difference between total revenue and total cost. The theoretical model is illustrated on the left-hand side of Exhibit 1.
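As a rough numerical sketch of the MR = MC idea, a simple grid search locates the profit-maximizing output. The demand and cost functions below are invented for illustration, not taken from Exhibit 1:

```python
def revenue(q):
    # Price falls as quantity rises (law of demand): p(q) = 100 - 0.5q
    return (100 - 0.5 * q) * q

def cost(q):
    # Fixed cost plus variable cost that rises with output
    return 500 + 20 * q + 0.1 * q ** 2

# Search a grid of output levels for the profit-maximizing quantity.
# Analytically this is where marginal revenue equals marginal cost.
best_q = max(range(0, 201), key=lambda q: revenue(q) - cost(q))
print(best_q, round(revenue(best_q) - cost(best_q), 1))
```

With these assumed functions the grid search lands at q = 67, close to the analytical optimum of 66.7 where MR (100 - q) equals MC (20 + 0.2q).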

2. The conventional linear cost-volume-profit model (derived from the microeconomic model) provides another example. In this model, the sales price and unit variable costs are assumed to be constant or static. The constant sales price reflects a horizontal demand curve, while the constant unit variable cost reflects an underlying assumption of constant productivity and input prices, e.g. materials, labour and indirect inputs. Optimum profit is obtained where the company produces at capacity. The simplified linear model is illustrated on the right-hand side of Exhibit 1.
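A minimal sketch of the linear model, with assumed figures for price, unit variable cost, fixed cost and capacity; because the margin per unit is constant, profit rises linearly with volume and is maximized at capacity:

```python
price, unit_var_cost, fixed_cost = 10.0, 6.0, 8000.0  # assumed constants
capacity = 5000                                       # static capacity constraint

def profit(q):
    # Linear model: constant contribution margin times volume, less fixed costs
    return (price - unit_var_cost) * q - fixed_cost

breakeven = fixed_cost / (price - unit_var_cost)
print(breakeven, profit(capacity))  # profit is maximized at capacity
```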
3. The standard cost variance analysis model provides another example, although this model is merely an extension or variation of the linear cost-volume-profit model. Standard input prices and quantities allowed per output are set to reflect some acceptable level of performance. This level could be the optimum level, but the standards are more meaningful if they are set at the mean of the possible outcomes when the system is stable. This model falls into the constrained optimization category because it assumes a static system with a static set of constraints. Standard sales prices, standard variable costs per unit (i.e., standard input prices of direct materials, direct labour and indirect resources, and productivity), budgeted fixed costs and sales mix are all assumed to be constant as far as the standards are concerned. Some of the problems with this model are illustrated in Exhibit 2, which focuses on factory overhead. Overhead variance analysis does not even isolate the price and quantity variances for indirect resources. In addition, the model ignores many of the cost drivers of indirect resource costs and ignores the concept of system variability, identified as randomness in Exhibit 2. Therefore, the standard cost model is of little help in identifying potential improvements.
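The price and quantity variances that the model does isolate (for direct inputs) follow a simple pattern; the direct-materials figures below are assumed for illustration:

```python
def price_variance(actual_price, std_price, actual_qty):
    # Positive means unfavorable: we paid more than the standard price
    return (actual_price - std_price) * actual_qty

def quantity_variance(actual_qty, std_qty_allowed, std_price):
    # Positive means unfavorable: we used more than the standard allows
    return (actual_qty - std_qty_allowed) * std_price

pv = price_variance(4.10, 4.00, 10_000)      # ~1,000 unfavorable
qv = quantity_variance(10_000, 9_500, 4.00)  # 2,000 unfavorable
print(round(pv), round(qv))
```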
4. The economic order quantity and economic batch size models provide other examples of constrained optimization techniques. For example, in the economic order quantity (EOQ) model, carrying costs are equated with ordering costs to find the optimum order quantity as illustrated in Exhibit 3. Carrying costs increase for larger order quantities because of increases in the costs of materials storage, handling, obsolescence, insurance and other costs related to carrying a larger inventory. Ordering costs decrease because there are fewer orders. The two cost functions intersect, producing a U-shaped total cost curve as illustrated in Exhibit 3, and the intersection of the two curves identifies the EOQ. However, the main problem with the EOQ model is that it conflicts with the idea of continuously finding ways to reduce inventory to a minimum. In other words, do not accept the constraints and attempt to optimize. Instead, find ways to continuously reduce or remove the constraints. For example, using JIT purchasing concepts, ordering costs are reduced by allowing the vendor to have access to the buyer’s production schedule. This means that the vendor performs part of the purchasing function for the buyer.
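The EOQ calculation itself can be sketched as follows, with assumed demand and cost figures; note that annual ordering cost and annual carrying cost are equal at the optimum, which is the intersection shown in Exhibit 3:

```python
import math

def eoq(annual_demand, cost_per_order, holding_cost_per_unit):
    # Order quantity at which annual ordering cost equals annual carrying cost
    return math.sqrt(2 * annual_demand * cost_per_order / holding_cost_per_unit)

q = eoq(10_000, 50, 4)        # assumed figures
ordering = 10_000 / q * 50    # orders per year times cost per order
carrying = q / 2 * 4          # average inventory times holding cost per unit
print(q, ordering, carrying)  # the two costs are equal at the EOQ
```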
5. The quality cost conformance model provides another example of a constrained optimization approach. In this model the economic conformance level (ECL) is obtained where prevention and appraisal costs are equal to external and internal failure costs. Prevention and appraisal costs increase as the level of conformance quality increases. Conformance quality refers to conformance to specifications as opposed to design quality. Failure costs are expected to decrease as the level of conformance quality increases. Therefore, the total costs associated with conformance quality will be U-shaped as indicated in Exhibit 4. The optimization concept and related calculations are essentially the same as in the EOQ model. Prevention costs include quality engineering, training and related supervision costs. Appraisal costs include inspection, testing and supervision related to these activities. Internal failure costs include spoilage, scrap, rework and the associated downtime costs, while external failure costs include warranty costs and the costs of lost customers.
The ECL model is associated with Juran. Deming and others, such as Crosby, view the calculation of the ECL as a waste of time. From this perspective, the main problem with the ECL methodology is that the model is likely to be mis-specified by underestimating or ignoring the costs associated with lost customers. A revised model with the "quality is free" perspective is provided in Exhibit 5. This is a long-run view where the lost sales dollars resulting from past failures are included in external failure costs. Of course, lost sales dollars are unknown amounts, but there is adequate reason in most industries to believe that they represent substantial amounts, perhaps so large that the two curves never intersect.
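The ECL optimization works the same way as the EOQ: search for the conformance level that minimizes total quality cost. The cost functions below are invented for illustration:

```python
def prevention_appraisal(c):
    # Rises steeply as conformance approaches 100% (c is a fraction, 0..1)
    return 50 / (1 - c)

def failure(c):
    # Falls as conformance improves
    return 200 * (1 - c)

# Grid-search conformance levels for the minimum total quality cost.
levels = [i / 100 for i in range(0, 100)]
ecl = min(levels, key=lambda c: prevention_appraisal(c) + failure(c))
print(ecl)
```

The sensitivity of this "optimum" to the assumed failure curve is exactly the mis-specification worry: if lost sales dollars were added to the failure costs, the slope of that curve steepens and the minimum shifts toward 100% conformance.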
Quality Models Compared
Two quality models have appeared in the accounting literature in recent years. Juran's quality cost conformance model is associated with the zero defect philosophy shown on the left side in the illustration below. Juran's model includes a target value and tolerance, or specification limits, for the variation that occurs in a parameter or characteristic (X). In Juran's model, no loss occurs if the value of X is within the specification limits, i.e., it is considered acceptable. If the value is outside the limits it is considered unacceptable or a defect and becomes either scrap, spoilage or rework.
Deming, on the other hand, was associated with the robust quality philosophy based on Taguchi's loss function shown in the center of the illustration and combined with a distribution of X on the right hand side. Taguchi and Deming believed that some loss occurs for the manufacturer, the customer, or society when the value of X is not on target. In the graphic illustration above, the distribution of X is drawn so that it appears that the mean of X is on the target value, but of course this is not usually the case. The idea in the robust quality philosophy is to continuously improve the process by moving the mean value of the parameter closer to the target value and by reducing the amount of variation in the parameter.
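Taguchi's loss function can be written as a simple quadratic (the target value and loss coefficient below are assumed for illustration): any deviation from target incurs some loss, in contrast with the zero-defects view that only out-of-spec values cost anything.

```python
def taguchi_loss(x, target=10.0, k=2.5):
    # Quadratic loss: loss grows with the squared deviation from target,
    # even for values that would pass Juran-style specification limits.
    return k * (x - target) ** 2

print(taguchi_loss(10.0), taguchi_loss(10.4), taguchi_loss(9.0))
```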
6. Relevant cost (incremental, differential or cost-benefit) models, such as special order pricing decisions, product mix decisions, make or buy (outsourcing) decisions, decisions whether to process joint products further, and similar short-term decisions, all fall into the constrained optimization category because a static environment is typically assumed. Some relevant cost problems, such as the product mix decision model, use techniques such as linear programming because there are a large number of constraints that must be considered simultaneously.
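A toy version of the product mix decision, solved by brute force rather than the simplex method (the contribution margins and machine-hour constraints are invented), shows the constrained-optimization shape of the problem:

```python
# Maximize contribution margin 30x + 20y for two products, subject to
# two static machine-hour constraints: 2x + y <= 100 and x + 3y <= 90.
best = max(
    ((x, y) for x in range(0, 51) for y in range(0, 31)
     if 2 * x + y <= 100 and x + 3 * y <= 90),  # feasible region only
    key=lambda m: 30 * m[0] + 20 * m[1],        # contribution margins
)
print(best, 30 * best[0] + 20 * best[1])
```

A linear-programming solver would find the same corner point of the feasible region; the brute-force grid just makes the "static set of constraints" explicit.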
7. The conventional capital budgeting investment decision model. This model is essentially a long run relevant cost model that emphasizes the discounted cash flow methodology, i.e., net present value and internal rate of return approaches. It falls into the constrained optimization category because a static environment is typically assumed. The investment management concept adds the moving baseline approach to the model to make it more dynamic.
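A minimal sketch of the discounted cash flow calculations, with an assumed project; IRR is found here by bisection as the rate where NPV crosses zero:

```python
def npv(rate, cash_flows):
    # cash_flows[0] is the initial outlay (negative); later entries are inflows
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def irr(cash_flows, lo=0.0, hi=1.0):
    # Bisection search for the discount rate at which NPV crosses zero
    for _ in range(100):
        mid = (lo + hi) / 2
        if npv(mid, cash_flows) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

flows = [-10_000, 4_500, 4_500, 4_500]  # assumed project cash flows
print(round(npv(0.10, flows), 2))       # positive NPV at a 10% hurdle rate
```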
8. The statistical process control (SPC) model might also be included in the constrained optimization category from the perspective that it measures the variability within a stable or static system. However, sample means that fall outside the control limits indicate that the system is no longer stable. Changes in the system that cause changes in the mean or changes in the range are also reflected on the control charts. This adds a dynamic element to the SPC methodology that makes it useful as a tool for measuring improvements in the system. A system improvement represents a change that either improves the mean outcome or reduces the variability within the system. For the illustration in Exhibit 7, a change in the production process that reduced the mean drilling time or reduced the variability in drilling time would represent a system improvement.
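A simplified sketch of the control-chart check on sample means (the drilling-time figures are invented, and a real X-bar chart would set its limits from subgroup ranges or standard deviations rather than from the sample means themselves):

```python
import statistics

# Assumed sample mean drilling times, echoing the Exhibit 7 illustration.
sample_means = [5.1, 4.9, 5.0, 5.2, 4.8, 5.0, 5.1, 4.9]
grand_mean = statistics.mean(sample_means)
sigma = statistics.stdev(sample_means)

# Control limits at plus/minus three sigma around the grand mean.
ucl, lcl = grand_mean + 3 * sigma, grand_mean - 3 * sigma
out_of_control = [m for m in sample_means if not lcl <= m <= ucl]
print(round(grand_mean, 2), out_of_control)  # empty list: process looks stable
```

A point outside the limits would signal that the system is no longer stable; an improvement would show up as a lower grand mean or a narrower spread between the limits.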
9. Converting static models into dynamic models. In defense of the constrained optimization techniques, one can argue that all of the methods discussed above can be converted into dynamic, rather than static models by adding sensitivity analysis to the model. Sensitivity analysis involves testing how sensitive the solution is to changes in the constraints associated with the model. For example, sensitivity analysis is frequently used in product mix decisions based on linear programming. One can also argue that ignoring potential improvements in the system when using a model does not represent a flaw in the model, but instead indicates a myopic misuse of the model. A rebuttal is that most users may ignore the sensitivity issue because it is not a formal part of the model.

Thursday, August 18, 2005

The Goal - Summary

The first two chapters get the reader acquainted with Mr. Alex Rogo and his problems with his production plant and family life. His boss, Mr. Peach, the Division Vice President, has given him three months to show an improvement or the plant will be shut down!
On the home front, things are not any better. Since moving back to his hometown six months ago, the adjustment hasn’t been going well for his family. It’s great for Alex, but it’s a big change from the city life that his wife is used to.

In a meeting of plant managers, Mr. Peach lays out how bad things are, and the managers are given goals to achieve for the next quarter. Through the grapevine, Mr. Rogo finds out that the Division has one year to improve or it’s going to be sold, along with Mr. Peach.

While at this meeting, Alex starts pondering his accidental meeting with an old physics professor, Jonah, at the airport. Jonah had no knowledge of where Alex was employed, but still predicted the plant's problems of high inventories and missed shipping dates. He also stated that there is only one goal for all companies, and that anything that brings you closer to achieving it is productive and all other things are not productive.

Alex decides to leave the meeting at the break. He feels that he needs to understand what the "goal" is. After a pizza and a six pack of beer it hits him: money. The "goal" is to make money, and anything that brings us closer to it is productive and anything that doesn’t isn’t.

Mr. Rogo sits down with one of his accountants and together they define what is needed to achieve the goal. Net profit needs to increase while simultaneously increasing return on investment and cash flow. Now all that is needed is to put his specific operations in those terms.
Alex makes the decision to stay with the company for the last three months and try to make a change. Then he decides he needs to find Jonah.

Alex finally manages to speak to Jonah. He is given three terms that will help him run his plant, throughput, inventory, and operational expense. Jonah states that everything in the plant can be classified under these three terms. "Throughput is the rate at which the system generates money through sales." "Inventory is all the money that the system has invested in purchasing things which it intends to sell." "Operational expense is all the money the system spends in order to turn inventory into throughput."

After explaining everything, Alex and his staff (Bob from production, Lou from accounting and Stacey from inventory control) hammer out the meaning of throughput, inventory and operational expense until satisfied. Lou states the relationships as follows: "Throughput is money coming in. Inventory is the money currently inside the system. And operational expense is the money we have to pay out to make throughput happen." Bob is skeptical that everything can be accounted for with three measurements. Lou explains that tooling, machines, the building, the whole plant are all inventory; the whole plant is an investment that can be sold. Stacey says, "So investment is the same thing as inventory."
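The three measures tie directly back to the financial definitions of the goal. With assumed monthly figures:

```python
# Assumed monthly figures for a plant, purely for illustration.
throughput = 250_000           # money coming in through sales
operational_expense = 180_000  # money paid out to turn inventory into throughput
inventory = 500_000            # money currently tied up inside the system

net_profit = throughput - operational_expense
roi = net_profit / inventory   # return on the money invested in the system
print(net_profit, roi)
```

Increasing throughput while reducing inventory and operational expense moves all three goal measures (net profit, ROI, cash flow) in the right direction at once.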
Then they decide that something drastic needs to be done with the machines. But how can they do that without lowering efficiencies? Another call to Jonah is placed and Alex is off to New York that night.

The meeting with Jonah is brief. Alex tells Jonah of the problems at the plant and the three months he has to fix them. Jonah says they can be fixed in that time, and then they go over the problems the plant has. First, Jonah tells Alex to forget about the robots. He also tells Alex that "A plant in which everyone is working all the time is very inefficient." Jonah suggests that Alex question how he is managing the capacity in the plant and consider the concept of a balanced plant. According to Jonah, this "is a plant where the capacity of each and every resource is balanced exactly with demand from the market." Alex thinks a balanced plant is a good idea. Jonah says no: "the closer you come to a balanced plant, the closer you are to bankruptcy." Then Jonah leaves Alex with another riddle: what does the combination of "dependent events" and "statistical fluctuations" have to do with your plant? Both of those seem harmless and should work themselves out down the production line.

Stuck for the weekend as troop master for his son's friends, Alex discovers the importance of "dependent events" in relation to "statistical fluctuations". Through the analogy between a single file hike through the wilderness and a manufacturing plant, Alex sees that there are normally limits to making up the downside of the fluctuations with the following "dependent events". Even if there were no limits, the last event must make up for all the others for all of them to average out.

Finally, through the match bowl game, or experiment, it becomes clear that with a balanced plant and because of "statistical fluctuations" and "dependent events" throughput goes down and inventory along with operating expenses goes up. A balanced plant is not the answer.

Now fully understanding "dependent events," Alex puts the slowest kid at the front of the hike and relieves him of the extra weight he has been carrying in his backpack. This balances the fluctuations and increases the kid’s productivity, which increases the throughput of the whole team.

Now Jonah introduces Alex to the concept of bottlenecks and non-bottlenecks. Jonah defines these terms as follows: "A bottleneck is any resource whose capacity is equal to or less than the demand placed upon it." "A non-bottleneck is any resource whose capacity is greater than the demand placed on it." Jonah explains that Alex should not try to balance capacity with demand, but instead balance the flow of product through the plant.
Later, Alex and his team identify the bottlenecks, the areas where capacity doesn’t equal demand, like the slow kid Herbie on the hike. With this discovery come ideas about reorganizing the plant the way Alex reorganized the hike. But production is a process, and it cannot be moved around so easily; many steps rely on the previous one being complete before the next can begin. Alex would need more machines, which takes more capital, and Division is not going to go for that.

Well, Jonah makes a visit to the plant. Jonah tells Alex that a plant without bottlenecks would have enormous excess capacity; every plant should have bottlenecks. Alex is confused. Isn’t what is needed more capacity for the plant? The answer is more capacity at the bottlenecks. Some ways to increase capacity at the bottlenecks are not to have any down time within the bottlenecks, to make sure they are only working on quality parts so as not to waste time, and to relieve the workload by farming some work out to vendors. Jonah wants to know how much it costs when the bottleneck machines (the X machine and heat treat) are down. Lou says $32 per hour for the X machine and $21 per hour for heat treat. How much does it cost to run the whole plant? Around $1.6 million a month. How many bottleneck hours are available per month? About 585. After a calculation, Jonah explains that when the bottlenecks are down for an hour, the true cost is around $2,735, the cost of the entire system. Every minute of downtime at a bottleneck translates into thousands of dollars of lost throughput, because without the parts from the bottleneck, you can’t sell the product and therefore cannot generate throughput.
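Jonah's recalculation is simple arithmetic: the true cost of a bottleneck hour is the whole system's operating expense spread over the bottleneck's available hours, not the machine's own accounting rate.

```python
# Figures as given in the summary above.
total_operating_expense = 1_600_000  # per month, for the entire plant
bottleneck_hours = 585               # bottleneck hours available per month

# An hour lost at the bottleneck is an hour lost for the whole system,
# so the whole system's cost is charged to it.
cost_per_bottleneck_hour = total_operating_expense / bottleneck_hours
print(round(cost_per_bottleneck_hour))  # far above the $21 or $32 book rates
```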

Alex organizes the bottlenecks to work only on overdue orders, from the most overdue to the least. The crew works out some of the details for keeping the bottlenecks constantly busy. In the process they find that they need another system to inform the workers which materials have priority at non-bottlenecks. Red and green tags are the answer: red for bottleneck parts to be worked on first, so as not to hold up the bottleneck machine, and green for the non-bottleneck parts. That concludes another week. The true test will be next week.

Great, twelve orders were shipped. Alex is pleased, but he definitely needs more. He puts his production manager on it. His production manager rounds up some old machines to complement what one of the bottlenecks does. Things are looking up.

They are becoming more and more efficient, but lag time arose at the two bottlenecks because workers were being loaned out to other areas and were not at the bottlenecks when needed to process another order. It seems there was nothing to do while waiting for the bottleneck machine to finish a batch, so, in keeping with the notion that everybody needs to stay busy, workers went to other areas between batches. Alex decides to dedicate a foreman to each location at all times. Then one of those dedicated foremen, the night foreman, discovers a way to process more parts by mixing and matching orders by priority, increasing efficiency by ten percent. Finally, one process being sent through a bottleneck could be accomplished through another, older method, freeing up time on the bottleneck.

Now that the new priority system is in place for all parts going through the bottlenecks, inventory is decreasing. That’s a good thing right? But lower inventory revealed more bottlenecks. This intrigues Jonah so he comes to take a look.

"There aren’t any new bottlenecks," says Jonah. What has actually happened is a result of some old thinking. Working non-bottlenecks to maximum capacity on bottleneck parts has caused the problem: bottleneck parts are stacked up in front of the bottlenecks while other orders sit waiting for non-bottleneck parts at final assembly. There needs to be balance. The red and green tags need to be modified so that the bottlenecks again control the flow, with material released to match exactly what the bottlenecks need, when they need it.
Jonah says that from the same data on what flows out of the bottlenecks to final assembly, you should be able to schedule the release of non-bottleneck materials as well. This will take some time, but there are enough parts in front of the bottlenecks to stay busy for a month.

There is another corporate meeting. Mr. Peach doesn’t praise Alex as Alex thinks he should, so Alex decides to talk with him in private. Mr. Peach agrees to keep the plant open if Alex gives him a fifteen percent improvement next month. That will be hard, because it relies heavily on demand from the marketplace.

Fifteen percent!! Fifteen percent!! Just then Jonah calls to let Alex know that he will not be available to speak with for the next few weeks. Alex informs him of the new problem of more inventory and less throughput. Jonah suggests reducing batch sizes by half. Of course, this will take some doing with vendors, but if it can be done, work-in-process inventory is nearly cut in half, and they get quicker response times and shorter lead times for orders. Sounds good.

Alex is presented with a test: the plant can greatly increase sales, current and future, if it can ship a thousand products in two weeks. Impossible without committing the plant to nothing but the new order? Wrong! How about smaller batch sizes: cut them in half again, then promise to ship 250 each week for four weeks, starting in two weeks. The customer loved it.

Seventeen percent!! That’s great, but it’s not derived from the old cost accounting model. The auditors sent down to the plant from Division find just a 12.8% improvement, most of it from the new order. By the way, the owner of the company that placed that order came down personally to shake hands with everybody in the plant and to award them a contract, not for a thousand parts but for ten thousand. Anyway, tomorrow is the day of reckoning at Division.

Well, the meeting at Division starts out rough. Alex thought he would be meeting with Mr. Peach and other top executives; instead, he meets with their underlings. He tries to convince them, but it doesn’t work. Just before leaving he decides to see Mr. Peach. It’s a good thing he did, because he has just been promoted to Mr. Peach’s position. Now Alex has to manage three plants as the whole division. He calls Jonah desperately and asks for help. Jonah declines until Alex has specific questions.

Now it is time to assemble Alex’s team for Division. Surprisingly, the accountant with two years to retirement is on board, but the production manager isn’t; he wants to be plant manager to continue their efforts. Everything is in place at the plant, but more is needed for the division.

Alex is firmly engrossed in the problems of taking over the division. On advice from his wife, he decides to enlist the help of his team at the plant. Every afternoon they will meet to solve the problem. After the first day it is obvious they will need them all.
The second day they are led into a discussion about the periodic table of elements, and how the scientists actually arrived at a table of any sort. Maybe that is how they will solve the massive problems of the division: by understanding how the scientists started with nothing and achieved order. A way to classify things by their intrinsic order is needed.
The team finally comes up with the process: Step one – identify the system’s bottlenecks; Step two – decide how to exploit those bottlenecks; Step three – subordinate everything else to the step two decisions; Step four – elevate the system’s bottlenecks; Step five – if, in a previous step, a bottleneck has been broken, go back to step one. It seems so simple, just different.

The team decides to revise the steps: Step one – identify the system’s constraints; Step two – decide how to exploit the system’s constraints; Step three – subordinate everything else to the step two decisions; Step four – elevate the system’s constraints; Step five – warning!!! If in the previous steps a constraint has been broken, go back to step one, but don’t allow inertia to become a system constraint.
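The five-step loop can be sketched schematically. Everything below is invented for illustration: the resource capacities, the demand figure, and the stand-in idea of "elevating" a constraint by raising its capacity 50% each round.

```python
def five_focusing_steps(capacities, demand, max_rounds=10):
    # capacities: dict of resource -> units per period; demand: market demand.
    history = []
    for _ in range(max_rounds):
        constraint = min(capacities, key=capacities.get)  # 1. identify the constraint
        if capacities[constraint] >= demand:
            break                          # no internal constraint remains
        capacities[constraint] *= 1.5      # 2-4. exploit/subordinate/elevate (stand-in)
        history.append(constraint)         # 5. the constraint may have moved: loop again
    return history

print(five_focusing_steps({"heat_treat": 40, "nc_x10": 60, "assembly": 100}, 90))
```

The point the loop captures is step five: once a constraint is broken, the constraint moves somewhere else, so the analysis must start over rather than coast on inertia.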
It is also discovered that they have been using the bottlenecks to produce for fictitious orders in an effort to keep the bottlenecks busy. Stopping that will free up twenty percent capacity, which translates into market share.

Talking with the head of sales, Alex finds out that there is a market order to fill the capacity. It’s in Europe, so selling for less there will not affect domestic clients. If it can be done, it will open up a whole new market. Then Alex ponders Jonah’s question about what management techniques should be utilized. Alex considers how a physicist approaches a problem; maybe this will lead to an answer.

Alex experiences a problem at the plant. It seems all the new orders have created new bottlenecks. After analyzing the problem, they agree to increase inventory in front of the bottlenecks and to tell sales not to promise new order deliveries for four weeks, twice as long as before. This will hurt the new relationship between sales and production, but it is needed. Production is an ongoing process of improvement, and when new problems arise they need to be dealt with accordingly.

Finally, struggling with the answer to Jonah’s question, Alex comes up with some questions of his own: What to change? What to change to? How to cause the change? Answering these questions is the key to management, and the skills needed to answer them are the keys to being a good manager and ultimately the answer to Jonah’s question.

Tuesday, August 09, 2005

Benchmarking and Biased Data

After reading several books and articles on successful leaders and entrepreneurs, it seems that the world agrees that all great leaders and entrepreneurs inevitably have two characteristics in common: perseverance and the ability to persuade others. Great leaders persist despite initial failures, and during testing times they have the ability to persuade others to believe that what they are doing is right. So the implicit (and sometimes explicit) conclusion is drawn that for anybody to become a successful leader, the presence of these two traits is a must. What could make more sense? What could be more dangerous?
If we apply not-so-uncommon common sense, we’ll see that these same two traits are also present in all those leaders who led their followers to disastrous ends. One needs great persistence to follow the same path even after meeting failure after failure, and one also needs great persuasion skills to convince investors to pour their money down the drain.
Such notions of a formula for success are so prevalent in the corporate world because of the mutual dependency of the following two statements: one, managers learn by example; and two, success feeds itself.
Managers in training are taught through case studies that put them into simulated environments. Corporate managers are told to adopt best practices and realign their processes to achieve benchmarks of operational efficiency. Throughout his or her career, a manager is supposed to learn from what others are doing; there is always a sword of benchmarking hanging overhead. But how often is a manager exposed to the flip side of a best practice? How many managers seek to find out why a particular company failed even after adopting CRM, TQM, Six Sigma or other best-of-the-world concepts?
The second statement merely postulates that, over a period of time, only successful companies remain in the game and failures are erased. Erased not only from the game, but from the minds of the people as well, unless their failure created havoc, as Enron’s did. So in a mature industry like steel or cement, when a manager looks out of his window he sees only successful companies, and he is led to believe that these companies are present because what they did was a best practice. He simply doesn’t have the data to check whether a failed company also adopted a particular best practice or course of action. He is actually seeing what statisticians call “a biased sample.” In contrast, a newly born industry (such as internet-based business models) is full of failures. Every day smaller companies are either being engulfed by big sharks or being deserted by investors. Here the data is actually available to test the hypothesis of adopting a best practice; here the sample will more accurately reflect the population.
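The biased-sample effect is easy to simulate (all figures below are assumed): a practice with no effect on survival still shows up in most surviving firms, simply because most firms adopted it. Only the failure data reveals that adopters and non-adopters fail at the same rate.

```python
import random

random.seed(1)

# Assumed setup: 80% of firms adopt a "best practice" that has NO effect
# on their 50% chance of surviving the decade.
firms = [(random.random() < 0.8, random.random() < 0.5) for _ in range(100_000)]

survivors = [adopted for adopted, alive in firms if alive]
failures = [adopted for adopted, alive in firms if not alive]

# Both groups adopted at roughly the same 80% rate, so the practice tells us
# nothing; yet a manager who sees only survivors observes that "most
# successful firms adopted it."
print(round(sum(survivors) / len(survivors), 2),
      round(sum(failures) / len(failures), 2))
```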
The word of caution for anyone who wants to adopt a particular business model, best practice or management concept is: don’t fall into the trap of biased or skewed data. Always look at the other side as well. Try to collect as much data on failures as you can. Tools are available to correct such anomalies in a given data set; in fact, two people have won the Nobel Prize for working on this. Those who still don’t want to pay heed may read the anecdote of Abraham Wald.
[During World War II, the Royal Air Force was suffering major casualties. The top brass called upon statisticians and engineers to suggest which parts of the airplanes should be reinforced so that there would be fewer crashes. Extensive data was collected from planes returning with hits. Certain parts/areas showed more damage than others. People unanimously decided that those areas/parts should be reinforced. However, Abraham Wald, the project manager, had a different opinion: he said to reinforce the parts showing less damage. And he made the right decision. Can you guess the logic?]