A Guide to Implementing the Theory of
Constraints (TOC)
Tell Me How You Will Measure Me

It has been said: “Tell me how you will measure me, and I will tell you how I will behave (1).”
The measurements in any organization are the No. 1 formal feedback
system in that organization. So let’s
start with measurements in order to better understand our current approach to
business, and to help us do that let’s also return to our simple model that
we first saw in the introduction. How would you be measured in such a system? If you recall it was likely that you were
in work up to your eyeballs. Is that
an issue? It certainly shows that your
position is important and necessary and that you have lots of work to get
through. If others have even larger
piles of work it might lead you to believe that they are not as efficient as
you are. Is relative efficiency an
issue? It would seem so.
Does this frustration ever look like becoming
despair? Well, probably annually, if
your company has performance reviews.
Maybe even more frequently if you are accountable for departmental
performance. Accountability for
departmental performance might be some derived efficiency report, or it might be some sort of derived profit or cost report. The whole
internal business performance measurement system is based upon local
optimization, either in the form of departmental utilization/efficiency
measures or as departmental cost/profit performance measures - or both! And we don’t need to limit ourselves to
departments within a business. It
could equally be businesses within a company, or companies within a
corporation. It takes some
conscious effort to realize that the formalization of local efficiency
measures through the activities of “scientific” management – Taylorism – is
only about 100 years old (2, 3).
Scientific management is such a seductive idea because it legitimizes the actions that we as individuals find so effective in our local settings (family, friends, etc.) and applies them directly to our work processes. We assume that
the total performance of the system is the sum of all the local
performances. In fact it is so common
that we probably don’t even give it much thought. This approach, then, is the reductionist/local optima
approach; departmental cost or efficiency is just a symptom or an output of
this method. Let’s look at local profit centers a little closer.
However, there is a condition that must be met in order to sum the local profits as we have just done. The condition is that there is independence
between departments. Let’s examine
that condition.
We typically determine success in several ways: in absolute terms by net profit, in relative terms by return on investment, or in survival terms by cash flow (4). But how do we relate our on-going
operational decisions – our departmental decisions – to overall system
success? How do we bridge between our
operational decisions and system success?
Goldratt and Fox have coined this bridge the “cost” bridge (4). Let’s draw it.
In the Haystack Syndrome,
Goldratt presented a small thought experiment, known as the P & Q problem
(5). It is named after the two products
it produces – “P’s” and “Q’s.” This
elegant little example, strictly educational, seems to have taken on a life of its own as it turns up in various examples to illustrate concepts as
broad as total preventative maintenance.
Often the numbers and the story have mutated somewhat in the process,
but at the heart is the P & Q problem. Because the example is educational, I have split it
out here into separate pages (you wouldn’t cheat but you might accidentally
see the answer before you see the question).
The P & Q question is here.
Try to work through it first.
The first part of the answer is here. Check the answer after your first attempt. The P & Q is important. Please try to do it before you go any
further. What happened in the P & Q when you worked through
it based upon your experience? You probably
tried to optimize it following some fairly rational arguments, and you probably also got a less than desirable answer.
What went wrong? Saving cost is synonymous with local
optimization. However, we have seen
from the example above that saving cost alone is not a sufficient bridge
between local actions and the bottom line measures of net profit, return on
investment, and cash flow. In fact
neither is maximizing labor utilization nor any other of the local optimizations. Maybe the P&Q was an exceptional example? However, if you want to believe this, then
please read Debra Smith’s war stories in “Unbelievable decisions by companies
you would know if I could name them (6).”
In fact, to do it justice, please don’t stop at the end of the first chapter of that book; read the whole book. What then if you were to observe the financial
manager of a large business or the owner/manager of a small business over a
period of time? Then, I think that you
will see that decisions based upon cost are indeed undertaken – up to a
point. That point is where the
person’s intuition takes over and the cost based decision is overturned or
moderated (moderated by common sense as it happens). The important point is that intuition
should at some point take over. The
danger is that non-operational people, or financial people who “believe” the
numbers but don’t have access to their composition, may make erroneous
decisions as a consequence. Cost-based
decisions often give rise to the wrong answers. Thus the bridge between local actions and the bottom
line results of net profit, return on investment, and cash flow must be based
on cost + intuition, and not on cost alone (4).
Well human endeavor sort of “grew” into this
mess. Think about it for a
moment. It’s not all that long ago
that there were no process-based businesses.
Certainly the industrial revolution – the steam powered one that is –
is only about 200 years old. And the
earlier phase of the industrial revolution – that waterwheel powered one –
extends that time frame back about another 100 years. Before the industrial revolution there were
no process industries, only cottage industries. In a cottage industry – even where whole towns were involved – there were many small parallel systems of few parts: fulling, spinning, dyeing, and weaving wool spring to mind. In fact they are rather like the first
profit/cost system diagram that we drew with totally independent inputs and
outputs in each department. With the
onset of the industrial revolution there was a move from many small parallel
systems to a smaller number of larger serial systems of many parts. The first process industries grew out of linen milling and similar agriculture-based processing. The rest, as they say, is history. Of course a large number of parallel systems in a
loose network is an extremely robust form of process (8). However, as we will learn in later pages,
there are ways to create very robust serial systems as well. In all pre-industrial history local optimization
equaled global optimization – they were one and the same. However, as a process becomes more serial
in nature it is less likely that local optimization can equal global
optimization. We outgrew local
optimization in serial (industrial) processes, but we forgot to replace local
optimization with something else. Let’s turn to
natural systems for a moment for some guidance, “Living systems have
integrity. Their character depends on
the whole. The same is true for
organizations; to understand the most challenging managerial issues requires
seeing the whole system that generates issues (9).” Let’s start again from scratch then – or, if we want to be more proper, from first principles. It seems that we need to
know what the system is that we are dealing with, where it starts, and where it ends. We need to know
what the system exists for, and we need to know how to measure progress
towards the reason for its existence. Scheinkopf expresses this as (10):
(1) Define the system and its purpose.
(2) Determine the system’s fundamental measurements.
Why do we have this particular order? Well, the organization should in fact define the measurements, rather than the other way around where the measurements define the organization. Margaret Wheatley is more articulate.
She argues that in too many organizations “… the measures define what is
meaningful rather than letting the greater meaning of the work define the
measures. As the focus narrows, people
disconnect from any larger purpose, and only do what is required of them
(11).” We can’t afford to have people disconnect from the larger purpose; we are not going to let that happen here. So, let’s expand this expression out a little more to get the following:
(1) Define the system.
(2) Define the goal of the system.
(3) Define the necessary conditions.
(4) Define the fundamental measurements.
This is going to be our basis, our rules of
engagement. Let’s look at each facet
in turn. Let’s return to our simple model of a system
again. It seems that we are defining
our systems as something like the beginning + middle + near-the-end +
end. We will call it “our system” for
short.
And let’s not leave out the not-for-profits. How about a public health system as an
example?
“’The owners have the sole right to determine the
goal.’ If we are dealing with a
privately held company, no outsider can predict its goal. We must directly ask the owners (12).” For a company whose shares are traded in
the open market “a company’s goal is to make more money now as well as
in the future.” I have emphasized the word “open” because in some instances
publicly held companies are not traded in the open market. Consider Japan for instance. Many publicly listed companies in Japan
have tightly held multiple cross-shareholdings (and often considerable debt
finance). In instances like this we
might expect that these companies will behave more like a private company
would in other parts of the world. It is not safe to assume that making money is the goal of these organizations. Returning to our definition of the goal in openly traded companies, most often we find that the “more money” has been dropped
from the definition, and that is what we will adopt here. However, the word “more” indicates that the
goal is in fact open-ended. This highlights the fact that, in contrast to a “necessary condition,” you can’t
have enough of the goal. Maybe in the
context of money, therefore, the word “more” is redundant. You can test this: ask someone, anyone, whether they would like to make more money now and in the future or whether they are content with what they currently receive. Let’s write the goal for a public company traded on
the open stock exchange.
Once we have
defined the goal, we must define any necessary conditions. Necessary conditions are minimum levels of
other entities that must be
present in order to satisfy the goal.
In this respect necessary conditions can be viewed as having
limits. Once a necessary condition is
satisfied, additional levels of input will not result in an increased
attainment of the goal. The two most
generic necessary conditions are (12):
(1) Provide employees with a secure and satisfying workplace now and in the future.
(2) Satisfy customers now and in the future.
Let’s add
these to our goal.
Is this too
pecuniary for you? We could rearrange
it a little.
But what if we
are a not-for-profit? Such as a
government health provider. “Necessary
conditions are important to identify, especially for not-for-profit
organizations. Sufficient cash is
usually a necessary condition.
Expenses might constitute a necessary condition if, for example,
funding levels are fixed (15).” That
is to say, some not-for-profit organizations, charitable trusts for instance,
might carry out some trading activity that is used to raise sufficient cash
to carry out their goal. Others that
rely upon fixed funding must watch their outgoings with utmost care. Let’s rewrite
this diagram for a not-for-profit.
We need to
determine the fundamental measures for our system and then ensure that our performance
measures are subordinated to these fundamental measures. “Not just any measurements, but
measurements that will enable us to judge the impact of a local decision on
the global goal (16).” “Measurements
are a direct result of the chosen goal.
There is no way that we can select a set of measurements before the
goal is defined.” The measurements
should enable us to judge whether a local decision has an impact on the global goal (16). In a
commercial organization the fundamental measures are defined by the following
questions (16):
(1) How much money is generated by our company?
(2) How much money is captured by our company?
(3) How much money do we have to spend to operate it?
Essentially we are asking: what is the flow of money into the system, what is the flow of money out of the system, and how much is kept in the system? Goldratt calls these three measures
Throughput, Inventory, and Operating Expense.
These are often shortened to T, I, and OE and are defined as follows (16):
(1) Throughput is the rate at which the system generates money through sales.
(2) Inventory is all the money that the system invests in purchasing things which it intends to sell.
(3) Operating expense is all the money the system spends in order to turn inventory into throughput.
Throughput can
be considered to be revenue less totally variable costs (17). A totally variable cost is anything that
varies in a direct 1:1 relationship with sales; for instance, transportation charges or commission charges might be totally variable costs. If the company produces and sells another
unit of product it will incur this cost, and if it produces one unit less it
will not incur this cost (18). Direct
labor, therefore, is not included as a totally variable cost. We need to be clear: variable costs are variable per unit sale. Throughput is the financial value of an organization’s output and must be measured in
monetary units. Output is the volume
of product or service produced by the organization and is measured in
physical units of some sort (19). The next major
measure, inventory, includes not only the traditional classes of raw
material, work-in-process, and finished goods inventory, but all other money
invested in the organization such as in buildings, machinery, and other
capital items – in other words the total investment. More recently the term investment has been
used synonymously with inventory. Operating
expense is the on-going cost of running the business including both direct and
indirect labor. You might like to
consider these as the unavoidable costs of doing business. In an example that I like to use, I ask
people to imagine for instance what would happen if we were to close a ward
in a public service hospital to patients for one week. This has certainly happened in the name of
saving costs. How much cost do you
think is saved? In all honesty? Probably not very much at all. The unsaved portion is the unavoidable
operating expense. Of the saved portion in this example – the avoidable cost – some may be directly variable cost, and some may indeed be operating expense. Although changes in operating expenses are
not directly variable with volume this certainly doesn’t mean that they are
not variable over time (20). As production
increases or decreases over time, so too might the operating expense. However, as we shall soon see, we should strive to hold operating expense constant while increasing throughput. It seems that
accountants are more familiar with the term “period costs” or period expenses
rather than operating expense (21).
This more clearly accommodates changes over time. Thus another way to consider operating
expense is that if an expense is incurred per unit of time – be that hourly,
daily, weekly, monthly, or whatever – then it is an operating expense. It isn’t considered to vary in any direct
relationship with the number of units processed. We can now see
that considering all labor as an operating expense is consistent with this
definition. “Labor is normally
purchased in units of time – by
compensating people for hours per week or month, or, in the case of salaried
employees, for a year (22).” Let’s attach
these new labels to the bar graph that we used in the previous section on the
bottom line.
Throughput = Sales - Totally Variable Costs
Net Profit = Throughput - Operating Expense
Using period expenses rather than operating expense may make matters clearer (21):
Net Profit = Sales - Totally Variable Costs - Period Expenses
Net profit is the bottom line measure that we use the most. But we must also consider return on investment (16, 17).
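In terms of the fundamental measures, return on investment is usually written as:
Return on Investment = (Throughput - Operating Expense) / Inventory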
There is a further useful measure, expressed as a ratio of these fundamental operating measures (16, 17); it is a measure of productivity:
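Productivity = Throughput / Operating Expense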
Let’s draw both of these situations.
We could also graph these two situations.
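To make these relationships concrete, here is a minimal sketch in Python; the figures are illustrative assumptions, not numbers taken from this page.

def bottom_line(sales, totally_variable_costs, operating_expense, inventory):
    # Throughput = Sales - Totally Variable Costs
    throughput = sales - totally_variable_costs
    # Net Profit = Throughput - Operating Expense
    net_profit = throughput - operating_expense
    # Return on Investment = (Throughput - Operating Expense) / Inventory
    return_on_investment = net_profit / inventory
    # Productivity = Throughput / Operating Expense
    productivity = throughput / operating_expense
    return throughput, net_profit, return_on_investment, productivity

# Illustrative figures only: $1,000,000 of sales, $300,000 of totally
# variable costs, $500,000 of operating expense, $800,000 of investment.
t, np, roi, prod = bottom_line(1_000_000, 300_000, 500_000, 800_000)
print(t, np, roi, prod)  # 700000 200000 0.25 1.4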
The accent is on process profit, not individual
product profit. We should apply the
productivity definition as a test not only at a tactical level (improvement)
but also at a strategic level (investment). We need to be careful when we define the fundamental
measurements. It is insufficient to add new and relevant measures. We must also remove old
and irrelevant measures.
Leaving old and irrelevant measures in place is a common and
disastrous mistake. We might think: what harm could leaving our old and familiar measurements in place possibly do? The answer is that they can do a
great deal of damage. Well, a not-for-profit – or, expressed more positively, a for-cause organization – doesn’t look much different.
Scheinkopf
notes that at not-for-profit organizations “there is a tendency to believe
that the measures are so intangible and that attainment of purpose is such a
subjective call, that such measures are simply not discussed. The focus ends up to be on measuring and managing the things we call ‘tangible,’ such as money (23).” We can easily see this in the New Zealand
public health system where district health boards are charged with making an
adequate return to the Government on its investment. One way to meet that is to defer
operations! In the section
on marshalling and replenishment in supply chain, two easy-to-implement and
highly relevant non-financial measures are offered for healthcare. We saw earlier
in this section that the cost bridge does not always lead to the best
decisions being made. You might have experienced this directly if you used your experience to answer the first part
of the P&Q problem. We were left
in the undesirable position of having to use cost + intuition if we wanted to
link local actions to bottom line results. Taking a more
systemic approach we have defined three measures – throughput, inventory, and operating expense – and shown through a set of definitions that each of these
measures has an effect on the bottom line measures that we have
proposed. If we look at the
definitions above for a profit-based organization we can see the following: if throughput
increases, then net profit, return on investment and cash flow will also
increase. If we decrease operating
expense, then net profit, return on investment, and cash flow will
increase. If inventory decreases, then return on investment and cash flow increase, while net profit decreases. Goldratt and Fox summarized
the situation in the following diagram (24).
Accrual
accounting tells us that as inventory increases net profit must also
increase, and yet nowadays most people understand that increasing inventory
in the long run is harmful. We will
examine the role of inventory as outlined by Goldratt and Fox in 1986 in the
section on drum-buffer-rope. It is
sufficient for now to know that in fact decreasing inventory increases
future throughput. Thus we can
reconcile our experience of the success of low inventory systems such as just-in-time
and complete our bridge (24).
In fact maybe
our thinking in drawing these diagrams is also subject to inertia. We should remove the departmental
boundaries that we have used to date.
Let’s see how it looks.
Well yes, an
“extreme form of variable costing” and one in which “financial reports are consequently
much simpler and easier to understand and can be compiled more quickly and
frequently than conventional financial reports (25).” Hmm; quick to compile, frequent, and easy
to understand. Sounds like a
prescription for a real management decision support methodology. Let’s discuss this more fully in the
section on accounting for change. How did we get
this far in discussing measurements without even mentioning constraints? The cue was a paragraph or so ago. If we want to increase throughput we had better know where the constraint is in the system and how to maximize its capability. Or to put it another way:
we now know how to relate local actions to the bottom line, but we still need
to know how to evaluate the local actions themselves. “The key to know what to do locally is the
realization of the role the system constraints are playing (26).” In fact now
would be a good time to return to the P&Q problem for the second part of
the answer. I strongly recommend that
you have a look at this here before continuing on. So long as
system throughput exceeds operating expense then we know we are making a
profit. At the product level however
it is essential to know at least the relative contribution of
different products. In order to do
this we need to know where the constraint is. We can modify
our departmentalized system model to reflect this reality. Let’s draw it with a constraint in the
department near-the-end.
Resist all temptation to allocate the total operating expense to the
constraint operating time to derive an operating cost per unit time on the
constraint. Many people do this – I have done it; it seems so natural. It is not, however, a part of Theory of Constraints. Some software vendors sell this type of calculation as bottleneck accounting; it isn’t (27). Purge it from your mind. In fact, let’s replace our departmentalized
view once again with a more systemic approach. And for completion let’s add all of our
flows in and out of this system. This
is an important model; we will refer to it again.
Yes, you are
absolutely right. In fact, if you
approached an unknown process and you had sufficient data, and that data was
accurate – then linear programming and Theory of Constraints would both
arrive at the same results regarding the location of the constraint and the
throughput that it could
generate (31). But tell me in
all honesty – have you ever achieved a bottom line result that was anything
like the objective target in the linear program? Probably not, not without lots of padding
in the assumptions. Sure, the data
probably wasn’t complete, the picture changed after the model was run, the
numbers weren’t as accurate as you would have liked, but the real issue is
that linear programming still does not furnish the logistical scheduling and
control needed to obtain the calculated result. Drum-buffer-rope does. Put another
way, linear programming yields the “what” – the result, without ever
addressing the “how” – the process. It
addresses an ideal “end” without addressing the “means” from which it is
derived. The production solution for
Theory of Constraints, drum-buffer-rope, gives you a real chance of realizing
the objective function of a linear program.
It gives you an operational methodology that will allow you to attain
the objective function. If we look at
linear programming carefully then it becomes apparent that it is, in fact, a
detail complexity approach to a dynamic complexity problem. Without the detail you cannot solve the
dynamics in this instance. As Johnson
and Kaplan explain, the whole of the operations research development, of
which linear programming is a major part, is an outgrowth of scientific
management from 50 years earlier (32).
Scientific management deals in detail complexity. Perhaps a more
fundamental point, however, is that linear programming is an optimization
process within the bounds of existing constraints. As we are going to learn soon, we want at
the very least to challenge the assumptions about the existing constraints,
not just to accept them as they are, and if at all possible to bust the
existing constraint in favor of the next constraint. That way we move the whole system forward
to a new level of achievement. However, the
undeniable power of linear programming is as a tool in drum-buffer-rope
analysis. Once a constraint has been
located and managed under drum-buffer-rope, linear programming allows you to
evaluate complex product mix considerations with ease. Even using rough and ready data will provide
ready indications for multiple “what ifs?”
The point is that once you know what data is important and what data is not, you can tailor the model accordingly.
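As an aside, here is a minimal sketch of such a product-mix “what if?” using linear programming in Python. The two products, their throughputs, constraint minutes, capacity, and demand bounds are illustrative assumptions in the spirit of the P & Q problem, not its actual data.

from scipy.optimize import linprog

throughput = [45, 60]         # $ per unit for products X and Y (assumed)
minutes = [[15, 30]]          # minutes per unit on the constraint resource (assumed)
capacity = [2400]             # constraint minutes available per week (assumed)
demand = [(0, 100), (0, 50)]  # market demand bounds per product (assumed)

# linprog minimizes, so negate the throughputs to maximize total throughput.
result = linprog(c=[-t for t in throughput], A_ub=minutes, b_ub=capacity,
                 bounds=demand, method="highs")
print(result.x, -result.fun)  # optimal mix and the throughput it generates

Change a demand bound or the capacity figure and re-run it, and you have a ready indication for the next “what if?”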
As we have
drawn the diagrams and considered the situation so far, the constraint has
been internal to the system. What
happens when the constraint moves out into the market? Well, firstly, there will still be one “weakest
link” in the internal system even when the constraint is in the market. As we shall see in the section on the
production application, drum-buffer-rope, the internal constraint becomes a
control point synchronized with the market demand. However, financial considerations may
change. Let’s examine this with the fourth
and final part of the P&Q answer here. Once the
constraint is truly in the market place a number of new possibilities exist
for the manufacturing process. Rather
than use drum-buffer-rope, a more recent development called simplified
drum-buffer-rope can be used (33).
This is described briefly at the end of the section on
drum-buffer-rope. In addition the
process may be able to switch to frequency based refill as described in the section
on distribution. We have
successfully derived the global operating measurements of throughput,
inventory, and operating expense. We
now know how to leverage these through knowledge of the constraints to
maximize our bottom line impact. But
how do we ensure local alignment within subsections of our system? We can’t use throughput, inventory, and
operating expense for parts of the system because they are whole system
measures. “Local performance
measurements should not judge the end result, rather they should judge only
the impact the local area being measured has on the end result. Local performance measurements should judge
the quality of the execution of a plan, and this judgment must be totally
separate from judging the plan itself (34).” Let’s use our
own experience of public health waiting lists to examine the two key local
performance measures. What is the plan
in this case? Surely it is to provide
a timely and appropriate outcome. Well
the appropriateness of the outcome will be on a case-by-case basis but we can
investigate the timeliness of the matter. One aspect of
timeliness is how long we have to wait.
To answer this we need to know what the inventory is in a public
health system. How about the patients? They are certainly a major component – that is, after all, why the system
exists. Let’s say for instance that a
certain outpatients’ clinic for referrals has 50 people on the waiting list
at any one time, and last year these people waited on average for 12 weeks; this year we still have 50 people on the waiting list at any one time, but they now wait on average for 16 weeks.
If you are a politician you will say the waiting list is exactly the same. However, we know that last year there was on average 12 weeks by 5 days per week by 50 people = 3000
patient-days-waiting. In comparison,
this year there are 4000 patient-days-waiting on the list. Is the performance better or worse? It’s worse of course. If we can stop patient-days-waiting from
increasing, or better still reduce it, then we must have improved the
system. How would such a local
performance measure look? Let’s add it
to our diagram.
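A minimal sketch of this measure, using the figures from the example (5 clinic days per week, as above):

def patient_days_waiting(patients_on_list, average_wait_weeks, days_per_week=5):
    # patients on the list x average weeks waited x clinic days per week
    return patients_on_list * average_wait_weeks * days_per_week

print(patient_days_waiting(50, 12))  # last year: 3000 patient-days-waiting
print(patient_days_waiting(50, 16))  # this year: 4000 patient-days-waiting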
What happens
then in a for-profit situation? Well,
we can attach the raw material cost to each item of inventory in the system
and multiply it by the days of waiting in a particular subsystem of the
process to obtain total inventory-dollar-days
waiting in that subsystem. Businesses
have all the components of this information already; it just needs a line of
code to produce the result. Another aspect of timeliness is that regardless of how long we must
wait, do we still receive attention at the end of the wait, or are we late? Let’s continue with our analogy.
We have 50 patients on our waiting
list and we assumed that last year our patients were expected to be seen by a
specialist within a recommended guideline of 12 weeks of referral. Some, however, weren’t seen within this
time-frame. Let’s say that 3 patients
were seen after 13 weeks and 2 were seen after 14 weeks. Again we might argue that just 1 in 10
patients were not seen within the recommended guidelines. However, a more realistic measure is that 3
were 1 week late and 2 were 2 weeks late.
This gives us 1 week by 5 days per week by 3 patients plus 2 weeks by 5 days per week by 2 patients = 35 patient-late-days. Is this bad? Of course it is; it should be zero. Let’s add this measure to our diagram as
well.
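A minimal sketch of this lateness measure, using the figures from the example:

def patient_late_days(groups, days_per_week=5):
    # groups is a list of (number of patients, weeks late) pairs
    return sum(patients * weeks * days_per_week for patients, weeks in groups)

print(patient_late_days([(3, 1), (2, 2)]))  # 35 patient-late-days; the target is zero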
In a
for-profit situation we can attach the throughput value (sales price –
totally variable costs) to each late product/procedure and multiply by the
number of days late to obtain throughput-dollar-days
late. Again businesses have all the
components of this information already; it just needs a line of code to
produce the result.
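As the text says, each measure really does need only a line of code or so; here is a minimal sketch with hypothetical records (the costs, values, and days are invented for illustration):

# (raw material cost in $, days waiting) for each item in a subsystem
waiting = [(120.0, 4), (75.0, 9)]
inventory_dollar_days = sum(cost * days for cost, days in waiting)

# (throughput value in $, days late) for each late product or procedure
late = [(450.0, 2), (900.0, 1)]
throughput_dollar_days = sum(value * days for value, days in late)

print(inventory_dollar_days, throughput_dollar_days)  # 1155.0 1800.0

In addition to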
subsystems within an existing business we could equally use either of these
measures in individual businesses that are aligned with one another in a
supply chain. By using inventory
waiting and lateness expressed as unit-days we obtain a measurement of system stability; we can quickly see where the system or a subsystem is becoming misaligned and therefore take corrective action. Inventory-dollar-days waiting and
throughput-dollar-days late are measures that can be used within all of the
Theory of Constraints logistical solutions, and along with buffer management
they are a vital part of the feedback and control mechanism. We could have
used these measures in the departmental example at the beginning of this page – except that the departments weren’t treated as sub-systems; they were treated as a group of stand-alone systems. Would using
these local performance measures have made that system easier to manage and
more satisfying to work in? I think
so, but to do so requires explicit knowledge of the system, the goal, the fundamental measures, and the role of the constraints. Let’s use an
airline as an analogy to make sure that we completely understand the
ramifications of throughput per unit of scarce resource. Let’s use for this example the current
discounted prices for an around the world ticket originating from New
Zealand. The prices are NZD 2,200 for
economy class, NZD 7,100 for business class, and NZD 9,700 for first class. These are our products. What is the constraint? Well, for an airline the ultimate constraint must be the seating space per flight.
We can’t add an extra carriage as a train can, nor can we shorten it. Planes come in units of 1. And usually quite big units of 1 at
that. In fact in this example we will
use the main deck seating on a 747-400. If we treat
the flight as the constraint then how do we maximize our profit? We could sell only first class tickets –
and wait for a plane load of first class passengers – maybe once a month. Clearly on price they are the most
valuable. Or because we could most
easily get a load of economy class passengers we might just forget about the
other two classes and become a no-frills airline. However, at the moment, we assume a
full-service airline, so let’s work with that reality. Let’s work out
the throughput. In order to
know the throughput we must subtract any variable costs. Fuel is obviously a variable cost, but its actual cost is probably commercially sensitive.
However, it is also uniform regardless of class so we can ignore it
for the moment. Let’s assume then that
an economy class passenger consumes, in value, about $200 worth of food. The business class passenger twice as much,
and the first class passenger three times as much. This allows us to calculate the throughput.
So a first
class passenger generates $9100 of throughput per seat, business class $6700 per
seat, and economy just $2000 per seat.
First class passengers still look like the best deal. However, let’s see how many seats per row
we have.
First class
has 4 seats per row, yielding $36,400 per row. Business class has 7 seats per row,
yielding $46,900 per row. Economy has
10 seats per row, yielding $20,000 per row. Hmm, suddenly the
business class passenger is looking like a decidedly better deal. Economy class certainly doesn’t generate as
much as first class. However, to be more accurate yet, we really need to know how many rows of each different
class we can get in. As you know, the
knee room in economy isn’t always generous; you can get more rows of economy
in than either of the other two classes.
Let’s look at the seat pitch then.
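Here is a minimal sketch of the whole calculation. The ticket prices and catering costs are those used above; the seat pitches are assumptions for illustration only, so substitute the actual cabin layout.

cabins = {
    #            (ticket NZD, food NZD, seats per row, pitch in inches - assumed)
    "first":    (9_700, 600, 4, 74),
    "business": (7_100, 400, 7, 47),
    "economy":  (2_200, 200, 10, 32),
}

for name, (price, food, seats, pitch) in cabins.items():
    per_seat = price - food       # throughput per seat
    per_row = per_seat * seats    # throughput per row
    per_inch = per_row / pitch    # throughput per row inch
    print(name, per_seat, per_row, round(per_inch))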
In fact what
we have done is converted throughput per row to throughput per row inch. Of course you can’t replace just an inch, but
it does give us a good idea of the relative earning power of each type of
product (sorry – customer). Now, the
surprise is that a business class passenger is worth almost twice as much as
a first class passenger and 60% more than an economy class passenger. And a first class passenger is worth less than an economy class passenger, even though they paid 4 times as much for the ticket! This example
holds when airline capacity is the constraint – when the constraint is
internal to the system. When the constraint becomes external – when it moves to the market – a different set of logic applies. We will examine this in more
detail on the accounting for change page. Of course
airlines are masters at load maximization within these classes, opening and
closing discounted fares according to complicated algorithms based upon
demand and date of departure. However,
look at the original ticket prices. If
you were a sales person from outside of the airline industry, then based upon ticket price, which type of seat would you rather sell? Which type of seat would you prefer that your marketing people supported? Now
based upon revenue per row inch which type of seat is better for the
airline? Which type of seat should the
marketing people support? Well, we were just playing with numbers. We should
hope there is sufficient throughput to pay for all the operating expenses
(including the fuel). However, this
example does help to illustrate the absolute need to determine the constraint, because we can only make relevant business decisions after we know where the constraint is. In fact it doesn’t matter what we do:
whether it is machining one-off endmills for cutting the inside casing of a
steam turbine, or converting a saw log into timber, the principle is exactly
the same. Did you feel
at all uneasy at the prospect of the suggested 10% across-the-board reduction
in operating expense in the previous section on the bottom line? Sure it looked good; it should have
resulted in a 13% increase in profit.
But would we have really achieved that? Was that
unease your intuition telling you that across-the-board cuts not only cut the
fat, they also cut production? Let’s
have a look at this. And let’s assume
that our 10% cut does in fact reduce our output by 10% as well because it
reduces the productivity of our scarce resource – our constraint – in direct
proportion. Of course it could be worse, or it could be better.
90% total - 36% operating expense - 27% raw material = 27% profit.
(27% new profit - 30% old profit) / 30% old profit = 10% decrease.
Well maybe
this is a more rigorous way of saying that if we decrease operating expense
by 10% and we cause output to decrease by 10%, then raw material must decrease by 10% and profit will decrease also. There remains
another question: if you do cut operating expense by 10% and the market picks
up, are we going to be on the ball, or behind the game? Does this sort
of thing really happen? I think you
can answer that question yourself.
We then developed a sequence that allows us to adopt a more systemic/global optimum approach. We can summarize this as:
(1) Define the system.
(2) Define the goal of the system.
(3) Define the necessary conditions.
(4) Define the fundamental measurements.
(5) Define the role of the constraint(s).
We might call these our rules of engagement. Using the same
example we found that using this sequence and our own intuition we were able
to tie local decisions to the bottom line objectives of the system. We still need to examine the focusing
process for improvement, but first let’s examine the role of people in these
systems.

(1) Goldratt, E. M., (1990) The haystack syndrome: sifting information out of the data ocean. North River Press, pg 26.
(2) Kanigel, R., (1997) The one best way: Frederick Winslow Taylor and the enigma of efficiency. Viking, pp 490-499.
(3) Johnson, H. T., and Kaplan, R. S., (1987) Relevance lost: the rise and fall of management accounting. Harvard Business School Press, pp 47-59.
(4) Goldratt, E. M., and Fox, R. E., (1986) The Race. North River Press, pp 20-31.
(5) Goldratt, E. M., (1990) The haystack syndrome: sifting information out of the data ocean. North River Press, pp 64-78 & 86-99.
(6) Smith, D., (2000) The measurement nightmare: how the theory of constraints can resolve conflicting strategies, policies, and measures. St. Lucie Press/APICS series on constraint management, pp 1-20.
(7) Schragenheim, E., and Dettmer, H. W., (2000) Manufacturing at warp speed: optimizing supply chain financial performance. St. Lucie Press, pg 235.
(8) Wheatley, M. J., and Kellner-Rogers, M., (1996) A simpler way. Berrett-Koehler Publishers, pg 23.
(9) Senge, P. M., (1990) The fifth discipline: the art & practice of the learning organization. Random House, pg 66.
(10) Scheinkopf, L., (1999) Thinking for a change: putting the TOC thinking processes to use. St. Lucie Press/APICS series on constraint management, pp 23-24. Also see: Mabin, V. J., and Balderstone, S. J., (2000) The world of the Theory of Constraints: a review of the international literature. St. Lucie Press, pg 7.
(11) Wheatley, M. J., and Kellner-Rogers, M., (1999) What do we measure and why? Questions about the uses of measurement. Journal for Strategic Performance Measurement, June.
(12) Goldratt, E. M., (1990) The haystack syndrome: sifting information out of the data ocean. North River Press, pp 11-13, 49.
(13) Goldratt, E. M., (1994) It’s not luck. The North River Press, Chapter 30.
(14) Reichheld, F. F., (1996) The loyalty effect: the hidden force behind growth, profits, and lasting value. Harvard Business School Press, 322 pp.
(15) Newbold, R. C., (1998) Project management in the fast lane: applying the Theory of Constraints. St. Lucie Press, pg 228.
(16) Goldratt, E. M., (1990) The haystack syndrome: sifting information out of the data ocean. North River Press, pp 10, 14, 19, 23, 29, 31-35.
(17) Noreen, E., Smith, D., and Mackey, T., (1995) The Theory of Constraints and its implications for management accounting. North River Press, pp 12, 13, 80.
(18) Corbett, T., (1998) Throughput accounting: TOC’s management accounting system. North River Press, pg 43.
(19) Schragenheim, E., and Dettmer, H. W., (2000) Manufacturing at warp speed: optimizing supply chain financial performance. St. Lucie Press, pp 228-229.
(20) Schragenheim, E., (1999) Management dilemmas: the Theory of Constraints approach to problem identification and solutions. St. Lucie Press, pg 128.
(21) Caspari, J. A., and Caspari, P., (2004) Management dynamics: merging constraints accounting to drive improvement. John Wiley & Sons Inc., pp 3-4 & 36.
(22) Schragenheim, E., and Dettmer, H. W., (2000) Manufacturing at warp speed: optimizing supply chain financial performance. St. Lucie Press, pg 42.
(23) Scheinkopf, L., (1999) Thinking for a change: putting the TOC thinking processes to use. St. Lucie Press/APICS series on constraint management, pg 25.
(24) Goldratt, E. M., and Fox, R. E., (1986) The Race. North River Press, pp 31-67.
(25) Noreen, E., Smith, D., and Mackey, T., (1995) The Theory of Constraints and its implications for management accounting. North River Press, pg xxviii.
(26) Goldratt, E. M., In: Cox, J. F., and Spencer, M. S., (1998) The constraints management handbook. St. Lucie Press, pg x.
(27) Caspari, J. A., and Caspari, P., (2004) Management dynamics: merging constraints accounting to drive improvement. John Wiley & Sons Inc., pg 24.
(28) Dettmer, H. W., (1998) Breaking the constraints to world class performance. ASQC Quality Press, pg 37.
(29) Schragenheim, E., and Dettmer, H. W., (2000) Manufacturing at warp speed: optimizing supply chain financial performance. St. Lucie Press, pp 225-244.
(30) Corbett, T., (1998) Throughput accounting: TOC’s management accounting system. North River Press, pp 41-80 & 119-137.
(31) Mabin, V. J., and Gibson, J., (1998) Synergies from spreadsheet LP used with the Theory of Constraints: a case study. Journal of the Operational Research Society, 49, pp 918-927.
(32) Johnson, H. T., and Kaplan, R. S., (1987) Relevance lost: the rise and fall of management accounting. Harvard Business School Press, pp 169-172.
(33) Schragenheim, E., and Dettmer, H. W., (2000) Manufacturing at warp speed: optimizing supply chain financial performance. St. Lucie Press, 342 pp.
(34) Goldratt, E. M., (1990) The haystack syndrome: sifting information out of the data ocean. North River Press, pp 144-155.

This Webpage Copyright © 2003-2009 by Dr K. J. Youngman