
Cycle Counting
This department is provided to answer technical questions regarding problems in production and inventory control. Readers are invited to contact George Johnson, APICS National Research Committee, Rochester Institute of Technology.
The March column dealt with the topic of procedure manuals, cited references thereto, and invited readers to share their experiences on the subject. Two readers replied with information which is summarized below for the benefit of all.
Jim Krupp called to suggest that those interested in creating procedure manuals might refer to QS-9000, in particular its OEM Tier 1 requirements, and to the QS-9000 supplement MS-9000, which deals specifically with materials management. QS-9000 is an automotive industry version of ISO-9000, created by the joint efforts of Chrysler, Ford, and General Motors. It is intended to define the basic quality system requirements for internal and external suppliers of production and service parts and materials.
Jim Dean of Northwest Airlines called to explain how his organization creates procedure manuals such that they may also be used directly for training. His guidelines are:
Introduction. Explain why the procedure exists and who is responsible for its origin and maintenance.
Key policy statements and standards. List them (e.g., obtain three bids for buys exceeding $5,000; use Form 273L; the hurdle rate for capital projects is 20 percent).
Flowchart of the process. Provide an overview and portray connectedness.
Procedures, written in steps. Include decision tables (if-thens) that correspond to decision diamonds in the flowchart; this creates white space and focus. Amplify relatively simple policy statements and standards, if appropriate.
Appendices. Use as needed for supporting details and illustrations (e.g., screen prints and forms with highlighted fields, field descriptions, expanded explanations of very complex policies or standards).
In Mr. Dean's experience, if these guidelines are followed, the documentation can be used directly for training.
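The decision-table technique in Mr. Dean's guidelines can be sketched as data. A minimal sketch follows; the purchasing conditions and actions are invented for illustration (only the three-bids-over-$5,000 rule comes from the example policy statements above):

```python
# Hypothetical sketch of a decision table of the kind the guidelines pair
# with flowchart decision diamonds. Rules are (condition, action) pairs,
# evaluated in order; the first matching condition wins.

DECISION_TABLE = [
    (lambda amt: amt > 5_000, "obtain three bids"),            # from the text
    (lambda amt: amt > 500,   "obtain one written quote"),     # invented
    (lambda amt: True,        "buy from approved supplier list"),  # invented
]

def next_step(purchase_amount):
    """Walk the table and return the first matching action."""
    for condition, action in DECISION_TABLE:
        if condition(purchase_amount):
            return action

print(next_step(12_000))  # -> obtain three bids
```

Keeping the rules as data rather than nested if-statements mirrors the tabular form a reader sees in the manual, so procedure and training material stay in sync.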
Reply: According to the standard you set in advance, if the physical count of a C-item is within plus/minus 5 percent of the on-hand balance indicated in the inventory record, the result is acceptable. Thus, according to the rules, you could simply tolerate the 2 percent error as common variability and ignore both the potential record adjustment and the root cause of the error. It can be costly to do root cause analysis, change and standardize procedures, and document and post changes to the records, possibly more than the C-item's excess or absence is worth. Alternatively, you could adjust the inventory record and ignore the error's cause, but perhaps log the occurrence for investigation if and when the team has done all its higher priority work. Of course, the trail will be cold by then.
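The acceptance test described above is simple arithmetic. A minimal sketch, assuming the class-based tolerance is expressed as a percentage of the recorded balance (the function name and quantities are invented for illustration):

```python
# Hypothetical sketch: is a cycle count acceptable under a class-based
# tolerance, such as the plus/minus 5 percent C-item standard in the text?

def count_is_acceptable(recorded_qty, counted_qty, tolerance_pct):
    """Return True if the count is within +/- tolerance_pct of the record."""
    if recorded_qty == 0:
        # Any discrepancy against a zero balance counts as an error.
        return counted_qty == 0
    error_pct = abs(counted_qty - recorded_qty) / recorded_qty * 100
    return error_pct <= tolerance_pct

# The case in the question: a 2 percent error against a 5 percent standard.
print(count_is_acceptable(recorded_qty=100, counted_qty=98, tolerance_pct=5.0))  # True
```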
Some would argue that C-items don't warrant a cycle counting degree of attention and expense; that they should be managed with simple procedures such as the two-bin system, with plenty of extra stock kept on hand. This certainly could be argued for inexpensive hardware, for example, where the total annual usage of an item might be worth $200. Just posting a record change can exceed the annual cost of the item. A counter argument is that a small, inexpensive item can be a critical ingredient of dependent demand, the absence of which can shut down lines or cause other costly disruptions. This is true, but then is it really a C-item? Doesn't such an item deserve closer control than the C category implies? This calls for a brief discussion of ABC classification and the relationship of classification to standards, the third option cited in the original question.
The underlying reason for using ABC classification is the assumption that some items require closer control than others. This seems logical since cotter pins and washers are not of the same importance as jet engines, gold ingots and nuclear weapons, for example. The focus of typical ABC analysis is the annual usage of items expressed in cost dollars. Find the 10 to 20 percent of items with the largest annual flow of dollars through the system and put them in the A category. Put the next 30 percent or so into the B category, and all the others (the majority) into the C class. A items get the closest control; C items the least.
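The cut described above can be sketched in a few lines. This is a minimal illustration, not a definitive implementation: the item names and dollar figures are invented, and the 20/30 percent cut points are taken from the typical ranges in the text.

```python
# Hypothetical ABC classification by annual dollar usage: rank items by
# annual dollar flow, assign roughly the top 20% of items to A, the next
# 30% to B, and the remainder (the majority) to C.

def classify_abc(annual_usage, a_frac=0.20, b_frac=0.30):
    """annual_usage: dict of item -> annual dollar usage. Returns item -> class."""
    ranked = sorted(annual_usage, key=annual_usage.get, reverse=True)
    n = len(ranked)
    a_cut = max(1, round(n * a_frac))          # at least one A item
    b_cut = a_cut + round(n * b_frac)
    return {item: ("A" if i < a_cut else "B" if i < b_cut else "C")
            for i, item in enumerate(ranked)}

items = {"jet engine": 4_000_000, "pump": 250_000, "valve": 60_000,
         "gasket": 9_000, "washer": 400, "cotter pin": 200}
print(classify_abc(items))
```

As the text notes, dollar flow is only the default criterion; an item promoted for criticality would simply be assigned a tighter class outside this calculation.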
While not as commonly discussed, there are reasons other than annual dollar flow to classify items for various degrees of control. Years ago, Kenneth Campbell of General Electric wrote an article titled, "What Comes After the ABC's?" In it he cited several such factors.
Surely there are other factors, too, like impact on the operations or customers if an item isn't available when needed. Whether it is high annual dollar flow or other factors such as those just cited that qualify items for various categories, the implicit assumption is still that some items require closer control than others. In cycle counting, closer control translates into more frequent counts and tighter standards for acceptable error all the way to zero tolerance for things like aircraft engines, gold ingots and nuclear weapons. So, in the case cited (C-items), should the 5 percent standard be changed when a 2 percent error is found? Let's take a Deming-like approach to an answer.
Deming would view inventory control as a process with inherent variability. To improve the process, variability must be reduced, and the key purpose of cycle counting is to do exactly that: root cause analysis and removal of error causes. If cycle counting has been used in your organization primarily as a way of adjusting the records to agree with what's actually on hand, the overall process has not been improved. The 2 percent result, which is better than 5 percent, could well be due to random causes alone.
In this circumstance, changing the standard to 2 percent would be equivalent to "tightening the specs" without changing the process. The end result would be an increase in the reject rate, i.e., more mismatches of actual quantity on hand vs. recorded quantity, thereby causing more non-value-added work to make the additional record adjustments needed per period. All this activity is an attempt to inspect quality into the records.
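A back-of-the-envelope calculation makes the point. Assuming, purely for illustration, that count errors are roughly normally distributed with a 2 percent standard deviation (a figure not from the text), the reject rate depends only on where the tolerance line is drawn:

```python
# Sketch of "tightening the specs" without changing the process: with the
# same (assumed) error distribution, moving the tolerance from 5% to 2%
# multiplies the reject rate, i.e., the record adjustments needed per period.

from statistics import NormalDist

errors = NormalDist(mu=0, sigma=2.0)  # percent error per count (assumed)

def reject_rate(tolerance_pct):
    """Fraction of counts falling outside +/- tolerance_pct."""
    return 2 * (1 - errors.cdf(tolerance_pct))

print(f"5% standard: {reject_rate(5.0):.1%} rejected")   # roughly 1%
print(f"2% standard: {reject_rate(2.0):.1%} rejected")   # roughly 32%
```

Under these assumed numbers, tightening the standard alone turns about one reject per hundred counts into about one per three, all of it common variability rather than new information.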
If, on the other hand, the 2 percent error rate is relatively consistent, perhaps even declining gradually, it suggests that the cycle counting program has been effective in removing root causes of record errors for this particular item. The process has really improved. Now you have an economic decision to make. Is a 2.5 percent error on this C-item as serious as a 5.5 percent error on another C-item, or as a 3.5 percent error on a B-item, or as a 1.5 percent error on an A-item? Where does it make the most economic sense to apply the process improvement capacity of the cycle counting team? These are the kinds of issues to consider when deciding whether to change the standard.
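The economic comparison posed above can be made concrete by weighting each item's error rate by its annual dollar usage. The dollar figures below are invented; the error percentages are the hypothetical ones from the question just posed:

```python
# Hypothetical sketch: rough "dollars at risk" per item, as one way to
# decide where the cycle counting team's root-cause capacity pays best.

items = [
    # (label, annual dollar usage [invented], record error %)
    ("A-item",    500_000, 1.5),
    ("B-item",     50_000, 3.5),
    ("C-item #1",   2_000, 5.5),
    ("C-item #2",   2_000, 2.5),
]

for label, usage, error_pct in items:
    at_risk = usage * error_pct / 100  # dollars misstated per year, roughly
    print(f"{label}: ~${at_risk:,.0f} at risk")
```

On these assumed numbers, the A-item's small percentage error dwarfs the C-items' larger ones in dollar terms, which is the sense in which the team's improvement capacity may be better spent elsewhere than on tightening a C-item standard.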
To summarize, the choices are: (1) ignore the mismatch and its root cause because the 5 percent standard has been met; (2) because the 5 percent standard has been met, ignore the more costly root cause analysis for now and simply adjust the record balance; or (3) change the standard if it makes operational and economic sense. The prerequisite condition is that the process really has improved, so that tightening the standard won't simply cause more rejects based on common variability.