Applied properly, OEE can improve line efficiencies—but knowing its limitations is important
So, how useful is OEE? What are its limitations, and how should it be applied so we can derive the most benefit from it?
Pask says most companies place too much emphasis on results (i.e., looking backward at OEE scores) and too little emphasis on the factors that actually drive results. So then, two questions arise. Is OEE a useful tool in improving manufacturing capability? And, should manufacturers be using something beyond just OEE to improve their process? In a previous Food Engineering article, Pask also described the “IDA equation,” which is another tool derived from OEE.
“Your use of the word ‘tool’ in the question is important. Like any tool, OEE needs to be applied correctly to the right task, to create the outcome that you want,” says Pask. If the objective is to optimize the manufacturing efficiency of a machine-constrained process, then OEE is a tried-and-tested best practice tool for identifying and categorizing the causes of lost production.
Some companies are installing overhead status boards on new production lines, which can be helpful if they provide useful information besides OEE—for example, downtime losses, changeover losses, speed loss and defect count. Image courtesy of Imagemakers Inc.
For example, imagine a manufacturer has two identical processes, utilized equally for a week. One process is scheduled to produce five SKUs, so the manufacturer might hypothetically expect a good OEE to be 85%-90%. On the second process, the manufacturer makes 35 SKUs, so it might hypothetically expect a good OEE to be 50%-55%. At the end of the week, the actual OEE scores are 70% for process 1, and 48% for process 2. Pask asks: “Which process had the better week?” (Hint: it’s not the one with the higher score!)
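Pask’s hint makes sense once each score is judged against what a good week looks like for that SKU mix: process 1 finished about 15 points below the low end of its expected range, while process 2 finished only 2 points below its own. Here is a minimal sketch of that comparison in Python, using the midpoints of the hypothetical ranges as benchmarks (an assumption made purely for illustration):

```python
# Compare actual OEE to a benchmark "good OEE" for each process.
# The benchmark midpoints (87.5% and 52.5%) are illustrative assumptions
# taken from the hypothetical ranges in the example above.

processes = {
    "Process 1 (5 SKUs)":  {"actual_oee": 0.70, "good_oee": 0.875},
    "Process 2 (35 SKUs)": {"actual_oee": 0.48, "good_oee": 0.525},
}

for name, p in processes.items():
    # Ratio of actual OEE to what a good week would look like for this SKU mix
    relative = p["actual_oee"] / p["good_oee"]
    print(f"{name}: actual {p['actual_oee']:.1%}, "
          f"benchmark {p['good_oee']:.1%}, "
          f"relative performance {relative:.1%}")

# Process 1 (5 SKUs):  actual 70.0%, benchmark 87.5%, relative performance 80.0%
# Process 2 (35 SKUs): actual 48.0%, benchmark 52.5%, relative performance 91.4%
```

By that relative measure, process 2 ran at roughly 91 percent of its realistic potential versus about 80 percent for process 1, so process 2 had the better week.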
“We advise that people only compare OEE scores for a single process, to see how the productivity of that process has changed over time,” says Pask.
“To your example of the identical lines with different operators, if the operators are the only variable maybe OEE can provide some interesting insight; but I’d be more tempted to focus on other metrics that more directly measure the effectiveness of the people,” says Pask. For example, mean time between failure (MTBF) and mean time to repair (MTTR) could be more useful.
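As a rough illustration of how those people-focused metrics fall out of a simple downtime log (the log format and figures below are hypothetical assumptions, not something described in the article):

```python
# Minimal MTBF / MTTR sketch from a hypothetical downtime log.
# Each entry: (minutes the line ran before the stop, minutes to repair the stop)
downtime_log = [
    (240, 12),
    (180, 25),
    (320, 8),
]

total_uptime = sum(run for run, _ in downtime_log)        # minutes of running time
total_repair = sum(repair for _, repair in downtime_log)  # minutes spent repairing
failures = len(downtime_log)

mtbf = total_uptime / failures  # mean time between failures (minutes)
mttr = total_repair / failures  # mean time to repair (minutes)

print(f"MTBF: {mtbf:.0f} min, MTTR: {mttr:.0f} min")  # MTBF: 247 min, MTTR: 15 min
```

Tracked per line or per crew, these two numbers point more directly at how quickly people detect and resolve stoppages than an aggregate OEE score does.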
There are other mistakes processors make in applying OEE, says Pask. The two biggest are:
1. Boosting the score—That is, manufacturers hide causes of lost time to inflate the OEE number. There are several ways to artificially boost the score, says Pask; the two most common, illustrated in the sketch after this list, are:
• Soft Ideal Cycle Time: Using an Ideal Cycle Time that is either based on a ‘budget/standard’ or has been reduced to account for a running condition (such as insufficient operators or material problems) hides the real capability of the process.
• Hiding Lost Time: Reclassifying legitimate causes of machine stops (such as no operator or no material) as not-scheduled time, to exclude them from the OEE calculation.
2. No Action—Measuring OEE should not be a goal. Using the tool analogy, picking up a hammer is not the same as using it to build a house. “If you’re measuring OEE you’ve ‘picked up the hammer,’ but OEE has no intrinsic value until you start using the losses identified by the OEE calculation to improve productivity,” says Pask. “I’m fond of saying ‘Data without action is waste!’”
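To see how the two boosting tactics in mistake No. 1 inflate the score without changing the process, here is a minimal sketch of the standard OEE calculation (Availability × Performance × Quality) applied to a hypothetical shift; all figures are illustrative assumptions:

```python
# Standard OEE = Availability x Performance x Quality.
# All shift figures below are hypothetical, for illustration only.

def oee(planned_min, downtime_min, ideal_cycle_sec, total_count, good_count):
    run_min = planned_min - downtime_min
    availability = run_min / planned_min
    performance = (ideal_cycle_sec * total_count) / (run_min * 60)
    quality = good_count / total_count
    return availability * performance * quality

# Honest calculation: true ideal cycle time, every stop counted against the line.
honest = oee(planned_min=480, downtime_min=90,
             ideal_cycle_sec=1.0, total_count=19_000, good_count=18_400)

# "Soft" Ideal Cycle Time: padding the cycle time from 1.0 s to 1.2 s inflates Performance.
soft_cycle = oee(480, 90, 1.2, 19_000, 18_400)

# Hiding lost time: reclassifying 40 min of "no operator" stops as not-scheduled
# time shrinks both planned time and downtime, inflating Availability.
hidden_loss = oee(480 - 40, 90 - 40, 1.0, 19_000, 18_400)

print(f"honest OEE:       {honest:.1%}")      # ~63.9%
print(f"soft cycle time:  {soft_cycle:.1%}")  # ~76.7%
print(f"hidden lost time: {hidden_loss:.1%}") # ~69.7%
```

The line ran exactly the same way in all three calls; only the accounting changed, which is why the number can rise while the underlying losses stay hidden.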
Next, what are best practices for applying OEE properly? “First, make the measurement of loss absolutely brutal,” says Pask. “By ‘brutal,’ I mean that when we first measure OEE it should include every possible cause of lost production time. My goal when implementing OEE is to create the lowest possible initial OEE score, by including as many losses as possible. This provides the best baseline from which you can subsequently improve.
“Second, actively consider and identify when and how you will use OEE in your decision-making processes. For example, how will you use your OEE-related data in a shift handover, in a daily production meeting, to set improvement goals or to identify maintenance activities? Going back to the ‘tool’ analogy: I’m advocating that you actively consider when and how you will use your OEE tool to build better productivity.”
In a SensrTrx blog post, Marketing Manager Lindsey Andrews states that OEE by itself is not a good metric, but its components—availability, quality and performance—are. I asked Pask if this is where the results-oriented equation he mentioned in my 2016 FE article comes into play. That equation is R (Results) = I (Information) × D (Decisions) × A (Action). I wanted to know how R = I×D×A works and what benefits it offers compared to a raw OEE score.
“Yes, I agree that the OEE percentage score has limited intrinsic value,” says Pask. “I only care how the score has changed over time. Knowing the score doesn’t guide your improvement actions; for this you need to know (and act on) your losses.”
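One way to see why the components, and the losses behind them, matter more than the composite percentage: two very different shifts can land on essentially the same OEE score. A minimal sketch with hypothetical component values:

```python
# Two hypothetical shifts with nearly identical OEE but very different problems.
# Component values are illustrative assumptions.

shifts = {
    "Shift A (downtime problem)": {"availability": 0.75, "performance": 0.95, "quality": 0.980},
    "Shift B (quality problem)":  {"availability": 0.95, "performance": 0.96, "quality": 0.765},
}

for name, c in shifts.items():
    oee = c["availability"] * c["performance"] * c["quality"]
    print(f"{name}: A={c['availability']:.1%} P={c['performance']:.1%} "
          f"Q={c['quality']:.1%} -> OEE={oee:.1%}")

# Shift A (downtime problem): A=75.0% P=95.0% Q=98.0% -> OEE=69.8%
# Shift B (quality problem):  A=95.0% P=96.0% Q=76.5% -> OEE=69.8%
```

The near-identical scores hide the fact that Shift A needs downtime work while Shift B needs a quality intervention; that distinction lives in the components and the losses, not in the headline number.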
1. Display efficiency: The extent to which the team is on track to hit their shift production target. Ideally this is like a ‘pace clock,’ so that if a process starts to fall behind its target we can prioritize focus to get the team back on track.