Main content


DP1. What focus on forecast accuracy is typical in an S&OP process?  

Many people start an S&OP process, or attempt to improve one, with the specific goal of improving forecast accuracy, so in almost every case this is reviewed throughout the S&OP process. But it's critical that a reasonable expectation of forecast accuracy exists. It will never be 100 percent, and rarely can it be held even in the high 90s. Also needed is an understanding that it may drift based on factors outside the control of the organization, such as customer, marketplace, competitive, economic, and regulatory issues. What's important is that sales performance and trends are monitored once a month and the forecast adjusted based on the latest and best information, to minimize forecast errors in the future.


DP2. What level of forecast accuracy is considered best practice or world class?

This question is nearly impossible to answer, first because so much of forecast accuracy is outside the company's control, as described in the answer to the previous question. Forecast accuracy measures how well the sales and marketing people are tracking and anticipating performance in the marketplace, and sometimes this can be very good, while in other cases unexpected events can blindside anyone. It is also affected by the number of products and the type of deliveries offered by a given company. If a company has quite a few low-volume items, statistics tells us that the smaller the number, the harder it will be to forecast accurately, and the greater the percentage error. In our opinion, "best practice" for any company is to be constantly measuring forecast accuracy, setting goals for improving it, and continuing to do that month after month, year after year.
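As a concrete illustration of "constantly measuring forecast accuracy," one common metric is MAPE (mean absolute percentage error), recalculated each month. This is a minimal sketch, not something prescribed in the article; the figures are hypothetical:

```python
# Minimal sketch: monthly forecast-accuracy tracking using MAPE
# (mean absolute percentage error). All figures are hypothetical.

def mape(forecasts, actuals):
    """Mean absolute percentage error, expressed as a percentage."""
    errors = [abs(f - a) / a for f, a in zip(forecasts, actuals) if a != 0]
    return 100.0 * sum(errors) / len(errors)

# One product family's monthly forecasts vs. actual sales (units)
forecast = [1000, 1100, 950, 1200]
actual   = [1050, 1000, 900, 1300]

print(f"Family MAPE: {mape(forecast, actual):.1f}%")
```

Note that low-volume items will naturally show larger percentage errors, which is the statistical point made above: the smaller the denominator, the bigger the swing from each unit of error.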


DP3. Our product managers are focused on their committed forecast to Corporate.  How do we get them interested in a more granular level?  We (in planning) often know more about forecasts than they do.

I'm presuming that the "committed" forecast you refer to is the financial target for the year, which they're not allowed to change within the year except in unusual circumstances. One of S&OP's key strengths is to allow a more realistic recognition of shifting demand patterns during the year. This will help keep production and inventory better tuned to what's actually selling, thereby keeping customers happy and financial numbers healthy. But equally important, it would give the product managers the opportunity to track how well they're doing against these "committed" numbers and help them identify when they need to take action to get back on track, or to make up shortfalls from other products or product lines. The idea is to convince them that the S&OP process will benefit them by helping them hit their numbers more easily and profitably, and keep their customers happy.


DP4. We have a big disconnect between the monthly family S&OP forecast and the demand management forecast that is sent to the plants. What's the best way to make them meet?

Before we can answer this, we need to know at what level these forecasts are developed, and what inputs and methods are used to develop them. Do you measure accuracy against both of them? Is one better than the other? Is one sometimes better, and at other times the other? Do you know why?

Or is the issue simply that the S&OP forecast is by family and the other by SKU? If that's the case and one is better than the other, then it should be the driver. If the SKU forecast is more accurate, then just sum the totals for the SKUs in each family and use that for S&OP, as long as the responsible sales and marketing people understand and are committed to that.

If the aggregate family forecast is more accurate, and there are many SKUs, then you'll have to develop a tool to "disaggregate" this family forecast down to units, using either the historical mix factors of the items within the family or projected future mixes based on sales and marketing input. Some forecasting or APS software packages provide this functionality; it's often called "pyramid" forecasting. But beware: the fancier the software package, the harder it is to use, and the more features and functions you have to learn not to use.

If it's just a matter of spreading one large number down to many small ones, you probably could do this yourself with a simple spreadsheet tool.
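The spreadsheet-style spread using historical mix factors can be sketched in a few lines. This is an illustrative sketch only; the SKU names, history, and family forecast are all hypothetical:

```python
# Sketch: disaggregate a family-level forecast to SKUs using each
# SKU's historical share of family volume. All data is hypothetical.

sku_history = {"SKU-A": 600, "SKU-B": 300, "SKU-C": 100}  # last 12 months, units
family_forecast = 1200  # next month's family-level forecast, units

total_history = sum(sku_history.values())
sku_forecast = {
    sku: family_forecast * units / total_history
    for sku, units in sku_history.items()
}

print(sku_forecast)  # {'SKU-A': 720.0, 'SKU-B': 360.0, 'SKU-C': 120.0}
```

If sales and marketing expect the future mix to differ from history (a new item ramping up, an item being phased out), the historical shares would simply be replaced with projected mix percentages from their input.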

But what's most critical is that the two numbers are reconciled. Otherwise, the detailed functions are making decisions based on one set of numbers while management is making them based on another, which sooner or later will cause problems and missed targets and objectives.


DP5. Does S&OP review both SKU and brand level forecasts?

Typically, S&OP should be looking at family or aggregate totals.  It's presumed that demand management, and sales and marketing people are managing the SKU and brand level forecasts as part of their detailed forecasting job, and the results of this are then rolled up or reconciled to the S&OP number, as discussed in the previous question.

However, if an SKU or brand forecast played a significant part in the total, it could be discussed either by exception in an S&OP meeting, or perhaps routinely in a forecast meeting or partnership meeting prior to the executive meeting. Some companies have a few SKUs that represent a significant percentage of their total sales in a family, so they routinely review those SKUs in each meeting.

Other companies hold very extensive demand review meetings prior to their S&OP meetings, where they spend hours reviewing brand level forecasts with key customers, since they may sell huge amounts through a few large customers, such as mass merchandisers. 


DP6. Do most of the companies you deal with incorporate a forecasting tool as well as raw sales input into their consensus decisions?

Probably the majority of the companies we’re aware of use some sort of statistical forecasting tool.  But virtually all of them will use management input to override at least some portion of the statistically generated numbers.

In some cases, when past demand is a very poor indicator of future demand, companies just develop the forecast numbers on their own, using the forecast package merely as a place to hold the numbers that they input. An example of this would be manufacturers of large, expensive capital equipment, where economic factors, marketplace trends, and the number of customer inquiries and bids are the best indicators of future demand.


DP7. Should the sales forecast be changed in the current month for the S&OP process?  

Identifying and communicating changes in demand is almost always a useful thing to do.  These demands may be harder to support in the current month, but the sooner the people in the supply side know about them, the more likely they can handle them or at least identify how much they can indeed handle.  This could also lead to identifying when you need to prioritize the demand from different marketplaces, customers or products, if a forecast increase cannot be completely supported.

If it is a decrease in forecast, knowing about it as soon as possible can only help to avoid committing resources to production that may not be needed in the immediate future.

But much of this may not affect the S&OP process directly. This may be handled as a demand shift within the monthly cycle, just like unexpected customer orders would be handled through master scheduling and customer order management processes. However, any shifts in demand and changes in forecast and production plans should be documented and reviewed in the next S&OP cycle as part of the review of past performance.


DP8. How do you define "forecast" vs. "plan"?

A forecast is another name for demand plan.  However, sometimes the word “plan” might refer to requirements that are directly shared with you by major customers, which can be used in lieu of an internally generated forecast for that segment of the total demand picture.  But caution is advised here, since some customers may not have totally reliable planning processes.  You should measure their plan accuracy, just like you would measure forecast accuracy, and where their plan proves not to be accurate enough, overriding their plan based on your own judgment and experience may prove to be useful.


DP9. How can you gain the improvements of lean/S&OP planning without a fairly good forecasting input to the planning phase?

The better the forecast, the better everything works. The worse the forecast, the more variability there will be, and the more flexibility, frequent fine-tuning, and changing of sales and supply rates will be required.

S&OP by its very nature is a superior monitoring and management device that provides a monthly review of forecast accuracy, along with a discussion of the root causes and actions that should be taken to improve the planning accuracy in the future.

In addition, supply plans, inventory hedges, and flexible adjustment of resources to support changes in plans should be reviewed and adjusted whenever necessary as part of the monthly S&OP cycle of monitoring performance, and rebalancing demand and supply plans.


DP10. What lag should you use when you measure your forecast error if you are also implementing lean?

The only effect implementing lean may have on how you measure forecast error, is that the cumulative supply chain lead times may be shorter.  Some companies like to measure forecast accuracy based on the cumulative supply chain lead time to monitor how much additional flexibility or inefficiency may result by changing plans inside of lead time to support changing forecasts.  For instance, they might measure the accuracy of the forecast developed 90 days in advance of the actual sales. 

There are many varying opinions on this approach.  Ours is that this measurement is of limited value.  Regardless of the supply-chain lead times, the demand plans or forecasts should be updated whenever updated information is available to lead to more accurate numbers.  Sales and marketing should endeavor to forecast as accurately as they can over the entire planning horizon, but they should also adjust the short-term numbers where they can, since knowing about what customers will order at least somewhat in advance, can lead to better handling of these demands.  This is not to imply that every short-term shift in demand can be handled with an equivalent change in supply plan, but it does imply that companies always are better off knowing about such an impending shift as soon as possible, to see what can be done to support customer demand.

In any environment, lean or not, we recommend measuring forecast accuracy by comparing the forecast at the beginning of a monthly period to the actual sales that occur during that period. This measurement gives some indication of how well sales and marketing are making final adjustments to the demand plan. In addition, we also encourage the use of waterfall charts to monitor how these forecasts or demand plans change over time out into the future. These will provide an indication of situations where forecast changes may be chasing actual sales up and down over time. In some cases the forecasts may be better off left alone until several periods of sales prove out that a new sales rate is occurring.
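The waterfall idea can be sketched as storing every month's forecast for each future period, then measuring error at a chosen lag (one month, 90 days, or the cumulative supply chain lead time). This is an illustrative sketch with hypothetical month labels and numbers, not a prescribed method:

```python
# Sketch: measure forecast error at a chosen lag from a "waterfall"
# of forecasts. forecasts[m][t] = forecast made in month m for
# month t. All month labels and numbers are hypothetical.

forecasts = {
    1: {2: 100, 3: 110, 4: 120},   # forecasts made in month 1
    2: {3: 105, 4: 118},           # revised in month 2
    3: {4: 125},                   # revised in month 3
}
actuals = {2: 95, 3: 100, 4: 130}  # actual sales by month

def error_at_lag(lag):
    """Average absolute % error of forecasts made `lag` months before the sale."""
    errs = []
    for made, row in forecasts.items():
        target = made + lag
        if target in row and target in actuals:
            errs.append(abs(row[target] - actuals[target]) / actuals[target])
    return 100.0 * sum(errs) / len(errs)

print(f"1-month lag error: {error_at_lag(1):.1f}%")
print(f"3-month lag error: {error_at_lag(3):.1f}%")
```

Scanning each row of such a waterfall also shows whether revisions are converging on the actual or chasing it up and down, which is the behavior the waterfall chart is meant to expose.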


DP11. Please clarify "Review by exception" in the Demand Planning process

This refers to reviewing forecast items selectively, rather than analyzing every forecast item equally. The goal should be to review those items whose variance is greatest relative to the expected variance. Tracking signals can be calculated by forecasting software, or tolerance factors can be set which vary by item, indicating that some items have more expected variation than others. Ideally, the software package would then identify month by month which items' actual sales vary the most compared to their tracking signal or tolerance factor.

Another way of reviewing by exception is to specify the items which are most sensitive to demand variation from a supply side viewpoint, for instance, items that are subject to the longest material lead times, or limited by scarce manufacturing or supplier capacity. Forecast variance on these items would be most difficult to react to, therefore they should be monitored more closely.
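A tracking signal is commonly computed as cumulative forecast error divided by mean absolute deviation, with items flagged when it breaches a tolerance. The sketch below illustrates that calculation with hypothetical items and numbers, and a tolerance of ±4 as a common rule of thumb; actual software packages implement this in their own ways:

```python
# Sketch: flag forecast exceptions with a tracking signal
# (cumulative error / mean absolute deviation). Items, numbers,
# and the tolerance are hypothetical illustrations.

def tracking_signal(forecasts, actuals):
    errors = [a - f for f, a in zip(forecasts, actuals)]
    mad = sum(abs(e) for e in errors) / len(errors)
    return sum(errors) / mad if mad else 0.0

items = {
    "Item-X": ([100] * 6, [102, 99, 101, 100, 98, 101]),   # small, mixed errors
    "Item-Y": ([100] * 6, [110, 112, 115, 118, 120, 122]), # persistent bias
}

TOLERANCE = 4.0
for name, (fcst, act) in items.items():
    ts = tracking_signal(fcst, act)
    if abs(ts) > TOLERANCE:
        print(f"{name}: tracking signal {ts:+.1f} -> review")
```

Only the persistently biased item breaches the tolerance and gets surfaced for review; the item with small, offsetting errors is left alone, which is exactly the point of review by exception.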

Regardless of the size of the company, typically the most difficult part of implementing S&OP is gaining and maintaining active participation by the sales and marketing folks. Often they see forecasting and planning as the job of operations, planning, or manufacturing people. They would rather spend their time working with customers and marketplaces external to the company. Therefore it is vital to get their attention and show them how an effective S&OP process can benefit them and the customer. In addition, strong, urgent, and continuing support from senior management is usually necessary to win and sustain sales and marketing participation.


DP12. Are there any demand forecast packages that you would recommend?

There are at least a dozen different credible demand forecast packages available. They vary in functions, complexity and cost. And certainly an individual company's needs may also vary. 

I'd suggest you go to my partner Chris Gray's web site and click on "software directories" to find a list of packages that may be worth reviewing.



If you have specific questions about this article or want to discuss it with us, call John Dougherty at 1 978-375-7808.

The Partners for Excellence specialize in helping companies set up comprehensive measurement programs and improving overall resource management performance.  Contact us at 1 978-375-7808.