Recent history is filled with stories of companies and sometimes even entire industries that have made grave strategic errors because of inaccurate industrywide demand forecasts. For example:
In 1974, U.S. electric utilities made plans to double generating capacity by the mid-1980s based on forecasts of a 7% annual growth in demand. Such forecasts are crucial since companies must begin building new generating plants five to ten years before they are to come on line. But during the 1975–1985 period, load actually grew at only a 2% rate. Despite the postponement or cancellation of many projects, the excess generating capacity has hurt the industry's financial situation and led to higher customer rates.
The petroleum industry invested $500 billion worldwide in 1980 and 1981 because it expected oil prices to rise 50% by 1985. The estimate was based on forecasts that the market would grow from 52 million barrels of oil a day in 1979 to 60 million barrels in 1985. Instead, demand had fallen to 46 million barrels by 1985. Prices collapsed, creating huge losses in drilling, production, refining, and shipping investments.
In 1983 and 1984, 67 new types of business personal computers were introduced to the U.S. market, and most companies were expecting explosive growth. One industry forecasting service projected an installed base of 27 million units by 1988; another predicted 28 million units by 1987. In fact, only 15 million units had been shipped by 1986. By then, many manufacturers had abandoned the PC market or gone out of business altogether.
The inaccurate suppositions did not stem from a lack
of forecasting techniques; regression analysis, historical trend smoothing, and others
were available to all the players. Instead, they shared a mistaken
fundamental assumption: that relationships driving demand in the past would continue unaltered. The companies
didn’t foresee changes in end-user behavior or understand their market’s saturation point. None realized that history can be an unreliable
guide as domestic
economies become more international, new technologies emerge, and industries evolve.
As a result of
changes like these, many managers have come to distrust traditional techniques. Some even throw up their
hands and assume
that business planning must
proceed without good demand forecasts. I disagree. It is possible
to develop valuable insights into future market conditions and demand
levels based on a deep understanding
of the forces behind total-market demand. These insights can sometimes make the difference between a winning
strategy and one that flounders.
An accurate forecast of total-market demand won't guarantee a successful strategy. But without it, decisions on investment, marketing support, and other resource
allocations will be based on
hidden, unconscious assumptions about industrywide requirements, and they’ll often be wrong. By gauging total-market demand
explicitly, you have a better chance
of controlling your company’s destiny. Merely going through the process
has merit for a
management team. Instead of just coming
out with pat answers, numbers, and targets, the team is forced to rethink
the competitive environment.
Total-market forecasting is only the first
stage in creating
a strategy. When you’ve finished your forecast, you’re not done with the
planning process by any means.
There are four steps in any total-market forecast:
Define the market.
Divide total industry demand into its main components.
Forecast the drivers of demand in each segment and project how they are likely to change.
Conduct sensitivity analyses to understand the most critical assumptions and to gauge risks to the baseline forecast.
Defining the Market
At the outset, it’s best to be overly
inclusive in defining
the total market. Define it broadly enough
to include all potential end users so that you can both identify
the appropriate drivers of demand and reduce the risk of surprise product substitutions.
The factors that drive forecasts of total-market size
differ markedly from those that determine a particular product’s market share or product-category share. For example, total-market demand for office telecommunications products nationally depends in part on the number
of people in offices and their needs
and habits, while total demand for PBX systems depends on how they compare on price and benefits with substitute products like the local telephone
company’s central office switching service. Beyond this, demand for a particular PBX is a function
of price and benefit comparisons with other PBX systems.
In defining the market, an understanding of product substitution is critical. Customers might behave differently if the price or performance of potential substitute products changes. One company studying total demand for industrial paper tubes had to consider closely related uses of metal and plastic tubes to prevent customer switching among tubes from biasing the results.
Understand, too, that a completely new product could displace one that hitherto had comprised the entire market—like the electronic calculator, which eliminated the slide rule. For a while after AT&T’s divestiture, the Bell telephone
companies continued to forecast volume of long-distance calls by using historical trend lines of their revenues—as if they were
still part of a monopoly. Naturally, these forecasts grew more inaccurate with time as end users were
presented with new
choices. The companies are now broadening their market definitions to take account of
heightened competition from other long-distance carriers.
There are several ways you can make sure you include
all important substitute products (both current
and potential). From interviews with industrial customers you can learn about substitutes they are studying
or about product usage
patterns that imply future switching
opportunities. Moreover, market research can lead to insights about consumer
products. Speaking with experts in the relevant technologies or reviewing
the technical literature can help you identify
potential developments that could threaten your industry.
Finally, careful quantification of the economic value of alternative products to different customers can yield
deep insights into potential switching behavior—for example, how oil price movements would affect plastics prices, which in turn would
affect plastic products’ ability to substitute for metal or paper.
Analyses like these can lead to
the construction of industry demand curves—graphs representing the relationship between price and
volume. With an appropriate definition, the total-industry demand curves will often be steeper
than demand curves for individual
products in the industry. Consumers, for example, are far more likely to switch from Maxwell House to Folgers
coffee if Maxwell
House’s prices increase than they are to stop buying coffee if all coffee prices rise.
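The coffee example can be sketched numerically. Below is a minimal, hypothetical illustration of why an industry demand curve is steeper (less price-elastic) than a single brand's; the elasticities and prices are assumed for illustration, not measured.

```python
# Illustrative sketch: industry vs. single-brand demand curves.
# All numbers are hypothetical; they exist only to show why the industry
# curve is "steeper" (less price-elastic) than any one brand's curve.

def quantity(base_qty, base_price, new_price, elasticity):
    """Constant-elasticity demand: Q = base_qty * (P / base_P) ** -elasticity."""
    return base_qty * (new_price / base_price) ** -elasticity

# Assumed elasticities: coffee overall ~0.2, one brand ~3.0 (hypothetical).
industry = quantity(100.0, 1.00, 1.10, elasticity=0.2)  # all coffee prices +10%
brand    = quantity(100.0, 1.00, 1.10, elasticity=3.0)  # one brand's price +10%

print(industry)  # industry volume falls only slightly
print(brand)     # the single brand's volume falls sharply
```

A 10% across-the-board price rise barely dents total coffee volume in this toy model, while the same rise for one brand sends buyers to substitutes.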
In some cases, managers can make quick judgments about market definition. In other cases, they’ll have to give their market considerable thought and analysis. A total-market forecast may not be critical to business strategy if market definition is very difficult or the products under study have small market shares. Instead, your principal challenge may be to understand product substitution and competitiveness. One company analyzed the potential market for new consumer food cans, and it concluded that growth trends in food product markets were not critical to the strategy question. What was critical was knowing the value positions of the new packages relative to metal cans, glass jars, and composite cans. So the company spent time on that subject.
Dividing Demand into Component Parts
The second step in forecasting is to divide total demand into its main components for separate analysis.
There are two criteria to keep in
mind when choosing segments: make each
category small and homogeneous enough so that the drivers
of demand will apply consistently across its various
elements; make each large enough so that the analysis will be worth the
effort. Of course, this is a matter of judgment.
You may find it useful in
making this judgment to imagine alternative
segmentations (based on end-use customer groups, for example, or type of purchase). Then hypothesize their key drivers of demand
(discussed later) and decide how much detail is required
to capture the true situation. As the assessment continues, managers can return to this
stage and reexamine whether the
initial decisions still stand up.
Managers may wish to use a “tree” diagram like the accompanying one constructed by a management
team in 1985 to study demand for paper. In this disguised
example, industry data permitted the division of
demand into 12 end-use categories. Some categories, like business forms and reprographic paper, were big contributors to total consumption; others, such as labels, were not. One (other converting) was fairly large but too diverse
for deep analysis. The team focused on the four segments that accounted for 80% of 1985 demand. It then developed secondary branches of the tree to further
dissect these categories and
to determine their drivers
of demand. It analyzed the remaining
segments less completely (that is, via a regression
against broad macroeconomic trends).
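The focusing step the team applied, keeping only the segments that cover roughly 80% of demand, can be sketched in a few lines. The category names and tonnages below are hypothetical stand-ins, not the disguised study data.

```python
# Sketch of the segment-selection step: rank end-use categories by share of
# total demand and keep the smallest set that covers ~80% of it.
# All figures are hypothetical stand-ins for the disguised paper study.

def segments_covering(shares, threshold=0.80):
    """Return the largest segments whose cumulative share reaches threshold."""
    total = sum(shares.values())
    picked, covered = [], 0.0
    for name, qty in sorted(shares.items(), key=lambda kv: -kv[1]):
        if covered >= threshold * total:
            break
        picked.append(name)
        covered += qty
    return picked

demand_1985 = {            # hypothetical tons by end-use category
    "business forms": 1400, "reprographic": 1200, "envelopes": 900,
    "stationery": 800, "labels": 150, "other converting": 550,
}
focus = segments_covering(demand_1985)
print(focus)  # the few big segments worth deep analysis
```

The remaining small segments would then get the lighter treatment the text describes, such as a regression against broad macroeconomic trends.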
Other companies have used similar methods to segment
total demand. One company divided
demand for maritime satellite terminals by type of ship (e.g., seismic ships, bulk/cargo/container ships). Another divided demand for long-distance telephone service into business and residential customers and
then subdivided it by usage level. And a third segmented consumer appliances into three purchase types—appliances used in new home construction, replacement appliance sales in existing
homes, and appliance penetration in existing homes.
In thinking about market divisions, managers need to decide whether
to use existing data on
segment sizes or to commission research to get an independent estimate. Reliable public information on historical
demand levels by segment is available for many big U.S. industries (like steel, automobiles, and natural gas) from industry associations, the federal
government, off-the-shelf studies by industry experts, or ongoing market
data services. For some foreign markets and less well-researched industries
in the United States, like the labels industry, you may have to get independent estimates. Even with good data sources, however, the readily available information may not be divided into the best
categories to support an insightful analysis. In these cases, managers must decide whether to develop their forecasts
based on the available historical
data or to undertake their own market research
programs, which can be time-consuming and expensive.
Note that while such segmentation is sufficient for forecasting total demand, it may not create categories useful for developing a marketing
strategy. A single product may be driven
by entirely different factors. One study of industrial components found that consumer industry categories provided a good basis
for projecting total-market demand but gave only
limited help in formulating a
strategy based on customer preferences: distinguishing those who buy on price
from those who buy on service, product quality, or other benefits. Such buying-factor categories generally do not correlate
with the customer
industry categories used for forecasting. A strong sales force, however, can identify customer preferences and develop
appropriate account tactics for each one.
Forecasting the Drivers of Demand
The third step is to understand and forecast
the drivers of demand in each category. Here you can make good use of regressions and other
statistical techniques to find some
causes for changes
in historical demand. But this is only a start. The tougher challenge is to look beyond
the data on which regressions can
easily be based
to other factors
where data are much harder
to find. Then you need to develop
a point of view on how those other factors may themselves change in the future.
An end-use analysis from the commodity paper example, reprographic paper, is shown in
the accompanying chart. The management team, using available data, divided reprographic paper into two categories: plain-paper copier paper and nonimpact page printer
paper. Without this important differentiation, the drivers
of demand would have been masked, making it hard to forecast effectively.
In most cases, managers can safely assume that demand is affected both by macroeconomic variables and by industry-specific developments. In looking at plain-paper copier paper, the team used simple and multiple regression analyses to test relationships with macroeconomic factors like the number of white-collar workers, population, and economic performance. Most of the factors had a significant effect on demand. Intuitively, it also made sense to the team that the level of business activity would relate to paper consumption levels. (Economists sometimes refer to growth in demand due to factors like these as an “outward shift” in the demand curve—toward a greater quantity demanded at a given price.)
Demand growth for copy paper, however, had exceeded the real rate of economic growth and the challenge was to find what other factors had been causing this. The team hypothesized that declining copy costs had caused this increased usage. The relationship was proved by estimating the substantial cost reductions that had occurred, combining those with numbers of tons produced over time, and then fashioning an indicative demand curve for copy paper. (See the chart “Understanding Copy Paper Demand Drivers.”) The clear relationship between cost and volume meant that cost reductions had been an important cause of past demand growth. (Economists sometimes describe this as a downward-shifting supply curve leading to movement down the demand curve.)
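A minimal sketch of this kind of multiple regression follows. The yearly figures are invented (demand is constructed as 50 + 2 × activity − 10 × cost), so the fit recovers the assumed coefficients; only the method, not the data, reflects the study.

```python
# Hedged sketch of the team's multiple regression: demand ~ activity + cost.
# Yearly figures are invented (demand is built as 50 + 2*activity - 10*cost),
# so least squares recovers the assumed coefficients exactly.
import numpy as np

activity = np.array([100.0, 104.0, 108.0, 113.0, 118.0])  # business-activity index
cost     = np.array([5.0, 4.2, 3.6, 3.1, 2.8])            # cents per copy, falling
demand   = np.array([200.0, 216.0, 230.0, 245.0, 258.0])  # thousand tons

# Least-squares fit: demand = b0 + b1*activity + b2*cost
X = np.column_stack([np.ones_like(activity), activity, cost])
(b0, b1, b2), *_ = np.linalg.lstsq(X, demand, rcond=None)

# b1 > 0: more business activity lifts demand.
# b2 < 0: falling cost per copy has been lifting demand too.
print(b0, b1, b2)
```

The sign and size of the cost coefficient is what lets an analyst separate cost-driven growth from economy-driven growth, as the team did.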
Further major declines in cost per
copy seemed unlikely because paper costs were expected
to remain flat, and the data indicated little increase in price elasticity, even if cost per copy
fell further. So the team
concluded that usage growth (per level of economic performance) was likely to continue the flattening trend begun in 1983: growth in copy paper
consumption would be largely a function
of economic growth, not cost declines as in the
past. The team then reviewed several econometric services' forecasts to develop a base case economic forecast.
Similar studies have been performed in other industries. A simple one was the industrial components analysis mentioned before, a case where the total forecast was used as background but was not critical to the company’s strategy decision. Here the team divided demand into its consuming industries and then asked experts in each industry for production forecasts. Total demand for components was projected on the assumption that it would move parallel to a weight-averaged forecast of these customer industries. Actual demand three years later was 2% above the team’s prediction, probably because the industry experts underestimated the impact of the economic recovery of 1984 and 1985.
In another example, a team forecasting demand for maritime satellite terminals extrapolated past penetration curves for
each of five categories of ships. These curves were then adjusted for major
changes in the shipping industry (e.g., adding the
depressing effect of the growing oil
glut, taking out of these
historical trends the unnatural
demand growth that had been caused
by the Falklands
war). The actual figure three years later was within 1% of the forecast.
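Penetration-curve extrapolation of this sort can be sketched as follows. The penetration figures, years, and shock adjustment are all hypothetical; only the approach (straighten the S-curve, extend the trend, then apply a judgmental adjustment) mirrors the text.

```python
# Hedged sketch of penetration-curve extrapolation: fit a line to the
# log-odds of historical penetration, extend it, then apply a judgmental
# adjustment for an industry shock. All figures here are hypothetical.
import numpy as np

years = np.array([1980.0, 1981.0, 1982.0, 1983.0, 1984.0])
pen   = np.array([0.05, 0.08, 0.12, 0.18, 0.26])  # share of ships with terminals

logit = np.log(pen / (1.0 - pen))                 # straighten the S-curve
slope, intercept = np.polyfit(years, logit, 1)

def penetration(year, adjustment=1.0):
    """Extrapolated penetration; adjustment < 1 models a depressing shock."""
    z = slope * year + intercept
    return adjustment * (1.0 / (1.0 + np.exp(-z)))

base = penetration(1987)                      # raw trend extrapolation
shocked = penetration(1987, adjustment=0.85)  # e.g. an assumed oil-glut drag
print(base, shocked)
```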
Knowing the drivers of demand is crucial
to the success of any total-market demand forecast. In 1974, as I mentioned
earlier, most electric utilities used an incomplete total-demand forecast to predict robust
demand growth. In the early 1980s, one company’s management team, however, decided to study potential changes in end-user demand as well. The team divided electricity demand into the three traditional categories: residential, commercial, and industrial. It then profiled differences in residential demand because of more efficiency in home appliances and changes
in home size and the ratio of
multi-unit to single-family dwellings. Industrial demand was analyzed by evaluating the future of several
key consuming industries, paying special attention to changes in their
total production and electricity use. This end-use approach sharply reduced the utility’s initial forecasts and led to cancellation of two $700 million generating plants then in the planning stage.
In 1983, forecasters in the U.S. personal computer industry were saying that demand would continue to rise at a rapid rate because there were 50 million white-collar workers and only 8 million installed PCs. One company, however, did a more detailed demand forecast that showed that growth would soon flatten out. It found that more than two-thirds of white-collar workers either did not require PCs in their jobs—actors and elevator operators, for instance—or were supported mostly by inexpensive terminals linked to large computers, as in the case of many clerical workers. The potential market was not big enough to support the growth rate. Indeed, the market began to flatten the next year.
Forecasting total demand became crucial for another company that was thinking about acquiring a maker of
video games. Many thought that low overall
market penetration (10% of U.S. households) signified a lot of room for growth before the market became saturated, when about 50%
of the households would have games. Using available data, however, the management team created categories based on family
income and children’s ages. The analysis made clear that the main target market, upper-income families with children, was already well penetrated. Families with incomes exceeding $50,000 and children between the ages of 6 and 15 already were 75% penetrated. This finding convinced management
that demand would fall and that the proposed acquisition did not make
sense. The dramatic decline in video
game sales shortly thereafter confirmed the wisdom
of this judgment.
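The arithmetic behind this kind of segment-level saturation check can be sketched as follows, with hypothetical household counts and penetration ceilings loosely echoing the 10%-overall and 75%-core figures in the text.

```python
# Sketch of the segmentation insight: overall penetration looked low, but the
# prime segment was nearly saturated. Household counts (millions), current
# penetration rates, and assumed ceilings are all hypothetical.

segments = {
    # name: (households in millions, current penetration, assumed ceiling)
    "high-income, kids 6-15": (8.0, 0.75, 0.80),
    "other families with kids": (22.0, 0.12, 0.30),
    "households without kids": (55.0, 0.02, 0.05),
}

def remaining_units(segs):
    """Millions of additional households that could still buy a console."""
    return sum(h * max(ceiling - p, 0.0) for h, p, ceiling in segs.values())

current = sum(h * p for h, p, _ in segments.values())
total_hh = sum(h for h, _, _ in segments.values())
print(current / total_hh)          # overall penetration looks low...
print(remaining_units(segments))   # ...but the real headroom is modest
```

Low overall penetration hides the fact that, under these assumed ceilings, most of the attainable market is already bought.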
Conducting Sensitivity Analyses
Managers who rely on single-point demand forecasts run dangerous
risks. Some of the macroeconomic variables behind the forecasts
could be wrong. Despite the best analysis, moreover, the assumptions behind the other demand drivers could also be wrong, especially
if discontinuities loom on the horizon. Imaginative marketers who ask questions like “What things could cause this forecast to change
dramatically?” produce the best estimates. They are more likely to identify
potential risks and
discontinuities— developments in competing technologies, in customer industry competitiveness, in supplier cost structures—than those who do not. So once a baseline forecast is complete, the challenge is to determine how far
it could be off.
At one level, such a sensitivity analysis can be done by simply varying assumptions and quantifying their impact on demand. But a more targeted approach usually provides better insight.
Begin such an analysis
by thinking through and quantifying the areas of greatest strategic
risk. One company’s strategy decision may be affected
only if demand
is well below the
baseline forecast; in another case, big risks may result from small forecasting errors.
Next, gauge the likelihood
of such a development. In the
paper example, the baseline
forecast called for continued market growth, though below historical levels. In any particular year, demand could fluctuate with the economy, but the critical question
was whether demand
would at some point
begin a long decline. If so, the companion supply-curve analysis indicated that prices would probably fall dramatically.
The team created two scenarios of a gradual decline, one based largely on changes
in the economy
and the other on changes
in assumed end-use trends. These scenarios showed what would make demand fall (e.g., different rates of decline in copier prices) and thereby provided a basis
for evaluating the
likelihood of a downturn.
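A first pass at such a sensitivity analysis, simply varying the assumptions and quantifying their impact on demand, might be sketched like this. The demand model and every number in it are invented for illustration.

```python
# A simple sketch of first-pass sensitivity analysis: vary each assumption
# behind a baseline demand forecast and measure the swing.
# The model and all numbers are hypothetical.

def forecast(gdp_growth=0.025, usage_per_gdp=1.0, substitution_loss=0.00,
             base_demand=100.0, years=5):
    """Baseline demand after `years`, driven by three assumptions."""
    d = base_demand
    for _ in range(years):
        d *= (1 + gdp_growth * usage_per_gdp) * (1 - substitution_loss)
    return d

baseline = forecast()
scenarios = {
    "recession (GDP +0.5%/yr)": forecast(gdp_growth=0.005),
    "usage decouples from GDP": forecast(usage_per_gdp=0.5),
    "substitute takes 2%/yr":   forecast(substitution_loss=0.02),
}
for name, value in scenarios.items():
    print(f"{name}: {value - baseline:+.1f} vs baseline")
```

Ranking the swings tells you which assumption deserves the deeper, scenario-based treatment described above.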
The forecasting framework outlined above can work for both comprehensive and simple assessments, but there are different
ways to carry out these analyses. A big challenge in demand
forecasting (just as with
other types of market analysis) is to gauge the
appropriate effort for the
project’s purpose. It’s useful to ask: “How much do I need to know to make the decision at hand?”
Managers can invest a lot of time in such analyses—the paper example took about 8 man-weeks and the large-scale electricity forecast about 14 man-weeks. Some companies have forecasting departments that work year-round on these subjects. The more thorough, though time-consuming, approach generates greater confidence, and the effort will be appropriate where the demand projection can significantly influence corporate strategy (whether to make a several hundred million dollar capital investment, for example), or where there is great uncertainty about total demand.
Often, however, the issues are not complicated, time is limited, or the total demand forecast is not important enough to merit that commitment (for example, the company is looking to add a couple
of points to its small market share). In such cases, managers should proceed quickly and inexpensively. They can, for example, rely on experts’ judgment or unsophisticated regressions to forecast
drivers of demand. Even the limited approaches
can yield insights. Furthermore, beginning the demand analysis
process can help managers determine
whether important demand issues exist that should be analyzed
in greater depth.
Forecasts of total-market demand can be important to
strategy decisions. Developing
independent forecasts through the four-step framework
will not only lead to
better recommendations but also help
build conviction and consensus for action by creating understanding of the drivers of
demand and the risks in forecasts.
Even when the work is sound, though, uncertainties will remain: discontinuities will still be difficult to predict, especially if they are
rooted in momentous political, macroeconomic, or technological
changes. But managers who push their
thinking through the steps in this
framework will have
a better chance of finding these discontinuities than those who do not. And those who base their
business strategies on a
solid knowledge of demand
will stand a much greater
chance of making
wise investments and competing effectively.
A version of this article
appeared in the July 1988 issue
of Harvard Business Review.
Big Data is an integral aspect of predictive analytics for luxury retail. The complexity of social media presents an abundance of feedback channels that make real-time, 360-degree customer feedback challenging. Added to the challenge of real-time feeds is the distortion that reselling has created for luxury brands. It is difficult to discern whether a customer’s feedback relates to the actual brand quality or to the reseller experience. Also, once an item is sold, luxury manufacturers do not benefit from any ensuing resales. With the luxury reseller market attracting sustainability-conscious shoppers and a broader range of shoppers who love luxury brand names and quality but cannot afford the high price tag, it will be increasingly challenging for luxury manufacturers to sustain their exclusivity.
Luxury brands are synonymous with premium quality. These brands command a high value for their exceptional quality and aesthetic appeal. From the days in the 1800s when Thierry Hermès was called upon to make saddles, bridles, and leather riding gear for royal horse carriages, and when 16-year-old Louis Vuitton became a trunk maker and designed sturdy yet attractive luggage pieces for the wealthy traveler, luxury brand manufacturers have done a tremendous job of separating their perceived and intrinsic values from competitors. It is not surprising that year after year, the same luxury brands have ranked as the top market influencers. With the advent of social media, retail in general has been able to connect closely with the voice of the customer. There have been moments when this has been good and moments when it has not. Real-time feeds from around the world have generated both good press and bad press. When the press is good, a brand benefits from surges in product demand. When it is bad, brands not only lose market share; the damage to reputation can also prolong the recovery. Managing social media trolling is important to luxury manufacturers seeking to maintain their brand exclusivity. The scatter plot below illustrates an example of a positive tweet data point.
The Meta Data Experience
Amassing customer data from a myriad of sources provides rich
insight for predictive analytics. With
an eye on privacy, shared customer data must exclude personal elements such as name,
Social Security number, and payment information. As countries adopt strict guidelines on
data privacy, meta data will be employed more widely to allow data to be
shared freely. Meta data is essentially data
about a customer that is stripped of personal elements. The Sankey diagram below is an example of
meta data about global Spice Chip membership demographics.
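The stripping step could be sketched as follows. The field names are hypothetical, and a real system would follow the applicable privacy regulations rather than this toy list.

```python
# A minimal sketch of turning a customer record into shareable "meta data"
# by stripping personal identifiers. Field names are hypothetical; a real
# system must follow the applicable privacy regulations, not this toy list.

PII_FIELDS = {"name", "ssn", "payment_card", "email", "street_address"}

def to_metadata(record):
    """Drop personally identifying fields, keep aggregate-friendly attributes."""
    return {k: v for k, v in record.items() if k not in PII_FIELDS}

customer = {
    "name": "A. Shopper", "ssn": "000-00-0000", "payment_card": "4111-xxxx",
    "country": "FR", "age_band": "35-44", "membership_tier": "gold",
}
print(to_metadata(customer))  # only country, age band, and tier remain
```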
Big Data meta data has massive insight potential for customer analytics. A lot can be gathered about a brand that can shape future buying outcomes. An example of this is seen in the correlation and predictive plots below. A correlation is indicated between income and type of Mercedes Maybach® preference. Then, a predictive equation is identified to predict preference of type of Mercedes Maybach® of future customers within income levels.
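The correlate-then-predict idea might be sketched as follows. The incomes and the 1/2/3 preference coding are invented; only the method (a simple least-squares fit, then scoring a future customer) reflects the text.

```python
# Sketch of the correlate-then-predict step: fit income against a numeric
# preference score with least squares, then score a future customer.
# Incomes and the 1=entry / 2=mid / 3=top trim coding are hypothetical.
import numpy as np

income_k = np.array([150.0, 200.0, 260.0, 320.0, 400.0])  # income, $ thousands
trim     = np.array([1.0, 1.0, 2.0, 2.0, 3.0])            # preferred trim score

slope, intercept = np.polyfit(income_k, trim, 1)  # the "predictive equation"

def predict_trim(income_thousands):
    """Predicted preference score for a future customer's income level."""
    return slope * income_thousands + intercept

print(predict_trim(350.0))  # leans toward the upper trims
```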
Through the use of Big Data in analysis, we can acquire data
points from real-time feeds, current operational data, and historical data in
business operations. The ability to
analyze these data points successfully in a continuous learning cycle
is what has been coined Continuous Intelligence, which introduces machine
learning and artificial intelligence methods into predictive analyses. With
automation taking such a prominent role in analytics, luxury retail
manufacturers should be encouraged to expand their data sources to include not
only data points about their respective products but also those of equal
competitors. Doing so will allow
comparative analyses across luxury brands that can lead to product development partnerships
like the Hermès® Apple® Watch.
Brand Democratization and Exclusivity
Luxury retail manufacturers have defined their respective
brands on high prices and the premises of scarcity and exclusivity. To achieve this aim, a select customer
segment is targeted that can afford higher than normal prices for products of
high quality. This customer segment is courted
on the basis of social status and conspicuous consumption. By
limiting the production of certain high-priced goods, manufacturers lead this segment to
accept delayed gratification as the price of scarcity. The exclusivity model has proven successful
with the Baby Boomer generation, and luxury retail has grown into a multi-billion dollar industry.
While there is no clear relationship between income and
democratization, generations beyond the Baby Boomers have redefined their
consumption habits. Gen X and Millennials
emphasize sustainability and are willing to acquire personal goods through
consignment channels. This emphasis holds
true across income strata. To attract these consumer groups, luxury
retail manufacturers will need to refine their exclusivity approach or offer
something in the alternative. As an
example, luxury car companies have expanded their lines to include affordable
offerings like the Mercedes® A-Class. Recently, Lamborghini® released an SUV after
long maintaining its signature line of luxury sports cars. Meanwhile, Ferrari® has remained true to its
brand and stayed in its lane of luxury sports cars.
In the luxury apparel and accessories arena, brand democratization
has led to a proliferation of consignment resellers of high-end luxury
goods. This market has successfully bitten
into luxury retail manufacturers' profits by selling pre-owned high-end goods
at lower price points due to their consignment status. The consignment reseller channel also provides
customers who are willing to pay high prices the ability to acquire vintage
luxury items like older models of Hermès® Birkin bags. More and more, luxury retail manufacturers
will need to step up their game to continually refresh their branding and
product offerings that cater to their Baby Boomer stalwart customer base and bring
in Gen X and Millennials.
It remains to be seen how far luxury retail manufacturers
will go to acquire loyal Gen X and Millennial customers. Regardless of the decision to acquire these
customers, luxury retail manufacturers will need to monitor brand sentiment
from wherever it comes. Social media has
given rise to consumers sharing their opinions in real time. Positive comments are helpful to promoting
brands while negative comments can sway customers away. Customer loyalty is constantly challenged by
momentary ticks and tweets. Assessing
the impact of sentiment and discerning if the sentiment is directed toward the
brand or toward the experience with acquiring it is necessary for luxury retail
manufacturers to face the challenge to brand exclusivity that consignment reselling presents.
Sentiment Analysis and the Taguchi Loss Function
The Loss Function is part of a methodology to improve quality and reduce costs
developed by Genichi Taguchi, a Japanese engineer who studied industrial
signal-to-noise ratio experiments. The Loss
Function is a quadratic equation that, when plotted, produces a parabola. The model below illustrates
how an increase in variation away from the specification limits (the lower specification
limit, L.S.L., and the upper specification limit, U.S.L.) will increase the loss quadratically.
Applying the Loss Function to our discussion about brand democratization and exclusivity involves analyzing emotionality in texts by measuring sentiment and intensity. We also need to discern where the sentiment is directed – at the brand or at the purchase experience – measured by adding a purchase-location variable. For emotionality in texts, we could assign weighted values along a spectrum of intensity from negative to positive. For our Loss Function, the target value would be a positive tweet about the brand from a customer who purchased directly from the manufacturer. The specification limits would be negative tweets (L.S.L.) and positive tweets tied to purchases through alternative channels such as consignment resellers (U.S.L.). Any tweet that falls at or beyond the upper or lower specification limits is deemed a contributor to the loss of customers that the manufacturer will experience. The parabola of a loss example is shown below.
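A sketch of this scoring scheme follows. The sentiment scale, target, specification limits, and loss coefficient are all assumed values chosen for illustration.

```python
# Hedged sketch of the article's idea: place a tweet's sentiment and channel
# on a single score axis, then apply the Taguchi quadratic loss
# L(y) = k * (y - T)**2. Scale, target, limits, and k are assumed values.

K = 10.0       # assumed loss coefficient (loss units per squared deviation)
TARGET = 1.0   # positive tweet about a manufacturer-channel purchase
LSL, USL = -1.0, 2.0  # negative tweet / positive-but-reseller tweet

def taguchi_loss(score, target=TARGET, k=K):
    """Quadratic loss: grows with the square of the distance from target."""
    return k * (score - target) ** 2

def out_of_spec(score, lsl=LSL, usl=USL):
    """True when a tweet counts as a contributor to customer loss."""
    return score <= lsl or score >= usl

for score in (-1.0, 0.5, 1.0, 2.0):
    print(score, taguchi_loss(score), out_of_spec(score))
```

Note that even in-spec tweets incur some loss as they drift from the target, which is the point of the quadratic loss over a simple pass/fail rule.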
This discussion has presented the challenge of maintaining luxury brand exclusivity as new generations change their views about the shopping experience. The Baby Boomer generation has been a loyal following for luxury brands, and that loyalty contributed to the multi-billion dollar establishment luxury retail has become. With their differing views about sustainability and their cause-related buying habits, Gen X and Millennials are emphasizing brand democratization. Facing this challenge to brand exclusivity, luxury retail manufacturers must rely more and more on Big Data analytics to guide them through the complexity of social media and its abundance of feedback channels.
Dr. Banks is CEO of I-Meta Inc, a company specializing in Big Data and retail analytics. She combined her passions for data science and luxury shopping when she invented the patented Spice Chip® technology system. She holds several degrees and certifications and has over 20 years of experience in deploying and optimizing global systems. Dr. Banks has served on the Board of Examiners for the Malcolm Baldrige National Quality Award and understands the connecting points that successful organizations follow to meet the needs of their customers. Her other government service includes honorable service in the U.S. Air Force and work for the Central Intelligence Agency. In her previous role, Dr. Banks was Director of Continuous Improvement and Innovation for a biopharmaceutical company. Her industry experience spans biopharma, retail, software, and gaming, to name a few. Throughout her career, Dr. Banks has developed applications and managed the deployment of large-scale database systems and global technology projects.