Archive for January, 2016

The collapse of a company is not necessarily the outcome of a single calamitous event. More likely, the final collapse follows a period of several months or years of gradual deterioration in the management and performance of the failing company. The causes are usually a mixture of external events or market factors and, not least, internal actions or inactions by the company itself. This seems to be the case with Mega Retailing, the second largest chain of supermarkets in Israel, which practically collapsed this month (January ’16), although the process of its deterioration can be traced back at least five years.

The current Mega retail chain is in fact a successor of a consumer co-operative chain, “Co-op Blue Square”, established in the 1930s. That co-operative existed until the late 1990s when it could no longer sustain itself. Consumers who had a stake in the enterprise were required to sell their shares and a majority stake in “Blue Square” (73%) was acquired by the Alon energy group.

The company, renamed in Israel as “Alon Blue Square”, has expanded since 2003, adding business units in different retail areas. For instance, Alon brought under the roof of Blue Square (in which it holds a 78% stake) its compounds of car fuelling stations with adjacent trade services. The chain of supermarkets received the new brand name Mega (after an earlier short phase under the name “Super Centre”) and was instituted as a subsidiary fully controlled by Blue Square, its parent company. Yet another critical move was the establishment of “Blue Square Real Estate”, a subsidiary of the parent company, which divorced Mega from control over its physical locations and made the real-estate company its landlord. In the end, Blue Square is about to lose the core business that carried its name in the first place.

  • Alon Blue Square also acquired a chain of convenience stores (“am:pm”) that is separate from Mega but competes with it in city centres and neighbourhoods.

Multiple reasons for the poor condition of Mega have been proposed in the media. Some blame the high operating costs of Mega on wages and benefits for employees that are higher than the standard in the food retail industry; this is probably a legacy inherited from the older days of Co-op Blue Square, when it was affiliated with a strong labour union. To be clear, employees in the stores still earn relatively low wages, but with the low margins in the industry, the differences from competitors are claimed to be crucial. On the other hand, the management could be held responsible for retaining deficiencies reminiscent of the Co-op culture of those older days. The owners, for their part, did not seem interested enough in what was happening to their supermarket company. Mega lacked the strategic (marketing) thinking and mindset that would have allowed it to adapt better to the new realities of a competitive market and to higher standards of service and merchandising.

One does not have to go far to find what is wrong with the Mega chain. The problems of Mega show most visibly and strikingly in its stores. A particular branch is used here as an exemplar to demonstrate some troubling aspects. It is a neighbourhood store in the northern part of Tel-Aviv. The supermarket is not large (a little less than 300 square metres, or roughly 3,000 square feet), with a main hall (75% of its area) and an extension (two “corridors”). The store was established in the early 1970s and for a decade or two it was considered spacious and modern. The last major renovation took place about fifteen years ago, but unfortunately within a few years the store lost much of its newly gained appeal.

There are six columns of displays across the main hall, which leave too little space for moving along the aisles between them. In addition, the displays reach well above head height. The whole arrangement of this hall makes a shopper feel lost in space and closed in. Poor merchandising makes the supermarket look crowded and untidy. A whole new concept should have been applied to this store: fewer columns, lower displays (e.g., no more than 1.50m) that would allow shoppers to see beyond their aisle, and fewer product types and brands (SKUs) on offer — in this supermarket ‘less is more’ would be exactly right. Shoppers obliged to fetch essential products such as bakery and dairy items from the corridors may find the experience unpleasant.

Another troubling matter in the store concerns the shopping carts and baskets, or rather the lack of them. To pick up one of the few shopping carts available, one has to pass by the cashiers, away from the entrance. Even if one is lucky enough to get a shopping cart, he or she will find it difficult to navigate in the aisles, especially since the store, like the whole chain, moved to larger carts that are of little use in this particular store. Shoppers should instead have been provided with hand-held baskets (or wheeled baskets), arranged in ‘towers’ near the entrance of the supermarket (not hidden under the cashier desks). The baskets would serve a much better purpose in the entry area than the large, unappealing promotional stand positioned there. As a result, the store also lacks a welcoming and convenient “decompression zone”.

  • Considering the competition in the vicinity (e.g., a large supermarket of the leading chain “Supersol” in a nearby shopping centre; a minimarket across the street), the approach taken in that Mega store, whether the result of flawed thinking or of lack of care, was one the chain could not afford.

Similar problems can be found in other Mega stores: (a) A delivery service interrupts and blocks the way out for customers who carry home their shopping bags — delivery boxes are piled in the passage at the exit from the cashiers, personnel handle deliveries in the same area where customers should complete their shopping and leave, and preparing deliveries for some shoppers holds up others for long minutes; (b) The staff arrange merchandise on shelves during the day, often blocking aisles with box-loaded shopping carts or boxes left on the floor — shoppers have to make their way while competing with personnel for access to the displays; (c) Product displays do not look neat and tidy, some items are out of place, some fall to one side or another — even if shoppers are responsible for not returning items to their place, a store worker should regularly pass by to check and fix the displays. If shoppers find a store in good order, clean and tidy, they will (mostly) feel obliged to make an effort to keep it that way for everyone to enjoy.

It does not seem to be a question of good will. Mega stores lack order and organisation. Moreover, the employees may not receive the guiding hand and initiative they need from either general management or store managers to make the supermarkets look and feel the way they probably aim for. In a presentation (in Hebrew) of a strategic plan from 2013 (on Blue Square’s website), the management of Mega shows that, on top of every other goal, it wants customers to love its stores; Mega’s vision throughout its history is “At Mega (we’re) listening to you! Always, at every place and in every encounter, because we really care.” Yet the stores had little to show for it. The employees may have wanted it to happen, but the management was not behind them to show them how, and it is still unclear why store managers were not helping or how well coordinated they were with top management.

A seasoned consultant in marketing and retailing (Galit Moor, “Shopoholist”) told the Israeli business newspaper “The Marker” about rivalries and lack of coordination between the trade and operations departments of Mega — the trade people would reach agreements with suppliers, but the operations people would not respect them or follow them through in the stores, causing confusion and a loss of suppliers’ trust. She also pointed to a lack of understanding of consumers and a failure to really listen to their concerns, a top management detached from the stores, and mistakes in running stores, particularly failures in dealing with details at the store level (MarkerWeek, 24 July 2015). The management was not focused, undecided whether to compete on price (e.g., to fight off discount chains) or on an enhanced customer experience (blending price perception, service, convenience, variety and quality of products), and therefore it must have had difficulty setting clear priorities for staff in the stores. It is not too surprising that staff and managers also failed to treat properly the details of service and merchandising in the stores.

In mid-2015 Mega was in debt to the amount of 1.3 billion shekels (~$340m), 700 million of which was owed to suppliers and the rest to banks. The delays in payments to suppliers soured relations with them, and some also froze or reduced further supplies to the retail chain. Mega embarked on an aggressive plan of cuts, primarily closing stores, but the plan could not save it at this stage. By the end of 2015, just before the court intervened (a stay of proceedings), the debt had accumulated to 1.5 billion shekels, half of it to suppliers, who had largely lost confidence in and patience with Mega.

During the previous decade Mega expanded while defining three sub-chains: “Mega City” supermarkets serving neighbourhoods, large central “Mega” stores, and large discount stores (“Mega Bull”, i.e., “target”). The latter was renamed “You” just three years ago and added more stores. Mega was actually responding to a similar move by the leading competitor Supersol, with its sub-chains “My Supersol” neighbourhood supermarkets, “Supersol Express”, and the large discount stores “Supersol Deal” (a confusing fourth sub-chain of ‘warehouse’ discount stores, “Big”, was later eliminated). Probably not by coincidence, the restructuring of the chains by Mega and Supersol resembles a strategic move by Tesco in the 1990s. The expansion, and especially the establishment of very large stores, has led the Israeli chains, like the British one, into trouble. The suspected reasons are a failure to adapt in time to changes in the economic climate and in consumer behaviour since 2008, coupled with an inadequate response to the challenge from new discount chains. (It has since been revealed that Tesco, like Mega, was at fault for delaying payments to suppliers.)

  • Mega operated about 185 stores in total in mid-2015. Initially the plan was to close 32 stores, mainly the “You” discount stores. However, at least 55 stores were eventually closed by the end of the year, and Mega is now left with fewer than 130 stores. The number of employees was intended to be reduced from 6,000 to 5,000 but actually dropped to 3,500 (most of those cut were store employees, but headquarters staff was also significantly reduced).
  • In the first half of 2015 Mega reported sales of 2.6 billion shekels (~$685m), down from almost three billion shekels in the same period of 2014. Of total sales, 80% were attributed to the stores Mega expected to keep and 20% to the 32 stores intended to be shut down. In terms of profit, the stores planned to continue earned 55 million shekels, whereas the stores planned to close lost 577 million shekels. As it turned out, the initial recovery plan was not sufficient.
  • Mega is second to Supersol in the food retailing industry, yet not so close behind: Supersol’s market share in 2014 was estimated at 18% versus 9% for Mega (a ratio of 2:1). The private discount chains together held 28% [a stable 45% is attributed to open-air markets, groceries and minimarket stores]. It should be noted that, according to predictions (2013-2015), the private chains were expected to gain mostly at the expense of Mega, with a small but not negligible slide for Supersol (The Marker, 30 Dec. 2014) — Mega found itself in a classic disadvantageous ‘sandwich’ position.

From the start Mega committed to selling at lower prices than other stores in towns and cities. At the same time, it aimed for each store to be an integral part of its community, so that resident-shoppers would feel at home in their supermarkets. However, Mega did not succeed in maintaining its ‘low price’ position, according to price comparisons published over time. It is questionable whether restructuring its chain, following Supersol, was necessary and suitable for Mega. The price position of Mega City may only have been weakened and diluted relative to the discount sub-chain. Mega already had a well-entrenched network of neighbourhood supermarkets with an emphasis on lowering costs to consumers — it should have concentrated its efforts on this chain. Yet Mega did not succeed in keeping prices low while also investing in the shopping experience and product variety in its stores, potentially conflicting objectives; it therefore did not offer a consistent value proposition.

It is difficult to understand how the owners of Alon Blue Square did not notice what was happening at Mega. They are accused of taking high dividends over time (the owners claim they were misinformed about real profits, echoing the Tesco affair). The owners may also have acted irresponsibly through an over-charging rental policy of their real-estate subsidiary towards Mega’s stores.

The interests of the owners at this stage are vague. Blue Square chose to rent properties to chains that took over stores of Mega-You — was it to salvage Mega or to protect other interests of Blue Square? Published proposals to buy the Mega retail chain actually focus on Blue Square Real Estate. Admittedly, one has to buy the properties in order to continue operating stores in them, but that is only due to a structure created by the owners that may now play against Mega. Hence, it could make a major difference whether the potential buyer is a retailer or a real-estate developer. It is in the interest of the public and of the food retailing industry that the buyer be required to take over Mega’s supermarket business as well and not dispose of it. It is furthermore important that the supermarket industry retain at least one other strong retail chain as a challenger to Supersol, rather than leaving Supersol over-powerful against competition dispersed among several small and medium chains.

There is not really a good reason to miss “Blue Square” as a co-operative. A new, competitive business ownership and direction had an opportunity to re-create the supermarket chain and its brand. The chain was re-branded as Mega, and yet it disappointed because core components of strategy, culture and implementation were flawed. It is now time to re-invent the concept of the chain and its brand. Nonetheless, the title “Blue Square” at Alon will be quite hollow and meaningless without the supermarket retail chain.

Ron Ventura, Ph.D. (Marketing)

Surveys, a major part of marketing research, seem to be in perpetual motion of change and development. Many of the changes in recent years are tied to technological advancement. About fifteen years ago online surveys — delivered over the Internet — began to rise as a dominant mode of survey administration; now, researchers are being pushed to run more of their surveys on mobile devices, namely smartphones and tablets, in addition to, or as a replacement for, administration on desktop and laptop computers.

Yet some important distinctions between those two modes can make the transfer of surveys between them flawed. Just as it was wrong to suggest in the past that survey questionnaires administered in face-to-face interviews could be seamlessly transferred to phone interviews, it would be wrong today to suggest a seamless transfer of surveys from web browsers on desktops/laptops to mobile browsers (or apps).

In the latest Greenbook Research Industry Trends (GRIT) Report of Q3-Q4 2015, the authors suggest that there is still much room for improvement in adjusting online survey questionnaires to run and display properly on mobile devices as well. They find that 45% of their respondents on the research supplier side and 30% on the research buyer (client) side claim that their companies design at least three quarters (75%-100%) of their online surveys to work effectively on mobile phones; however, “that tells us that over 50% of all surveys are NOT mobile optimized” (p. 14, capitals in the original). The authors thereby implicitly call on marketing researchers to do much more to get their online surveys fully mobile-optimized. But this is not necessarily a justified or desirable requirement, because not all online surveys are appropriate or applicable for answering on smartphones, or even on tablets. There can be multiple reasons for a mismatch between these modes for administering a particular survey: the topic, the types of constructs measured and the instruments used, the length of the questionnaire, and the target population relevant to the research. Consumers use mobile devices and personal computers differently (e.g., in purpose, depth and time), and this is likely to extend to how they approach surveys on these devices.

  • The GRIT survey of marketing researchers was conducted with a sample of 1,497 respondents recruited via e-mail and social media channels, of whom 78% are on the supplier side and 22% on the client side. Nearly half (46%) originate in North America and a little more than a quarter (27%) come from Europe.

Concerns about the coverage and reach of a research population have followed online surveys from the beginning. Of the different approaches to constructing samples, including sampling frames (e.g., e-mail lists) and ad-hoc samples (e.g., website pop-up survey invitations), the panel methodology has become the most prevalent. But this approach is not free of limitations or weaknesses. Panels have a ‘peculiar’ property: if you do not join a panel, you have zero probability of being invited to participate in a survey. Mobile surveys may again pose similar problems, perhaps even more severely, because users of smartphones (not every mobile phone can load surveys), and even more so of tablets, constitute a sub-population that is not yet broad enough, and these users also have rather specific demographic and lifestyle characteristics.

  • Different sources of contact data and channels are used to approach consumers to participate in surveys. Companies conduct surveys among their customers for whom they have e-mail addresses. Subscribers to news media websites may also be included in a survey panel of the publisher. Members of forums, groups or communities in social media networks may likewise be asked to take part in surveys (commissioned by the administrator).

Decreasing response rates in phone and face-to-face surveys were an early driver of online surveys; these difficulties have only worsened in recent years, so that online surveys remain the viable alternative, and in some situations are even superior. Online self-administered questionnaires (SAQs) of course have their own genuine advantages, such as the ability to present images and videos, interactive response tools, and greater freedom to choose when to fill in the questionnaire. However, as with former modes of data collection for surveys, response behaviour may differ between online surveys answered on personal computers and those answered on mobile devices (one should also consider the difficulty of controlling what respondents do when filling in SAQs on their own).

The GRIT report reveals that the most troubling aspects of panels for marketing researchers are the quantity and quality of respondents available through those sampling pools (top-2-box satisfaction: 36% and 26%, respectively). In particular, 33% are not at all satisfied or only slightly satisfied with the quality of respondents. The cost of panels also generates relatively low satisfaction (top-2-box: 34%). Marketing researchers are more satisfied with the timeliness of fielding, the purchase process, ease of accessing a panel, and customer service (49%-54%). [Note: the 33% compares with ~20% for ‘quantity’ and ‘cost’ and ~12% for the other aspects.]
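For readers less familiar with the metric, a top-2-box score is simply the share of respondents who choose the two highest points of a rating scale. Here is a minimal Python sketch of the calculation, using made-up counts on a five-point satisfaction scale (illustrative numbers only, not figures from the GRIT report):

```python
# Hypothetical counts of responses on a 1-5 satisfaction scale
# (illustrative numbers only, not taken from the GRIT report).
ratings = {1: 40, 2: 95, 3: 230, 4: 110, 5: 25}

def top_2_box(counts, scale_max=5):
    """Share of respondents choosing the two highest scale points."""
    total = sum(counts.values())
    top2 = counts.get(scale_max, 0) + counts.get(scale_max - 1, 0)
    return top2 / total

print(f"Top-2-box satisfaction: {top_2_box(ratings):.0%}")  # -> 27% for these counts
```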

The GRIT report further identifies four quadrants of panel aspects based on satisfaction (top-2-box) versus (derived) importance. The quality and quantity of respondents available in panels occupy the ‘Weaknesses’ quadrant, as they generate less satisfaction while being of higher importance. Customer service and the purchase process form ‘Key Strengths’, being of higher importance and sources of higher satisfaction. Of the lower-importance aspects, cost is a ‘Vulnerability’ whereas access and timeliness are ‘Assets’. The ‘Weaknesses’ quadrant is especially troubling because it includes the properties that define the essence of a panel as a framework for repeatedly drawing samples, its principal purpose. The assets and strengths in this case may not be sufficient to compensate for flaws in the product itself, the panel.
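The quadrant logic itself is straightforward: each aspect is placed according to whether its satisfaction and its derived importance fall above or below chosen cut-offs. The sketch below is only an illustration of that classification; the importance scores and the median cut-offs are assumptions, not values from the report (the satisfaction figures loosely echo the top-2-box percentages quoted above):

```python
from statistics import median

# (satisfaction, derived importance) per panel aspect; the importance values
# are hypothetical and the satisfaction values are only indicative.
aspects = {
    "respondent quality":  (0.26, 0.80),
    "respondent quantity": (0.36, 0.75),
    "cost":                (0.34, 0.40),
    "customer service":    (0.54, 0.70),
    "purchase process":    (0.52, 0.65),
    "access to panel":     (0.50, 0.35),
    "timeliness":          (0.49, 0.30),
}

# Assumed cut-offs: the median of each dimension splits the map into quadrants.
sat_cut = median(s for s, _ in aspects.values())
imp_cut = median(i for _, i in aspects.values())

def quadrant(sat, imp):
    """Classify an aspect by high/low satisfaction and high/low importance."""
    if imp >= imp_cut:
        return "Key Strength" if sat >= sat_cut else "Weakness"
    return "Asset" if sat >= sat_cut else "Vulnerability"

for name, (sat, imp) in aspects.items():
    print(f"{name:20s} -> {quadrant(sat, imp)}")
```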

Surveys allow researchers to study mental constructs, cognitive and affective: perceptions and beliefs, attitudes, preferences and intentions; more broadly, they may look into thoughts, feelings and emotions. Survey questionnaires entail specialised methods, instruments and tools for those purposes. Furthermore, surveys can be used to study concepts such as logical reasoning, inferences, and the relations and associations established by consumers. In the area of decision-making, researchers can investigate processes performed by consumers or shoppers, as reported by them. It is advisable to validate and expand the findings and lessons on decision processes by using other types of methods, such as verbal protocols, eye tracking and mouse tracking (on web pages), as research participants perform pre-specified tasks. However, surveys should remain part of the research programme.

Much of the knowledge and understanding of consumers obtained through surveys cannot be gained from methods and techniques that do not directly converse with the consumers. Data from recordings of behaviour or from measures of unconscious responses may lack important context from the consumer’s viewpoint, which can make those findings difficult to interpret correctly. Conscious statements by consumers about their thoughts, feelings, experiences and actions may not be fully accurate or complete, but they do represent what consumers have in mind, and often enough this guides their behaviour — we just need to ask them in an appropriate and methodical way.


The examples below demonstrate why different approaches should be used in combination to complement each other, and how surveys can make their own contribution to the whole story:

  • Volumes of data on actions or operations performed by consumers, as entailed in the framework of Big Data, provide ‘snapshots’ or ‘slices’ of behaviour, but seem to lack the context of consumer goals or mindsets needed to connect them meaningfully. One has to infer indirectly, or guess, what made the behaviour occur as it did.
  • Big Data also refers to volumes of verbatim text in social media networks, where the sheer amount of data gives the illusion that it can replace input from surveys. However, only surveys can provide the kind of controlled and systematic measures of beliefs, attitudes and opinions needed to properly test research propositions or hypotheses.
  • Methods of neuroscience inform researchers about neural correlates of sensory and mental activity in specific areas of the brain, but they do not tell them what the subject makes of those events. In other words, even if we can reduce thoughts, feelings and emotions to neural activity in the brain, we would miss the subjective experience of the consumers.

Marketing researchers are not expected to move all their online surveys to mobile devices, at least not as long as these co-exist with personal computers. The logic of the GRIT report is probably as follows: since more consumers spend more time on smartphones (and tablets), they should be allowed to choose, and be able, to respond to a survey on any of the computing devices they own, at a time and place convenient to them. That is indeed a commendably liberal and democratic stance, but it is not always in the best interest of the survey from a methodological perspective.

Mobile surveys can be very limiting in terms of the amount and complexity of information a researcher may reliably collect through them. A short mobile survey (5-10 minutes at most) with questions that permit quick responses is unlikely to be suitable for adequately studying many of the constructs discussed earlier and for building a coherent picture of consumers’ mindsets and related behaviours. Such surveys may, however, be suitable for collecting particular types of information, and may even have an advantage in this, as suggested shortly.

According to the GRIT report, 36% of the researchers responding estimate that the online surveys their companies carry out take up to 10 minutes on average (short); 29% estimate their surveys take 11-15 minutes (medium); and 35% give an average estimate of 16 minutes or more (long). The overall average stands at 15 minutes.

These duration estimates refer to online surveys in general, and the authors note that the longer surveys in particular would be unsuitable as mobile surveys. For example, 16% of respondents state that their online surveys take more than 20 minutes, which is unrealistic for mobile devices. At the other end, very short surveys (up to five minutes) are performed by 10%.

There are some noteworthy differences between research suppliers and clients. The main finding to notice is that clients are pressing for shorter surveys, ones that could also be answered on mobile devices:

  • Whereas close to 10% of suppliers perform surveys of up to 5 minutes on average, a little more than 15% of clients perform surveys of this average length.
  • Suppliers are more inclined to perform surveys of 11-15 minutes on average (approx. 33%) compared with clients (about 23%).
  • Suppliers also have a slightly stronger propensity for surveys of 16-20 minutes (20% vs. 16% among clients).

Researchers on the supplier side appear to be more aware of, and sensitive to, the durations online surveys should take to achieve their research objectives, and are less ready to execute the very short surveys that clients push for.

  • Interestingly, the report shows that the average estimated length in practice is similar to the maximal length respondents think an online survey should take. The authors propose that these results can be summed up as “whatever we answered previously as the average length, is the maximal length”. They acknowledge not asking specifically about mobile surveys, for which the accepted maximum is 10 minutes. This limit is more in accordance with clients’ stated maximum for online surveys (52%), whereas only 36% of suppliers report such a goal (32% of suppliers choose 11-15 minutes as the maximum, above the expected maximum for mobile).

Online surveys designed for personal computers are also subject to time limits, in view of respondents’ expected spans of attention, yet the limits are expected to be less strict than for mobile devices. Furthermore, the PC mode allows more flexibility in the variety and sophistication of the questions and response scales applied. A smartphone does not encourage much reflective thought, and this must be taken into consideration. Desktops and laptops accommodate more complex tasks, usually executed in more comfortable settings (e.g., consumers tend to perform pre-purchase ‘market research’ on their personal computers and conduct quick last-minute queries during the shopping trip on their smartphones) — this also works to the benefit of online surveys on personal computers. (Tablets are still difficult to position, possibly closer to laptops than to smartphones.)

Online surveys for mobile devices and for desktops/laptops do not have to have the same questionnaire content (adapting appearance to device and screen is just part of the matter). First, there is justification for designing surveys specifically for mobile devices. These surveys may be most suitable for studying feedback on recent events or experiences, measuring responses to images and videos, and performing association tests. The subjects proposed here share a reliance on System 1 (automatic) processing — intuition and quick responses (immediacy), emotional reactions, visual appeal (creativity), and associative thinking.

Second, it would be better to compose and design separate survey questionnaires, of different lengths, for personal computers and for mobile devices. Trying to impose a fifteen-minute online survey on respondents using mobile devices carries a considerable risk of early break-off or, worse, of diminishing quality of responses as the survey goes on. At least a short version of the questionnaire should be channelled to the mobile device — though that still would not resolve the issue of unsuitable question types. Even worse, however, would be an attempt to shorten all online surveys to fit the time spans of mobile surveys, because this could make the surveys much less effective and useful as sources of information and forfeit much of their business value.
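As an illustration of that channelling, the version served can be chosen when the respondent opens the invitation link, based on the device detected. The following Python sketch is hypothetical (the questionnaire identifiers and function name are invented, and a real project would rely on the survey platform's own device-detection facilities):

```python
import re

# Hypothetical questionnaire identifiers: a full-length version for
# desktops/laptops and a shorter, simplified version for mobile devices.
FULL_VERSION = "survey_v2_full_15min"
SHORT_VERSION = "survey_v2_short_7min"

# Crude user-agent check; real device detection is more involved.
MOBILE_PATTERN = re.compile(r"Mobile|Android|iPhone|iPad", re.IGNORECASE)

def select_questionnaire(user_agent: str) -> str:
    """Return the questionnaire version suited to the respondent's device."""
    if MOBILE_PATTERN.search(user_agent):
        return SHORT_VERSION
    return FULL_VERSION

# Example: a respondent opening the invitation link on a smartphone
ua = "Mozilla/5.0 (iPhone; CPU iPhone OS 9_2 like Mac OS X) AppleWebKit/601.1"
print(select_questionnaire(ua))  # -> survey_v2_short_7min
```

Such routing addresses only the length and format of the questionnaire; as argued above, the types of questions posed would still need to be adapted separately for each mode.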

Marketing researchers have to invest special effort to ensure that online surveys remain relevant and able to provide useful and meaningful answers to marketing and business questions. Reducing and degrading surveys just to obtain greater cooperation from consumers will only achieve the opposite — it will strengthen the position of the field of Big Data (which worries some researchers), as well as of other approaches that probe the unconscious. Instead, marketing researchers should improve and enhance the capability of surveys to provide intelligent and valuable insights, particularly by designing surveys that are best suited to the mode in which they are administered.

Ron Ventura, Ph.D. (Marketing)
