‘Disruption’ has become a widely accepted concept in business and management, an event one can only expect to happen at some point, whether in production, marketing, distribution and retail, or in other business functions. Disruptive innovation, mostly technological and digital, can help fix market weaknesses caused by a lack of progress in the methods and processes of ‘legacy’ companies, by operational inefficiencies, or by insufficient competition in a market. A disruptive innovator may also succeed by capturing consumer needs hidden or ignored by complacent incumbents. But disruptive innovation is not a magical cure; in fact, it tends to be quite a radical form of cure. Innovations of this kind have the potential to destabilise a market, create disorder and confusion, and cause dysfunction if the transformation spirals out of control, a matter of real concern to all parties involved.

Disruptive innovations have been introduced in various industries and categories of products and services. Disruption often occurs when a technological company imports a method or tool developed in the hi-tech community into a specific product or service category whose agents (e.g., providers, customers) are mostly unaccustomed to it and unready for it. Yet the innovation can take root if it meets a large enough group of innovative or tech-orientated consumers who welcome the new solution (e.g., a new way of acquiring or using a service). Thereafter, incumbent competitors find themselves obligated to adopt, if they are capable, similar or comparable methods or tools in their own operations. High-profile examples include: (a) Uber, which expanded the concept of taxi rides and ridesharing; (b) Airbnb, which disrupted the field of hospitality and short-term lodging (‘home-sharing’ vs. hotels and guest houses); (c) Netflix, which altered the habits of television viewing. In addition, companies in the new sector of financial technology (‘fintech’) offer digital, mobile app-based tools for consumers to manage their bank accounts, budgets and investments, challenging ‘legacy’ banks and financial service providers.

Certain technological innovations turn out, however, to be disruptive across the board. For instance, online social media networks and digital marketing methods (reliant on Big Data and analytic techniques) have been dramatically influencing how companies approach customers and interact with them in many product and service categories (well beyond technological goods or information and communication technology services). Furthermore, developments in artificial intelligence (AI) and robotics promise even more significant changes, from manufacturing to marketing and retail, and in the functioning of various products (e.g., smart home appliances and devices, the ‘upcoming’ driverless car).

Much damage may be caused if the innovative alternative solution is incomplete or the planning of its implementation is flawed. Overall, everyone should be prepared for a turbulent period of resistance, adjustment and adaptation that may extend until the ‘new-way-of-doing-things’ is assimilated in the market, or rejected. An episode concerning taxi transportation at the international airport near Tel-Aviv exposes how the flawed introduction of a disruptive innovation in a service domain can lead to blunder and service failure. Mistakes made through poor planning of a highly sensitive process of market transformation may turn the disruption into a mess-up instead of an improvement of the service.

Earlier this year (2017), the management of the Israel Airport Authority (IAA) launched a new bid for taxi operators to drive passengers to and from Tel-Aviv (Ben-Gurion) International Airport. At the end of May the 10-year permit of the primary taxi company licensed to serve the airport’s terminals expired; the IAA wanted to open the service to competition in the expectation that it would lead to fare reductions and perhaps other improvements (e.g., availability, punctuality of taxi journeys).

  • The competition in fact centres on picking up passengers from the airport; taxis not permitted to do so have to return empty after dropping off their passengers at the flight departure terminal. This is the advantage given to a primary taxi company.
  • Note: Shuttle or minibus service providers are additionally allowed to take passengers to more distant cities like Jerusalem and Haifa.

Only two companies responded and participated in the bid: the incumbent service provider (“Hadar-Lod”) and the mobile app company Gett, which mediates taxi service. The veteran taxi company has been driving passengers to and from the airport for 40 years. It has certainly developed proficiency in carrying air travellers over the years, but there were also misgivings about its practices, linked to its status as a mostly exclusive taxi service for individual passengers (travelling alone or with family and friends). A few years ago the Ministry of Transport intervened by issuing a calculator of recommended fares to help passengers ensure they pay fair prices.

Gett (originally GetTaxi, founded in 2010) manages a network connecting subscribed taxi drivers with passengers through its mobile app. The company now operates in over 100 cities in four countries (Israel, United States, United Kingdom, Russia). The location-based app facilitates matching between a passenger and a driver, from ordering the service, through journey planning and pricing, to concluding with payment via Gett. Unlike Uber, Gett works only with professional licensed taxi drivers and is not involved in supporting informal ridesharing journeys by unauthorised drivers (e.g., UberPop). Gett’s app focuses on the benefits of convenient ordering (no street hailing, no phone call), efficient matching through the network, and of course the promise of a lower journey cost.

Still, the company engages its subscribed taxi drivers but is not their employer; the fare income is divided between them at the will of Gett. The company commends itself on its website for higher pay to drivers, in-app tips and 24/7 live support, motivated by the idea that if Gett treats drivers better, they will reciprocate by treating their riders better. However, the arrangement has repeatedly emerged as a source of friction. Gett has changed its name, removing ‘Taxi’ from the title, to allow its brand to extend into a variety of delivery services (e.g., food, parcels) for domestic and business clients.

  • Taxi cars of member drivers in Gett’s network are marked by a label with its logo on the car’s side. Taxi drivers who also belong to a traditional local taxi company (‘station’) may carry its small flag on top of the taxi. However, in recent months taxi cars can be seen more frequently in the Tel-Aviv area carrying only a flag of Gett.

The absence of more traditional taxi companies from the bid could be the first sign of a problem. Those companies may have found it not worthwhile to commit to providing regular service at the airport. But as a replacement, Gett is not truly a ‘physical’ taxi company and has unique characteristics, which leaves the operation of a taxi service by Gett open to much ambiguity. Drivers subscribed with Gett can ‘double’, carrying passengers either via Gett’s app or with the standard taxi meter installed in the car. Are traditional taxi companies ‘hiding’ behind drivers also associated with Gett? And if Gett had the permit, would it allow drivers in its network to take passengers without its app (i.e., leave money on the table from such journeys)? Moreover, Gett’s drivers have to choose in advance during which periods they act as standard taxi drivers and during which they carry passengers on call from Gett’s app. This situation could lead to confusion: under what ‘hat’ are drivers allowed to get in and out of the airport, and at what time are they allowed to choose which type of passenger-customer to take?

Furthermore, the service could be unfairly restrictive for passengers who are not subscribed customers of Gett, especially when arriving from abroad. There could be several reasons for passengers to find themselves in an inferior position: they may not have a mobile phone that supports software applications; they may not feel comfortable or skilled in using mobile apps; or they may not be confident in paying through a mobile app (e.g., preferring to pay taxis in cash). It may be hard to believe, but such people do exist in our societies, in different walks of life. It is also known that smartphone users are selective in the number and sources of apps they are willing to download to their devices. It could be futile to try to force consumers to download a particular app, but it would be especially unfair to require users to download Gett’s app just so they can be driven away from the airport. The IAA should not have allowed from the start an outcome in which a company of the type of Gett becomes the single provider of taxi service at the airport, primarily for carrying returning residents or visiting tourists (the latter may not even be aware of Gett beforehand). The ‘disruption’ would have actually become a distortion of service, leaving customers either with no substitute or with confusion and frustration.

But something else, awkward enough, happened. The two companies reached an agreement to submit a joint offer in which they committed to lower fares by 31% on average from the current price level. It is unclear who initiated the move, yet it is plausible that Gett was about to offer a much lower price for taxi rides, affordable under its model and platform, and that the management of the Hadar-Lod taxi company was alerted and, in order to secure its stay in business, felt compelled to match such an offer or simply join hands with Gett. The drivers belonging to Hadar-Lod thought otherwise and started a spontaneous strike at the end of May. The two bidders tried to reach a new agreement, but eventually the veteran company had to retreat. One cannot be certain that drivers with Gett would have co-operated either; the new price level may have been affordable for Gett but not necessarily worth the ride for the drivers. Apparently, the recommended official price had already gone down, or was about to go down, by 7%, and with the further reduction committed in the bid offer, the taxi fare would drop on average by 38%. Drivers would have to work many more hours to fill the gap. The cut was too deep; it may have worked well for the companies and their management, but it could never work for the drivers. (Note: An explanation from a taxi driver with Gett helped to describe the situation above.)

  • Having taxis from both companies in service would have provided some remedy, offering a transportation solution for every type of customer-passenger. But a certain mechanism, and a person to supervise it, would be needed to keep order on the taxi platform. For instance, travellers subscribed with Gett might schedule their ride while in the luggage hall, and Gett taxis would be waiting ready to pick them up. One would have to make sure enough taxi cars remain available for the other passengers.

That bid is now cancelled. The IAA declared that it would soon publish a new bid, and until its results are known, any licensed taxi driver can arrive at and leave the airport with passengers as long as they register with the IAA. Are the official recommended prices still in place? Who will regulate the operation and watch that taxi drivers respect the consumer rights of their passengers? Who, in particular, will supervise the allocation of passengers to authorised taxis at the arrival terminal (i.e., dispatching)? Answers will have to be found on the ground. It is no surprise that the new situation has been received with apprehension by consumers-travellers and taxi drivers alike.

Consumers will have to learn from experience, or from relatives and friends, what price ranges are acceptable for rides to and from the airport, and form anew their reference points for a fair price and the highest (reservation) price they are willing to pay. They may also set a low price level below which a reduced price may be suspected as “too good to be true”. A discounted price offered by a single driver to attract passengers, deviating too far from a ‘normal’ price, should alert the customer-passenger that something could be wrong with the service, unless there is a logical reason for the reduction. For example, the taxi driver may suggest ridesharing a few arriving passengers to a common destination area in Tel-Aviv; some passengers may be happy to accept, but the terms must be stated in advance. It is unclear how long the interim period will last, but the notions about pricing described above may remain valid even afterwards, under a new service regime.

Making changes like adding competition, especially by involving a disruptive innovation in the service domain, can improve matters. However, the process must be handled with care and watched over to prevent the system from derailing during the transformation. In this case, the IAA could and should have planned and managed the bid, and the implementation of its plausible outcomes, more wisely. At this time, at least one traditional taxi operator must be allowed at the airport in addition to an innovative service mediated by a company like Gett, and rules have to be set and respected. Rushing into a drastic and innovative transformation of service will not help its chances of success; it will just invoke confusion and resentment. Sufficient time and support must be given for the customers-passengers and taxi drivers to accommodate and adapt to the new service settings at the international airport.

Ron Ventura, Ph.D. (Marketing)


Tremors continue to shake the ground in the field of television viewing since the start of this decade, especially with respect to how televised content is consumed. It is possible to say that the very concept of television is being transformed. The move of Netflix away from rental of DVDs delivered by mail towards online streaming seems to have signalled the start of this transformation, which is affecting consumers and the media industry together. Netflix is of course not alone in changing how televised video content is made available to consumers; other digital services include Hulu, Amazon Prime, and YouTube, as better-known examples. Many consumers, most of all Millennials, are attracted to having better control and discretion over what, how and when they watch video programming content of the kind we are accustomed to associating with TV.

Two primary shifts in TV viewing need to be considered: (1) watching fewer TV programmes in real-time; and, more important and characteristic of the changes in the past decade, (2) watching less TV on the screen of a TV set in favour of the screens of personal computers, tablets and smartphones. The first shift is not so new: watching a programme at a time other than its scheduled broadcast can be traced back at least thirty-five years, thanks to the VCR (Video Cassette Recorder); later on came the DVD, but most DVD players used by households could not record.

The popularity of watching TV content in shifted time has increased somewhat more with the introduction of new devices and services, namely the Digital Video Recorder (DVR) and Video-on-Demand (VOD). The VOD service has actually created new options for viewers to access a special video library and watch programmes or films that are not available on broadcast channels (either not at all or not during that period).

The second shift is more closely associated with the Internet availability of video content, including TV programmes, that does not depend on traditional broadcasters (networks, but also cable and satellite TV services). Whether the application for watching the content is web-based or a mobile app is less crucial to our matter. Streaming potentially breaks the tie between televised content and the physical TV set (though that is an option, not a necessity). It is not surprising that the new-age digital media companies offering video content by streaming have been followed by TV networks worldwide, which now allow consumers to view their programmes online (viewing may be free or by paid subscription).

As we look in greater detail at the various possibilities for viewing televised content, we may find that defining TV viewing is not clear-cut. For instance, if one is streaming the content of a programme (e.g., a comedy episode) ordered from Netflix to a laptop that is connected to a TV set (or streaming directly to a Smart TV), is he or she watching TV? Apparently, 79% of US streamers primarily stream video content to their TV sets (CSG International, Insights Blog, 8 Sept. 2016). Conversely, if one is watching a programme (e.g., an evening news edition) at the time of its broadcast but on a laptop by live streaming, is he or she not watching TV? It can be confusing. Consumers stream TV content of the type of TV series episodes (e.g., comedy, drama, crime), documentaries, news editions, as well as cinema films. The key to achieving greater coherence and consistency may be to define ‘TV’ by the kind of its programming content, not by the technology of devices or distribution platforms (BARB’s Annual Viewing Report: UK’s Viewing Habits, April 2016, pp. 6-7; see also pp. 8-9).

Many consumers, mostly the younger ones (under 35), are turning away from fixed-timetable TV programming. They are more selective about the programmes they wish to watch at a time convenient to them. The choices of more savvy TV viewers increasingly take the form of cherry-picking. They may receive recommendations on what programmes to watch from friends who have similar tastes or from the online content provider, based on their stated preferences and past viewing behaviour (possibly adding what similar-in-kind programmes others are viewing). Consumers may choose to watch, for example, legacy TV series or programmes of a particular genre, as well as new TV-type programmes produced independently by video content providers such as Netflix, exclusive to their subscribers.

For consumers who have less leisure time, it may also become a matter of efficiency: allocating their limited TV viewing time to the programmes they really want to watch, rather than being constrained by the programmes scheduled for those hours. People watch only a small portion of the channels available in their TV cable or satellite packages and are less comfortable with inflexible programming plans. TV viewers wander to other options, such as the VOD service of their regular provider, or stream from online video content providers.

When looking at ‘traditional TV viewing’, which includes live TV (aka real-time or linear) and adjunct time-shifted services like DVR and VOD(†), the number of weekly hours that US young persons aged 18-24 spend watching TV has decreased across all quarters, from a level of 23-26 hours in 2011 to 15-18 hours in 2015, and further to 14-16 hours in 2016 (excluding Q4). In addition, when comparing age groups, the 12-17 (teenagers) and 18-24 age groups are most similar to each other in their number of TV watching hours (decreasing between Q1/2011 and Q3/2016) and are now even closer to each other than five years ago. There is a decline in the number of weekly TV watching hours also in the 25-34 age group and, though more modest, among those aged 35-49, but there is no discernible trend in the 50-64 age group and even a small increase among seniors 65 years old and above (based on data from Nielsen, visualised by MarketingCharts.com). Note that the band of weekly hours rises with age. Another chart summarises those differences more sharply: the number of hours spent watching TV per week decreased in the younger age groups (12-17 & 18-24) by 37%-40% in five years, by 28% in the 25-34 age group, and by only ~11% among those aged 35-49 (weekly hours increased 7% among those 65+).

  • Note: The original figures by Nielsen indicate that most traditional TV viewing time is still ‘live’ or linear, accounting for 85%-90% of time, and the rest goes to viewing programmes by DVR or VOD (figures of BARB for the UK indicate a similar ratio).
  • † Clarification: ‘Live’ TV includes, of course, live programmes or events that are filmed as they are broadcast or transmitted, but ‘live’ refers here more generally to programmes viewed at the time scheduled by a TV company for broadcasting, to be distinguished from programmes watched at a time chosen by the viewer. One possible development is that the concept of VOD will be expanded dramatically.
  • Statistics about streaming usage are still difficult to obtain. Figures cited by MarketingCharts from a less familiar source (‘SSRS’, an online and mobile survey company) suggest that the number of hours Millennials spend weekly watching streamed content through various modes increased gradually and almost continuously in 2013-2015, though the overall level is still modest (~1.5 hours in 2013, rising from 3.3 to 4.5 hours during 2014, until standing on a plateau of 5.5-5.7 hours in 2015).

Research by the British Broadcasters’ Audience Research Board (BARB) notably shows that, other than the TV screen, personal computers (desktop/laptop) and tablets are the devices more frequently used for watching TV content, usually in the evening, while smartphones remain much less popular; smartphones are probably less accepted for TV viewing because their screens are too small to enjoy the images and are not convenient to watch for an extended time. Furthermore, with no evidence showing otherwise, it seems that the greater part of time spent viewing away from a TV screen in the UK is still done at home (BARB’s Annual Viewing Report, April 2016, pp. 10-11).

The more traditional TV networks and service providers (cable and satellite) have to acknowledge that the rules of the game in the TV domain have changed: “As both traditional service providers and streaming services alike are looking to engage viewing audiences and hold on to subscribers, catering to the individual needs of each viewer is increasingly paramount” (CSG International: Insights Blog; see their infographic of global content viewing trends). Indeed, we can see that TV networks and service providers try to adapt, to different extents and at different paces, to changes in the competitive environment and to the shifts in consumers’ viewing patterns. Networks are getting more deeply into the Internet space, making at least some of their content available to viewers by streaming on their websites or through mobile apps, with even more programming content offered to customers by subscription (e.g., BBC iPlayer). Especially in the area of news, we see news websites inserting more video content into their reports, including live coverage. The cable and satellite service providers are working to enhance their channels and VOD libraries to bring in content that viewers may be seeking by streaming (e.g., HBO), and thus dissuade customers from churning to other services.

The media industry, most broadly speaking, has seen a lot of movement in recent years. Media companies, mainly the larger corporations, seem to be trying to get into each other’s territory: TV (broadcast, streaming), digital media content (Internet & mobile), telecom infrastructure, TV and cinema production, etc. The new business and technology mixtures are created through mergers and acquisitions, accompanied by the integration of different functions (horizontal and vertical) that will give companies greater capabilities along the wider spectrum of production and transmission of video content to consumers. Notably, there is an emphasis on delivering digital content through various platforms in combined modes (e.g., text, still images, video and audio). Content characteristic of TV programmes is just part of the media ‘basket’ companies are planning to offer their customers. A further implication is that content of different domains in Internet and TV formats will be increasingly blended on the same screen, from news, information and education to entertainment and shopping.

In a highly remarkable merger now in sight, AT&T plans to acquire Time Warner for $85 billion. The deal between the parties is already agreed, but approval by the American antitrust authority is still in question (peculiarly, a political matter is also involved because of the bitter dispute between President Trump and CNN, owned by Time Warner). This merger would combine the competencies of Time Warner, mainly in content (including production and broadcasting), with the telecommunication infrastructure and services of AT&T in Internet and wireless (‘mobile’) communication. It should allow the integrated corporation to reach a wider audience with richer and more varied content on the screens of TV sets, personal computers and mobile devices.

Time Warner holds a broad and impressive portfolio of TV channels, production units and digital services for delivering their content. The most famous brand is probably CNN, which includes the American and International channels. But Time Warner also owns HBO for TV programmes and Cinemax for cinema films, both available to subscribers by streaming. In fact, Time Warner also owned in the past the AOL Internet company. In 2015 AOL changed hands, acquired by the telecom company Verizon, and within months Verizon also acquired the troubled Internet company Yahoo. The expansion of Verizon is yet another example of integrating telecom infrastructure and services with digital content capabilities, but it does not yet have a strong TV presence.

  • Time Warner spun off its print arm of magazines in 2014, identified now as the independent company Time Inc. Some of its better-known magazine brands include Time magazine for current affairs and Fortune magazine for business affairs, but overall the company publishes magazines in various areas of interest (e.g., entertainment, fashion & beauty, photography, home and design, as well as politics, business and technology).

The TV business is reshaping. Farhad Manjoo of the New York Times (October 2016 [1]) foresees a future of TV “built on lots of bold, possibly speculative experiments”. Advanced digital technology companies (information, Internet & mobile) of the kind of Google, Amazon, Facebook and Netflix may readily disrupt the efforts of the large telecom and “old-guard” media companies. With TV viewing habits also fluctuating and reforming, business decisions involve much “educated guessing”. In another article, Barnes and Steel [2] consider the repercussions of the AT&T-Time Warner deal for other media and telecom companies. While some of those companies declare their objection and intent to fight the merger, they may follow a similar trail. The cases of four companies are reviewed:

  1. The Walt Disney Company insists on its market position as a predator rather than prey to technology companies. Its current objective is to bring more premium TV channels directly to consumers by streaming; that may entail the purchase of desired brands (e.g., Pixar, Marvel, ESPN).
  2. Comcast, a telecom company (cable [TV] and broadband), also owns TV networks and channels (e.g., NBC and MSNBC), as well as film and TV studios. It may not sit idle and may try to take over another telecom company to complement its coverage (e.g., extending into wireless).
  3. 21st Century Fox (cable TV, films) claims to have no plans for expansion and entry into new areas. It previously failed to acquire Time Warner (2014), but the recent AT&T-Time Warner deal may change its plans. (The Murdoch family is currently aiming to take full control of the UK-based Sky satellite TV service.)
  4. CBS and Viacom engage together in TV broadcasting, cable TV service, and TV and cinema productions. The controlling owner (the Redstone family) may now be encouraged to unite the sister companies (once again) into a single corporation.

We should not rush to assume that watching TV programmes on a TV set or watching programmes in real-time are about to end anytime soon. Traditional live TV viewing and digital video viewing are complementary, possibly preferred on different devices at different times and in differing circumstances (eMarketer, 14 March 2016).

There may be personal but also social benefits or incentives to watching programmes or films in the ‘old-fashioned’ way. First, enjoyment and visual comfort, particularly for certain types of video content, are expected to be greater when viewing on a large screen (37” and above). Second, there is real value in watching a programme, such as an episode of a popular TV series, at the same time as others, so that one can share impressions and discuss it with acquaintances the next day or right after the show. A similar argument can be made for watching a sports contest live, on top of the thrill of watching the event as it unfolds. Third, there is still pleasure and enjoyment in watching TV together with family or friends on a large TV set at one’s home (and in some cases in a pub or bar).

The new technological developments in distributing and displaying video content in high quality offer opportunities for consumers to improve and enrich, in several ways, their experience of viewing TV programming and other types of video content. Consumers would be given more freedom of choice and flexibility to watch TV as they truly like. Companies across the wide spectrum of media, telecom and Internet may also find new business possibilities to enhance their services to customers, including TV-like content in particular. But it will take more time to see how the TV domain shapes up.

Ron Ventura, Ph.D. (Marketing)

Notes:

In the New York Times (International Edition), 26 October 2016:

[1] “A Risky Bid With the TV Industry Up for Grabs”, Farhad Manjoo

[2] “A Chilly Reaction to AT&T Deal”, Brooks Barnes and Emily Steel

One of the things people probably dislike most is getting sick because of food they have eaten, usually an annoying and unpleasant experience. The sickness can occur within hours, or two to three days, after eating the contaminated food. The trouble is that one often has no way of anticipating the disease until feeling sick, and sometimes even after becoming ill it is not easy to connect the disease with the consumed food. A food item may come from a respected and trusted brand, the expiry date may look fine, the food may even taste good, and still, without raising suspicion, it may cause poisoning and sickness. Food companies walk on the edge of food safety when they skip the precautionary measures necessary to prevent and detect contaminations in time, and even more so when they conceal problems or try to solve them quietly in the factory without warning their customers of a looming health risk.

  • The most common infections and poisonings are caused by bacteria of the type of Salmonella, Listeria, and E. coli. But a foodborne disease may also be viral (e.g., norovirus) or caused by insects (e.g., food moths). For most people a foodborne disease is not dangerous; it causes sickness and inconvenience, passing after a few days without medical treatment. Yet these diseases may be troublesome and cause more serious complications in people whose health is vulnerable (e.g., little children, seniors, pregnant women, people with prior illnesses or a weakened immune system).

This summer a number of incidents of food contamination were revealed in Israel. Two of the cases are especially significant and instructive: the cornflakes of Unilever (Telma) and the ready-to-eat salads of Shamir Salads. First, the failures exposed in the conduct of the two companies concerned command particular attention, and lessons should be taken from them. Second, these incidents were the earliest to become public (late July, beginning of August) and have put the matter of food safety under a spotlight. A number of additional incidents of contamination may have been revealed just because of that, partly reported by alarmed food companies themselves (e.g., salmon fish, halva, frozen potato fries, pre-prepared grilled hamburgers).

Unilever (Telma) — A salmonella contamination was discovered by Unilever in Israel in packages of a few of its cornflakes products under the brand name Telma (an Israeli-grown brand acquired by Unilever). The company insisted, however, that all contaminated packages remained in the company’s facility to be disposed of (they were converted into corn oil to be used as an energy source for another industrial process). When upset consumers and the Ministry of Health pressured Unilever to provide assurances that no packages had reached food stores, the company claimed to have checked that the marked packages were separated in its facility and excluded from delivery. Only this information was not accurate and not properly verified. It was soon revealed that some 240 contaminated parcels found their way out of the facility and were distributed to food stores. Some of those cornflakes packages were probably consumed, though no complaint of sickness was firmly connected with the cornflakes. Nevertheless, since cornflakes of the contaminated type are largely eaten by children, it is understandable that parents were strongly agitated by the belated discovery.

Unilever directed responsibility for the ‘mishap’ at an employee of a local logistics contractor who apparently misplaced delivery labels on some parcels and sent out the wrong packages. Even so, responsibility for the whole supply chain of Unilever’s products rests with the company marketing them (not with whoever physically distributes them). That is the onus of the brand’s owner towards its customers. That Unilever failed to detect this mistake earlier only makes the explanation weaker.

  • Food safety experts suggest that it is unusual for a dry product like cornflakes to contract a bacterial contamination of salmonella. Additionally, the cornflakes are roasted at a very high temperature that kills any bacteria that might have settled in the material. Therefore, it is much more likely that the culture of salmonella developed during packaging or storage in preparation for distribution.

Shamir Salads — A salmonella contamination was found in Mediterranean salads that contain tehina. Shamir Salads, like other food producers, buys the tehina mix as a raw material from a supplier, in this case a company named “HaNasich” (meaning “The Prince”). The grave problem for Shamir Salads is that the company did not identify the contamination itself. It failed twice: by not initially testing its raw material, and by not testing the final salad product for possible contamination before delivery to retailers. It should be clarified that laboratory tests are run on samples and therefore cannot rule out contamination absolutely, but if sampling is conducted appropriately it gives a good chance of detecting traces in time for further checks and corrective action. Skipping sampling and tests altogether cannot be excused.

The management of Shamir Salads argued in its defence that the company trusted its supplier, HaNasich, and therefore did not see any need to continuously check the quality and safety of its tehina. The company was deeply disappointed and felt betrayed by its supplier for not advising it of any problems. The reference to the concept of trust between parties is not unfounded, but one can still check internally as a precautionary control measure without violating trust in the other party. A company does not have to trust blindly, especially not when a matter as sensitive as health is concerned. It may even be doing its supplier a favour, as the supplier could miss a contamination in its own factory. Much less understandable is the lack of tests on the company’s finished products. If not before or during production, then at the very least testing of the finished salads would have given the company a chance to detect a contamination before the products left the factory, investigate backwards, and identify the source in the tehina. Other companies (e.g., Strauss, Tzabar Salads) using the same tehina ran tests on their finished products and identified the contamination, linking it to the tehina by HaNasich.

Both Unilever and Shamir Salads were ultimately forced to order recalls of their products. A recall becomes damaging in the public eye when the company does not seem to control the process and its timing, or is not honest with consumers about the recall’s reasons and circumstances.

Complicated relations and flawed safety procedures in the food industry may bear some responsibility for contaminations getting lost or hidden from public knowledge. Companies have an understandable interest in trying to solve a production problem they identify internally, in the hope of containing it “behind closed doors”. It is a matter of calculated risk — but risks sometimes materialise in the worst way. The Israeli Ministry of Health is criticised for not putting in place a proper procedure that requires food producers to perform microbiological lab tests on samples of finished products, and for the vagueness of current reporting procedures. For instance, companies are not required to report to the ministry until after ordering a recall due to contamination. Consequently, there are repeated conflicts over responsibility, and exchanges of blame, between producers and the Health Ministry. Furthermore, food companies work with private labs that are in turn required to report directly to the Health Ministry only in case of contamination found in finished products, not in raw materials. The implied outcome: food companies have a latent incentive to keep anything that happens in the factory silent, handle a “situation” for longer, and not report to anyone until the problem becomes severe or an urgent recall is inevitable.

Issues of food contamination and foodborne illness concern many countries, gaining particularly growing awareness in Western countries. Fortune magazine published an article, a kind of special report, on problems of food safety in the United States (October 2015), titled “Contamination Nation”. The number of food recalls more than doubled from 2004 to 2014 (2004: 288 recalls, of which 240 were of non-meat products; 2014: 659 recalls, 565 non-meat). Nearly half of recalls (47%) in the US are due to microbiological contamination. The highest proportion of recalls (21%) are of ready-to-eat food products.

  • According to the Centers for Disease Control and Prevention (CDC), 48 million Americans suffer from foodborne illnesses each year (128,000 are hospitalised and 3,000 die of a foodborne illness).

The writer, Beth Kowitt, proposes four reasons why it is so hard to battle food contamination and poisoning; their relevance extends to Israel and to many other nations:

  • Foodborne illnesses are very difficult to identify and trace to their roots — cases of illness are sporadic and therefore hard to tie to a specific “outbreak”; hundreds of components may be involved in isolating a cause of poisoning.
  • The food industry does not trust state regulators, their knowledge and tools — major food companies perform their own tests for bacteria on food and in factory premises, and develop a knowledge base independent of state departments or agencies (FDA, CDC); companies are reluctant to disclose information they do not have to, partly out of concern of being implicated before the epidemiological mapping is completed.
  • The more food is imported from other countries, the more difficult it gets to control and verify its safety — exporting countries have different food-safety standards and inspection regimes, and the more steps food passes through before entering the destination country, the more opportunities it has to become contaminated.
  • Consumers have to do more to protect themselves — when consumers seek to have certain ingredients reduced or excluded (e.g., potassium, salt, sugar), or refrain from consuming frozen products for health considerations, they could render their food less protected from bacterial contamination; consumers are also responsible for taking active measures to reduce contamination risks in their homes (e.g., washing hands, boiling milk, checking meat temperature).

It may be added to the last reason that safeguarding against food contamination starts at the facilities of the food producer, but it should continue through the retailers’ food stores and finally, indeed, in consumers’ home kitchens. Retailers are obliged to keep stores and displays clean at all times and ensure products are not kept beyond their expiry date (e.g., chilled dairy products, ready-to-eat meals, eggs). As for consumers, the American CDC recommends four practices for protection from contamination: Cook to kill bacteria; Clean working surfaces; Separate riskier items (meat, fruits and vegetables) from other food; and Chill to reduce the chance of bacterial cultivation.

Next to the article cited above, Fortune tells the story of the Texas-based Blue Bell ice-cream company, which demonstrates what happens when a food company stalls the treatment of contamination hazards at its plants and even hides them for too long. The crisis unfolded during 2015, but an investigation found that its roots may have existed since 2010. There were three deaths and two more serious patient illnesses in the same Kansas hospital in late 2014, and in total ten people were affected over five years by a listeria-type infection connected with the ice cream; establishing the connection with Blue Bell was hard.

Contamination occurred in two plants: at its Brenham, Texas home base, and in Oklahoma. It appears that already in 2013 the company discovered contamination in its Oklahoma plant that was not treated properly despite an FDA inspection. Importantly, bacteria were found in that plant on floors and catwalks (i.e., places from which bacteria can easily be carried by the movement of workers and objects). Additional flaws were found in further inspections, including “condensation dripping from machinery into ice cream and ingredient tanks; poor storage and food-handling practices; and failures to clean equipment thoroughly”. Because of its stalling, the company drifted into what experts call “recall creep” — this happens when executives think that limited action, each time they are told of listeria findings, is enough to solve the problem and contain the commercial damage, and then find themselves forced to perform greater recalls over and over again.

Blue Bell is the third-largest ice cream maker in the US and its products are widely admired. Many people across the country are said to have been saddened by the closure of the plants and the loss of their beloved ice cream for a period. This year the company resumed production and marketing, gradually adding more flavours and markets, after a thorough clean-up of the plants, a change of procedures and rules, and training of employees. One of the practices installed is “test-and-hold”, whereby a production series is sample-tested and all packs are held in storage until the series is cleared of bacterial contamination.

A serious, fatal crisis related to food safety in Israel occurred in 2003 with the milk formula for babies by Remedia. It should be noted this was not an incident of contamination. In this case the company made a change in the composition of one of its formula versions by which it drastically reduced, or eliminated from the product, vitamin B1. This ingredient is vital for the development of the nervous system of babies. As a result, critical damage was caused to the health of babies: four babies died and several more children grew up with irreversible damage to their development (neural, cognitive and motor). Although this event is different, and the consequences in the recent contamination incidents are much less severe, two relevant notions are in order. First, as the crisis of Blue Bell proves, a contamination incident can lead to just as severe consequences when the problem is mishandled and information is concealed from authorities and consumers. Second, Remedia made the grave mistake of placing all the blame on a German company (Humana) that was hired to develop, implement and test the new recipe (and erred in its tests). However, Remedia, not the faulty German company it worked with, was responsible and accountable for its product to the parents and babies in Israel. Remedia ceased to exist.

It is probably only human for a company’s managers to direct a justified accusation and blame for a failure at a contractor, supplier or business partner, as a way of saying: “Look, this is not a failure in our own operation; you can still trust us with everything we are doing for you”. It does mitigate responsibility somewhat, though from a consumer’s viewpoint this kind of ‘clearing’ does not work and is often doomed to be rejected. The companies that market the implicated products did allow them to be distributed to consumers. At the end of the day, it is their brand names on the products that count.

It is impossible nowadays to completely eliminate food contamination, particularly by bacteria. However, food companies (and not they alone) can and should make every effort to prevent bacterial and other types of contamination and poisoning. They are expected to show that they are proactively taking measures to that aim. In addition, owners and executives have to be open and sincere with consumers about the causes and circumstances of recalls, and consider revealing incidents even beforehand as an indication that the company is acting responsibly. It is a pure investment in the credibility of their brands.

Ron Ventura, Ph.D. (Marketing)

Note:

These articles appeared in Fortune (Europe Edition), Number 13, 1st October 2015:

“Contamination Nation”, Beth Kowitt, pp. 53-56.

“How Blue Bell Blew It”, Peter Elkind, pp. 56-58.


The decision of the British people in a referendum on 23 June 2016 to leave the European Union (EU) — known as ‘Brexit’ — promises to emerge as a most profound event in the nation’s recent history. The result of the referendum in favour of Brexit, by a majority of 52% to 48%, was decisive yet not by a large margin; moreover, the striking differences in voting patterns between England and Scotland, and even within England between London and other parts of the country, invoke deep tensions (in Scotland and London the Remain camp had a clear majority).

It is still very early to call the effects of Brexit, and they are hard to predict because a departure of a member country from the EU, let alone the United Kingdom, has never been experienced before. The effects are also expected to span multiple areas, including politics, economics, business, social welfare and standard of living. This article focuses on retailing; it reviews and contemplates early assessments of the plausible effects that leaving the EU can have on retailers and consumers in Britain. However, due to the early stage of the process, the ambiguity that surrounds the implications of leaving the EU, and the fact that the new British government has not yet enacted an exit from the EU (i.e., invoked Article 50 of the EU Lisbon Treaty), one should be cautious in taking these assessments as concrete predictions about the probable outcomes of Brexit.

Uncertainty mixed with pessimism has claimed an immediate toll on the value of the pound sterling (particularly its exchange rate against the US dollar); stock prices have also moderately declined on the London Stock Exchange, and further declines are foreseen as the process unfolds. The devaluation of the British pound is a critical factor whose effect is expected to roll on for several more years. Retailers are concerned that rising prices of imported goods (e.g., food, clothing) will deter consumers. In addition, the increased cost of imported raw materials and components used in production is likely to contribute to rising prices of local goods, further exerting inflationary pressure. Positive effects that may arise from this devaluation on exports are discussed later. The sense of “bad news” is not escaping consumers either, manifested in a quick and rather sharp decline in consumer confidence as reported by the GfK market research firm. This could mean that consumers become hesitant and more inclined to “wait and see”, postponing their more costly purchases, particularly of discretionary and leisure products and services.

The Centre for Retail Research (CRR) considers multiple aspects in which retailers and consumers are likely to be affected while entering a post-Brexit era. It suggests that a decline of 5% in the value of the pound against the euro would be enough to compensate for new tariff barriers imposed by the EU, and that the steeper decline that already occurred is therefore beneficial — it would help exports (e.g., e-commerce), a transition from imports to local production, and tourism. The CRR argues that the pound was already over-priced and needed correction (note that right after the referendum the pound declined 8% against the dollar and euro, but a slide started earlier, at the beginning of this year, so against values of late 2015 the pound declined by as much as 15%). But there are additional important factors with structural implications that are noteworthy: the need to fill changing jobs and a drive for automation; the need for new worker and consumer protection laws and regulations; and the re-settling of (digital) data protection regulation and mechanisms.

There is broad agreement that in order for Britain to retain relations with the European Single Market, it will have to continue to abide by the product and data protection standards of the EU. Britain will also not be able to completely restrict worker migration from the EU. The difference will be, however, that Britain will have to work by those rules without having a say about them — a warning the Remain campaigners continue to voice critically. Different models are contemplated for Britain’s relations with the EU in the post-Brexit era, notably joining Norway in the European Economic Area (EEA) or replicating the special relations of Switzerland with the EU. But the EU council nervously hurried to warn Britain, or any other country contemplating to follow, that it should not delude itself into expecting a status as advantageous as Switzerland’s.

  • Another avenue for resolution may be the trade arrangements of Israel, a non-member country, with the EU, including its participation in Horizon 2020, a programme for science and technology research and development.

References made in the media to changes in retail sales in June seem premature and hardly indicative of a real effect of the Brexit decision, primarily because only ten days remained to the end of the month after the referendum (some sources suggest waiting for July’s figures). Figures also vary depending on the basis of comparison (e.g., volume or value, last month or the same month last year, all stores or like-for-like [same stores]). For example, sales by volume decreased 0.9% in June compared with May (2016), yet compared with June of last year (2015) they increased 4.3% (by value, sales increased just 1.5% [Britain’s Office for National Statistics (ONS): Retail Industry — Sales Index]). Different figures were published by the KPMG consulting firm together with the British Retail Consortium (BRC): their Retail Sales Monitor shows that sales grew just 0.2% in June year-on-year, but when compared on a like-for-like basis they dropped 0.5%. The BRC-KPMG monitor furthermore indicates that non-food sectors, especially fashion, were hit harder than the food or grocery sector.
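The arithmetic behind these diverging headline figures is simple: the same percentage-change formula applied against different base periods. A small illustrative sketch, using hypothetical index levels chosen only to reproduce the volume figures cited above (they are not actual ONS data):

```python
def pct_change(current, base):
    """Percentage change of `current` relative to `base`."""
    return (current - base) / base * 100.0

# Hypothetical volume index levels (illustrative, not actual ONS data)
june_2015, may_2016, june_2016 = 107.9, 113.5, 112.5

mom = pct_change(june_2016, may_2016)   # month-on-month basis
yoy = pct_change(june_2016, june_2015)  # year-on-year basis

print(f"month-on-month: {mom:+.1f}%")  # a decline of about 0.9%
print(f"year-on-year:   {yoy:+.1f}%")  # an increase of about 4.3%
```

Like-for-like comparisons work on the same formula, except that the base includes only stores trading in both periods, which is why the BRC-KPMG like-for-like figure can even differ in sign from the all-stores measure.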

Recent observed changes may be attributed at most to a so-called ‘Brexit sentiment’. If we were to look for a more reliable indication of an immediate post-referendum shock, KPMG’s press release reports that sales fell particularly in the last week of June. The Financial Times (13 July ’16) indicates that according to its Brexit Barometer, day-to-day spending “may have bounced back to just slightly below what it was immediately before the June 23 referendum”. The number of visits to stores (‘footfall’) declined in the week immediately after the referendum (10% year-on-year on weekdays, especially on the High Street), recovered a little in early July, and was followed by another drop in visits. The fluctuations are not consistent and it is hard to conclude a trend at this time. The picture for Saturdays is even less bright: “high-street footfall on Saturdays, the most important shopping day, has now fallen year-on-year for three consecutive weeks”.

The Economist Intelligence Unit published, just before the referendum, a special and rather negative report on Brexit (“Out and Down: Mapping the Impact of Brexit”). It relates to key implications of Brexit for retailing: a fall of the pound, inflation in line with rising import prices, consumer purchasing hesitation, and more complex supply chains for retailers. According to its projection, 2017 will be the worst year for retailing; recovery will be felt during 2018-2020 as growth of retail sales volume resumes, but it will happen intermittently and sales will not return to the pre-Brexit level.

In order to better grasp how Brexit may change the direction of the British economy, and the well-being of consumers and retailers in particular, it helps to take a longer look backwards (i.e., as far as 2007) at retail sales and some additional indicators.

The ONS Retail Sales Index by volume (seasonally adjusted, excluding fuel): After a long period, from 2006 (shortly before the financial crisis) until late 2013, in which the sales volume index was almost stagnant at just about 100, it started lifting in early 2014 and reached a level of 112.5 by June this year (2016). This has been a positive sign of a return to the expansion years of the previous decade (~1996-2006). But the implementation of Brexit (i.e., at least while negotiating new trade agreements) threatens to halt the climb and impede the sector’s continued recovery from the lingering effects of the financial and economic crisis of 2007-2008 (including a ‘second spell’ in 2011-2013).

Growth in pay compared with inflation (ONS: UK Perspectives 2016, Personal & Household Finances [Section 4]): This is an indicator of the cost of living (or the purchasing power of income from work). We may notice three distinct periods: (a) a ‘shock’ response to the financial crisis, ~2008-2009, comprising a steep rise in consumer prices while growth in regular pay dissipated, and then a ‘correction’ of slowing price increases; (b) inflation higher than growth in pay, ~2010-2013 — during this period of the hardest burden on consumers, growth in pay remained at a bottom level of 1-2% while inflation climbed as high as 5% (2011) and subsequently “cooled” to 2-3%; (c) renewal of a real rise in pay, ~2014-2016, as inflation subsides, falling to near 0%, and pay growth reaches 2-3%. Worsening market conditions due to Brexit could lead once again to erosion of regular (weekly) pay and suppressed consumer spending.

Household spending (ONS: UK Perspectives 2016, as above [Section 5]): The average household expenditure, inflation-adjusted, decreased from ~£550 to about £510 per week between 2006 and 2012; spending then started to recuperate in 2013 and 2014, reaching £530. Improvement may have continued up to this year: on the one hand, regular pay increased in real terms in the past two years; on the other hand, real disposable household income in Britain has been hovering just above £17,000 since 2006 (after a climb in previous years), though lifting its head a little in 2015. Now there is a higher risk that such improvement in spending cannot continue.

Consumer Confidence Index (GfK): The research firm GfK conducted a one-off special survey in the week following the referendum to measure its Consumer Confidence Barometer (CCB) (normally updated on a monthly basis). It provides a sharp demonstration of the impact of ‘Brexit sentiment’: the (net) index value dropped from -1 in the previous survey to -9 after learning of the referendum result. The last time a similar single-month decline (8 points) was measured was in 2011, and only in 1994 had a larger single drop been measured. Those belonging to the Remain camp are more negative (-13) than those in the Leave camp (-5). Respondents to the barometer are asked about the current state of the economy and their expectations over the next twelve (12) months — 60% expect the economic situation to worsen (an increase of 14% from pre-referendum), and 33% expect prices to rise sharply.

The Financial Times presents in its Brexit follow-up a chart of the history of GfK’s confidence index from 2007 to 2016: the chart shows how the CCB dropped from just below 0 to -40 during the 2007-2008 crisis, recovered to -20, declined again to around -30 during the ‘second spell’ of the economic crisis in 2011-2013, and then climbed back to a little above 0 before the referendum. A decline of the CCB actually started earlier this year, before the steep single drop following the referendum. Consumer confidence has seen lower (net) levels and continuous descents over the past ten years; it may likewise deteriorate below -20 again after the recent drop.

A map by GfK shows variation across regions and demographic segments. Interestingly, the strongest ‘demoralising’ effect was found among the young group aged 18-29 (a decline of 13 points) compared with older groups (6-8 points off), yet the younger remain overall more positive and optimistic about the economy (index +6), especially compared with those aged 50-64 (index -21).

  • After three years of decline in the number of UK retail companies running into financial difficulties, since the last peak of 2012 (54), the number seems to be rising again in the first half of 2016, according to data gathered and reviewed by the Centre for Retail Research (note that not all companies going into legal administration necessarily go bankrupt and cease to operate). Growing pressure on retailers during the process of leaving the EU may put even more medium and large retailers (in number and size of stores) at risk of failure.

(Chart caption: After a significant drop last year, the number of retailers in trouble looks to be rising again in 2016.)

The depreciation of the British pound is expected to facilitate selling and increase exports to consumers in other countries through e-commerce (i.e., retailing or shopping websites) by retailers residing in the UK. Especially during the period in which existing trade agreements are still valid, it would be the best time for British retailers operating online to fill their coffers with cash. They will need to refrain from updating pound-denominated prices upwards for as long as possible. When new trade agreements are reached, the terms for purchasing online from British retailers may also change, and new adjustments will be required.

  • Ido Ariel of Econsultancy recommends three supporting marketing methods for encouraging international customers to purchase on UK retail websites during the interim period: inducing a sense of urgency and initiating proactive targeted prompting messages; offering targeted promotions to increase personalisation (e.g., geo-targeting); and enacting limited-time discounts.

However, the condition in which the British economy arrives at this historic juncture is concerning, having reduced its manufacturing sector too much over the years and relying too heavily on services. This situation may limit the country’s ability to exploit its currency advantage in the short to medium term by increasing exports of goods, and may also put it in a less advantageous position, as a strong producing economy, in negotiations for future trade deals. The condition of the British economy could become even weaker if, as projected by the Economist Intelligence Unit, service companies — financial and banking most of all — lose their “passport” to operate from the UK in the other EU member countries (e.g., France, Germany), and thus choose to cut their operations in the country or leave altogether.

  • The contribution of the production sector to economic output (Gross Value Added [GVA]) decreased in the UK from 41% in 1948, through 25% in 1990, down to 14% in 2013 (‘production’ includes manufacturing, oil and gas extraction, and water and energy utilities);
  • The relative contribution of the services sector grew during that period from 46% to 79% (67% in 1990);
  • The growth of the sub-sector of business and financial services is most noticeable, expanding from 13% in 1978, through 22% in 1990, and reaching 32% in 2013.
  • A World Bank comparison referring specifically to manufacturing shows that its contribution to output in Britain is 10% versus 22% in Germany (UK’s is the lowest [with France] and Germany’s the highest among all G7 countries, 2012).
  • (Source: ONS review, April 2014: “International Perspective on the UK — Gross Domestic Product”. For main points see The Guardian’s Economics blog.)

In the long term Britain may well succeed in re-establishing a strong position in business and trade. But it will come at a high cost in the short and medium term (the next three to five years) for the economy overall, businesses and consumers, and the process is not free of risks. Is it all that necessary? Another contentious question is: how much has the EU really held the UK back? Answers to these questions remain in deep dispute. Had it stayed in the EU, the UK might have been able to help stabilise the European economy while resolving its existing failures, and then grow faster with the EU. But too many Britons stopped believing this would ever be possible, or simply lost their patience. The EU leadership in Brussels bears much responsibility for arriving at this predicament. But that matters little now.

It is now time to take the opportunity to resolve weaknesses in the British economy — industry and trade. Britain will have to prove itself an independent, viable economy, less reliant directly on the EU and more like the many other countries trading with it. Retailers may have to change their mix of imported and locally manufactured products; form trading ties with different and additional countries; and more vigorously refresh and update their trading, merchandising and pricing techniques and tactics to be competitive on the local stage, and where relevant on the international stage. The Centre for Retail Research has expressed most pointedly what is expected of retailers: “Retail post-Brexit will have to be more agile, more digital, capital-intensive and more responsive to change”. Retailers and consumers will have to adjust to new market conditions and adapt to new rules of the game.

Ron Ventura, Ph.D. (Marketing)


Consumers form impressions of a product’s beauty or aesthetics from its visual appearance. They may also interpret physical features embedded in the product form (e.g., handles, switches, curvature) as cues for the proper use of the product. But there is an additional, hidden layer of the design that may influence consumers’ judgement: the intention of the product designer(s). The intention could be an idea or a motive behind the design — what the designer wanted to achieve. However, intentions, being only implicit in product appearance, may not be clear or easy to infer.

The intention of a designer may correspond to the artistic creativity of the product’s visual design (i.e., its aesthetic appeal), its purpose and mode of use, and furthermore to extended symbolic meanings (e.g., social values, the self-image of the target users). For a consumer, judgement could be a question of what one infers and understands from the product’s appearance, and how close one understands it to be to the intention of the designer. For example, a consumer can make inferences from cues in the product form (e.g., of an espresso machine) about its appropriate function (e.g., how to insert a coffee capsule in order to make a drink) — but the consumer may ask herself: is that the way the designer intended the product to be used? These inferences are interrelated and complementary in determining the ‘correct’ purpose, function or meaning of a product. For some original and innovative products, the answers are more difficult to produce from appearance alone than for others.

  • Note: Colours and signs on the surface of a product may be informative in regard to function as well as symbolic associations of a product.

The researchers da Silva, Crilly and Hekkert (2015) investigated whether and how consumers’ knowledge of designers’ intentions can influence their appreciation of the respective products. Acknowledging that consumers are likely to derive varied inferences about intention (some of them mistaken) from visual images of products, the researchers presented verbal statements of intentions in addition to the images. Their studies show that these verbal statements, explicitly informing consumer-respondents of designers’ intentions, make a significant contribution to improving consumers’ appreciation of products (1).

To begin with, consumers usually have different conceptions and understanding of design than professionals in the field. Thus, most consumers are not familiar with terminology in the domain of design (e.g., typicality/novelty, complexity, unity, harmony) and may use their own vocabulary to describe attributes of appearance; even when the same terms are used, they may not carry the same meaning or interpretation for designers and ordinary consumers (2). Nevertheless, consumers have innate tastes for design (e.g., based on Gestalt principles), and with time they may develop better comprehension, appraisal skills, and refined preferences for the design of artefacts (as well as buildings, paintings, photographs, etc.). Individuals’ preferences may progress as they develop greater design acumen and accumulate more experience in reacting to designed objects, while preferences may also be affected by personality traits. Design acumen, in particular, pertains to people’s aptitude for, or approach to, visual design, which may be characterised by quicker sensory connections, more sophisticated preferences, and a stronger propensity for processing visual over verbal information (3). The gaps between consumers and designers in domain knowledge and experience may cause divergences both when making inferences directly about a product and when ‘reading’ the designer’s intention from the product’s appearance.

The starting point of da Silva, Crilly and Hekkert posits that “the designer’s intention can intuitively be regarded as the essence of a product and that knowledge of this intention can therefore affect how that product is appreciated” (p. 22). The ‘essence’ describes how a product is supposed to behave or perform as foreseen by the designer; thinking about it can give consumers pleasure as much as perceiving the product’s features does.

Appreciation in Study 1 is measured as a composite of five scale items — liking, beauty, attractiveness, pleasingness, and niceness. It is a form of ‘valence judgement’, but with a strong “flavour” of aesthetics, a seeming remnant of its origin as a scale of aesthetic appreciation that the researchers adapted to represent general product appreciation.

  • Note: The degree to which the researchers succeeded in expanding the meaning of ‘appreciation’ may have some bearing on the findings where respondents make judgements beyond aesthetics (e.g., the scale lacks an item on ‘usefulness’).
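As an illustration only (not the authors’ materials or data), scoring such a composite measure is straightforward: each respondent’s five item ratings are averaged into a single appreciation score. The item names come from the description above; the 7-point range and the sample ratings are assumptions for the example.

```python
# Illustrative sketch: scoring a composite 'appreciation' measure as the
# mean of five scale items (assumed here to be rated on a 1-7 scale).
ITEMS = ["liking", "beauty", "attractiveness", "pleasingness", "niceness"]

def appreciation_score(ratings: dict) -> float:
    """Average the five item ratings into one composite score."""
    missing = [item for item in ITEMS if item not in ratings]
    if missing:
        raise ValueError(f"missing items: {missing}")
    return sum(ratings[item] for item in ITEMS) / len(ITEMS)

# Invented ratings for one respondent judging one product image
respondent = {"liking": 5, "beauty": 6, "attractiveness": 5,
              "pleasingness": 6, "niceness": 5}
print(appreciation_score(respondent))  # → 5.4
```

Comparing such scores before and after respondents read an intention statement is, in essence, how a change in appreciation would be quantified.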

First, it is established that knowledge of the explicit intentions of designers, relating to 15 products in Study 1, influenced the appreciation of the designed products for good or bad (i.e., in absolute values) vis-à-vis appreciation based on pictures alone. Subsequently, the researchers found support for an overall increase in appreciation (i.e., a positive effect) following exposure to explicit statements of the designers’ intentions.

A deeper examination of the results revealed, however, that three products showed a substantial improvement; ten products showed a moderate or minor increase due to intention knowledge; and two products suffered a decrement in appreciation. Furthermore, the less a product was appreciated on the basis of its image alone, the more it could gain in appreciation after consumers were informed of the designer’s intention. Products do not receive higher post-appreciation merely because they were appreciated better in the first place. More conspicuously, for products that were more difficult to interpret and judge from their visual image, knowledge of the designer’s intention could help consumer-respondents realise and appreciate much better their purpose and why they were designed in that particular way, considering both visual appeal and function (with a qualification explained later).

The second study examined the reasons for changes in appreciation after respondents were informed of designers’ intentions. Study 2 aimed to distinguish between appreciation due to appraisal of the intention per se and appreciation attributed to how well a product fulfills the designer’s intention, independent of whether a consumer approves of the intention itself. This study concentrated on three of the products used in Study 1, described briefly below with their stated intentions (images are included in the article):

  • A cross-cultural memory game (Product B) — The game “was designed with the aim of making the inhabitants of The Netherlands aware of their similarities instead of their differences” (i.e., comparing elements of Dutch and Middle Eastern cultures). [Product B gained the most in post-appreciation in Study 1.]
  • A partially transparent bag (Product C) — Things that are no longer in need, but are still in good condition, can be left in this bag on the street for anyone interested: “It was designed with the aim of enabling people to be generous towards strangers.” [Moderate gain.]
  • A “fitted-form” kitchen cupboard (Product G) — In this cupboard everyday products can be stored in fitted compartments according to their exact shapes. The stated intention was that the product “was designed with the aim of helping people appreciate the comfortable predictability of daily household tasks”. [Product G gained the least in post-appreciation in Study 1.]

Consistent with Study 1, these three products were appreciated similarly and to a high degree based on images alone, and their appreciation increased to large, medium and small degrees, respectively, after respondents were informed of the intentions. It is noted, however, that overall only half of the respondents reported that knowing an intention changed how much they liked the respective product (about two-thirds for B, half for C, and a third for G). Respondents were subsequently probed about their reasons for changes in appreciation (liking), and specifically about their assessment of the product as a means to achieve the stated intention. Three themes emerged as underlying the influence of intention knowledge on product appreciation: (a) perception of the product; (b) evaluation of the intention; and (c) evaluation of the product as a means to fulfill its intention (as explicitly queried).

Knowledge of the designer’s intention can change the way consumers perceive the product, its form and features. Firstly, it can make the product appear more interesting, such as by adding an element of surprise, an unexpected insight about its form (found especially for product B). In some cases it simply helps to comprehend the product’s form. The insight gained from knowing the designer’s intention may reveal a new meaning of the product that improves appreciation (e.g., the more positive social ‘giving’ meaning of product C). But here is a snag — if the intention consumers are told of contradicts the meaning they assigned to the product when initially perceiving its image, it may instead decrease their appreciation. For example, the ‘form-fitted’ cupboard (G) may seem nicely chaotic, but when a consumer-participant’s own interpretation does not agree with the intention given by the designer (it ‘steals’ something from the product’s attraction), the consumer becomes disappointed.

Upon being informed of the designer’s intention, a consumer may appreciate an idea or cause expressed in the intention itself (e.g., on merit of being morally virtuous, products B and C). The positive attitude towards the intention would then be transferred to the product (e.g., ‘helping people is a very beautiful thing’ in reference to C). On the downside, knowing an intention may push consumers away from a product (e.g., disliking the ‘predictability’ of one’s behaviour underlying product G). A product may thus gain or lose consumers’ favour in so far as the intention reflects on its essence.

But the (declared) intention, for the idea, cause or aim it conveys, is not on its own a sufficient criterion for driving appreciation higher or lower. Consumers also consider, as expected, whether the product is an effective means of implementing the idea or fulfilling the aim. It is not just about what the designer intended to achieve but also how well the product was designed to achieve that goal. Participants in Study 2 were found to favour a product for its capacity to fulfill its intended aim even when they did not judge the aim as virtuous or worthy. There were also opposite cases, where appreciation decreased but participants pointed out that the fault was not in the intention but in its implementation (e.g., “I think it’s a good idea [intention] but this [product C] won’t really work”). The authors suggest that participants use references in their judgements, including alternative known or imagined products which they believe would fulfill a similar aim more successfully, or alternative aims or causes they consider appropriate for the same product.

The researchers find evidence in participants’ explanations suggesting they see how efficiency can be beautiful (e.g., how materials are used optimally and aesthetically). They relate this notion to the design principle of obtaining ‘maximum effect from minimum means’. Participants also endorsed novel or unusual means of realising the intention behind a product. Hekkert defined this principle as one of the goals to pursue for a pleasing design: conveying more information through fewer and simpler features, creating more meanings through a single construct, and applying metaphors. Hekkert also recommended a sensible balance between typicality and novelty (‘most advanced, yet acceptable’) that will inspire consumers without intimidating them (4).

  • This research was carried out as part of the Project UMA: “Unified Model of Aesthetics” for designed artefacts at the Department of Industrial Design, Delft University of Technology, The Netherlands. (See how the model depicts a balance in meeting safety needs versus accomplishment needs for aesthetic pleasure: connectedness-autonomy, unity-variety, typicality-novelty).

Knowledge of designers’ intentions can elucidate for consumers why a product was designed to appear, and to be used, in a particular way. It supplies a motivation or cause (e.g., social solidarity, energy-saving) for obtaining and using the designed product. But the intention should be reasonable and agreeable to consumers, and the product design in practice has to convince consumers it is fit and able to fulfill the intention. It remains desirable, nevertheless, that the product is visually pleasing, both as an object of aesthetic appeal and as a communicator of functional and symbolic meanings.

When marketers assess that consumers are likely to have greater difficulty interpreting a product’s visual design and inferring the intention behind it, they may wisely accompany a presentation of the product with a statement by the designer. This would apply, for instance, to innovative products, early products of their type, or original concepts for known products. The designer may introduce the design concept, his or her intention or aim, and perhaps how it was derived; this introduction may be delivered in text as well as video, in whatever media are suitable (print, online, mobile). On the part of consumers, exposure to the designer’s viewpoint would enrich their shopping and purchasing experience, helping them to develop better-tuned visual impressions and judgements of products.

Ron Ventura, Ph.D. (Marketing)

Notes:

(1) How People’s Appreciation of Products Is Affected by Their Knowledge of the Designers’ Intentions; Odette da Silva, Nathan Crilly, & Paul Hekkert, 2015; International Journal of Design, 9 (2), pp. 21-33.

(2) How Consumers Perceive Product Appearance: The Identification of Three Product Appearance Attributes; Janneke Blijlevens, Marielle E.H. Creusen, & Jan P.L. Schoormans, 2009; International Journal of Design, 3 (3), pp. 27-35.

(3) Seeking the Ideal Form: Product Design and Consumer Response; Peter H. Bloch, 1995; Journal of Marketing, 59 (3), pp. 16-29.

(4) Design Aesthetics: Principles of Pleasure in Design; Paul Hekkert, 2006; Psychology Science, 48 (2), pp. 157-172.

Read Full Post »

For the past two years the Internet company Yahoo has been under immense pressure: the management, led by CEO Marissa Mayer, in office since 2012, is working hard to reinvigorate the company’s core online business with new, up-to-date technologies and, furthermore, to create more value, mainly from advertising. The board of directors seeks to give management more time to find a way out of the difficult times, but it is struggling to fend off pressure from activist investors who demand a break-up of the company in order to salvage the real value they see captured in Yahoo through its stakes in external companies — Alibaba of China and Yahoo Japan. Yahoo is in a delicate and complex situation, carrying the danger that consumer-users will be left behind in the final business outcome.

The key criticism of Yahoo concerns the poor performance of its online advertising system, lagging behind other platforms such as Google (search) and Facebook (social media). The core business of the company entails its search engine and media (news in various domains), acting as sources of income from advertising (e.g., display ads, sponsored results). Display advertising is now active also in Yahoo’s Mail (e-mail service).

Underlying the poor financial performance of the advertising system are two main problems: (a) inconvenient and technologically outdated utilities and tools for advertisers placing orders for online ads (1); (b) a relatively low volume of search queries by Internet users, far behind Google in particular, and insufficient return visits to the different sections of Yahoo’s websites. For example, according to figures revealed by the New York Times, only ten percent (10%) of one billion monthly visitors to Yahoo websites return every day, suggesting weak brand attachment; the reported figure for Facebook is 65% (2). The problem may start with failing to persuade more Internet users to make Yahoo the start homepage on their browsers.
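The cited visitor figures translate into a simple ‘stickiness’ ratio (daily visitors divided by monthly visitors), a common way to gauge brand attachment. A back-of-the-envelope sketch using the article’s round numbers:

```python
# Back-of-the-envelope 'stickiness' comparison based on the figures cited
# above (the round visitor numbers come from the article, not company data).
def stickiness(daily_visitors: float, monthly_visitors: float) -> float:
    """Share of monthly visitors who also return every day (DAU/MAU)."""
    return daily_visitors / monthly_visitors

# 10% of Yahoo's 1 billion monthly visitors return daily
yahoo = stickiness(100_000_000, 1_000_000_000)
print(f"Yahoo DAU/MAU: {yahoo:.0%}; Facebook (reported): 65%")
```

The 10% versus 65% gap is what the comparison with Facebook amounts to in one number.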

Yahoo may be suffering, nevertheless, from a broader problem of generating income from its online services. That is, the company should not rely only on income from advertising but create additional schemes that generate income from the use of its online services. Yahoo could monetise services, for instance, by charging users on premium plans (e.g., allowing extended storage capacity, more advanced tools or features, increased customisation, or access to extended content). Yahoo may, further, not have a wide enough range of services for which it can charge registered (logged-in) users premiums. Understandably, companies are reluctant to ask customers to pay for online services, but that reluctance may be an unaffordable privilege in a case such as Yahoo’s. Moreover, charging price premiums for enhanced services is legitimate and can contribute to higher perceived quality or value for consumers.

The complexity of the situation can partly be explained by the investors’ claim that a greater portion of Yahoo’s market value arises from its stakes in Alibaba and Yahoo Japan than from its own activity. Yahoo originally (2005) held a stake of ~24% in the Chinese e-commerce company Alibaba. Shortly before Alibaba’s initial public offering (IPO) in September 2014, that stake was valued at $40 billion. During the IPO, Yahoo sold 40% of the stake, as required by its agreement with Alibaba. Yahoo eventually collected more than $9bn, available to reward shareholders or re-invest in the company (how the funds were actually used is unpublished). Yahoo’s remaining stake in Alibaba (~15%) was worth some $30bn in December 2015. Investors thought that not enough value stemmed from Yahoo’s genuine activity before Alibaba’s IPO, and some seem to believe the same remains apparent after the IPO.

The first two years of Mayer’s tenure as CEO enjoyed a sense of improvement and optimism. Until the IPO of Alibaba, Yahoo acquired more than forty technology companies to bring fresh methods, tools and skills into the company. Yahoo’s share price climbed from a low of under $20 to above $30 by the end of 2013 and reached $50 in late 2014. But after Alibaba’s IPO, tensions with investors, especially the activists, escalated as patience with Mayer as well as the board ran thin. The share price also started to decline back towards $30 during 2015 (it has recovered to ~$36 since January 2016).

It must be noted that the board of directors, together with Mayer, did try to find solutions that would satisfy the investors while saving Yahoo’s core business. One plan considered was to sell Yahoo’s remaining stake in Alibaba, but that solution was abandoned due to concerns about a looming large tax liability. Another solution, championed by Mayer, was to put Yahoo’s core media and search business up for sale in one piece, but that plan was also recently suspended as the process failed to mature. The most serious prospective buyer was the US telecom company Verizon; its executives considered merging Yahoo’s activity with that of AOL, acquired last year, but worried about the company’s ability to pull off such an integration effort in a short time (3).

  • Update note (July 2016): After all, a deal was done for Verizon to buy Yahoo for $4.8bn (excluding its stakes in Alibaba and Yahoo Japan).

In the second part of this article I examine the display and organisation of Yahoo’s websites from a user-consumer viewpoint — visual layout, sections and services on the website, composition of content, links, menus and other objects. The examination focuses more on the content and services Yahoo provides to its users than on its advertising.

Yahoo runs multiple versions of its website in different countries and languages. The major part of the review centres on Yahoo’s website in the United Kingdom as a pivotal exemplar; references are subsequently made to other versions. Nevertheless, all of the additional websites visited (8) highly resemble the UK website in appearance and composition. Through the examination I intend to argue that Yahoo has not organised and designed the homepages of its website versions appropriately to expose users to, and give them the necessary inducement to access, some of its core services that could also be important sources of income. Beyond the homepages, however, I also relate to the ‘portfolio’ of topical media sections and services that comprise the websites.

  • Note: Some of the graphics on the page were not captured (the title name Yahoo and the news bar were supplemented).

Two of Yahoo’s services are primary assets: the search engine (Yahoo! Search) and the e-mail service (Yahoo! Mail). Both have been substantive parts of the company’s website from its early days, and both are essential components of Yahoo’s brand. The search facility is the gate to the enormous content of the Internet. The e-mail service, with its mailbox management utilities, is the foundation of the company’s invaluable customer base. Both have advanced over the years and added features, although there is argument over the nature of that progress, particularly with regard to the search engine. A third asset of Yahoo is the media content of news stories and videos in various domains delivered on the website. On the left-hand side of the homepage appears a sidebar with links to services and news topics on the website; a ‘global’ heading bar appears at the top of every webpage on Yahoo’s site.

As important and interesting as the news media content may be, its preview takes up grossly too much of the homepage’s space. Conversely, the search window for initial queries, while at the top, is marginalised on the page, nearly “drowning” in the news content. This sends visitors a message that the feature is secondary to the media content. Little wonder that, on the face of it, Internet users perceive Google as the universal search engine (Yahoo has in recent years been relying on the search-engine powers of Google and, previously, Microsoft’s Bing). The icon-link to the e-mail service is not in a much better position, at the top right corner. Even though three links for Mail appear on the homepage — the icon to the right of the search window, at the top of the vertical sidebar, and on the left side of the heading bar — none of these positions is central. The allocation of homepage space among these three assets is not reasonably proportional. It suggests that Yahoo has become a media company and has practically discounted its two other assets.

The sidebar, added to the website in the past two years, is a welcome contribution, as it helps users to quickly familiarise themselves with, or easily find, some key services and news topics on Yahoo’s site. Nevertheless, the icon-links for those services and topics could receive better attention and salience in users’ eyes and minds if they were arranged in a central area of the page adjacent to the Search window and Mail icon (e.g., beneath them). That would give Yahoo an opportunity to promote services or topics with greater income potential vis-à-vis visitors’ interests and utility in using particular services. For example, the online cloud-based service Flickr, for storing, editing and showcasing photos, is hardly noticed on the head-bar, and barely on the sidebar (Flickr was acquired by Yahoo in 2005). If site users could also see more instantly and clearly what functional (non-news) services Yahoo offers, it might be better understood why there is a Sign-In option separate from Mail.

  • Extra feature-services such as Contacts, Calendar, Notepad and Messenger (chat) are already included in Mail.

Yahoo highlights general news, sport, entertainment and finance on its homepage. On the ‘homepage’ of the news section one can find more categories, such as UK, World, Science & Tech, Motoring and Celebrity. Links to some of them appear on the sidebar of the UK homepage (e.g., Cars [Motoring], Celebrity). Interestingly, some news/media sections behave as more autonomous sites, and some have a different layout with a visual graphic display of tiles — Parenting, Style and Movies. (In the Italian version, the Beauty and Celebrity sections also exhibit a tile ‘art’ display.)

The news headlines with their snippets (briefs) are useful, but such a long list does not necessarily belong on the homepage. The ‘ribbon’ of images for selected stories would fit most appropriately on the homepage, with a focal story changing on top — that is all that needs to remain there (with some enhancements, such as a choice of category), while the additional headlines are delegated to the News ‘homepage’. In the final display, a concise and elegant homepage arrangement would include the Search window and Mail/Sign-In icons, surrounded by a News showcase and a palette of selected services or media topics.

  • A visitor has to look deeper into the website to trace additional services that may be interesting and useful. A few examples: (1) the Finance section (news and more) includes a personalised utility, ‘My Portfolios’, for managing investments; (2) on a page listing more services one can find Groups (discussion forums) and Shopping. Other features or services on a sidebar or head-bar refer to Weather, Mobile (downloading Yahoo apps), and Answers (a subdivision of Search — peer-to-peer Q&A exchanges).

When the homepage of the UK website is compared with Yahoo’s other country and language websites, it is most noticeable that some of the links on the sidebar and head-bar vary, apparently accounting for regional and cultural differences in public interests. Countries may also be affiliated or co-operating with different local content and service providers. For instance: Italy assigns more importance to Style, Beauty and Celebrity, with more developed topical sections; France has a real-estate section (Immobilier) in affiliation with BFM TV; Australia has a TV section affiliated with PLUS7; and in Germany the Weather and Flickr services are represented on both sidebar and head-bar. It is further observed that the sidebar of Yahoo Australia includes many more links than the other site versions.

Regarding the US website, some differences can be noted. First, subject titles appear above each news headline. Second, a reference to the social blogspace site Tumblr appears on the head-bar (in addition to Flickr) — it also appears on the Australian site but not on the other sites visited (Tumblr was acquired by Yahoo in 2013). Third, the US site chose to mention Shopping and Politics on its sidebar.

  • The Yahoo websites exhibit anomalies implying that the company refrains from promoting some of its own in-house or subsidiary services. For instance, Flickr and Tumblr are sidelined, and the latter is exclusive to just a couple of countries. The ‘Shopping’ product search for attractive retailer offers (powered by Nextag) is more often hidden, while Yahoo homepages provide links to eBay and Amazon.

To design in practice the most appropriate and effective composition and layout of the homepage, Yahoo may apply usability tests, eye tracking, and possibly also tracking of mouse movements and clicks. These three methodological approaches can be used in parallel, or even simultaneously, to derive findings that support and complement each other in guiding the design process. Attention should obviously be paid to the visual appeal of the final page design. As suggested above, however, emphasis should be directed to the content and services provided by Yahoo rather than to the advertising space.
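As a hypothetical illustration of the click-tracking approach (not Yahoo’s actual tooling or data), logged click coordinates can be binned into a coarse grid to reveal which homepage regions attract the most interaction; the page dimensions, grid size and click log below are all invented for the example.

```python
# Illustrative sketch: aggregating click coordinates into a coarse heatmap
# grid to compare interaction across regions of a homepage.
from collections import Counter

GRID = 4                    # split the page into a 4x4 grid of cells
PAGE_W, PAGE_H = 1200, 800  # assumed page dimensions in pixels

def cell(x: int, y: int) -> tuple:
    """Map a pixel coordinate to its (column, row) grid cell."""
    return (min(x * GRID // PAGE_W, GRID - 1),
            min(y * GRID // PAGE_H, GRID - 1))

# Invented click log: most clicks cluster near the top-centre search area
clicks = [(600, 90), (610, 95), (1150, 60), (300, 400), (620, 88)]
heatmap = Counter(cell(x, y) for x, y in clicks)
print(heatmap.most_common(1))  # → [((2, 0), 3)]
```

Cells with high counts would then be compared against eye-tracking fixations to see whether attention and interaction converge on the intended page elements.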

Notwithstanding, the homepage is just the start of a visitor’s journey on the website. Much depends, of course, on the quality of services and content in determining how long a visitor stays on the site. For example, how do the mail, e-commerce (shopping) or photo service platforms compare with the competition? With respect to the search engine in particular, continued utilisation relies on the relevance, credibility and timeliness (historical to up-to-date) of the results generated.

Yahoo provides specialised searches of websites and pages, images, videos, answers, products and more. Yet the company in the past acquired the AltaVista engine, which was advantageous in retrieving higher-quality and academic-level information sources and materials, but it was apparently submerged without leaving a trace; and, as indicated earlier, Yahoo has turned to the stronger capabilities of competitors at the expense of developing more of its own. Marissa Mayer alternatively aims to create leverage by developing a powerful, intelligent search engine for mobile devices in a mobile-friendly site/app. Even though the mobile-driven approach may be a move in the right direction for Yahoo, it may not resolve the problems suggested above as inherent in the online website, and sceptics doubt that the company in its current state has the skills and resources to accomplish those goals.

Yahoo has a lot at stake. It should not rely on users knowing how to get to its services independently or having to search for their Internet addresses. The site, online or mobile, has to give users a hand and show them the way to the services it most wants them to visit and use, and there is no better place to start than the site’s homepage. The solutions needed are not just about technology but lie in the domain of marketing strategy and user-consumer online and mobile behaviour. Yet, looking at how events unfold at Yahoo, the decisions made could be driven by business and financial considerations above the heads of user-consumers.

  • The lessons for Yahoo should now be learnt by Verizon as it intends to merge the functions and capabilities of Yahoo and AOL, and probably rebrand them.

Ron Ventura, Ph.D. (Marketing)

Notes:

(1) “Marissa’s Moment of Truth”, Jessi Hempel, Fortune Europe Edition, 14 May 2014, pp. 38-44.

(2) “Yahoo’s Suitors Are in the Dark About its Financial Details”, International New York Times, 16-17 April 2016.

Read Full Post »

Human thinking processes are rich and variable, whether in search, problem solving, learning, perceiving and recognising stimuli, or decision-making. But people are subject to limitations on the complexity of their computations, and especially on the capacity of their ‘working’ (short-term) memory. As consumers, they frequently have to struggle with large amounts of information on numerous brands, products or services with varying characteristics, available from a variety of retailers and e-tailers, stretching consumers’ cognitive abilities and patience. Wait no longer: a new class of increasingly intelligent decision aids is being put before consumers by the evolving field of cognitive computing. Computer-based ‘smart agents’ will get smarter; most importantly, they will be more human-like in their thinking.

Cognitive computing is set to upgrade human decision-making, consumers’ in particular. According to IBM, a leader in this field, cognitive computing builds on methods of Artificial Intelligence (AI) yet intends to take the field a leap forward by making it “feel” less artificial and more similar to human cognition. That is, a human-computer interaction will feel more natural and fluent if the thinking processes of the computer more closely resemble those of its human users (e.g., a manager, service representative, or consumer). Dr. John E. Kelly, SVP at IBM Research, provides the following definition in his white paper introducing the topic (“Computing, Cognition, and the Future of Knowing”): “Cognitive computing refers to systems that learn at scale, reason with purpose and interact with humans. Rather than being explicitly programmed, they learn and reason from interactions with us and from their experiences with their environment.” The paper seeks to rebut claims of any intention behind cognitive computing to replace human thinking and decisions. The motivation, as suggested by Kelly, is to augment human ability to understand and act upon the complex systems of our society.

Understanding natural language has long been a human cognitive competence that computers could not imitate. However, comprehension of natural language, in text or speech, is now considered one of the important abilities of cognitive computing systems. Another important ability concerns the recognition of visual images and the objects embedded in them (face recognition, for example, receives particular attention). Furthermore, cognitive computing systems are able to process and analyse unstructured data, which constitutes 80% of the world’s data according to IBM. They can extract contextual meaning so as to make sense of unstructured data (verbal and visual). This is a marked difference between the new cognitive systems and traditional information systems.

  • The Cognitive Computing Forum, which organises conferences in this area, lists a dozen characteristics integral to those systems. In addition to (a) natural language processing; and (b) vision-based sensing and image recognition, they are likely to include machine learning, neural networks, algorithms that learn and adapt, semantic understanding, reasoning and decision automation, sophisticated pattern recognition, and more (note that there is an overlap between some of the methodologies on this list). They also need to exhibit common sense.

The power of cognitive computing derives from its combination of cognitive processes attributed to the human brain (e.g., learning, reasoning) with the enhanced computation (complexity, speed) and memory capabilities of advanced computer technologies. In terms of intelligence, it is acknowledged that the cognitive processes of the human brain are superior to anything computers could achieve through conventional programming. Yet the actual performance of human cognition (‘rationality’) is bounded by memory and computation limitations. Hence, we can employ cognitive computing systems that are capable of handling much larger amounts of information than humans can, while using cognitive (‘neural’) processes similar to humans’. Kelly posits in IBM’s paper: “The true potential of the Cognitive Era will be realized by combining the data analytics and statistical reasoning of machines with uniquely human qualities, such as self-directed goals, common sense and ethical values.” It is not yet sufficiently understood how cognitive processes physically occur in the human central nervous system. But, it is argued, knowledge and understanding of their operation or neural function has grown sufficiently for computers to emulate at least some of them. (This argument refers to the concept of different levels of analysis that may and should prevail simultaneously.)

The distinguished scholar Herbert A. Simon studied thinking processes from the perspective of information processing theory, which he championed. In the research he and his colleagues conducted, he traced and described in a formalised manner the strategies and rules that people utilise to perform different cognitive tasks, especially solving problems (e.g., his comprehensive work with Allen Newell on Human Problem Solving, 1972). In his theory, any specified strategy or rule, from elaborate optimising algorithms to short-cut rules (heuristics), is composed of elementary information processes (e.g., add, subtract, compare, substitute). Conversely, strategies may be joined into higher-level compound information processes. Strategy specifications were subsequently translated into computer programmes for simulation and testing.

Simon’s main objective was to gain a better understanding of human thinking and the cognitive processes involved in it. He maintained that computer thinking is programmed in order to simulate human thinking, as part of an investigation aimed at understanding the latter (1). Thus, Simon did not explicitly aim to overcome the limitations of the human brain but rather to simulate how the brain may work around those limitations to perform various tasks. His approach, followed by other researchers, was based on recording how people perform given tasks and testing the efficacy of the resulting process models through computer simulations. This course of research differs from the goals of the new cognitive computing.

  • We may identify multiple levels in research on cognition: an information processing level (‘mental’), a neural-functional level, and a neurophysiological level (i.e., how elements of thought emerge and take form in the brain). Moreover, researchers aim to obtain a comprehensive picture of brain structures and areas responsible for sensory, cognitive, emotional and motor phenomena, and how they inter-relate. Progress is made by incorporating methods and approaches of the neurosciences side-by-side with those of cognitive psychology and experimental psychology to establish coherent and valid links between those levels.

Simon created explicit programmes of the steps required to solve particular types of problems, though he also aimed at developing more generalised programmes that would handle broader categories of problems (e.g., the General Problem Solver, embodying the means-end heuristic) and other cognitive tasks (e.g., pattern detection, rule induction) that may also be applied in problem solving. Cognitive computing, however, seeks to reach beyond explicit programming and construct guidelines for far more generalised processes that can learn and adapt to data, and handle broader families of tasks and contexts. If necessary, computers would generate their own instructions or rules for performing a task. In problem solving, computers are taught not merely how to solve a problem but how to look for a solution.
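The means-end heuristic at the heart of the General Problem Solver can be sketched in miniature: repeatedly compare the current state with the goal, and select an operator whose effects reduce the remaining difference. The states and operators below are invented for illustration and are far simpler than anything in Simon and Newell’s actual programmes.

```python
# A miniature means-end analysis: reduce the difference between the
# current state and the goal by applying suitable operators.
GOAL = {"at_work": True, "car_fuelled": True}

OPERATORS = [
    # (name, preconditions, effects) -- all hypothetical
    ("fuel_car", {"car_fuelled": False}, {"car_fuelled": True}),
    ("drive", {"car_fuelled": True, "at_work": False}, {"at_work": True}),
]

def differences(state, goal):
    """Which goal conditions does the current state still fail to meet?"""
    return {k: v for k, v in goal.items() if state.get(k) != v}

def solve(state, goal, plan=None):
    plan = plan or []
    diff = differences(state, goal)
    if not diff:
        return plan  # no difference left: the goal is reached
    for name, pre, eff in OPERATORS:
        # Means-end step: pick an operator that removes a current difference
        # and whose preconditions hold in the current state.
        if any(k in diff and eff[k] == goal[k] for k in eff):
            if all(state.get(k) == v for k, v in pre.items()):
                return solve({**state, **eff}, goal, plan + [name])
    return None  # no applicable operator: this sketch simply gives up

plan = solve({"at_work": False, "car_fuelled": False}, GOAL)
print(plan)  # → ['fuel_car', 'drive']
```

Note how the programme is entirely explicit: every operator and precondition is supplied by the programmer, which is precisely the limitation that cognitive computing, with learned rather than hand-coded rules, seeks to move beyond.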

While cognitive computing can employ greater memory and computation resources than are naturally available to humans, the aim is not truly to create a fully rational system. A computer cognitive system should retain some properties of bounded rationality, if only to maintain resemblance to the original human cognitive system. First, forming and selecting heuristics is an integral property of human intelligence. Second, cognitive computing systems try to exhibit common sense, which may not be entirely rational (i.e., it is based on good instincts and experience), and to introduce effects of emotions and ethical or moral values that may alter or interfere with rational cognitive processes. Third, cognitive computing systems are allowed to err:

  • As Kelly explains in IBM’s paper, cognitive systems are probabilistic: they have the power to adapt to and interpret the complexity and unpredictability of unstructured data, yet they do not “know” the answer and therefore may make mistakes in assigning the correct meaning to data and queries (e.g., IBM’s Watson misjudged a clue in the quiz game Jeopardy against two human contestants; nonetheless “he” won the competition). To reflect this characteristic, “the cognitive system assigns a confidence level to each potential insight or answer”.
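The confidence-level idea can be sketched as follows: score several candidate answers, normalise the scores into probabilities, and answer only when the top confidence clears a threshold, otherwise abstain. The candidate answers, raw scores, and threshold below are hypothetical; they stand in for the evidence-weighing a real system such as Watson performs.

```python
import math

def rank_with_confidence(candidates, threshold=0.5):
    """candidates: {answer: raw evidence score} (scores assumed given).
    Returns (chosen answer or None, all answers with confidences)."""
    # Softmax: turn raw scores into confidences that sum to 1.
    total = sum(math.exp(s) for s in candidates.values())
    ranked = sorted(
        ((answer, math.exp(s) / total) for answer, s in candidates.items()),
        key=lambda pair: pair[1], reverse=True)
    best_answer, best_conf = ranked[0]
    if best_conf < threshold:
        return None, ranked  # abstain: not confident enough to answer
    return best_answer, ranked

# Hypothetical evidence scores for two candidate answers.
answer, ranked = rank_with_confidence({"Toronto": 1.2, "Chicago": 3.0})
print(answer)  # → Chicago
```

The key property is the last step: rather than always committing to the top-ranked answer, the system exposes its uncertainty and can decline to respond, which is exactly how a probabilistic system is “allowed to err” gracefully.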

Applications of cognitive computing are gradually growing in number (e.g., experimental projects built on Watson with the cooperation and support of IBM). They may not be targeted directly at consumers at this stage, but consumers are seen as the end-beneficiaries. The first users are likely to be professionals and service agents who help consumers in different areas. For example, applied systems in development and trial would:

  1. help medical doctors in identifying (cancer) diagnoses and advising their patients on treatment options (it is projected that such a system will “take part” in doctor-patient consultations);
  2. perform sophisticated analyses of financial markets and their instruments in real-time to guide financial advisers with investment recommendations to their clients;
  3. assist account managers or service representatives to locate and extract relevant information from a company’s knowledge base to advise a customer in a short time (CRM/customer support).

The health-advisory platform WellCafé by Welltok provides an example of an application aimed at consumers. The platform guides consumers on healthy behaviours recommended for them; its new assistant, Concierge, lets them converse in natural language to get help on resources and programmes personally relevant to them, as well as on various health-related topics (e.g., dining options). (2)

Consider domains such as cars, tourism (vacation resorts), or real estate (second-hand apartments and houses). Consumers may encounter a tremendous amount of information in these domains, with numerous options and many attributes to consider (for cars there may also be technical detail that is more difficult to digest). A cognitive system would help the consumer study the market environment (e.g., organising information from sources such as company websites and professional and peer reviews [social media], detecting patterns in structured and unstructured data, screening and sorting) and learn the consumer’s preferences and habits in order to prioritise and construct personally fitting recommendations. It is noteworthy that in any of these domains visual information (e.g., photographs) can be most relevant and valuable to consumers in their decision process: the visual appeal of car models, mountain or seaside holiday resorts, and apartments cannot be discarded. Cognitive computing assistants may thus raise very high consumer expectations.
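The prioritising step can be sketched with a simple weighted-additive rule: score each option by how well its attributes match the consumer’s preference weights, then return the best matches. The attribute names, weights, and options below are invented for illustration; a real cognitive system would learn such weights from the consumer’s behaviour rather than take them as given.

```python
def score(option, weights):
    """Weighted-additive value of an option under a consumer's preferences."""
    return sum(weights.get(attr, 0) * value
               for attr, value in option["attrs"].items())

def recommend(options, weights, top_n=2):
    """Rank options by fit to the consumer's preference weights."""
    return sorted(options, key=lambda o: score(o, weights), reverse=True)[:top_n]

# Hypothetical car options with attribute levels scaled 0..1.
cars = [
    {"name": "hatchback", "attrs": {"economy": 0.9, "space": 0.4}},
    {"name": "suv", "attrs": {"economy": 0.5, "space": 0.9}},
]

prefs = {"economy": 0.7, "space": 0.3}  # this consumer values economy more
best = recommend(cars, prefs, top_n=1)
print(best[0]["name"])  # → hatchback
```

Even this toy version shows why personalisation matters: reverse the weights towards space and the ranking flips, so two consumers facing the same market receive different recommendations.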

Cognitive computing aims to mimic human cognitive processes, to be performed by intelligent computers with enhanced resources on behalf of humans. The capabilities of such systems would assist consumers, or the professionals and agents who help them, with decisions and other tasks: saving them time and effort (and sometimes frustration), and providing well-organised information with customised recommendations for action that users feel they could have reached themselves. Time and experience will tell how comfortably people interact and engage with human-like intelligent assistants, how productive they indeed find them, and whether turning to a cognitive assistant becomes the most natural thing to do.

Ron Ventura, Ph.D. (Marketing)

Notes:

1. “Thinking by Computers”, Herbert A. Simon, 1966/2008; reprinted in Economics, Bounded Rationality and the Cognitive Revolution, Massimo Egidi and Robin Marris (eds.) [pp. 55-75], Edward Elgar.

2. The examples given above are described in IBM’s white paper by Kelly and in: “Cognitive Computing: Real-World Applications for an Emerging Technology”, Judith Lamont (Ph.D.), 1 Sept. 2015, KMWorld.com
