Abstract

In today's world, with the number of stores and the level of rivalry growing daily, accurate strategizing for each of the marketing mix elements – product, price, place, promotion, physical evidence, participants and process – influences a store's success like never before. Each of these elements comprises a number of factors that need to be assessed and carefully planned; store location, for example, is one of the most important variables to consider when determining a company's place strategies.

Over the years, researchers and marketers have used a number of different approaches to solve the store location and site selection problem. In this paper, we review some of the most accepted and widely applied computational methods for determining the optimal place for a retail store.

Keywords: Location Based Social Network Data, Computational Analysis, Geo-marketing

Introduction

Determining retail store popularity and studying the variables that influence it has long been an active research topic across many scientific domains. From a marketing perspective, if retail store popularity in the eyes of target customers is what one is after, it can be controlled and even enhanced through accurate planning of the marketing mix elements. The marketing mix for production businesses was defined by Kotler as "a set of controllable marketing variables – product, price, place and promotion – that the firm can use to get a desired response from their target customers" (Rafiq & Ahmed, 1992).

Booms and Bitner later modified the marketing mix concept to better fit the marketing aspects of services by adding three new elements to the mix: process, physical evidence and participants. The framework introduced by Booms and Bitner has since been widely accepted and used by marketing managers in successful companies when determining their marketing strategies. Accurate planning for each of the marketing mix elements involves making important decisions about a number of other factors that together shape the overall strategies concerning that element. For example, planning for the "place" element includes decisions about factors such as store location, distribution channels, accessibility, distribution network coverage, sales domains, inventory placement and transportation facilities. Store placement, especially for service providers and retail stores, has always been considered one of the most important business decisions a firm can make, since it is a critical factor in a business's overall chance of success. "No matter how good its offering, merchandising, or customer service, every retail company still has to contend with three critical elements of success: location, location, and location" (Taneja, 1999). Retail stores in general are classified into six categories, namely specialty stores, department stores, supermarkets, consumer product stores, furniture stores, and construction material stores (Fahui Wang, Chen, Xiu, & Zhang, 2014).

There are many different approaches to supporting decision making in retail store placement. Some of these approaches, including reliance on experience and the use of checklists, analogues and ratios, have been around and used by marketing managers for many years (Hernández & Bennison, 2005). Such techniques are favored by some managers because they require minimal budget, technical expertise and data, yet their downfall lies in the high level of subjectivity they introduce into decision making and the fact that they are largely incompatible with GIS (Hernandez, Bennison, & Cornelius, 1998).

Other techniques, including approaches based on the theory of centrality, gravity models, percolation theory and feature selection, are more computational and therefore require a higher level of expertise and resources, but at the same time offer a superior level of predictability and are far less subjective. Since the main goal of this review is to assess computational approaches to solving the retail store placement problem, the latter techniques are discussed in detail in the following sections.

Computational Techniques

The Theory of Centrality (Principle of Minimum Differentiation)

This theory was presented by Hotelling (Hotelling, 1929); it focuses on the importance of a store's proximity to its main rivals and argues that distance from rivals is more important than distance from customers. In 1958, based on Hotelling's theory, Nelson suggested that when suppliers of a given product or service are located near one another, demand rises (Litz, 2014).

Later, this theory served as the basis for multiple other approaches such as space syntax analysis (Hillier & Hanson, 1984), natural movement (Hillier, Penn, Hanson, Grajewski, & Xu, 1993) and multiple centrality assessment (Porta et al., 2009). Space syntax techniques, originally aimed at studying the morphological logic of urban grids, refer to the application of a set of configurational analysis techniques that assess the structure of the urban grid (Hillier & Hanson, 1984). Such techniques also focus on centrality measures derived from the street network and their association with economic variables (Fahui Wang et al., 2014). In contrast to attraction theory, which implies that design should be based mostly on the attraction degree of a place, determined by the level of movement to and from that place, natural movement theory emphasizes the importance of spatial configuration. Configuration is the way that the spatial elements which people move through are linked together to form a pattern (Hillier et al., 1993).

Natural movement theory indicates that configuration can affect movement independently of attractors and can be considered the primary generator of movement. Natural movement in a grid is the proportion of urban pedestrian movement determined by the grid configuration itself, which is the most pervasive and consistent component of movement (Hillier et al., 1993). The multiple centrality assessment (MCA) approach defines being central as being close, intermediary, straight and critical with respect to the other places located in the area (Fahui Wang et al., 2014).

Porta and colleagues (Porta et al., 2009) confirm the hypothesis that street centrality plays a crucial role in shaping the formation of urban structure and land uses. They use kernel density estimation (KDE) to prepare the data for assessing the correlation between the popularity of a given place and the distribution of commercial and service activities in its area. They define centrality with variables such as closeness, betweenness and straightness.
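In their standard form (the notation below is a common formulation of these indices, not a quotation from the cited papers), the three measures for a street network of N nodes, with d_ij the shortest network distance between nodes i and j, d^Eucl_ij the straight-line distance, n_jk the number of shortest paths between j and k, and n_jk(i) the number of those paths passing through i, read:

\[
C^{C}_{i} = \frac{N-1}{\sum_{j \neq i} d_{ij}}, \qquad
C^{B}_{i} = \frac{1}{(N-1)(N-2)} \sum_{j \neq k,\; j \neq i,\; k \neq i} \frac{n_{jk}(i)}{n_{jk}}, \qquad
C^{S}_{i} = \frac{1}{N-1} \sum_{j \neq i} \frac{d^{Eucl}_{ij}}{d_{ij}}
\]

Closeness rewards places reachable from everywhere with little travel, betweenness rewards places that shortest paths pass through, and straightness rewards places reachable along nearly straight routes; KDE then serves to smooth both the centrality values and the retail-activity locations onto a common grid before the correlation is computed.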

Most recently, Wang et al. (Fahui Wang et al., 2014) examined the role street centrality plays in the popularity of different types of retail stores. They also used KDE for data preparation and assessed the correlation between centrality and location advantage using the street centrality indices mentioned above.

Gravity (Spatial Interaction) Models

Gravity models have been used as a solution to the retail store location problem by researchers, analysts and marketing managers for many years.

These models emphasize the customer's perspective on the availability and accessibility of a given store. The development of the first gravity model was inspired in the late 1930s by the work of Reilly, an American researcher (Kubis & Hartmann, 2007). Reilly suggested that customers may make tradeoffs between the specific features of a store's main product and the store's location (Litz, 2014). In 1967, Wilson introduced a model for spatial distribution (A. G. Wilson, 1967) describing the flow of money from population centroids to retail centers, which served as a basis for retail location analysis and the prediction of retail center dynamics for many years (A. G. Wilson & Oulton, 1983).
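In its commonly cited production-constrained form (a standard presentation of this family of models, not a quotation from Wilson's papers), the flow of expenditure S_ij from residential zone i to retail center j is

\[
S_{ij} = A_i\, O_i\, W_j^{\alpha}\, e^{-\beta c_{ij}}, \qquad
A_i = \frac{1}{\sum_{k} W_k^{\alpha}\, e^{-\beta c_{ik}}},
\]

where O_i is the expenditure available in zone i, W_j the attractiveness (typically floor space) of center j, c_ij the cost of travelling between them, and alpha and beta the two calibrated parameters discussed below. Summing the flows gives each center's revenue, D_j = \sum_i S_{ij}, which can be compared with its running costs to judge whether the center grows or declines.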

In Wilson's model, the survival of a retail center depends on its ability to compete for a limited amount of available resources (customers) (Piovani, Molinero, & Wilson, 2017). Wilson observed similarities between the factors in the gravity model and the partition functions used in statistical mechanics, which led to a shift from a Newtonian analogy to a Boltzmann statistical mechanics analogy (A. Wilson, 2010). Consequently, he introduced a new framework for spatial interaction modeling based on maximizing the entropy of urban and regional systems. In this framework, the values of two parameters (one that scales attractiveness and floor space, and another that reflects the cost of moving) determine a retail center's chance of survival.

Gravity models are usually divided into two general groups based on their type of approach: qualitative and quantitative models. As their name suggests, qualitative models use non-numerical criteria to determine the best location for a store.

Quantitative models, on the other hand, take advantage of available numerical information such as the number of inhabitants, distances and so on. There are two types of quantitative gravity models: deterministic and probabilistic. While deterministic models usually produce estimates of accounting variables such as turnover or return on investment for marketing managers to decide upon, probabilistic models attempt to model the probability that a consumer living at location i purchases products at location j. The latter models are based on the model of Huff (Kubis & Hartmann, 2007).

Percolation Theory

Scientists have tried to describe and characterize the regionalization of urban space in a hierarchical manner for almost a century. "A hierarchy emerges with respect to the types of relationships that exist given the cluster size, whether the cluster is a village, a town or a city" (Arcaute et al., 2015; Berry & Garrison, 2014). One of the most famous examples of this type of approach is the central place hierarchy (Boventer, 1969) introduced by Christaller (Arcaute et al., 2015).

The origins of Christaller's central place theory date back to 1933, when this German researcher first suggested that there is an inverse relationship between the demand for a product and the distance from the source of supply. The theory is based on the importance of transportation costs to customers. Its main pitfalls are that it fails to consider the effects of product attributes such as cost and demand frequency, as well as the possibility of multi-purpose shopping (Litz, 2014).

Over the years, the hierarchical point of view has led to the emergence of a number of other approaches, such as graph theory and network theory. In all of the aforementioned approaches, the connectivity of the system can be explored through percolation theory (Arcaute et al., 2015). Percolation theory describes the interaction of classical particles with a random medium and provides a simple depiction of critical behavior (Taylor, Shante, & Kirkpatrick, 2006).
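A minimal sketch of how this idea is carried over to a street network follows; the toy graph, segment lengths, distance thresholds and use of the networkx library are illustrative assumptions, not the cited authors' code. Keeping only street segments shorter than a threshold and watching connected clusters merge as the threshold grows is the basic mechanism:

```python
import networkx as nx

def percolation_clusters(street_graph, threshold):
    """Keep only edges shorter than `threshold` and return the resulting
    connected clusters (largest first)."""
    kept = [(u, v) for u, v, d in street_graph.edges(data=True)
            if d["length"] <= threshold]
    sub = street_graph.edge_subgraph(kept)
    return sorted(nx.connected_components(sub), key=len, reverse=True)

# Toy street network: nodes are intersections, edge attributes are segment lengths (m).
g = nx.Graph()
g.add_weighted_edges_from(
    [("a", "b", 80), ("b", "c", 120), ("c", "d", 400), ("d", "e", 90)],
    weight="length",
)

# Sweep the threshold: small thresholds give many small clusters,
# large thresholds let a single giant cluster emerge (the percolation transition).
for t in (100, 200, 500):
    print(t, [sorted(c) for c in percolation_clusters(g, t)])
```

Sweeping such a threshold over a real road network is, in essence, what the street-network percolation studies cited in the next paragraph do at a much larger scale.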

When a particle spreads through space, it eventually reaches a critical point and tends to form clusters. Applying percolation theory to street networks in order to uncover the underlying patterns of a city in relation to its available infrastructure has therefore been taken up by several researchers in recent years (Arcaute et al., 2015; Piovani et al., 2017).

Feature Selection via Location Based Social Network Data

Over the past decade, advances in wireless communication technologies, the growing universal acceptance of location-aware devices such as mobile phones and tablets equipped with GPS receivers, sensors placed inside these devices, attached to cars and embedded in infrastructure, remote sensors carried by aerial and satellite platforms, and RFID tags attached to objects, complemented by the development of GIS technologies, have resulted in an increasing amount of content-rich data that can be exploited by analysts. With the emergence and growing popularity of social networks and location-aware services, the next step was combining these two technologies, which resulted in the introduction of location based social networks (LBSNs) (Kheiri, Karimipour, & Forghani, 2016).

Since such networks act as a bridge between a user's real-life and online activities (Kheiri et al., 2016), the data obtained from them are considered among the most important sources of spatial data and present a unique opportunity for researchers in business-related fields to study consumers' behavioral patterns precisely. Consequently, with the introduction of LBSNs, the question of optimal store placement, like many other scientific problems, has entered a new era of fast, diverse and voluminous data, the terms usually used to describe big data. Liu and his colleagues (Liu et al., 2015) introduced the term "social sensing" to describe the process and the different approaches of analyzing spatial big data at the individual scale. The use of the term "sensing" reflects two different aspects of such data. First, such data can be considered a complementary source of information to remote sensing data, because they record the socio-economic characteristics of users, a kind of descriptive information that remote sensing data can never offer.

Second, such data follow the concept of volunteered geographic information (introduced by Goodchild (Goodchild, 2006)), meaning that every individual in today's world can be considered a sensor transmitting data as they move. Accordingly, researchers in the past decade have focused some of their efforts on exploiting LBSN data to solve the retail store placement problem. With one or two exceptions, most of the research done in this area has taken advantage of recent advancements in feature selection. Based on the unique attributes and the type of information that can be retrieved from LBSN data, a number of features that influence retail store popularity are defined and then used to predict the popularity of given stores. Accessibility, distance to downtown, area popularity, neighborhood entropy, venue density, the effect of complementary products/services, competitiveness, Jensen quality, transition density, transition quality and incoming flow are some of the most important features derived from the related literature. Karamshuk and his colleagues (Karamshuk, Noulas, Scellato, Nicosia, & Mascolo, 2013) assess the popularity of three different coffee shop and restaurant chains in New York City using two different types of features (geographic and mobility features) and data retrieved from the popular LBSN Foursquare. They compare the results obtained by using each individual feature for popularity prediction with the results of combining the features through a supervised machine learning technique (the RankNet algorithm), and conclude that using a combination of features offers better accuracy. Wang et al. (Feng Wang & Chen, 2016) take advantage of user-generated reviews on Yelp to assess the predictive power of their framework in forecasting the popularity of a number of candidate locations for a new restaurant.

Their framework is based on the application of three different regression models (ridge regression, support vector regression and gradient boosted regression trees) to combine features and enhance the prediction process. Yu and his colleagues (Yu, Tian, Wang, & Guo, 2016) attempt to tackle another aspect of the store placement problem: choosing a shop type from a list of candidate types for a given location. They combine features by applying a matrix factorization technique. Rahman and Nayeem (Rahman & Nayeem, 2017) exploit Foursquare data to compare the results of using features directly against combining them with support vector regression, and demonstrate that applying the regression model to combine features offers higher accuracy and better predictability.
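To make the feature-combination step concrete, the sketch below trains a regression model on a feature table of candidate locations and ranks unseen candidates by predicted popularity. The feature names, the CSV file and the choice of gradient boosted trees are illustrative assumptions in the spirit of the studies above, not a reproduction of any cited framework.

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

# Hypothetical feature table: one row per candidate location, with LBSN-derived
# features such as those listed above and an observed popularity score (e.g. check-ins).
df = pd.read_csv("candidate_locations.csv")  # assumed file, for illustration only
features = ["accessibility", "distance_to_downtown", "area_popularity",
            "neighborhood_entropy", "venue_density", "competitiveness"]
X, y = df[features], df["popularity"]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# Combine the individual features into a single popularity predictor.
model = GradientBoostingRegressor(random_state=0)
model.fit(X_train, y_train)

print("MAE on held-out locations:", mean_absolute_error(y_test, model.predict(X_test)))

# Rank held-out candidate sites by predicted popularity (highest first).
ranking = X_test.copy()
ranking["predicted_popularity"] = model.predict(X_test)
print(ranking.sort_values("predicted_popularity", ascending=False).head())
```

Swapping in ridge regression or support vector regression, as in the frameworks discussed above, only changes the model line; the surrounding feature-combination workflow stays the same.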
