In today’s world, with a growing number of stores and ever-intensifying rivalry, accurate
strategizing for each of the marketing mix elements – product,
price, place, promotion, physical evidence, participants and process –
influences a store’s success like never before. Each of these elements comprises
a number of factors that need to be assessed and carefully planned. For
example, store location is one of the most important variables to
consider when determining a company’s place strategies. Over the years,
researchers and marketers have used a number of different approaches to
solve the store location and site selection problem. In this paper, we
review some of the most accepted and widely applied computational methods for
determining the optimal place for a retail store.

Keywords: Location Based Social Network Data, Computational Analysis,


Determining retail store popularity
and studying the variables that influence it has long been a popular
research topic across many different scientific domains. From a marketing
perspective, if retail store popularity in the eyes of target customers
is what one is after, it can be controlled and even enhanced through accurate
planning of the marketing mix elements. The marketing mix for production
businesses was defined by Kotler as “a set of controllable marketing variables –
product, price, place and promotion – that the firm can use to get a desired response from their target
customers” (Rafiq
& Ahmed, 1992). Booms and Bitner later modified
the marketing mix concept to better fit the marketing aspects of services by
adding three new elements to the mix: process, physical evidence and
participants. The framework introduced by Booms and Bitner has since been widely
accepted and used by marketing managers in successful companies when
determining their marketing strategies. Accurate planning for
each one of the marketing mix elements includes making important decisions
about a number of other factors that together shape the overall strategies
concerning said element. For example, planning for the “place” element
includes making decisions about factors like store location, distribution channels,
accessibility, distribution network coverage, sales domains, inventory
placement and transportation facilities. Store placement, especially for
service providers and retail stores, has always been considered one of the
most important business decisions a firm can make, since it is a critical
factor contributing to a business’s overall chance for success. “No matter how good
its offering, merchandising, or customer service, every retail company
still has to contend with three critical elements of success: location,
location, and location” (Taneja, 1999). Retail stores in
general are classified into six categories, namely specialty stores, department
stores, supermarkets, consumer product stores,
furniture stores, and construction material stores (Fahui Wang, Chen, Xiu, & Zhang, 2014). There are many different approaches to support
decision making in retail store placement. Some of these, including
reliance on experience and the use of checklists, analogues and ratios, have been
used by marketing managers for many years (Hernández
& Bennison, 2005). Such techniques are favored by some managers
since they require minimal levels of budget, technical expertise and data, yet
their downfall lies in the high subjectivity of the resulting decisions and
the fact that they are largely incompatible with GIS (Hernandez,
Bennison, & Cornelius, 1998). Other techniques, including approaches based
on the theory of centrality, gravity models, percolation theory and feature
selection, are more computational and therefore demand a higher level of expertise
and resources, but in return offer superior predictive power and far
less subjectivity. Since the main goal of this
review is to assess the computational approaches to solving the retail store
placement problem, the latter techniques are discussed in detail in the
following sections.

Computational Techniques


The Theory of Centrality (Principle of Minimum Differentiation)


This theory was presented by Hotelling (Hotelling, 1929). It focuses on
the importance of a store’s proximity to its main rivals and argues that
distance from rivals is more important than distance from customers. In 1958,
based on Hotelling’s theory, Nelson suggested that when suppliers of a given
product or service are located near one another, demand rises (Litz, 2014). Later, this theory
was considered as the basis for multiple other approaches such as space syntax
analysis (Hillier & Hanson, 1984), natural movement (Hillier, Perm, Hanson, Grajewski, & Xu, 1993) and the multiple
centrality assessment (Porta et al., 2009). Space syntax
techniques, originally aimed at studying the morphological logic of
urban grids, refer to the application of a set of configurational
analysis techniques that assess the structure of the urban grid (Hillier & Hanson, 1984). Such techniques
also focus on centrality measures derived from the street network
and their association with economic variables (Fahui Wang et al., 2014). In contrast to
attraction theory, which implies that design should be based mostly on a
place’s degree of attraction, itself determined by the level of movement to
and from that place, natural movement theory emphasizes the importance of
spatial configuration. Configuration is the way that the spatial elements
people move through are linked together to form a pattern (Hillier et al., 1993). Natural movement
theory indicates that configuration can affect movement
independently of attractors and can be considered the primary
generator of movement. Natural movement in a grid is the proportion of urban
pedestrian movement determined by the grid
configuration itself, the most pervasive and consistent component of
movement (Hillier et al., 1993). The multiple
centrality assessment (MCA) approach defines being central as being close,
intermediary, straight and critical to other places located in the area (Fahui Wang et al., 2014). Porta and
colleagues (Porta et al., 2009) confirm the
hypothesis that street centrality plays a crucial role in shaping the
formation of urban structure and land uses. They use kernel density
estimation (KDE) to prepare data for assessing the correlation
between a place’s popularity and the distribution of commercial and service
activities in its area, and they define centrality with variables
such as closeness, betweenness and straightness. Most recently, Wang et al. (Fahui Wang et al., 2014) examined the role
street centrality plays in the popularity of different types of retail stores.
They also used KDE for data preparation and assessed the
correlation between centrality and location advantage using the street
centrality indices mentioned above.
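As an illustration, closeness – one of the centrality indices used in these studies – can be computed on a small street network with a plain breadth-first search. The intersection labels and adjacency below are invented for this sketch; real analyses would weight edges by street length rather than counting hops.

```python
from collections import deque


def closeness(graph, node):
    """Closeness centrality: inverse of the mean shortest-path
    distance (in hops) from `node` to every other reachable node."""
    dist = {node: 0}
    queue = deque([node])
    while queue:  # breadth-first search from `node`
        u = queue.popleft()
        for v in graph[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    others = [d for n, d in dist.items() if n != node]
    return len(others) / sum(others) if others else 0.0


# Toy street network: intersections as nodes, street segments as edges.
streets = {
    "A": ["B", "C"],
    "B": ["A", "C", "D"],
    "C": ["A", "B", "D"],
    "D": ["B", "C", "E"],
    "E": ["D"],
}

# Rank intersections from most to least central.
ranked = sorted(streets, key=lambda n: closeness(streets, n), reverse=True)
```

Betweenness and straightness follow the same pattern over shortest paths; a more central site in this sense sits closer, on average, to everyone in the grid.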

Gravity (Spatial Interaction) Models

Gravity models have been used as a solution to the retail store location problem by
researchers, analysts and marketing managers for many years. These models emphasize the customer’s perspective on the availability and
accessibility of a given store. The development of the first gravity
model was inspired in the late 1930s by the work of Reilly, an American researcher (Kubis & Hartmann, 2007). Reilly suggested
that customers may make tradeoffs between the specific features of a store’s main product and the store’s location (Litz, 2014). In
1967, Wilson introduced a model for spatial distribution (A. G. Wilson, 1967) describing the
flow of money from population centroids to retail centers which was considered
a basis for retail locating and the prediction of retail center dynamics for
many years (A. G. Wilson & Oulton, 1983). In Wilson’s
model, survival of a retail center is dependent on its ability to compete for
the limited amount of available resources (customers) (Piovani, Molinero, & Wilson, 2017). He observed the
similarities between the factors applied to the gravity model and the partition
functions used in statistical mechanics, which led to a shift from a Newtonian
analogy to a Boltzmann statistical-mechanics analogy (A. Wilson, 2010). Consequently,
Wilson introduced a new framework for spatial interaction modeling based on
maximizing the entropy of urban and regional areas. In this framework, the
values of two parameters (one that scales attractiveness and floor space and
another that depicts the cost of moving) determine the survival chance of a retail center.
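A minimal sketch of a production-constrained spatial interaction model in Wilson’s spirit follows; the flow S_ij from zone i to retail centre j is taken as A_i · O_i · W_j^alpha · exp(-beta · c_ij), where alpha scales attractiveness (floor space) and beta the cost of moving – the two parameters just mentioned. All zone names, floor-space values and costs are invented for illustration.

```python
import math


def wilson_flows(spending, floorspace, cost, alpha, beta):
    """Production-constrained flows S_ij = A_i * O_i * W_j**alpha * exp(-beta*c_ij),
    where the balancing factor A_i makes each origin's outflows sum to O_i."""
    flows = {}
    for i, o_i in spending.items():
        weights = {j: w ** alpha * math.exp(-beta * cost[i, j])
                   for j, w in floorspace.items()}
        a_i = 1.0 / sum(weights.values())  # balancing factor A_i
        for j, w in weights.items():
            flows[i, j] = a_i * o_i * w
    return flows


# Illustrative numbers: two residential zones, two retail centres.
spending = {"zone1": 100.0, "zone2": 50.0}      # O_i: money available
floorspace = {"shopA": 4.0, "shopB": 1.0}       # W_j: attractiveness
cost = {("zone1", "shopA"): 1.0, ("zone1", "shopB"): 2.0,
        ("zone2", "shopA"): 3.0, ("zone2", "shopB"): 1.0}

flows = wilson_flows(spending, floorspace, cost, alpha=1.0, beta=0.5)
# A centre's predicted revenue is the sum of inflows it attracts:
revenue_A = flows["zone1", "shopA"] + flows["zone2", "shopA"]
```

In the dynamic versions of the model, a centre whose predicted revenue falls below its running costs shrinks or closes, which is what makes survival depend on the values of alpha and beta.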

Gravity models are usually divided into two general groups based on their
type of approach: qualitative and quantitative. As the name suggests,
a qualitative model uses non-numerical criteria to determine the
best location for a store. Quantitative models, on the other hand, take
advantage of available numerical information such as the number of
inhabitants, distances and so on. There are two
types of quantitative gravity models: deterministic and probabilistic. While
deterministic models usually calculate an estimate of accounting variables
such as turnover or return on investment for marketing managers to
decide upon, probabilistic models attempt to model the probability that a
consumer living at location i purchases products at location j. The latter models
are based on the model of Huff (Kubis & Hartmann, 2007).
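The Huff formulation can be sketched in a few lines: the probability that a consumer at location i patronizes store j is the store’s attractiveness raised to alpha, divided by its distance raised to beta, normalised over all candidate stores. The store names, attractiveness values (floor area here) and distances below are invented for illustration.

```python
def huff_probability(attractiveness, distance, alpha=1.0, beta=2.0):
    """P(consumer at i shops at j) = S_j**alpha / d_ij**beta,
    normalised over all candidate stores j."""
    scores = {j: attractiveness[j] ** alpha / distance[j] ** beta
              for j in attractiveness}
    total = sum(scores.values())
    return {j: s / total for j, s in scores.items()}


# One consumer location i, two candidate stores.
probs = huff_probability(
    attractiveness={"storeA": 2000.0, "storeB": 500.0},  # floor area
    distance={"storeA": 4.0, "storeB": 1.0},             # travel distance
)
```

With beta = 2, the nearby small store captures most of this consumer’s spending despite its lower attractiveness, which is exactly the distance-decay tradeoff the model is built around.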

Percolation Theory

Researchers have tried to describe and characterize the regionalization of urban space in a
hierarchical manner for almost a century. “A hierarchy emerges with respect
to the types of relationships that exist given the cluster size, whether the
cluster is a village, a town or a city” (Arcaute et al., 2015; Berry &
Garrison, 2014). One of the most
famous examples of this type of approach is the central place hierarchy (Boventer, 1969) introduced by
Christaller (Arcaute et al., 2015). The origins of
Christaller’s central place theory date back to 1933, when this German researcher first
suggested an inverse relationship between the demand for a product
and the distance from the source of supply. The theory is based on the
importance of transportation costs for customers. Its main pitfalls are
that it fails to consider the effects of product attributes such as
cost and demand frequency, as well as the possibility of multi-purpose shopping (Litz, 2014). Over the years, the hierarchical point of
view has led to the emergence of a number of other approaches, such as graph theory
and network theory. The connectivity of the system in all of the aforementioned
approaches can be explored through percolation theory (Arcaute et
al., 2015). Percolation theory describes the
interaction of classical particles with a random medium and provides a simple
depiction of critical behavior (Taylor,
Shante, & Kirkpatrick, 2006). In other words, as particles spread
through space they eventually reach a critical point and tend to form
clusters. Accordingly, some researchers in recent years have applied
percolation theory to street networks in order to
uncover the underlying patterns of a city in relation to its available
infrastructure (Arcaute et al., 2015; Piovani et al., 2017).
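The clustering mechanism at the heart of this approach can be sketched as a simple distance-threshold percolation over a set of points (a stand-in for the street intersections used in the cited studies): merge every pair of points closer than the threshold and read off the resulting clusters. The coordinates below are invented; sweeping the threshold upward reveals the hierarchy of regions.

```python
def percolate(points, threshold):
    """Merge points whose pairwise distance is at most `threshold`
    (union-find) and return the resulting clusters of point indices."""
    parent = list(range(len(points)))

    def find(x):  # find cluster root, with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            (x1, y1), (x2, y2) = points[i], points[j]
            if (x1 - x2) ** 2 + (y1 - y2) ** 2 <= threshold ** 2:
                parent[find(i)] = find(j)  # union the two clusters

    clusters = {}
    for i in range(len(points)):
        clusters.setdefault(find(i), []).append(i)
    return list(clusters.values())


# Illustrative intersections: two dense groups and one outlier.
pts = [(0, 0), (1, 0), (0, 1), (10, 10), (11, 10), (30, 30)]
clusters = percolate(pts, threshold=2.0)
```

At this threshold the points form three clusters; raising the threshold progressively merges them, and the threshold at which a giant cluster suddenly appears is the critical point the theory is concerned with.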

Feature Selection via Location-Based Social Network Data

In the past decade, several developments – the
advancements made in wireless communication technologies; the growing universal
acceptance of location-aware technologies, including mobile phones and
tablets equipped with GPS
receivers; sensors placed inside these devices, attached to cars and embedded
in infrastructure; remote sensors carried by aerial and satellite
platforms; and RFID tags
attached to objects – have been complemented by the development of GIS
technologies, resulting in the availability of an increasing amount of
content-rich data that can be exploited by analysts. With the emergence and
growing popularity of social networks and location-aware services, the next
step was combining these two technologies, which resulted in the introduction of
location-based social networks (LBSNs)
(Kheiri, Karimipour, & Forghani, 2016). Since such networks act
as a bridge between a user’s real-life and online
activities (Kheiri et al., 2016), the data obtained from them
is considered one of the most important sources of spatial data and
presents a unique opportunity for researchers in business-related fields to
study consumers’ behavioral patterns precisely.

Consequently, with the introduction of LBSNs, the
question of optimal store placement, like many other scientific problems, has
entered a new era with fast, diverse and voluminous data – terms
usually used to describe big data. Liu and his colleagues (Liu et al., 2015) introduced the term
“social sensing” to describe the process and the different approaches of
analyzing spatial big data at the individual scale. The use of the term
“sensing” reflects two different aspects of such
data. First, this kind of data can be considered a complementary
source of information for remote sensing data, because it records the socio-economic
characteristics of users – descriptive information that remote sensing data
can never offer. Second, such data follow the concept of volunteered
geographic information (VGI)
(Goodchild, 2006), meaning that every
individual in today’s world can be considered a sensor transmitting
data as they move. Accordingly, researchers in the past decade have focused
some of their efforts on exploiting LBSN data to solve the retail store
placement problem. With one or two exceptions, most of the research done in
this area has taken advantage of the new advancements in feature selection. Based
on the unique attributes and the type of information that can be retrieved from
LBSN data, a number of features that influence retail store popularity are
defined and then used to predict the popularity of given stores. Accessibility,
distance to downtown, area popularity, neighborhood entropy, venue density, the
effect of complementary products/services, competitiveness, Jensen quality,
transition density, transition quality and incoming flow are some of the most
important features derived from the related literature. Karamshuk and his
colleagues (Karamshuk, Noulas, Scellato, Nicosia, & Mascolo,
2013) assess the
popularity of three different coffee shop and restaurant chains in New York
City using two different types of features (geographic and mobility
features) and data retrieved from the popular LBSN Foursquare.
They compare the results obtained by using each individual feature for
popularity prediction with the results of combining the features with a machine
learning technique (the RankNet algorithm), and conclude that
using a combination of features offers more accuracy. Wang et al. (Feng Wang & Chen, 2016) take advantage of the
user-generated reviews on Yelp
to assess the predictive power of their framework in forecasting the popularity
of a number of given candidates for a new restaurant. Their framework is based
on the application of three different regression models (ridge regression,
support vector regression and gradient boosted regression trees) to combine
features in order to enhance the prediction process. Yu and his colleagues (Yu, Tian, Wang, & Guo, 2016) attempt to tackle
another aspect of the store placement problem: choosing a shop type from a list
of candidate types for a given location. They combine features by applying a
matrix factorization technique. Rahman and Nayeem (Rahman & Nayeem, 2017) exploit Foursquare data
to compare the results of the direct use of features with a combination
of features produced by support vector machine regression, and demonstrate
that applying the regression model for feature selection offers more
accuracy and better predictability.
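In the same spirit as these regression-based approaches, the core idea – learn a weighting over location features from observed popularity, then rank candidate sites by the fitted score – can be sketched in pure Python with a small ridge regression. The feature names, values and popularity labels below are entirely invented, and the papers above use far richer models (RankNet, SVR, gradient boosted trees); this is only a minimal illustration of the ranking step.

```python
def ridge_fit(X, y, lam=0.1, lr=0.01, steps=5000):
    """Fit weights w for y ~ X.w with an L2 penalty, by gradient descent."""
    n, d = len(X), len(X[0])
    w = [0.0] * d
    for _ in range(steps):
        grad = [lam * w[k] for k in range(d)]     # penalty gradient
        for xi, yi in zip(X, y):
            err = sum(xi[k] * w[k] for k in range(d)) - yi
            for k in range(d):
                grad[k] += err * xi[k] / n        # squared-error gradient
        w = [w[k] - lr * grad[k] for k in range(d)]
    return w


# Each candidate site: [venue density, competitiveness, accessibility]
# (hypothetical feature values and observed popularity scores).
X = [[0.9, 0.2, 0.8], [0.4, 0.7, 0.3], [0.6, 0.5, 0.6], [0.2, 0.9, 0.1]]
y = [0.85, 0.30, 0.55, 0.10]

w = ridge_fit(X, y)


def predict(site):
    return sum(f * wk for f, wk in zip(site, w))


best = max(X, key=predict)  # most promising candidate site
```

The fitted weights make the tradeoff between features explicit: here, sites rich in venues and accessibility but facing little competition score highest, matching the intuition behind the feature lists above.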
