Business Intelligence: Compelling Topics

Departments are often organized in silos, so their information lives in silo systems, which makes it difficult to leverage that information across the company. A data warehouse, a central database holding a collection of decision-related data from internal and external sources, can aid data analysis for the entire company (Ahlemeyer-Stubbe & Coleman, 2014). When a multi-level Business Intelligence (BI) system is built on top of a centralized data warehouse, the silo data systems disappear and data-driven decisions become possible. To support data-driven decisions while moving from department-kept silo data to a centralized data warehouse, Curry, Hasan, and O’Riain (2012) created a system that presents results from the centralized data warehouse at different levels of the company: the organization level (stakeholders are executive members, shareholders, regulators, suppliers, and consumers), the functional level (stakeholders are functional managers and the organization manager), and the individual level (stakeholders are the employees). Data may be centralized, but specialized permissions on data reports can still exist in a multi-level system.

The types of data that can be stored in a centralized data warehouse are: real-time data, which reveals events as they happen; lag information, which explains events that have just happened; and lead information, which helps predict future events based on lag data, such as regression output or forecasting model output (based on Laursen & Thorlund, 2010). All of these serve the goal of showing decision makers whether certain target measures are being met. Target measures are used to improve marketing efforts by tracking measures such as ROI, NPV, revenue, lead generation, lag generation, growth rates, etc. (Liu, Laguna, Wright, & He, 2014).
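As a minimal sketch of this lag-to-lead idea (illustrative numbers, not figures from the sources; Python 3.10+ assumed for statistics.linear_regression), the snippet below turns lag information (past monthly revenue) into lead information (a simple trend forecast) and checks it against a revenue target measure:

```python
# Minimal sketch: lag information (past monthly revenue, illustrative numbers)
# turned into lead information (a linear-trend forecast) and checked against
# a target measure. Requires Python 3.10+ for statistics.linear_regression.
from statistics import linear_regression

monthly_revenue = [110.0, 118.0, 121.0, 130.0, 137.0, 141.0]  # lag data (k$)
months = list(range(1, len(monthly_revenue) + 1))

# Fit a simple linear trend: revenue ~ slope * month + intercept
slope, intercept = linear_regression(months, monthly_revenue)

next_month = len(monthly_revenue) + 1
forecast = slope * next_month + intercept   # lead information

revenue_target = 150.0                      # target measure (k$)
print(f"Forecast for month {next_month}: {forecast:.1f}k")
print("Target met?", forecast >= revenue_target)
```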

Decision Support Systems (DSS) predate BI strategies. A DSS helps execute projects, expand strategy, improve processes, and improve quality controls in a timely fashion, and a data warehouse’s main role is to support the DSS (Carter, Farmer, & Siegel, 2014). Unfortunately, the discussion above of data types and ways to store data to enable data-driven decisions does not explain the “how,” “what,” “when,” “where,” “who,” and “why.” A strong BI strategy is imperative to making all of this work. A BI strategy can include, but is not limited to, data extraction, data processing, data mining, data analysis, reporting, dashboards, performance management, actionable decisions, etc. (Fayyad, Piatetsky-Shapiro, & Smyth, 1996; Padhy, Mishra, & Panigrahi, 2012; McNurlin, Sprague, & Bui, 2008). This definition, along with the fact that the DSS is one of the five principles of BI, suggests that DSS came first and that BI is a newer, more holistic view of data-driven decision making.

But what can we do with a strong BI strategy? With one, a company can increase its revenue through online profiling. Online profiling uses a person’s online identity to collect information about them, their behaviors, their interactions, their tastes, etc., to drive targeted advertising (McNurlin et al., 2008). Unfortunately, fear arises when end-users do not know what their data is currently being used for or what data these companies or governments hold. Richards and King (2014) and McEwen, Boyer, and Sun (2013) expressed that it is the flow of information, and the lack of transparency around it, that feeds the public’s fear. McEwen et al. (2013) did propose several possible solutions; one that could gain traction in this case is letting consumers (end-users) know which variables are being collected and offering an opt-out feature, where a subset of those variables stays with them and does not get transmitted.
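A minimal sketch of that opt-out idea (hypothetical field names; not McEwen et al.’s implementation): variables the end-user has opted out of stay with them and are never transmitted to the profiler.

```python
# Minimal sketch of an opt-out filter: only variables the user has agreed to
# share leave the device. Field names are hypothetical.
def filter_profile(profile: dict, opted_out: set) -> dict:
    """Return only the profiling variables the user has not opted out of."""
    return {key: value for key, value in profile.items() if key not in opted_out}

user_profile = {
    "pages_viewed": 42,
    "purchase_history": ["telescope", "h-alpha filter"],
    "location": "Denver, CO",
    "browsing_times": [22, 23, 1],
}
opted_out = {"location", "browsing_times"}  # the user's opt-out choices

transmitted = filter_profile(user_profile, opted_out)
print(transmitted)  # only pages_viewed and purchase_history are transmitted
```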


References:

  • Ahlemeyer-Stubbe, A., & Coleman, S. (2014). A Practical Guide to Data Mining for Business and Industry, 1st Edition. [VitalSource Bookshelf Online]. Retrieved from https://bookshelf.vitalsource.com/#/books/9781118981863/
  • Carter, K. B., Farmer, D., & Siegel, C. (2014). Actionable Intelligence: A Guide to Delivering Business Results with Big Data Fast!, 1st Edition. [VitalSource Bookshelf Online]. Retrieved from https://bookshelf.vitalsource.com/#/books/9781118920657/
  • Curry, E., Hasan, S., & O’Riain, S. (2012, October). Enterprise energy management using a linked dataspace for energy intelligence. In Sustainable Internet and ICT for Sustainability (SustainIT), 2012 (pp. 1-6). IEEE.
  • Fayyad, U., Piatetsky-Shapiro, G., & Smyth, P. (1996). From data mining to knowledge discovery in databases. AI magazine, 17(3), 37. Retrieved from: http://www.aaai.org/ojs/index.php/aimagazine/article/download/1230/1131/
  • Laursen, G. H. N., & Thorlund, J. (2010). Business Analytics for Managers: Taking Business Intelligence Beyond Reporting. Wiley & SAS Business Institute.
  • Liu, Y., Laguna, J., Wright, M., & He, H. (2014). Media mix modeling–A Monte Carlo simulation study. Journal of Marketing Analytics, 2(3), 173-186.
  • McEwen, J. E., Boyer, J. T., & Sun, K. Y. (2013). Evolving approaches to the ethical management of genomic data. Trends in Genetics, 29(6), 375-382.
  • McNurlin, B., Sprague, R., & Bui, T. (2008). Information Systems Management, 8th Edition. [VitalSource Bookshelf Online]. Retrieved from https://bookshelf.vitalsource.com/#/books/9781323134702/
  • Padhy, N., Mishra, D., & Panigrahi, R. (2012). The survey of data mining applications and feature scope. arXiv preprint arXiv:1211.5723.  Retrieved from: https://arxiv.org/ftp/arxiv/papers/1211/1211.5723.pdf
  • Richards, N. M., & King, J. H. (2014). Big data ethics. Wake Forest Law Review, 49, 393.

Business Intelligence: Targets, Probabilities, & Modeling

  • Target Measures are used to improve marketing efforts by tracking measures such as ROI, NPV, revenue, lead generation, lag generation, growth rates, etc. (Liu, Laguna, Wright, & He, 2014). The goal is that after a marketing effort is conducted, there should be a change in the target measures, and positive changes in these measures should be repeated. Hoptroff and Kufyba (2001) stated that these measures could also be defect rates, default rates, survey ranking results, response rates, churn rate, the value lost to the business, transaction amounts, products purchased, etc.
  • Probability Mining is data mining using logit regression, neural networks, linear regression, etc. It helps determine the probability of an event, in our case meeting or failing to meet our target measures, based on information about past events (Hoptroff & Kufyba, 2001); a minimal sketch follows this list.
  • Econometric Modeling is a way of understanding the economy through a blend of economic theory and statistical analysis. Essentially, it models how certain independent variables act on or influence a dependent variable, using tools from both economic and statistical theory to build the model. Econometric modeling looks into the market power a business holds, game theory models, information theory models, etc. The rationale is that neither economic theory nor statistical theory alone provides enough knowledge to solve or describe a given variable or state, so blending the two is assumed to do better (Reiss & Wolak, 2007).
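As a minimal sketch of the probability-mining bullet above (synthetic numbers; scikit-learn assumed available; not the cited authors’ own method), a logistic (logit) regression can estimate the probability that a planned campaign meets its revenue target based on past campaigns:

```python
# Minimal probability-mining sketch: logistic regression on past campaigns
# (synthetic numbers) estimating the probability of meeting a revenue target.
from sklearn.linear_model import LogisticRegression

# Past campaigns: [marketing spend (k$), discount offered (%)]
X_past = [[10, 0], [15, 5], [20, 10], [25, 5], [30, 15], [35, 10]]
y_past = [0, 0, 1, 1, 1, 1]  # 1 = revenue target was met, 0 = it was not

model = LogisticRegression().fit(X_past, y_past)

planned_campaign = [[22, 8]]  # spend 22k with an 8% discount
prob_meet_target = model.predict_proba(planned_campaign)[0][1]
print(f"Estimated probability of meeting the target: {prob_meet_target:.2f}")
```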

In the end, an econometric model can contain elements of probability mining, but a probability miner does not have to be an econometric model. Each of these models and miners can track and report on target measures.

Econometric modeling is a way to understand price and the pricing model, which is central to generating profits, by applying both economic and statistical/probability principles to achieve a target measure. Companies should use big data together with a probability miner or econometric model to understand the meaning behind the data and to extract actionable decisions that could help them meet or exceed a current target measure, compare themselves against their current competition, and understand their current customers.

Two Slightly Different Applications

  1. Probability mining has been used to gauge a customer’s affinity and responses toward a new product by profiling current and/or new customers (Hoptroff & Kufyba, 2001). Companies and marketing firms build these models to assign a probability of attracting new customers to a new or existing product or service. The results can indicate whether or not the company could meet its target measures.
  2. Suppose we have Marketing Strategy Plans A, B, and C, and we want to use econometric modeling to understand how cost-effective each plan would be for the same product or product mix at different price points. This would be cause-and-effect modeling (Hoptroff, 1992). The model should help predict which strategy would produce the most revenue, which is one of our main target measures; a sketch of this comparison follows the list.
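A minimal sketch of the second application (synthetic numbers; NumPy assumed available; not Hoptroff’s model): an ordinary-least-squares fit relating plan choice and price point to revenue, then comparing Plans A, B, and C at a candidate price.

```python
# Minimal cause-and-effect sketch: revenue modeled as a function of price and
# plan (dummy variables), fit by ordinary least squares. Numbers are synthetic.
import numpy as np

# Past observations: (plan, price point, revenue in k$)
data = [
    ("A", 20, 110), ("A", 25, 118), ("A", 30, 112),
    ("B", 20, 120), ("B", 25, 131), ("B", 30, 128),
    ("C", 20, 105), ("C", 25, 115), ("C", 30, 125),
]

# Design matrix: intercept, price, dummy for Plan B, dummy for Plan C
X = np.array([[1, price, plan == "B", plan == "C"] for plan, price, _ in data], dtype=float)
y = np.array([revenue for _, _, revenue in data], dtype=float)

coef, *_ = np.linalg.lstsq(X, y, rcond=None)  # OLS coefficients

candidate_price = 27.0
for plan in ("A", "B", "C"):
    x_new = np.array([1, candidate_price, plan == "B", plan == "C"], dtype=float)
    print(f"Plan {plan}: predicted revenue {x_new @ coef:.1f}k at price {candidate_price}")
```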

An example of probability mining is Amazon’s online shopping experience. As the consumer adds items to the shopping cart, Amazon applies probabilistic mining in real time to find out what other items this consumer might purchase (Pophal, 2014), based on what has happened before through the creation of profiles, and says, “Others who purchased X also bought Y, Z, and A.” The quote almost implies that these items form a set that will enhance your overall experience, so buy some more. For instance, buyers of a $600 Orion telescope also bought a $45 hydrogen-alpha filter (used to point the telescope toward the sun and watch planets pass in front of it).
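A minimal sketch of the “others who purchased X also bought…” idea (toy baskets; not Amazon’s actual system): count how often items co-occur with the item in the cart and recommend the most frequent ones.

```python
# Minimal co-purchase sketch: recommend the items that most often appear in
# past baskets alongside the item currently in the cart. Baskets are toy data.
from collections import Counter

past_baskets = [
    {"telescope", "h-alpha filter", "tripod"},
    {"telescope", "h-alpha filter"},
    {"telescope", "star chart"},
    {"binoculars", "star chart"},
]

def recommend(item_in_cart: str, baskets, top_n: int = 3):
    co_counts = Counter()
    for basket in baskets:
        if item_in_cart in basket:
            co_counts.update(basket - {item_in_cart})  # items bought alongside
    return [item for item, _ in co_counts.most_common(top_n)]

print(recommend("telescope", past_baskets))
# e.g. ['h-alpha filter', 'tripod', 'star chart']
```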

The Federal Reserve Bank and its board members have been using econometric modeling for the past 30 years for forecasting economic conditions and quantitative policy analysis (Brayton, Levin, Tryon, & Williams, 1997). The model began in 1966 with the help of the academic community and the Division of Research and Statistics, using the technology available at the time, and became operational in 1970. It had approximately 60 behavioral equations, including a long-run neoclassical growth model, factor demands, and a life-cycle model of consumption. Brayton et al. (1997) go on to say that this model was used primarily to analyze the stabilizing effects of monetary and fiscal policies, as well as the effects of other government policies on the economy.

Resources: