Business Intelligence: Online Profiling

Online profiling is the use of a person’s online identity to collect information about them, their behaviors, their interactions, their tastes, etc., to drive targeted advertising (McNurlin et al., 2008).  Online profiling straddles the line between being useful, annoying, or “Big Brother is watching” (Pophal, 2014).  Profiling can be based on simple third-party cookies, which are unknowingly placed when an end user travels to a website; depending on the priority of the cookie, it can change the entire end-user experience when they visit a site with targeted messages on banner ads (McNurlin et al., 2008).  More complex tracking occurs when an end user scans a QR code with a mobile device or walks near an NFC area, and the phone then transmits about 40 different variables about that person to the company, which can then deliver more precise advertising (Pophal, 2014).
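
To make the cookie-based flow concrete, here is a minimal sketch, assuming nothing about any real ad platform; the cookie IDs, interest categories, and banner ads below are all invented for illustration.

```python
# Minimal sketch of cookie-based ad targeting (hypothetical data, no real ad API).
profiles = {
    "cookie_abc123": {"interests": ["astronomy", "camping"], "visits": 42},
    "cookie_def456": {"interests": ["coffee", "running"], "visits": 7},
}

banner_ads = {
    "astronomy": "20% off telescopes",
    "coffee": "Buy one latte, get one free",
}

def pick_banner(cookie_id, default_ad="Generic storewide sale"):
    """Return a targeted banner ad if the tracked profile suggests one."""
    profile = profiles.get(cookie_id)
    if not profile:
        return default_ad  # unknown visitor: fall back to an untargeted ad
    for interest in profile["interests"]:
        if interest in banner_ads:
            return banner_ads[interest]
    return default_ad

print(pick_banner("cookie_abc123"))  # -> "20% off telescopes"
```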

This data collection is all about gaining more information about the consumer in order to make better decisions about what to offer these consumers, such as precise advertisements, deals, etc. (McNurlin et al., 2008).  The best way to describe this is through this quote from a current marketer in Pophal (2014): “So if I’m in L.A., and it’s a pretty warm day here-85 degrees-you shouldn’t be showing me an ad for hot coffee; you should be showing me a cool drink.” But advertisers have to find a way to let the consumer know about their product without overwhelming the consumer with “information overload.” How do advertisers say “Hey, look at me, I am important, and nothing else is… wouldn’t this look nice in your possession?”  If they do this too much, they can alienate the buyer from using the technology and from buying the product altogether. These advertisers need to find a meaningful and influential connection with their consumers if they want to drive up their revenues.

At the end of the day, all this online profiling aims to collect enough (or more than enough) data to predict what the consumer is most likely to buy and give them enough incentive to influence their purchasing decision.  The operating cost of such a tool must be kept low enough that there is still a profit to be gained when the consumer completes a transaction and buys the product.  This then becomes an important part of a BI program, because you are aiming to draw consumers away from your competitors and toward your product.
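
That cost-versus-profit constraint can be written as a simple expected-value check; the sketch below uses made-up numbers purely to illustrate the trade-off.

```python
# Hypothetical numbers: expected profit of serving one targeted offer.
p_purchase = 0.08          # predicted probability the consumer buys
margin_per_sale = 12.50    # profit margin if the purchase happens
cost_per_offer = 0.40      # cost of profiling plus serving the targeted ad

expected_profit = p_purchase * margin_per_sale - cost_per_offer
print(f"Expected profit per offer: ${expected_profit:.2f}")
# Only serve the offer when the expected profit stays positive.
```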

The fear comes when the end user doesn’t know what the data is currently being used for, what data these companies or governments hold, etc.  Richards and King (2014) and McEwen, Boyer, and Sun (2013) expressed that it is the flow of information and the lack of transparency that feed the public’s fear; hence, “Big Brother is watching.”  McEwen et al. (2013) did propose several possible solutions; one that could gain traction in this case is letting consumers (end users) know which variables are being collected and offering an opt-out feature, where a subset of those variables stays with them and does not get transmitted.
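
A small sketch of that opt-out idea, under the assumption that the collected variables live in a simple key-value structure; the variable names and opt-out choices are hypothetical.

```python
# Hypothetical collected variables and a user-controlled opt-out list.
collected = {
    "device_id": "phone-1234",
    "location": "34.05,-118.24",
    "purchase_history": ["telescope", "filter"],
    "contacts": ["..."],
}
opted_out = {"location", "contacts"}  # the user chose to keep these local

def transmit(variables, opt_out):
    """Send only the variables the user has not opted out of."""
    return {k: v for k, v in variables.items() if k not in opt_out}

print(transmit(collected, opted_out))  # location and contacts never leave the device
```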

Resources:

Business Intelligence: Targets, Probabilities, & Modeling

  • Target Measures are used to improve marketing efforts by tracking measures like ROI, NPV, revenue, lead generation, lag generation, growth rates, etc. (Liu, Laguna, Wright, & He, 2014). The goal is that after a marketing effort is conducted, there should be a change in the Target Measures; positive changes in these measures should be repeated.  Hoptroff and Kufyba (2001) stated that these measures could also be defect rates, default rates, survey ranking results, response rates, churn rate, the value of business lost, transaction amounts, products purchased, etc.
  • Probability Mining is data mining using logit regression, neural networks, linear regression, etc. Using these helps determine the probability of an event, in our case meeting or failing to meet our Target Measures, based on information about past events (Hoptroff & Kufyba, 2001); a minimal sketch follows this list.
  • Econometric Modeling is a way of understanding the economy through a blend of economic theory and statistical analysis. Essentially, it is a way of modeling how certain independent variables act on or influence the dependent variable, using both economic and statistical theory to build the model.  Econometric modeling looks into the market power a business holds, game theory models, information theory models, etc.  The rationale is that neither economic theory nor statistical theory alone can provide enough knowledge to solve/describe a certain variable/state; thus, blending both is assumed to be better at solving/describing it (Reiss & Wolak, 2007).
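
As a minimal sketch of probability mining (assuming scikit-learn is available; the customer features, data values, and response labels are hypothetical), a logit model fit on past events can return the probability of meeting a target measure such as campaign response rate:

```python
# Probability-mining sketch: logit regression on past campaign responses.
# Assumes scikit-learn; feature names and data are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Past events: [age, prior_purchases, emails_opened] and whether the customer responded.
X = np.array([
    [25, 1, 2],
    [40, 5, 10],
    [33, 0, 1],
    [51, 8, 12],
    [29, 2, 4],
    [46, 6, 9],
])
y = np.array([0, 1, 0, 1, 0, 1])  # 1 = responded to the campaign

model = LogisticRegression(max_iter=1000).fit(X, y)

# Probability that a new customer responds (feeds the response-rate target measure).
new_customer = np.array([[38, 3, 6]])
print(model.predict_proba(new_customer)[0, 1])
```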

In the end, an econometric model can contain elements of probability mining, but a probability miner is not necessarily an econometric model.  Each of these models and miners can track and report on target measures.

Econometric modeling is a way to understand price and the pricing model, which is central to generating profits, by using both economic and statistical/probability principles to achieve a targeted measure.   Companies should use big data and a probability miner or econometric model to help them understand the meaning behind the data and extract actionable decisions that could help them meet or exceed a current target measure, compare and contrast themselves against their current competition, and understand their current customers.

Two Slightly Different Applications

  1. Probability mining has been used to gauge a customer’s affinity for and responses toward a new product through profiling current and/or new customers (Hoptroff & Kufyba, 2001). Companies and marketing firms work on these models to assign a probability of attracting new customers to a new or existing product or service. The results can give indications as to whether or not the company could meet the Target Measures.
  2. We have Marketing Strategy Plans A, B, and C, and we want to use econometric modeling to understand how cost-effective each marketing strategy plan would be with respect to the same product/product mix at different price points. This would be cause-and-effect modeling (Hoptroff, 1992). Thus, the model should help predict which strategy would produce the most revenue, which is one of our main target measures; a sketch follows this list.
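
A hedged sketch of how such a cause-and-effect model could be set up, assuming statsmodels and pandas are available; the strategies, price points, and revenue figures are invented for illustration and are not drawn from Hoptroff (1992):

```python
# Econometric-style sketch: revenue as a function of price and marketing strategy.
# Assumes statsmodels and pandas; all data below is hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

data = pd.DataFrame({
    "strategy": ["A", "A", "B", "B", "C", "C", "A", "B", "C"],
    "price":    [9.99, 12.99, 9.99, 12.99, 9.99, 12.99, 14.99, 14.99, 14.99],
    "revenue":  [1200, 1350, 1100, 1500, 1000, 1250, 1400, 1600, 1300],
})

# Revenue modeled on price plus strategy dummies (C(strategy) creates the dummies).
model = smf.ols("revenue ~ price + C(strategy)", data=data).fit()
print(model.params)

# Predicted revenue for each strategy at one candidate price point.
candidates = pd.DataFrame({"strategy": ["A", "B", "C"], "price": [12.99] * 3})
print(model.predict(candidates))
```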

An example of using probability mining is Amazon’s online shopping experience. As the consumer adds items to the shopping cart, Amazon in real time begins to apply probabilistic mining to find out what other items this consumer might purchase (Pophal, 2014), based on what has happened before, through the creation of profiles, and says “Others who purchased X also bought Y, Z, and A.”  This phrasing almost implies that these items form a set that will enhance your overall experience, nudging you to buy more.  For instance, buyers of a $600 Orion telescope also bought this $45 hydrogen-alpha filter (used to point the telescope toward the sun and, for example, watch planets transit in front of it).
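
A toy sketch of the co-purchase counting behind “others who purchased X also bought…”; Amazon’s real system is far more sophisticated, and the orders below are invented:

```python
# Toy co-purchase affinity: count items that appear in baskets alongside item X.
from collections import Counter

orders = [
    {"orion telescope", "h-alpha filter", "star chart"},
    {"orion telescope", "h-alpha filter"},
    {"orion telescope", "tripod"},
    {"coffee maker", "coffee beans"},
]

def also_bought(item, orders, top_n=3):
    """Return the items most frequently purchased together with `item`."""
    counts = Counter()
    for basket in orders:
        if item in basket:
            counts.update(basket - {item})
    return counts.most_common(top_n)

print(also_bought("orion telescope", orders))
# -> [('h-alpha filter', 2), ('star chart', 1), ('tripod', 1)]
```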

The Federal Reserve Board and its members have been using econometric modeling for the past 30 years for forecasting economic conditions and quantitative policy analysis (Brayton, Levin, Tryon, & Williams, 1997).  The model began in 1966 with the help of the academic community and the Division of Research and Statistics, using the technology available at the time, and became operational in 1970.  It had approximately 60 behavioral equations, with a long-run neoclassical growth model, factor demands, and a life-cycle model of consumption.  Brayton et al. (1997) go on to say that this model was used primarily for analyzing the stabilization effects of monetary and fiscal policies, as well as other governmental policies’ effects on the economy.

Resources:

Business Intelligence: Effectiveness

Non-profit hospitals are in a constant state of trying to improve their services and drive down costs. One of the ways they do this is by turning to Lean Six Sigma techniques and IT to identify opportunities to save money and improve the overall patient experience. Six Sigma relies on data and measurements to determine opportunities for continuous improvement; thus, to aid in the hospital’s goals, a Business Intelligence (BI) program was developed (Topaloglou & Barone, 2015).

Key Components of the structure

For an effective BI program, the responsible people/stakeholders (Actors) are identified so that we define who is responsible for setting the business strategies (Goals).  The strategy must be supported by the right business processes (Objects), and the right people must be assigned as accountable for each process.  Each of these processes has to be measured (Indicators) to inform the right people/stakeholders on how the business strategy is doing.  All of this is documented in a key document (called AGIO), which is essentially a data definition dictionary that serves as a common core solution (Topaloglou & Barone, 2015).  This means that there is one set of variable names and definitions.
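
As a rough illustration only (the representation is an assumption, not Topaloglou and Barone’s (2015) implementation), one AGIO-style dictionary entry could be captured in a small data structure:

```python
# Sketch: one entry of an AGIO-style data definition dictionary (illustrative only).
from dataclasses import dataclass

@dataclass
class AGIOEntry:
    actor: str            # who is accountable
    goal: str             # the business strategy that actor owns
    indicator: str        # the measure that tracks progress toward the goal
    business_object: str  # the business process being measured

entry = AGIOEntry(
    actor="Emergency Department Manager",
    goal="Reduce the percentage of patients leaving without being seen",
    indicator="Percentage of patients left without being seen",
    business_object="Physician initial assessment process",
)
print(entry)
```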

Implementation of the above structure has to take into account the multiple business levels and their needs.  Once the implementation is completed and buy-in from all stakeholders has occurred, the business can experience its benefits.  The benefits are: end users can make strategic, data-based decisions and act on them; a shift in attitudes toward the use and usefulness of information; a change in the perception of data scientists from developers to problem solvers; data that can be acted on immediately; continuous improvement as a byproduct of the BI system; real-time views with drill-down into data details, enabling more data-driven decisions and actions; the development of meaningful dashboards that support business queries; etc. (Topaloglou & Barone, 2015).

Knowledge management systems fit into the structure

“Healthcare delivery is a distributed process,” where patients can receive care from family doctors, clinicians, ER staff, specialists, acute care, etc. (Topaloglou & Barone, 2015).  Each of the people involved in healthcare delivery has vital knowledge about the patient that needs to be captured and transferred correctly; thus, hospital reports help capture that knowledge.  Knowledge also lies in how the patient flows in and out of sections of the hospital, and executives need to see metrics on how all of these systems work together.  Generating a knowledge management distributed database system (KMDBS) ties all of this data together from these different sources to provide the best care for patients, identify areas for continual improvement, and present it all in a neat little portal (and dashboards) for ease of use and ease of knowledge extraction (Topaloglou & Barone, 2015).  The goal is to unify all the knowledge from multiple sources into one system by coming up with a common core set of definitions, variables, and metrics.  The common core is defined so that everyone can understand the data in the KMDBS and look up information if there are any questions.  The development team took this into account, and after meeting with the different business levels, the solution that was developed in-house provided all staff with a system that used their collective knowledge to draw out key metrics that aid them in data-driven decisions for continuous improvement of the services they provide to their patients.
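
One modest way to picture the common-core idea is renaming each source system’s fields onto the shared definitions before combining them; the source systems, field names, and mapping below are hypothetical, and pandas is assumed:

```python
# Sketch: mapping source-specific field names onto common-core definitions (pandas assumed).
import pandas as pd

er_visits = pd.DataFrame({"pt_id": [1, 2], "arrive_ts": ["08:00", "09:15"]})
clinic_visits = pd.DataFrame({"patient_number": [2, 3], "checkin_time": ["10:30", "11:00"]})

# Common-core column names shared across the KMDBS.
common_core = {"pt_id": "patient_id", "arrive_ts": "arrival_time",
               "patient_number": "patient_id", "checkin_time": "arrival_time"}

unified = pd.concat([
    er_visits.rename(columns=common_core).assign(source="ER"),
    clinic_visits.rename(columns=common_core).assign(source="Clinic"),
], ignore_index=True)
print(unified)
```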

One example

Topaloglou and Barone (2015) present the following example (a small sketch computing the indicator follows the list):

  • Actor: Emergency Department Manager
  • Goal: Reduce the percentage of patients leaving without being seen
  • Indicator: Percentage of patients left without being seen
  • Object: Physician initial assessment process
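
To make the indicator concrete, here is a hedged sketch of how it might be computed from visit records; the record layout and values are hypothetical:

```python
# Sketch: computing the "left without being seen" indicator from hypothetical ER records.
visits = [
    {"patient_id": 1, "seen_by_physician": True},
    {"patient_id": 2, "seen_by_physician": False},  # left without being seen
    {"patient_id": 3, "seen_by_physician": True},
    {"patient_id": 4, "seen_by_physician": True},
]

left_without_being_seen = sum(1 for v in visits if not v["seen_by_physician"])
indicator = 100.0 * left_without_being_seen / len(visits)
print(f"Left without being seen: {indicator:.1f}%")  # 25.0%
```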

 

Resources:

Business Intelligence: Data Mining Success

For data mining success, one must follow a data mining process. There are many processes out there; here are two:

  • From Fayyad, Piatetsky-Shapiro, and Smyth (1996)
    1. Data -> Selection -> Target Data -> Preprocessing -> Preprocessed Data -> Transformation -> Transformed Data -> Data Mining -> Patterns -> Interpretation/Evaluation -> Knowledge (sketched as code after this list)
  • From Padhy, Mishra, and Panigrahi (2012)
    1. Business understanding -> Data understanding -> Data Preparation -> Modeling -> Evaluation -> Deployment
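
Read as a chain of small steps, the first pipeline looks roughly like the schematic below; the function bodies are placeholders, not Fayyad et al.’s (1996) algorithms:

```python
# Schematic of the KDD chain from Fayyad et al. (1996); bodies are placeholders.
def selection(data):          return data          # pick the target data
def preprocessing(data):      return data          # clean: missing values, noise, errors
def transformation(data):     return data          # reduce dimensions, recode features
def data_mining(data):        return ["pattern"]   # apply the mining algorithm
def interpretation(patterns): return {"knowledge": patterns}  # evaluate the patterns

def kdd_process(raw_data):
    """Data -> Selection -> Preprocessing -> Transformation -> Mining -> Interpretation."""
    return interpretation(data_mining(transformation(preprocessing(selection(raw_data)))))

print(kdd_process([{"example": 1}]))
```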

Success has as much to do with the events that lead up to the main event as with the main event itself; thus, what is done to the data before data mining matters if the mining is to proceed successfully. Fayyad et al. (1996) note that data mining is just a subset of the knowledge discovery process, where data mining provides the algorithms/math that help reach the final goal.  Looking at each of the individual processes, we can see that they are slightly different, yet the same.  Another key thing to note is that we can move back and forth (i.e., iterate) between the steps in these processes.  Both processes suppose that data is being pulled from a knowledge database or data warehouse, where the data should be cleaned (uniformly represented, with missing data, noise, and errors handled) and accessible (with access paths to the data provided).

Pros/Challenges

Without the pre-processing stage or data preparation phase, we will never be able to reduce the high dimensionality in the data sets (Fayyad et al., 1996).  High dimensionality increases the size of the data and thus the need for more processing time, which may not be advantageous for a real-time data feed into the model derived from data mining.  Also, with all this data, you run the risk that the model derived through the data mining process will pick up spurious patterns, which will not be easily generalizable or even understandable for descriptive purposes (Fayyad et al., 1996).  Descriptive purposes means data mining for the sake of understanding the data, whereas predictive purposes means data mining for the sake of predicting the next result from a set of input variables from a data source (Fayyad et al., 1996; Padhy et al., 2012).  Thus, to avoid this high-dimensionality problem, we must understand the problem, understand why we have the data we have and what data is needed, and reduce the dimensions to the bare essentials.
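
A brief sketch of trimming dimensionality before mining, assuming scikit-learn; the synthetic matrix stands in for a wide, high-dimensional data set:

```python
# Dimensionality-reduction sketch with PCA (scikit-learn assumed; data is synthetic).
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))          # 200 records, 50 raw dimensions

pca = PCA(n_components=5)               # keep only the 5 strongest directions
X_reduced = pca.fit_transform(X)

print(X.shape, "->", X_reduced.shape)   # (200, 50) -> (200, 5)
print(pca.explained_variance_ratio_.sum())  # fraction of variance retained
```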

Another challenge that arises from data mining, if we skip the selection step, data understanding, or data mining algorithm selection, is overfitting.  Fayyad et al. (1996) define selection as choosing the key data you need to feed into the model and choosing the right data mining algorithm, both of which influence the results.  Understanding the problem will allow you to select the right data dimensions, as mentioned above, as well as the data mining algorithm (Padhy et al., 2012).  Overfitting is when a data mining algorithm tries not only to derive general patterns in the data but also to describe the noisy data (Fayyad et al., 1996).  Through the selection process, you can pick data with reduced noise to avoid an overfitting problem.  Also, Fayyad et al. (1996) suggest that solutions should include cross-validation, regularization, and other statistical analysis.  Overfitting issues can also be mitigated by understanding what you are looking for before using data mining, which aids in the evaluation/interpretation process (Padhy et al., 2012).
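
A small sketch of the two remedies mentioned above, cross-validation and regularization, assuming scikit-learn; the labels below are pure noise, so a well-regularized, cross-validated model should score near chance instead of pretending it found a pattern:

```python
# Cross-validated, regularized model on noisy synthetic data (scikit-learn assumed).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 20))
y = rng.integers(0, 2, size=300)        # pure noise: there is no real pattern to find

# C controls L2 regularization strength (smaller C = stronger regularization).
model = LogisticRegression(C=0.1, max_iter=1000)
scores = cross_val_score(model, X, y, cv=5)
print(scores.mean())  # should hover near 0.5, i.e. no spurious "fit" to the noise
```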

Cons/Opportunities

Variety in big data changes with time, so a data-mined model applied unchanged will at some point become either outdated (no longer relevant) or invalid.  This is the case in social media: if we try to read posts without focusing on one type of post, it would be hard to say that one particular data pattern model derived from data mining remains valid.  Thus, previously defined patterns are no longer valid as data rapidly changes with respect to time (Fayyad et al., 1996).  We would have to solve this by incrementally modifying, deleting, or augmenting the defined patterns in the data mining process, but since data can vary in real time, at the drop of a hat, this can be quite hard to do (Fayyad et al., 1996).
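
One way to incrementally modify a mined model as the data drifts is online learning; the sketch below uses scikit-learn’s SGDClassifier with partial_fit on synthetic, drifting batches, which is only one possible stand-in for the incremental updating Fayyad et al. (1996) describe:

```python
# Incremental-update sketch: refit the model batch by batch as the data drifts.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(2)
model = SGDClassifier()
classes = np.array([0, 1])

for batch in range(5):                       # each batch arrives later in time
    drift = batch * 0.5                      # the feature distribution keeps shifting
    X = rng.normal(loc=drift, size=(100, 3))
    y = (X[:, 0] > drift).astype(int)
    model.partial_fit(X, y, classes=classes) # update the old patterns instead of discarding them

print(model.predict(rng.normal(loc=2.0, size=(3, 3))))
```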

Missing data and noisy data are very prevalent in meteorology; we cannot sample the entire atmosphere at every point at every time.  We send up weather balloons 2-4 times a day from two points in a US state at a time.  We then try to feed that into a model for predictive purposes.  However, we have a bunch of gaps in the data.  What happens if a weather balloon is a dud and we get no data?  Then we have missing data, which is a problem with the data.  How are we supposed to rely on the solution derived through data mining if the data is either missing or noisy?  Fayyad et al. (1996) said that such data sets were “not designed with discovery in mind,” so we must include statistical strategies to define what these values should be.  One of the strategies meteorologists use is data interpolation.  There are many types of interpolation, ranging from simple nearest-neighbor schemes to complex Gaussian ones.
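
A brief sketch of gap-filling by interpolation, assuming pandas; the temperature series and the missing sounding are invented:

```python
# Interpolation sketch for a gap in hypothetical upper-air temperature data (pandas assumed).
import numpy as np
import pandas as pd

temps = pd.Series([21.0, 19.5, np.nan, 16.0, 14.5],
                  index=[0, 6, 12, 18, 24])   # hours; the 12-hour sounding was a dud

print(temps.interpolate(method="linear"))     # straight-line fill for the missing value
# Fancier schemes (nearest-neighbor, spline, Gaussian-process fills) follow the same idea.
```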

Resources: