Data Analysis of Qualitative Data

Each of the methods has at its core a thematic analysis of the data: methodically and categorically linking data, phrases, sentences, paragraphs, etc. to a particular theme.  Grouping these themes by their thematic properties helps in understanding the data and in developing meaningful themes that aid in building a conclusion to the central question.

Ethnographic content analysis (Herron, 2015): Thick descriptions (collections of field notes that describe and record learning, along with the researcher's perceptions) help in the creation of cultural themes (themes related to behaviors or an underlying action) from which the information is interpreted.

Phenomenological data analysis (Kerns, 2014): Connections among different classes of data are made through a thematic analysis, from which results can be derived.

Case study analysis (Hartsock, 2014): By organizing the data within a specific case design and treating each distinct data set as a case study, one can derive general themes within each individual case.  Once all these general themes are identified, one should look for cross-case themes.

Grounded theory data analysis (Falciani-White, 2013): Code the data by comparing incidents/data to a category (breaking down, analyzing, comparing, labeling, and categorizing the data into meaningful units), then integrate the categories by their properties in order to identify a few themes from which a theory can be derived in a systematic manner.

Quant: Compelling Topics

Field (2013) states that quantitative and qualitative methods are complementary, not competing, approaches to solving the world's problems, although the two methods are quite different from each other.  Simply put, quantitative methods are utilized when the research contains variables that are numerical, and qualitative methods are utilized when the research contains variables that are based on language (Field, 2013).  Thus, central to quantitative research and methods is understanding the numerical, ordinal, or categorical dataset and what the data represents.  This can be done through descriptive statistics, where the researcher uses statistics to help describe a data set, or through inferential statistics, where conclusions can be drawn about the data set (Miller, n.d.).

Field (2013) and Schumacker (2014) defined central tendency as an all-encompassing term describing the "center of a frequency distribution" through the commonly used measures: mean, median, and mode.  Outliers, missing values, multiplying by a constant, and adding a constant are factors that affect the central tendency (Schumacker, 2014).  Besides looking at a single central tendency measure, researchers can also compare the mean and median to understand how skewed the data are and in which direction.  Heavily skewed distributions increase the distance between these two values, and if the mean is less than the median, the distribution is negatively skewed (Field, 2013).  To understand the distribution better, other measures like the variance and standard deviation can be used.

Variance and standard deviation are measures of dispersion, with the variance being a measure of average dispersion (Field, 2013; Schumacker, 2014).  Variance is a numerical value that describes how the observed data values are spread across the data distribution and how they differ from the mean on average (Huck, 2011; Field, 2013; Schumacker, 2014).  A smaller variance indicates that the observed data values are closer to the mean, and vice versa (Field, 2013).
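
As a minimal sketch of these descriptive measures, the snippet below (Python's standard library only; the scores are made up for illustration) computes the central tendency and dispersion values discussed above:

```python
# Descriptive statistics sketch: central tendency and dispersion.
import statistics

scores = [2, 3, 3, 4, 5, 5, 5, 6, 7, 15]  # hypothetical data; note the outlier (15)

mean = statistics.mean(scores)      # pulled upward by the outlier
median = statistics.median(scores)  # resistant to the outlier
mode = statistics.mode(scores)      # most frequent value
# Mean > median suggests positive skew; mean < median suggests negative skew.
print(f"mean={mean:.2f}, median={median}, mode={mode}")

# Dispersion: sample variance (average squared distance from the mean) and
# its square root, the sample standard deviation.
print(f"variance={statistics.variance(scores):.2f}, stdev={statistics.stdev(scores):.2f}")
```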

Rarely is every member of the population studied; instead, a sample from that population is randomly taken to represent the population for analysis in quantitative research (Gall, Gall, & Borg, 2006).  At the end of the day, the insights gained from this type of research should be impersonal, objective, and generalizable.  To generalize the results of the research, the insights gained from a sample of data must rely on the correct mathematical procedures for using probabilities and information, that is, statistical inference (Gall et al., 2006).  Gall et al. (2006) stated that statistical inference dictates the order of procedures; for instance, a hypothesis and a null hypothesis must be defined before a statistical significance level, which in turn must be defined before calculating a z or t statistic value.  Essentially, statistical inference allows quantitative researchers to make inferences about a population.  Researchers must remember, however, where in that population the data was generated and collected during the quantitative research process.
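
A short sketch of that order of procedures, assuming SciPy is available and using hypothetical sample values, might look like this:

```python
# Statistical inference sketch: hypotheses and alpha are fixed before testing.
from scipy import stats

# H0: the population mean equals 100; H1: it does not.
alpha = 0.05  # significance level, defined before calculating the statistic
sample = [104, 98, 110, 95, 102, 107, 99, 101, 105, 103]  # hypothetical sample

t_stat, p_value = stats.ttest_1samp(sample, popmean=100)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
print("Reject H0" if p_value < alpha else "Fail to reject H0")
```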

Most flaws in research methodology exist because validity and reliability weren't established (Gall et al., 2006), so it is important to ensure a valid and reliable assessment instrument.  When using any existing survey as an assessment instrument, one should report the instrument's development, items, scales, and reports on reliability and validity from past uses (Creswell, 2014; Joyner, 2012).  Permission must be secured for using any instrument, and that permission placed in the appendix (Joyner, 2012).  The validity of the assessment instrument is key to drawing meaningful and useful statistical inferences (Creswell, 2014).

Through sampling a population and using a valid and reliable survey instrument for assessment, attitudes and opinions about a population can be correctly inferred from the sample (Creswell, 2014).  Sometimes a survey instrument doesn't fit those in the target group, and thus it would not produce valid or reliable inferences for the targeted population.  One must select a targeted population and determine the size of that stratified population (Creswell, 2014).

Parametric statistics are inferential, based on random sampling from a distinct population, and assume the sample data can make strict inferences about the population's parameters; thus tests like t-tests and F-tests (ANOVA) can be used (Huck, 2011; Schumacker, 2014).  Nonparametric statistics, or "assumption-free tests," are used for tests on ranked data, like the Mann-Whitney U-test, Wilcoxon signed-rank test, Kruskal-Wallis H-test, and chi-square (Field, 2013; Huck, 2011).

First, there is a need to define the types of data: continuous data is interval/ratio data, and categorical data is nominal/ordinal data.  The table below is modified from Schumacker (2014), with data added from Huck (2011):

| Statistic | Dependent Variable | Independent Variable |
|---|---|---|
| Analysis of Variance (ANOVA): one way | Continuous | Categorical |
| t-Test: single sample | Continuous | — |
| t-Test: independent groups | Continuous | Categorical |
| t-Test: dependent (paired) groups | Continuous | Categorical |
| Chi-square | Categorical | Categorical |
| Mann-Whitney U-test | Ordinal | Ordinal |
| Wilcoxon signed-rank test | Ordinal | Ordinal |
| Kruskal-Wallis H-test | Ordinal | Ordinal |
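
As a hedged sketch of how this table guides test selection (assuming SciPy and made-up group values), the snippet below runs a parametric t-test on continuous data from two independent groups alongside its nonparametric, rank-based counterpart:

```python
# Test selection sketch: parametric vs. nonparametric for two independent groups.
from scipy import stats

group_a = [23.1, 25.4, 22.8, 26.0, 24.3]  # continuous outcome, group A
group_b = [21.0, 22.5, 20.9, 23.2, 21.8]  # continuous outcome, group B

t_stat, p_t = stats.ttest_ind(group_a, group_b)      # continuous DV, categorical IV
u_stat, p_u = stats.mannwhitneyu(group_a, group_b)   # rank-based alternative

print(f"t-test: t={t_stat:.3f}, p={p_t:.3f}")
print(f"Mann-Whitney: U={u_stat:.1f}, p={p_u:.3f}")
```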

Meaningful results should be reported along with their statistical significance, confidence intervals, and effect sizes (Creswell, 2014).  If the results from a statistical test have a low probability of occurring by chance (5% or 1% or less), then the statistical test is considered significant (Creswell, 2014; Field, 2013; Huck, 2011).  Statistical significance tests can have the same effect yet result in different values (Field, 2013).  With large sample sizes, even small differences can show up as statistically significant, while in smaller samples large differences may be deemed insignificant (Field, 2013).  Statistically significant results allow the researcher to reject a null hypothesis but do not test the importance of the observations made (Huck, 2011).  Huck (2011) stated that two main factors that could influence whether a result is statistically significant are the quality of the research question and the research design.
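
To illustrate the sample-size point, the sketch below (assuming NumPy and SciPy, with simulated groups) applies the same true mean difference at two sample sizes; the difference will typically fail to reach significance in the small sample yet be highly significant in the large one:

```python
# Sample size vs. statistical significance sketch.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)  # fixed seed for reproducibility
for n in (10, 1000):             # small vs. large sample per group
    a = rng.normal(loc=100, scale=15, size=n)
    b = rng.normal(loc=103, scale=15, size=n)  # same true difference of 3 points
    t, p = stats.ttest_ind(a, b)
    print(f"n={n:5d}: t={t:7.3f}, p={p:.4f}")
```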

Huck (2011) suggested that after statistical significance is calculated and the researcher can either reject or fail to reject the null hypothesis, an effect size analysis should be conducted.  The effect size allows researchers to objectively measure the magnitude, or practical significance, of the research findings by looking at the differential impact of the variables (Huck, 2011; Field, 2013).  Field (2013) gives one way of measuring the effect size, Cohen's d: d = (mean1 − mean2) / (standard deviation).  If d = 0.2 there is a small effect, if d = 0.5 a moderate effect, and if d = 0.8 or more a large effect (Field, 2013; Huck, 2011).  This could be the reason why a statistical test yields a statistically significant value while further analysis with effect size shows that those statistically significant results do not explain much of what is happening in the total relationship.
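
A minimal sketch of Cohen's d for two independent groups follows, using hypothetical scores and the pooled sample standard deviation as one common choice for the denominator:

```python
# Effect size sketch: Cohen's d with a pooled standard deviation.
import statistics

group_1 = [12, 14, 15, 13, 16, 15, 14]  # hypothetical scores
group_2 = [10, 11, 12, 11, 13, 12, 11]

n1, n2 = len(group_1), len(group_2)
s1, s2 = statistics.stdev(group_1), statistics.stdev(group_2)
pooled_sd = (((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)) ** 0.5

d = (statistics.mean(group_1) - statistics.mean(group_2)) / pooled_sd
print(f"Cohen's d = {d:.2f}")  # ~0.2 small, ~0.5 moderate, >=0.8 large
```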

In regression analysis, it should be possible to predict the dependent variable based on the independent variables, subject to two conditions: (1) the assessment tool is valid and reliable (Creswell, 2014), and (2) the sample size is large enough to conduct the analysis and draw statistical inferences about the population from the sample data collected (Huck, 2011).  Assuming these two conditions are met, regression analysis can be run on the data to create a prediction formula.  Regression formulas are useful for summarizing the relationship between the variables in question (Huck, 2011).

When modeling the dependent variable based upon the independent variable(s), the regression model with the strongest correlation is used, as it is that regression formula that best explains the variance between the variables.  However, just because the regression formula can predict some or most of the variance between the variables, it never implies causation (Field, 2013).  Correlations help define the strength of the regression formula in describing the relationships between the variables, and can vary in value from -1 to +1.  The closer the correlation coefficient is to -1 or +1, the better the regression formula is as a predictor of the variance between the variables; the closer it is to zero, the weaker the relationship between the variables (Field, 2013; Huck, 2011; Schumacker, 2014).  It should never be forgotten that correlation doesn't imply causation, but squaring the correlation value (r2) gives the percentage of the variance between the variables explained by the regression formula (Field, 2013).
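
A minimal sketch of a simple linear regression and its correlation (assuming SciPy and made-up paired observations) shows how squaring r yields the share of variance explained:

```python
# Regression and correlation sketch.
from scipy import stats

x = [1, 2, 3, 4, 5, 6, 7, 8]                      # independent variable
y = [2.1, 3.9, 6.2, 8.1, 9.8, 12.2, 13.9, 16.1]   # dependent variable

result = stats.linregress(x, y)
print(f"prediction formula: y = {result.slope:.2f}x + {result.intercept:.2f}")
print(f"r = {result.rvalue:.3f}, r^2 = {result.rvalue**2:.3f}")
# A strong r describes association only; it does not imply causation.
```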


References:

  • Creswell, J. W. (2014). Research design: Qualitative, quantitative and mixed method approaches (4th ed.). California: SAGE Publications. VitalBook file.
  • Field, A. (2013). Discovering statistics using IBM SPSS statistics (4th ed.). UK: Sage Publications. VitalBook file.
  • Gall, M. D., Gall, J., & Borg, W. (2006). Educational research: An introduction (8th ed.). Pearson Learning Solutions. VitalBook file.
  • Huck, S. W. (2011). Reading statistics and research (6th ed.). Pearson Learning Solutions. VitalBook file.
  • Joyner, R. L. (2012). Writing the winning thesis or dissertation: A step-by-step guide (3rd ed.). Corwin. VitalBook file.
  • Miller, R. (n.d.). Week 1: Central tendency [Video file]. Retrieved from http://breeze.careeredonline.com/p9fynztexn6/?launcher=false&fcsContent=true&pbMode=normal
  • Schumacker, R. E. (2014). Learning statistics using R. California: SAGE Publications. VitalBook file.

Differences between Quantitative and Qualitative Intros and Lit Reviews

Simply put, quantitative methods are utilized when the research contains variables that are numerical, and qualitative methods are utilized when the research contains variables that are based on language (Field, 2013).  Thus, each method's goals and procedures are quite different.  This difference in goals and procedures drives differences in how a research paper's introduction and literature review are written.

Introductions in a research paper allow the researcher to announce the problem and explain why it is important enough to be explored through a study.  Given that qualitative research may not have any known variables or theories, its introductions tend to vary tremendously (Creswell, 2014; Edmondson & McManus, 2007).  Creswell (2014) suggested that qualitative introductions can begin with a quote from one of the participants, state the researcher's personal story from a first-person or third-person viewpoint, or be written in an inductive style.  There is less variation in quantitative introductions because the best way to introduce the problem is to introduce the variables, from an impersonal viewpoint (Creswell, 2014).  Gaining further understanding of these variables' influence on a particular outcome is what drives the study in the first place.

The purpose of the literature review is for the researcher to share the results of other studies related to theirs, showing how their study relates to the bigger picture and which gaps in the knowledge they are trying to fill (Creswell, 2014).  Edmondson and McManus (2007) stated that when the field of research is nascent, the study becomes exploratory and qualitative in nature.  Given this exploratory nature, in qualitative methods the researchers write their literature review in an exploratory and inductive manner (Creswell, 2014).  Edmondson and McManus (2007) also stated that when the research field is mature, with plenty of related and existing research studies on the topic, a more quantitative approach is appropriate.  Given that there is a huge body of knowledge to draw from in quantitative methods, researchers tend to present a substantial amount of literature at the beginning and structure it in a deductive fashion (Creswell, 2014).  Framing the literature review in a deductive manner allows the researcher, at the end of the literature review, to state their research question(s) and hypotheses clearly and measurably (Creswell, 2014; Miller, n.d.).

To conclude, understanding which methodological approach best fits a research study can help drive how the introduction and literature review sections are crafted and written.

References

  • Creswell, J. W. (2014). Research design: Qualitative, quantitative and mixed method approaches (4th ed.). California: SAGE Publications. VitalBook file.
  • Edmondson, A. C., & McManus, S. E. (2007). Methodological fit in management field research. Academy of Management Review, 32(4), 1155–1179. http://doi.org/10.5465/AMR.2007.26586086
  • Field, A. (2013). Discovering statistics using IBM SPSS statistics (4th ed.). UK: Sage Publications. VitalBook file.

Quant: Getting Lost in the Numbers

It is easy to get lost in the numbers when doing quantitative research.  Below are suggestions that can help keep the focus on the people and organizations when dealing with the numbers that represent them.

In quantitative research, the data collected is numerical in nature.  Rarely is every member of the population studied; instead, a sample from that population is randomly taken to represent the population for analysis (Gall, Gall, & Borg, 2006).  At the end of the day, the insights gained from this type of research should be impersonal, objective, and generalizable.  To generalize the results of the research, the insights gained from a sample of data must rely on the correct mathematical procedures for using probabilities and information, that is, statistical inference (Gall et al., 2006).  Gall et al. (2006) stated that statistical inference dictates the order of procedures; for instance, a hypothesis and a null hypothesis must be defined before a statistical significance level, which in turn must be defined before calculating a z or t statistic value.

Essentially, statistical inference allows quantitative researchers to make inferences about a population, and researchers must remember where in that population the data was generated and collected during the quantitative research process.  However, it is easy to get lost in the numbers during quantitative research, so here is a list of some of the ways to keep the focus on the people and organizations behind the numbers that represent their population:

  • To design a quantitative research project, researchers must understand the purpose and rationale of their own research designs and research methods (Creswell, 2014).  Knowing the purpose and rationale helps the development of the research question(s) and hypothesis, and with a clear research question and hypothesis a researcher can design and review their data collection from people, organizations, or instruments.  It is when focusing on the methods section that researchers can keep their focus on the people and organizations: identifying the population, considering a stratified population before sampling, the sampling design and procedures, the selection process for the individuals, and which variables to study (their names, how they relate to the research question, and a description of their collection) (Creswell, 2014).
  • The numerical data used in quantitative research was generated and collected from people, a social group, an organizational entity, or an instrument.  A numerical value alone has no meaning or value to the research; but when the numerical value is paired with contextual information, it provides researchers a wealth of information on which to conduct their statistical analysis (Ahlemeyer-Stubbe & Coleman, 2014; Miller, n.d.a.).
  • Remember that each data point, row, or column represents a person, group, or thing, with all its features and bugs.  It is wise to create a metadata file that describes the data set's variables to help keep the focus on the people and organizations; see the sketch after this list.  In SPSS, the metadata section is called the "Variable View," and each person is represented as an entity or row of data in the "Data View" (Field, 2013; Miller, n.d.b.).
  • Data sets are never neutral, theory-free repositories; they require researchers to interpret the data through their personal lenses (Crawford, Miltner, & Gray, 2014).  One must gather and analyze data ethically to avoid social and legal concerns.  Thus, the researcher must be aware of how their analysis of the data could be used to cause harm to others or facilitate discrimination against disenfranchised groups of people (Robinson, 2015).
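
As one illustration of the metadata suggestion above, the sketch below (assuming pandas; the variables, codes, and labels are all hypothetical) loosely mirrors SPSS's "Variable View"/"Data View" split:

```python
# Metadata sketch: keep variable/code labels next to the coded data.
import pandas as pd

# "Data View": each row represents one person.
data = pd.DataFrame({
    "participant_id": [1, 2, 3],
    "satisfaction": [1, 3, 2],   # coded 1-3
    "department": [10, 20, 10],  # coded organizational unit
})

# "Variable View": what each code actually means.
metadata = {
    "satisfaction": {1: "Low", 2: "Medium", 3: "High"},
    "department": {10: "Research", 20: "Operations"},
}

# Pairing the numbers with their labels keeps the people behind the data in view.
readable = data.assign(
    satisfaction=data["satisfaction"].map(metadata["satisfaction"]),
    department=data["department"].map(metadata["department"]),
)
print(readable)
```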

References:

  • Ahlemeyer-Stubbe, A., & Coleman, S. (2014). A practical guide to data mining for business and industry. UK: Wiley-Blackwell. VitalBook file.
  • Crawford, K., Miltner, K., & Gray, M. L. (2014). Critiquing big data: Politics, ethics, epistemology (special section introduction). International Journal of Communication, 8, 1663–1672.
  • Creswell, J. W. (2014). Research design: Qualitative, quantitative and mixed method approaches (4th ed.). California: SAGE Publications. VitalBook file.
  • Field, A. (2013). Discovering statistics using IBM SPSS statistics (4th ed.). UK: Sage Publications. VitalBook file.
  • Gall, M. D., Gall, J., & Borg, W. (2006). Educational research: An introduction (8th ed.). Pearson Learning Solutions. VitalBook file.
  • Miller, R. (n.d.a.). Week 1: Central tendency [Video file]. Retrieved from http://breeze.careeredonline.com/p9fynztexn6/?launcher=false&fcsContent=true&pbMode=normal
  • Miller, R. (n.d.b.). Week 2: All about SPSS [Video file]. Retrieved from http://breeze.careeredonline.com/p99kywtldbw/?launcher=false&fcsContent=true&pbMode=normal
  • Robinson, S. C. (2015). The good, the bad, and the ugly: Applying Rawlsian ethics in data mining marketing. Journal of Mass Media Ethics, 30(1), 19–30. http://doi.org/10.1080/08900523.2014.985297

Quantitative vs. Qualitative Analysis

Field (2013) states that quantitative and qualitative methods are complementary, not competing, approaches to solving the world's problems, although the two methods are quite different from each other.  Creswell (2014) explains how quantitative and qualitative methods can be combined to study a phenomenon through what is called a "mixed methods" approach, which is out of scope for this discussion.  Simply put, quantitative methods are utilized when the research contains variables that are numerical, and qualitative methods are utilized when the research contains variables that are based on language (Field, 2013).  Thus, each method's goals and procedures are quite different.

Goals and procedures

Quantitative methods derive from a positivist, numerically driven epistemology (Joyner, 2012).  Quantitative methods use closed-ended questions, i.e., hypotheses, and collect their data numerically through instruments (Creswell, 2014).  In quantitative research, there is an emphasis on experiments, measurement, and a search for relationships via fitting data to a statistical model and observing a collection of data graphically to identify trends via deduction (Field, 2013; Joyner, 2012).  According to Creswell (2014), quantitative researchers build protections against biases and control for alternative explanations through experiments that are generalizable and replicable.  Quantitative studies can be experimental, quasi-experimental, causal-comparative, correlational, descriptive, or evaluative (Joyner, 2012).  According to Edmondson and McManus (2007), quantitative methodologies fit best when the underlying research theory is mature.  The maturity of the theory should drive researchers toward one method over the other, along the spectrum of quantitative, mixed, or qualitative methodologies (Creswell, 2014; Edmondson & McManus, 2007).

Comparatively, Edmondson and McManus (2007) stated that qualitative methodologies fit best when the underlying research theory is nascent.  Qualitative methods derive from a phenomenological view, the perceptions of people (Joyner, 2012).  Qualitative methods use open-ended questions, i.e., interview questions, and collect their data through observations of a situation (Creswell, 2014).  Qualitative research focuses on the meaning and understanding of a situation, where the researcher searches for meaning through interpretation of the data via induction (Creswell, 2014; Joyner, 2012).  Qualitative research could take the form of case studies or ethnographic, action, philosophical, historical, legal, or educational research, etc. (Joyner, 2012).

Commonalities and differences

A commonality between these two methods is that each has a question to answer, an identified area of interest (Creswell, 2014; Edmondson & McManus, 2007; Field, 2013; Joyner, 2012).  Each method requires a survey of the current literature to help develop the research question (Creswell, 2014; Edmondson & McManus, 2007).  Finally, each requires the design of a study to collect and analyze data to help answer that research question (Creswell, 2014; Edmondson & McManus, 2007; Field, 2013; Joyner, 2012).  Therefore, the similarities between these two methods lie in why research is conducted and, at a high level, in the what and the how of conducting it.  They differ in the particulars of the what and the how research is conducted.

The research question(s) can become a centralized question with or without sub-questions, but in quantitative research it is driven by a series of statistically testable theoretical hypotheses (Creswell, 2014; Edmondson & McManus, 2007).  For quantitative data analysis, statistical tests are done to seek relationships, with hopes of testing a theory-driven hypothesis and providing a precise model, via a collection of numerical measures and established constructs (Edmondson & McManus, 2007).  Given the need to statistically accept or reject theoretical hypotheses, the sample sizes for quantitative methods tend to be greater than those of qualitative methods (Creswell, 2014).  Qualitative research, by contrast, is driven by exploration and observations to examine its hypotheses (Creswell, 2014; Edmondson & McManus, 2007).  For qualitative data analysis, there should be an iterative and explorative content analysis, with hopes of building a new construct (Edmondson & McManus, 2007).  These are some of the many differences that exist between these two methods.

When are the advantages of quantitative methods maximized?

Based on Edmondson and McManus (2007), the best time to use quantitative methods is when the underlying theory of the research subject is mature.  Maturity consists of extensive literature that can be reviewed, the existence of theoretical constructs, and extensively tested measures (Edmondson & McManus, 2007).  Thus, the application of quantitative methods builds effectively on prior work and helps fill in the gap of knowledge on a particular topic, where qualitative and mixed methods would fail to do so.  Applying qualitative methods to a mature theory is reinventing the wheel, and applying mixed methods to it will uneven the status of the evidence (Edmondson & McManus, 2007).

References:

  • Creswell, J. W. (2014). Research design: Qualitative, quantitative and mixed method approaches (4th ed.). California: SAGE Publications. VitalBook file.
  • Edmondson, A. C., & McManus, S. E. (2007). Methodological fit in management field research. Academy of Management Review, 32(4), 1155–1179. http://doi.org/10.5465/AMR.2007.26586086
  • Field, A. (2013). Discovering statistics using IBM SPSS statistics (4th ed.). UK: Sage Publications. VitalBook file.
  • Joyner, R. L. (2012). Writing the winning thesis or dissertation: A step-by-step guide (3rd ed.). Corwin. VitalBook file.