Internal and External Validity

In quantitative research, a study is valid if one can draw meaning and inferences from the results based on the methodology employed.  There are three ways to look at validity: (1) content (do we measure what we intended to measure?), (2) predictive or concurrent (do the measurements match similar results; can we predict something?), and (3) construct (are we measuring hypothetical constructs or real concepts?).  Validity is not to be confused with reliability, i.e., consistency of measurement.  Creswell (2013) warns that if we modify an instrument or combine it with others, its validity and reliability could change, and in order to use it we must reestablish both.  Several threats to validity exist, either internal (history, maturation, regression, selection, mortality, diffusion of treatment, compensatory/resentful demoralization, compensatory rivalry, testing, and instrumentation) or external (interaction of selection and treatment, interaction of setting and treatment, and interaction of history and treatment).

Sample Validity Considerations: validity issues and their mitigation plans

Internal Validity Issues:

Hurricane intensities and tracks vary from year to year and even decade to decade.  Because this study covers the 2016 and 2017 seasons in the Atlantic Ocean basin, it may run into statistical regression (regression to the mean) issues: specific weather components may not be the only factors that raise or lower hurricane forecasting skill relative to the average.  One way to mitigate regression issues would be to eliminate storms whose forecast skill departs extremely from the average.  Those extreme departures will, over time, slightly shift the mean, but their results are too valuable to dismiss: discovering which weather components drive these extreme departures from average forecast skill is what motivates this project.  Removing them would therefore not fit this study and would defeat the purpose of knowledge discovery.

External Validity Issues: 

The Eastern Pacific, Central Pacific, and Atlantic Ocean basins share the same underlying dynamics that create, intensify, and steer tropical cyclones.  However, the three basins still behave differently, so there is an interaction-of-setting-and-treatment threat to the validity of this study's results. Results garnered in this study will not generalize beyond the Atlantic Ocean basin. The only way to mitigate this threat is to recommend that future research be conducted on each basin separately.


Data Tools: Case Study on Hadoop’s effectiveness

Case Study: Open source Cloud Computing Tools: A case study with a weather application

Focus on: Hadoop v0.20, a Platform-as-a-Service cloud solution with parallel processing capabilities

Cluster size: 6 nodes, with the Hadoop, Eucalyptus, and Django-Python cloud interfaces installed

Variables: Managing historical average temperature, rainfall, humidity, and weather-condition data per latitude and longitude across time, and mapping it on top of a Google Maps user interface

Data Source: Yahoo! Weather Page

Results/Benefits to the Industry:  The Hadoop platform was evaluated on ten criteria and compared to Eucalyptus and Django-Python, on a scale of 0-3, where 0 "indicates [a] lack of adequate feature support" and 3 "indicates that the particular tool provides [an] adequate feature to fulfill the criterion."

Table 1: The criterion matrix and numerical scores are adapted from the results of Greer, Rodriguez-Martinez, and Seguel (2010).

Criterion           Description                                          Score
Management Tools    Tools to deploy, configure, and maintain the system      0
Development Tools   Tools to build new applications or features              3
Node Extensibility  Ability to add new nodes without re-initialization       3
Use of Standards    Use of TCP/IP, SSH, etc.                                 3
Security            Built-in security as opposed to 3rd-party patches        3
Reliability         Resilience to failures                                   3
Learning Curve      Time to learn the technology                             2
Scalability         Capacity to grow without degrading performance           0
Cost of Ownership   Investments needed for usage                             2
Support             Availability of 3rd-party support                        3
Total                                                                       22
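As a quick arithmetic check on Table 1, the per-criterion Hadoop scores can be summed and compared against the totals the study reported for all three tools. Only Hadoop's column is broken out in the excerpt, so the Eucalyptus and Django-Python entries below are the reported totals, not per-criterion scores; Scalability is counted as 0 because it was not assessed.

```python
# Hadoop's per-criterion scores from Table 1 (0-3 scale).
hadoop_scores = {
    "Management Tools": 0,
    "Development Tools": 3,
    "Node Extensibility": 3,
    "Use of Standards": 3,
    "Security": 3,
    "Reliability": 3,
    "Learning Curve": 2,
    "Scalability": 0,   # not assessed in the study; treated as 0
    "Cost of Ownership": 2,
    "Support": 3,
}

total = sum(hadoop_scores.values())
print(total)  # 22, matching the reported total

# Reported totals for all three tools evaluated in the case study:
totals = {"Hadoop": total, "Eucalyptus": 18, "Django-Python": 20}
best = max(totals, key=totals.get)
print(best)  # Hadoop
```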

Eucalyptus scored 18 and Django-Python scored 20, making Hadoop the better solution for this case study.  The study noted that:

  • Management tools: configuration was done by hand with XML and text files rather than a graphical user interface
  • Development tools: an Eclipse plug-in aids in debugging Hadoop applications
  • Node extensibility: Hadoop can accept new nodes with no interruption in service
  • Use of standards: uses TCP/IP, SSH, SQL, JDK 1.6 (the Java standard), Python v2.6, and Apache tools
  • Security: password-protected user accounts and encryption
  • Reliability: fault tolerance is present, and the user is shielded from the effects of failures
  • Learning curve: it is not intuitive and required some experimentation after working through online tutorials
  • Scalability: not assessed due to the limits of the study (6 nodes is not enough)
  • Cost of ownership: to be effective, Hadoop needs a cluster, even if it is built from cheap machines
  • Support: third-party support exists for Hadoop

The authors note that Hadoop fails to provide a real-time response, and that batch code should therefore include email notifications sent at the start of a job, at key points of the iteration, and at the end when the output is ready.  Hadoop is slower than the other two solutions evaluated, but its fault-tolerance features make up for it.  For set-up and configuration, Hadoop is simple to use.
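The notification idea the authors raise can be sketched as a thin wrapper around a long-running batch job. This is a minimal illustration, not the study's code: the mail host, addresses, and job/checkpoint callables are hypothetical placeholders.

```python
import smtplib
from email.message import EmailMessage

def send_email(subject, body, host="localhost",
               sender="batch@example.com", to="analyst@example.com"):
    """Send a short status email (host and addresses are placeholders)."""
    msg = EmailMessage()
    msg["Subject"] = subject
    msg["From"] = sender
    msg["To"] = to
    msg.set_content(body)
    with smtplib.SMTP(host) as smtp:
        smtp.send_message(msg)

def run_batch(job, checkpoints=(), notify=send_email):
    """Run `job`, sending a notification at the start, at each
    checkpoint of the iteration, and when the output is ready."""
    notify("Job started", "The batch job has been submitted.")
    for i, step in enumerate(checkpoints, 1):
        step()
        notify(f"Checkpoint {i} reached", "Intermediate results available.")
    result = job()
    notify("Job finished", "Output is ready.")
    return result
```

Because the notifier is injected as a parameter, the same wrapper can be exercised in tests (or swapped for a logger) without a live mail relay.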

Was Hadoop used in the most ample manner?

In my opinion, and in the opinion of the authors, Hadoop was not fully used: the authors stated that they could not scale their research because the study was limited to a 6-node cluster. Hadoop is built to ingest and process big data sets from various sources and formats and to deliver data-driven insights, and the scalability features that address this point were not adequately exercised in this study.


Greer, M., Rodriguez-Martinez, M., & Seguel, J. (2010). Open source cloud computing tools: A case study with a weather application. Florida: IEEE Open Source Cloud Computing.

Quant: Lack of detail


Concerns about the lack of detail

In this scenario, there is a lack of detail, and to get subjects to participate in this research, Miller (n.d.) said: "People need to know the specifics." From the scenario described above, there is no indication of who these researchers are or what their credentials are.  Without a quick biography on the website, it is hard to discern whether these researchers are credible enough to conduct the research. The call for subjects also seems to lack a statement of purpose, which sets the stage and conveys the intent, objectives, and major idea of the study (Creswell, 2014).  The statement of purpose gives the reader (the prospective subjects) the reason these researchers want to examine the two styles of leadership; it demonstrates the problem statement and defines the specific research questions the researchers are studying (Creswell, 2014).  Creswell (2014) stated that effective purpose statements for quantitative research are written in deductive language and should include the variables, the relationships between the variables, the participants, and the research location.  In quantitative research, intent is demonstrated in the purpose statement by describing the relationships, or lack thereof, between the variables, to be found through either surveys or experiments.  Miller (n.d.) and Creswell (2014) stated that identifying a theory or conceptual framework is needed to build a strong statement of purpose.  Miller (n.d.) goes further, explaining that the call should state which two leadership-style theories or dimensions will be evaluated in this study.

There is no mention of whether the recruitment of subjects is part of a pilot study, which is used to help develop and try out methods and procedures, or of the main study, where the actual data for the study are collected (Gall, Gall, & Borg, 2006).  The methodology section of this call for subjects should have addressed this.  It should also address what type of instrument the researchers are using to collect data from the subjects.  There are two main types of quantitative data collection: surveys and experiments.  It is most likely that this study, recruiting subjects to examine two leadership styles, will use surveys as its means of quantitative data collection.  Creswell (2014) defines surveys as numerical data collected, studied, and analyzed from a sample of the population to find out participants' opinions and attitudes.  If done correctly, statistical inference can be applied to generalize the results gained from this study to the population these researchers are trying to understand with respect to these two leadership styles (Gall et al., 2006). Miller (n.d.) suggested that the surveys could ask about the subjects' opinions of or attitudes toward certain leadership-style traits, or could state a few scenarios and have the subjects select a multiple-choice answer.
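To make the idea of statistical inference from a survey sample concrete, here is a minimal sketch of a 95% confidence interval for a population proportion, using the standard normal approximation and assuming a simple random sample. The survey numbers are made up for illustration.

```python
import math

def proportion_ci(successes, n, z=1.96):
    """95% confidence interval for a population proportion
    estimated from a simple random sample (normal approximation)."""
    p = successes / n
    margin = z * math.sqrt(p * (1 - p) / n)
    return (p - margin, p + margin)

# Hypothetical survey result: 130 of 200 respondents prefer leadership style A.
low, high = proportion_ci(130, 200)
print(round(low, 3), round(high, 3))  # 0.584 0.716
```

Read as: if sampling were repeated many times, about 95% of such intervals would contain the true population proportion, which is the sense in which sample results generalize to the population.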

The survey instrument should be both valid and reliable.  Ideally, it has been used before in other studies, with only slight modifications to fit the parameters of this study, and it should be listed on the researchers' website.  A modified instrument may not hold the same validity and reliability as the original.  Moreover, if the study's instrument lacks validity or reliability, why should subjects participate and waste their time?  Validity and reliability ensure that the results captured through the instrument will be valid and meaningful (Creswell, 2014; Miller, n.d.).  If the current instrument is not fully valid and reliable, that could indicate a pilot study intended to refine the instrument and build its validity and reliability (Gall et al., 2006).  According to Creswell (2014), there are internal, external, and statistical-conclusion threats to validity that must be controlled or mitigated in order to draw the correct inferences about the population.

There is also no mention of the population these researchers are trying to study with respect to the two leadership styles.  If prospective subjects cannot tell whether they fall under the conditions of the population, they cannot tell whether applying would be a waste of time.  Creswell (2014) states that, depending on the population, certain instruments work better than others, while some are simply not well-suited enough to provide the validity and reliability needed to generalize results to that population.  The researchers could narrow their population by stating, for example, "This study aims to understand the relationship between X, Y, and Z, as displayed in A & B leadership styles, among the Latin(x)-American population in the state of Oklahoma, aged 25-35 and 45-55."  Subjects who do not fall under this population would then not need to apply, saving time for both prospective subjects and the researchers.  The call does not mention how the population has been narrowed into a few dimensions to fit the study, so one can assume these researchers may be trying to study the general population, which has a huge number of diverse dimensions that are impossible to study (Miller, n.d.).  The scenario also does not mention how the researchers plan to obtain a random selection from this population; submitting a call through their website would draw only a special type of population, which may or may not represent the population the researchers are trying to study.  The closer the sample represents the study's target population, the more powerful the statistical inference and the more representative the conclusions drawn about that population (Gall et al., 2006; Miller, n.d.).

Finally, subject-participation information that would entice participation is needed: how long will the survey take; is there compensation; and will subjects be informed of the results at the end of the survey?  If the survey takes too much time and the population these researchers are sampling does not have that time readily available, the participation rate will decrease.  The longer an assessment takes to fill out, the greater the need to compensate subjects.  There are two ways to compensate subjects in a study: hand out small amounts of compensation to each participant, or hold a random drawing at the conclusion of the study for 2-3 prizes of substantial size (Miller, n.d.).  Whether or not compensation is available, the researchers should consider whether there are at least some results or "lessons learned" that subjects would earn through participation in the study.


Quant: Validity and Reliability

The construction process of a survey that would ensure a valid & reliable assessment instrument

Most flaws in research methodology exist because validity and reliability were never established (Gall, Gall, & Borg, 2006). Thus, it is important to ensure a valid and reliable assessment instrument.  In using any existing survey as an assessment instrument, one should report the instrument's development, items, scales, and reports on reliability and validity from past uses (Creswell, 2014; Joyner, 2012).  Permission must be secured for using any instrument and placed in the appendix (Joyner, 2012).  The validity of the assessment instrument is key to drawing meaningful and useful statistical inferences (Creswell, 2014). Creswell (2014) stated that multiple types of validity can exist in instruments: content validity (measuring what we want), predictive or concurrent validity (measurements aligning with other results), and construct validity (measuring constructs or concepts).  Establishing validity in the assessment instrument helps ensure that it is the best instrument for the situation.  Reliability in assessment instruments exists when authors report that the instrument has internal consistency and has been tested multiple times to ensure stable results every time (Creswell, 2014).
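Internal consistency is commonly reported as Cronbach's alpha, which can be computed directly from an item-by-respondent score matrix. The sketch below assumes complete responses (no missing data) and uses made-up Likert-scale numbers purely for illustration.

```python
def cronbach_alpha(items):
    """Cronbach's alpha for internal consistency.
    `items` is a list of item-score lists, one list per survey item,
    each containing one score per respondent (same length)."""
    k = len(items)
    n = len(items[0])

    def var(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    # Total score per respondent across all items.
    totals = [sum(item[j] for item in items) for j in range(n)]
    return (k / (k - 1)) * (1 - sum(var(i) for i in items) / var(totals))

# Three 5-point Likert items answered by five respondents (made-up data):
items = [[4, 5, 3, 4, 2], [4, 4, 3, 5, 2], [5, 5, 2, 4, 3]]
print(round(cronbach_alpha(items), 2))  # 0.89
```

Values near 1 indicate that the items move together (high internal consistency); a common rule of thumb treats alpha of roughly 0.7 or above as acceptable.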

Unfortunately, picking an assessment instrument that does not match the content exactly will not benefit anyone, nor will the results be accepted by the greater community.  Modifying an instrument that does not quite match can damage the reliability of the new version, and it can take a huge amount of time to establish validity and reliability for that new version (Creswell, 2014).  Likewise, creating a brand-new assessment instrument would mean extensive pilot studies and tests, along with an explanation of how it was developed, to help establish the instrument's validity and reliability (Joyner, 2012).

Selecting a target group for the administration of the survey

Through sampling of a population with a valid and reliable survey instrument, the attitudes and opinions of that population can be correctly inferred from the sample (Creswell, 2014).  Thus, not only are validity and reliability important, but selecting the right target group for the survey is key.  Targeting a group for this survey means that the population from which information will be inferred must be stratified, which requires that the characteristics of the population be known ahead of time (Creswell, 2014; Gall et al., 2006). From this stratified population, a random sample of participants should be selected to ensure that statistical inference can be made for that population (Gall et al., 2006). Sometimes a survey instrument does not fit those in the target group, and thus it would not produce valid or reliable inferences for the targeted population. One must select a targeted population and determine the size of that stratified population (Creswell, 2014).  Finally, one must consider the sample size of the targeted group.
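The stratify-then-randomly-sample procedure described above can be sketched in a few lines. The sampling frame, the age-group strata (echoing the 25-35 / 45-55 example earlier in the text), and the 10% sampling fraction are all hypothetical.

```python
import random

def stratified_sample(population, strata_key, fraction, seed=0):
    """Draw a proportional random sample from each stratum.
    `population` is a list of records, `strata_key` maps a record to
    its stratum label, and `fraction` is the sampling fraction."""
    rng = random.Random(seed)  # fixed seed for reproducibility
    strata = {}
    for person in population:
        strata.setdefault(strata_key(person), []).append(person)
    sample = []
    for members in strata.values():
        k = max(1, round(len(members) * fraction))
        sample.extend(rng.sample(members, k))
    return sample

# Hypothetical frame of 100 people split into two age-group strata:
frame = [{"id": i, "age_group": "25-35" if i < 60 else "45-55"}
         for i in range(100)]
sample = stratified_sample(frame, lambda p: p["age_group"], fraction=0.1)
# 6 drawn from the 60-person stratum + 4 from the 40-person stratum
```

Proportional allocation keeps each stratum represented in the sample in the same ratio as in the population, which is what lets inferences generalize back to the stratified population.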

Administrative procedure to maximize the consistency of the survey

Once a stratified population and a random sample from it have been carefully selected, there is a need to maximize the consistency of the survey.  Researchers must take into account the availability of the sample and the means of delivery: mail, email, websites, and other survey tools are all ways to gather data (Creswell, 2014). However, mail has a low rate of return (Miller, n.d.), so face-to-face methods or the use of online providers may be the best bet to maximize the consistency of the survey.


Creswell, J. W. (2014). Research design: Qualitative, quantitative, and mixed methods approaches (4th ed.). California: SAGE Publications, Inc. VitalBook file.

Gall, M. D., Gall, J. P., & Borg, W. R. (2006). Educational research: An introduction (8th ed.). Pearson Learning Solutions. VitalBook file.

Joyner, R. L. (2012). Writing the winning thesis or dissertation: A step-by-step guide (3rd ed.). Corwin. VitalBook file.

Miller, R. (n.d.). Week 5: Research study construction. [Video file]. Retrieved from