Data Visualization Tools in Healthcare

Purpose and Impact of Data Visualization in the Healthcare Industry

There are many applications of data analytics in the healthcare industry: physician and ambulatory care centers, hospitals and health systems, managed care plans and HMOs, genomic studies, and accountable care organizations (Cyranoski, 2015; eInfochips, n.d.). Visualizing health data therefore helps tell stories from the relevant data so that data-driven decisions and actions can be made (California HealthCare Foundation [CHCF], 2014; eInfochips, n.d.). Cleardata (n.d.) suggested that data visualization should follow these best practices: use relevant data; begin by understanding what should be communicated and design toward that; make visualizations easy for the consumer; ensure HIPAA compliance when showing data; and create visualizations that lead to data-driven decisions and actions. Therefore, before selecting the right visualization tool, a presentation approach must be considered that takes into account the audience's personal level of expertise, the visualization methods, and the interactivity of the visualization (CHCF, 2014).

It is not enough to analyze the relevant data for data-driven decisions; relevant visualizations of that data must also be selected to enable those decisions (eInfochips, n.d.). There are many ways to visualize data so that key facts are highlighted succinctly and with style: tables and rankings, bar charts, line graphs, pie charts, stacked bar charts, tree maps, choropleth maps, cartograms, pinpoint maps, and proportional symbol maps (CHCF, 2014). These plots, charts, maps, and graphs can be animated, static, or interactive, and they can stand alone as a single image or be combined into dashboards, scorecards, or infographics (CHCF, 2014; eInfochips, n.d.).

The CHCF (2014) recommended data visualization tools that everyone can use: Google Charts & Maps, Tableau Public, Mapbox, Infogram, Many Eyes, iCharts, and Datawrapper. CHCF (2014) also recommended data visualization tools for developers, such as High Charts, TileMill, D3.js, FLOT, Fusion Charts, OpenLayers, and JSMap, whereas eInfochips (n.d.) suggested visualization tools like Tableau, R, and Spotfire. Note that Many Eyes has since been shut down by IBM and replaced by Watson Analytics (Machlis, 2011).

A summary of three data visualization tools used in health care (Machlis, 2011):

Tool Name: R
Description: A statistical analysis tool that can do not only simple arithmetic and regression analysis, but also complex data preprocessing, data mining, machine learning, and static data visualizations.
Advantages: A library of code supported by the community, which consists of subject matter experts.
Disadvantages: Runs as a command-line program, so a graphical user interface needs to be installed separately.
Runs on: Linux, Mac OS X, Unix, and Windows
Skill level required: Advanced beginner

Tool Name: Tableau, or Tableau Public (free version)
Description: A tool mostly used for interactive visualization that can produce all the visualizations mentioned in this post by dragging and dropping variables.
Advantages: The drag-and-drop interface makes quick work of analyses that would otherwise take time to code manually.
Disadvantages: Data stored in Tableau Public is hosted on the web for free for others to use, which may make data privacy hard to control; otherwise, the full software costs over $1K for a single user. Customization of the interface is limited, although it can be extended through code.
Runs on: Windows and Mac OS X
Skill level required: Beginner to intermediate

Tool Name: Google Chart Tools
Description: A self-contained application for storing data in the cloud and visualizing it anywhere, through the use of JavaScript visualization libraries.
Advantages: Integrates with other Google products such as Google Spreadsheets, and the JavaScript library is heavily documented.
Disadvantages: Requires coding to build the visualizations, the JavaScript source is not accessible, and users must rely on continued Google support.
Runs on: Any device with a web browser
Skill level required: Advanced to expert

 

 


Using R and Spark for health care

A case study on the use of R in the healthcare field, by Pereira and Noronha (2016):

R and RStudio were used to examine patient health and disease records located in Electronic Medical Records (EMR) for fraud detection. The anomaly detection workflow consists of a mapper that filters the data based on geo-location; a reducer that aggregates the data by the extreme values of claimed cost per disease and calculates the difference; and, finally, code that flags records exceeding a 60% cost-fraud threshold. It was found that as the geo-location resolution increased, the number of anomalies detected increased.
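As a rough illustration of that filter-aggregate-threshold pattern, the sketch below uses dplyr on a hypothetical claims data frame; the object and column names (claims, region, disease, claim_cost) and the toy values are assumptions for illustration only, not the paper's data or code.

## A minimal sketch of the anomaly-detection idea described above, using dplyr.
## The data frame "claims" and its columns are hypothetical stand-ins.
library(dplyr)
claims = data.frame(region = c("A", "A", "B", "B", "B"),
                    disease = c("d1", "d1", "d1", "d2", "d2"),
                    claim_cost = c(100, 450, 120, 300, 310),
                    stringsAsFactors = FALSE)
flagged = claims %>%
  group_by(region, disease) %>%                           # "map": partition by geo-location and disease
  mutate(avg_cost = mean(claim_cost),                     # "reduce": aggregate the typical cost per group
         excess = (claim_cost - avg_cost) / avg_cost) %>% # relative difference from the group average
  filter(excess > 0.6)                                    # flag claims exceeding a 60% threshold
flagged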

R and RStudio have also been used for big data analytics to predict diabetes, based on symptoms, from the Health Information System (HIS) that houses patient information. For predicting diabetes, the authors used a classification algorithm (a decision tree) with a 70%-30% training-test split, and then plotted the false positive rate versus the true positive rate. This plot showed skill in predicting diabetes.
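A minimal sketch of that general approach is shown below, using the publicly available PimaIndiansDiabetes data from the mlbench package as a stand-in; the dataset, seed, and variable names are assumptions for illustration, not the authors' HIS data or code.

## Sketch only: a decision tree with a 70/30 train-test split and an ROC-style evaluation.
install.packages(c("mlbench", "rpart", "ROCR"))
library(mlbench); library(rpart); library(ROCR)
data(PimaIndiansDiabetes)
set.seed(1)
n = nrow(PimaIndiansDiabetes)
trainIndex = sample(1:n, floor(0.7 * n))                       # 70% of the cases for training
trainData = PimaIndiansDiabetes[trainIndex, ]
testData  = PimaIndiansDiabetes[-trainIndex, ]
fit = rpart(diabetes ~ ., data = trainData, method = "class")  # decision tree classifier
probs = predict(fit, newdata = testData, type = "prob")[, "pos"]
pred = prediction(probs, testData$diabetes)                    # false positive vs. true positive rates
perf = performance(pred, measure = "tpr", x.measure = "fpr")
plot(perf, main = "ROC curve for the decision tree")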

A case study on the use of Spark in the healthcare field, by Pita et al. (2015):

Data quality in healthcare data is poor, particularly in data from the Brazilian Public Health System. Spark was used for data processing to improve quality through deterministic and probabilistic record linkage across multiple databases. Record linkage is a technique that uses attributes common to multiple databases to identify 1-to-1 matches. Spark workflows were created to carry out record linkage by (1) analyzing all data in each database and identifying common attributes with high probabilities of linkage; (2) pre-processing the data, where it is transformed, anonymized, and cleaned into a single format so that all the attributes can be compared for a 1-to-1 match; (3) linking records with deterministic and probabilistic algorithms; and (4) statistically analyzing the results to evaluate accuracy. Over 397M comparisons were made in 12 hours. They concluded that accuracy depends on the size of the data: the bigger the data, the more accurate the record linking.
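For a flavor of step (3) only, the sketch below performs a purely deterministic linkage in Spark from R via sparklyr by joining two hypothetical tables on shared attributes; the table and column names (patients_a, patients_b, name, birth_date, municipality) are illustrative assumptions and this is not the Pita et al. workflow itself.

## Sketch only: deterministic record linkage in Spark via sparklyr on made-up data.
library(sparklyr)
library(dplyr)
sc = spark_connect(master = "local")
patients_a = data.frame(name = c("ana silva", "joao souza"),
                        birth_date = c("1980-01-02", "1975-07-09"),
                        municipality = c("salvador", "recife"),
                        record_id_a = 1:2, stringsAsFactors = FALSE)
patients_b = data.frame(name = c("ana silva", "maria lima"),
                        birth_date = c("1980-01-02", "1990-03-15"),
                        municipality = c("salvador", "fortaleza"),
                        record_id_b = 1:2, stringsAsFactors = FALSE)
tbl_a = copy_to(sc, patients_a, overwrite = TRUE)
tbl_b = copy_to(sc, patients_b, overwrite = TRUE)
## Deterministic linkage: a record pair matches only when all common attributes agree exactly.
links = inner_join(tbl_a, tbl_b, by = c("name", "birth_date", "municipality"))
collect(links)
spark_disconnect(sc)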

References

  • Pereira, J. P., & Noronha, V. (2016). Anomalies Detection and Disease Prediction in Healthcare Systems using Big Data Analytics. Retrieved from http://www.aijet.in/v3/1608001.pdf
  • Pita, R., Pinto, C., Melo, P., Silva, M., Barreto, M., & Rasella, D. (2015). A Spark-based Workflow for Probabilistic Record Linkage of Healthcare Data. In EDBT/ICDT Workshops (pp. 17-26).

Adv Quant: Association Rules in R

Introduction

Online radio keeps track of everything you play, and this information is used to recommend additional music to you. This large dataset, containing roughly 300,000 records from about 15,000 users, was mined with the arules package in R to recommend new music to this community of radio listeners.

Results

 5ip1.PNG

Figure 1. The output of the apriori command, which filtered the data for rules with a support of 0.01, a confidence of 0.5, and a max length of 3.

5ip2.PNG

Figure 2. The output of apriori when searching for subsets of the rules: (a) all rules with lift greater than 5, (b) all rules with confidence greater than 0.6, (c) all rules with support greater than 0.02 and confidence greater than 0.6, (d) all rules where Rihanna appears on the right-hand side, and (e) the top ten rules with the largest lift.

 5ip3.PNG

Figure 3. The output of the apriori command, which filtered the data for rules with a support of 0.001, a confidence of 0.5, and a max length of 2.

5ip4.1.PNG 5ip4.2.PNG 5ip4.3

Figure 4. The output of apriori for the second rule set, when searching for the same subsets of rules: (a) all rules with lift greater than 5, (b) all rules with confidence greater than 0.6, (c) all rules with support greater than 0.02 and confidence greater than 0.6, (d) all rules where Rihanna appears on the right-hand side, and (e) the top ten rules with the largest lift.

Discussion

There are a total of 289,956 data points, with 15,001 unique users listening to 1,005 unique artists. From this dataset, there are 48 rules with a support of 0.01, a confidence of 0.5, and a max length of 3. Inspecting the first five rules (Figure 1) shows each rule and its corresponding support, confidence, and lift, provided it meets the restrictions placed above. There are also 93 rules with a support of 0.001, a confidence of 0.5, and a max length of 2; inspecting the first five of these rules (Figure 3) again shows each rule with its corresponding support, confidence, and lift.

Apriori counts the transactions within the "playtrans" transaction matrix. According to Hahsler et al. (n.d.), the most-used constraints for apriori are support and confidence, where the lower the confidence or support values, the more rules the algorithm will generate. This relationship is illustrated between the two rule sets: with the higher support value, fewer rules were generated. Essentially, support is the proportion (%) of transactions in the dataset that contain the exact itemset, whereas confidence is the proportion (%) of transactions for which the rule is correct (Hahsler et al., n.d.). The effect of varying only the support value can be seen in the number of subset rules for each rule set (Figures 2 & 4). When the support level was reduced, the number of rules with Rihanna on the right-hand side increased (Figures 2d & 4d), and this held across all the inspected subsets, even though the support, confidence, and lift thresholds used for the subsets are the same between the rule sets.

Finally, the greater the lift value, the stronger the association rule (Hahsler et al., n.d.). When the constraints were relaxed, higher lift values could be observed (Figures 1-4); as the constraints are weakened, more rules are shown, so larger lift values can appear. Analyzing the top 10 lift values between both rule sets (Figures 2e and 4e), the top rule under the stricter constraints doesn't appear in the top 10 lift values under the relaxed constraints. With the stricter constraints (Figure 2e), users who listen to "the pussycat dolls" have a higher chance of listening to "rihanna" than to any other artist. With the relaxed constraints (Figure 4e), users who listen to "madvillain" have a higher chance of listening to "mf doom" than to any other artist, and that association is stronger than the "the pussycat dolls"-"rihanna" rule. Similar associations can be read from the data in Figures 1-4.
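As a toy illustration of these three measures (the artists and counts below are made up, not taken from the last.fm data): if 100 playlists include 20 with artist A, 25 with artist B, and 10 with both, then for the rule {A} => {B}, support = 10/100 = 0.10, confidence = 10/20 = 0.50, and lift = 0.50 / 0.25 = 2. The same arithmetic in R:

## Toy example with hypothetical counts: support, confidence, and lift for the rule {A} => {B}.
n_total = 100   # total playlists (transactions)
n_A     = 20    # playlists containing artist A
n_B     = 25    # playlists containing artist B
n_AB    = 10    # playlists containing both A and B
support    = n_AB / n_total               # 0.10
confidence = n_AB / n_A                   # 0.50
lift       = confidence / (n_B / n_total) # 2.0: listeners of A are twice as likely as average to play B
c(support = support, confidence = confidence, lift = lift)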

 Code

setwd("C:/Users/fj998d/Documents/R/dataSets")
LastFM = read.csv("lastfm.csv", header = F, sep = ",") ## (Celma, 2009)
#
##
### ----------------------------------------------------------------------------------------------
## Variables: UserID = V1; ArtistID = V2; ArtistName = V3; PlayCount = V4
### ----------------------------------------------------------------------------------------------
## Apriori info (Hahsler, Grun, Hornic, & Buchta, n.d.):
##   Constraints for apriori are known as support and confidence; the lower the confidence or support, the more rules.
##     * Support is the proportion (%) of transactions in the data set with that exact item.
##     * Confidence is the proportion (%) of transactions where the rule is correct.
##   The greater the lift, the stronger the association rule; thus lift is a deviation measure of the total rule
##   support from the support expected under independence.
##   Other constraints used:
##     * Max length defines the maximum size of mined frequent item rules.
### ----------------------------------------------------------------------------------------------
##
#
head(LastFM)
length(LastFM$V1)
summary(levels(LastFM$V1))
summary(levels(LastFM$V2))
## arules package for association rules
install.packages("arules")
library(arules)
## Computational environment for mining association rules and frequent item sets.
## We need to manipulate the data a bit before using arules: we split the data in the vector
## x into groups defined in vector f (Hahsler, Grun, Hornic, & Buchta, n.d.)
playlists = split(x=LastFM[,"V2"], f=LastFM$V1)  # Group the artist IDs into one list element per fan (R, n.d.c.)
playlists = lapply(playlists, unique)            # Keep only the unique artists in each fan's playlist (R, n.d.a.; R, n.d.b.)
playtrans = as(playlists, "transactions")        # Convert the list of playlists into a transactions object for arules
## Create association rules with a support of 0.01 and confidence of 0.5, with a max length of 3,
## which will show the support that listening to one artist gives to other artists; in other words,
## providing lift to an associated artist.
musicrules = apriori(playtrans, parameter=list(support=0.01, confidence=0.5, maxlen=3)) # filter the data for rules
musicrules
inspect(musicrules[1:5])
## Choose any subset
inspect(subset(musicrules, subset=lift>5))                         # all the rules with a lift > 5
inspect(subset(musicrules, subset=confidence>0.6))                 # all the rules with a confidence > 0.6
inspect(subset(musicrules, subset=support>0.02 & confidence>0.6))  # all the rules within a particular support and confidence range
inspect(subset(musicrules, subset=rhs%in%"rihanna"))               # all the rules with rihanna on the right-hand side
inspect(head(musicrules, n=10, by="lift"))                         # the top 10 rules with the largest lift
## Create association rules with a support of 0.001 and confidence of 0.5, with a max length of 2
artrules = apriori(playtrans, parameter=list(support=0.001, confidence=0.5, maxlen=2)) # filter the data for rules
artrules
inspect(artrules[1:5])
## Choose any subset
inspect(subset(artrules, subset=lift>5))
inspect(subset(artrules, subset=confidence>0.6))
inspect(subset(artrules, subset=support>0.02 & confidence>0.6))
inspect(subset(artrules, subset=rhs%in%"rihanna"))
inspect(head(artrules, n=10, by="lift"))
## Write all the rules to CSV files
write(musicrules, file="musicRulesFromApriori.csv", sep = ",", col.names = NA)
write(artrules, file="artistRulesFromApriori.csv", sep = ",", col.names = NA)

 Reference

Adv Quant: Decision Trees in R

Classification, Regression, and Conditional Tree Growth Algorithms

The variables used for the tree growth algorithms are the log of the benign prostatic hyperplasia amount (lbph), the log of the prostate-specific antigen (lpsa), the Gleason score (gleason), the log of capsular penetration (lcp), and the log of the cancer volume (lcavol), used to understand and predict tumor spread (seminal vesicle invasion = svi).

Results

5db3f1.PNG

Figure 1: Visualization of cross-validation results, for the classification tree (left) and regression tree (right).

5db3f2

Figure 2: Classification tree (left), regression tree (center), and conditional tree (right).

5db3f3.PNG

Figure 3: Summarization of tree data: (a) classification tree, (b) regression tree, and (c) conditional tree.

Discussion

For the classification tree growth algorithm, the root node is seminal vesicle invasion, which indicates tumor spread in this dataset. The cross-validation results show that there is only one split in the tree, with a cross-validated relative error of 0.71429 for the first split (Figure 1 & Figure 3a) and a cross-validation standard deviation of 0.16957 (Figure 3a). The variable used to split the tree was the log of capsular penetration (Figure 2), with the split at a log of capsular penetration < 1.791.

Next, for the regression tree growth algorithm, there are three leaf nodes because the algorithm split the data twice. In this case, the relative error for the first split is 1.00931 with a standard deviation of 0.18969, and at the second split the relative error is 0.69007 with a standard deviation of 0.15773 (Figure 1 & Figure 3b). The tree was split first on the log of capsular penetration at < 1.791 and then on the log of prostate-specific antigen at < 2.993 (Figure 2). It is interesting that the first split occurred at the same value for these two different tree growth algorithms, but that the relative errors and standard deviations differed and that the regression tree created one more level.

Finally, the conditional tree growth algorithm produced a split at < 1.749 for the log of capsular penetration at the 0.001 significance level and at < 2.973 for the log of prostate-specific antigen, also at the 0.001 significance level (Figure 2 & Figure 3c). The results are similar to the regression tree, with the same number of leaf nodes and similar split values, but more information (the significance of each split) is gained from the conditional tree growth algorithm than from the classification and regression tree growth algorithms.

Code

#
### ----------------------------------------------------------------------------------------------
## Use the prostate cancer dataset available in R, in which biopsy results are given for 97 men.
## Goal: Predict tumor spread in this dataset of 97 men who had undergone a biopsy.
## The measures to be used for prediction are BPH=lbph, PSA=lpsa, Gleason Score=gleason, CP=lcp,
## and size of prostate=lcavol.
### ----------------------------------------------------------------------------------------------
##
install.packages("lasso2")
library(lasso2)
data("Prostate")
install.packages("rpart")
library(rpart)
## Grow a classification tree
classification = rpart(svi~lbph+lpsa+gleason+lcp+lcavol, data=Prostate, method="class")
printcp(classification) # display the cross-validation results
plotcp(classification)  # visualize the cross-validation results
plot(classification, uniform = T, main="Classification Tree for prostate cancer") # plot the tree
text(classification, use.n = T, all = T, cex=.8)                                  # add text labels to the tree
## Grow a regression tree
Regression = rpart(svi~lbph+lpsa+gleason+lcp+lcavol, data=Prostate, method="anova")
printcp(Regression) # display the cross-validation results
plotcp(Regression)  # visualize the cross-validation results
plot(Regression, uniform = T, main="Regression Tree for prostate cancer") # plot the tree
text(Regression, use.n = T, all = T, cex=.8)                              # add text labels to the tree
install.packages("party")
library(party)
## Grow a conditional inference tree
conditional = ctree(svi~lbph+lpsa+gleason+lcp+lcavol, data=Prostate)
conditional # display the results
plot(conditional, main="Conditional inference tree for prostate cancer")

References

Adv Quant: Bayesian analysis in R

Introduction

Bayes' theorem describes a conditional probability that takes into account prior knowledge but updates itself when new data become available (Hubbard, 2010; Smith, 2015). The formulation of Bayes' theorem is p(θ|y) = p(θ)P(y|θ) / Σ[p(θ)P(y|θ)], where p(θ) is the prior probability and P(y|θ) is the likelihood (Cowles, Kass, & O'Hagan, 2009).
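As a quick numeric illustration of the formula (the numbers are hypothetical, not taken from the airline data): suppose the prior probability of a delay is 0.2, and the observed conditions y have likelihood 0.5 among delayed flights and 0.1 among on-time flights. The posterior then updates the prior upward:

## Hypothetical worked example of p(theta|y) = p(theta)*P(y|theta) / SUM(p(theta)*P(y|theta))
prior      = c(noDelay = 0.8, delay = 0.2)   # p(theta), assumed prior probabilities
likelihood = c(noDelay = 0.1, delay = 0.5)   # P(y|theta), assumed likelihoods of the observed conditions
posterior  = (prior * likelihood) / sum(prior * likelihood)
posterior                                    # the delay component is about 0.556, up from the prior of 0.2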

The Delayed Airplanes Dataset consists of airplane flights from Washington D.C. into New York City.  The date range for this data is for the entire month of February 2016, and there are 702 cases to be studied.

Results

4ip1.PNG

Figure 1: Histogram showcasing the density of flight delays that are 15 minutes or longer.

4ip2.PNG

Figure 2: Summary data for the variables in this Bayesian analysis, before splitting into training and testing sets.

4ip3.PNG

Figure 3: Bayesian Prediction of the flight delay data from Washington, D.C. to New York City, NY.

4ip4

Figure 4: Bayesian prediction results versus the test data results, where false negatives are encircled in blue, while false positives are encircled in red.

Discussion

The histogram (Figure 1) shows that there are almost three times as many flights departing on time from Washington, D.C. to New York City as there are delayed flights. The summary data confirm this (Figure 2).

The summary (Figure 2) states that 77.813% of the flights were not delayed by 15 minutes or more, for the cases with data. There are null values in the departure time, delayed-15-minutes-or-more, and weather delay variables. To know the percentage of flights per day of the week, carrier, destination, etc., the prior probabilities need to be calculated, as done in the code below.

About 77.2973% of the training set didn't have a delay, while 22.7027% had a delay of 15 minutes or more (from the tdelay variable). These values are close to those in the summary above (Figure 2), so the training data can be trusted even though a random sampling wasn't taken. The reason for not taking a random sampling is to be able to predict into the future, given that 60% of the data had already been collected.

Comparing both sets of histograms (Figure 1 and Figure 3), the distribution in the first histogram is binomial, whereas the posterior distribution in the second histogram resembles a positively skewed distribution. This was an expected result described by Smith (2015), which is why the author states that the prior distribution has an effect on the posterior distribution.

The Bayesian prediction results tend to produce a number of false negatives compared to the real data, thus indicating more Type II error than Type I error. From the code below, the probability of a result of 0.5 or larger is 15.302%.

Code

#
## Locate the data, filter it, and pull it into R (R, n.d.b.)
#
setwd("C:/Users/XXX/Documents/R/dataSets")
airplaneData = read.csv("022016DC2NYC_1022370032_T_ONTIME.csv", header = T, sep = ",")
#
##
### ----------------------------------------------------------------------------------------------
##  Data Source: http://www.transtats.bts.gov/DL_SelectFields.asp?Table_ID=236&DB_Short_Name=On-Time
##        Dependent:   Departure Delay Indicator, 15 minutes or more (Dep_Del15)
##        Independent: Arrival airports of Newark-EWR, Kennedy-JFK, and LaGuardia-LGA (Origin)
##        Independent: Departure airports of Baltimore-BWI, Dulles-IAD, and Reagan-DCA (Dest)
##        Independent: Carriers (Carrier)
##        Independent: Hours of departure (Dep_Time)
##        Independent: Weather conditions (Weather_Delay)
##        Independent: Monday = 1, Tuesday = 2, ... Sunday = 7 (Day_Of_Week)
### ----------------------------------------------------------------------------------------------
##  Bayes theorem => p(theta|y) = p(theta)*P(y|theta)/(SUM(P(theta)*P(y|theta))) (Cowles, Kass, & O'Hagan, 2009)
### ----------------------------------------------------------------------------------------------
##
#
## Create a data.frame
delay = data.frame(airplaneData)
## Factoring and labeling the variables (Taddy, n.d.)
delay$DEP_TIME = factor(floor(delay$DEP_TIME/100))
delay$DAY_OF_WEEK = factor(delay$DAY_OF_WEEK, labels = c("M", "T", "W", "R", "F", "S", "U"))
delay$DEP_DEL15 = factor(delay$DEP_DEL15)
delay$WEATHER_DELAY = factor(ifelse(delay$WEATHER_DELAY>=1,1,0)) # (R, n.d.a.)
delay$CARRIER = factor(delay$CARRIER, levels = c("AA","B6","DL","EV","UA"))
levels(delay$CARRIER) = c("American", "JetBlue", "Delta", "ExpressJet", "UnitedAir")
## Quick look at the data
delayed15 = as.numeric(levels(delay$DEP_DEL15)[delay$DEP_DEL15])
hist(delayed15, freq=F, main = "Histogram of Delays of 15 mins or longer", xlab = "time >= 15 mins (1) or time < 15 (0)")
summary(delay)
### Create the training and testing data (60%/40%)
ntotal = length(delay$DAY_OF_WEEK)             # Total number of data points, assigned dynamically
ntrain = sample(1:ntotal, floor(ntotal*(0.6))) # Randomly select 60% of the row indices for training
ntest = ntotal - floor(ntotal*(0.6))           # The number of test cases (40% of the data)
trainingData = cbind(delay$DAY_OF_WEEK[ntrain], delay$CARRIER[ntrain], delay$ORIGIN[ntrain], delay$DEST[ntrain], delay$DEP_TIME[ntrain], delay$WEATHER_DELAY[ntrain], delayed15[ntrain])
testingData  = cbind(delay$DAY_OF_WEEK[-ntrain], delay$CARRIER[-ntrain], delay$ORIGIN[-ntrain], delay$DEST[-ntrain], delay$DEP_TIME[-ntrain], delay$WEATHER_DELAY[-ntrain], delayed15[-ntrain])
## Partition the training data by outcome (no delay vs. delay)
trainFirst  = trainingData[trainingData[,7]<0.5,]
trainSecond = trainingData[trainingData[,7]>0.5,]
### Prior probabilities = p(theta) (Cowles, Kass, & O'Hagan, 2009)
## Dependent variable: time delayed >= 15
tdelay = table(delayed15[ntrain])/sum(table(delayed15[ntrain]))
### Prior probabilities within the partitioned training data
## Independent variable: Day of the week (% of flights that occurred on each day of the week)
tday1 = table(trainFirst[,1])/sum(table(trainFirst[,1]))
tday2 = table(trainSecond[,1])/sum(table(trainSecond[,1]))
## Independent variable: Carrier (% of flights that occurred on each carrier)
tcarrier1 = table(trainFirst[,2])/sum(table(trainFirst[,2]))
tcarrier2 = table(trainSecond[,2])/sum(table(trainSecond[,2]))
## Independent variable: Origin (% of flights that occurred from each originating airport)
tOrigin1 = table(trainFirst[,3])/sum(table(trainFirst[,3]))
tOrigin2 = table(trainSecond[,3])/sum(table(trainSecond[,3]))
## Independent variable: Destination (% of flights that occurred into each destination airport)
tdest1 = table(trainFirst[,4])/sum(table(trainFirst[,4]))
tdest2 = table(trainSecond[,4])/sum(table(trainSecond[,4]))
## Independent variable: Departure time (% of flights that occurred at each time of the day)
tTime1 = table(trainFirst[,5])/sum(table(trainFirst[,5]))
tTime2 = table(trainSecond[,5])/sum(table(trainSecond[,5]))
## Independent variable: Weather (% of flights delayed because of adverse weather conditions)
twx1 = table(trainFirst[,6])/sum(table(trainFirst[,6]))
twx2 = table(trainSecond[,6])/sum(table(trainSecond[,6]))
### Likelihoods = p(y|theta) (Cowles, Kass, & O'Hagan, 2009)
likelihood1 = tday1[testingData[,1]]*tcarrier1[testingData[,2]]*tOrigin1[testingData[,3]]*tdest1[testingData[,4]]*tTime1[testingData[,5]]*twx1[testingData[,6]]
likelihood2 = tday2[testingData[,1]]*tcarrier2[testingData[,2]]*tOrigin2[testingData[,3]]*tdest2[testingData[,4]]*tTime2[testingData[,5]]*twx2[testingData[,6]]
### Predictions using Bayes theorem = p(theta|y) = p(theta)*P(y|theta)/(SUM(P(theta)*P(y|theta))) (Cowles, Kass, & O'Hagan, 2009)
Bayes = (likelihood2*tdelay[2])/(likelihood2*tdelay[2]+likelihood1*tdelay[1])
hist(Bayes, freq=F, main="Bayesian Analysis of flight delay data")
plot(delayed15[-ntrain]~Bayes, main="Bayes results versus actual results for flights delayed >= 15 mins", xlab="Bayes Analysis Prediction of which cases will be delayed", ylab="Actual results from test data showing delayed cases")
## The proportion of cases misclassified when predictions are rounded at 0.5
densityMeasure = table(delayed15[-ntrain], floor(Bayes+0.5))
probabilityOfXlarger = (densityMeasure[1,2]+densityMeasure[2,1])/ntest
probabilityOfXlarger

References

Adv Quant: K-means classification in R

The explanatory variables in the logistic regression are both the type of loan and the borrowing amount.

4dbf1.PNG

Figure 1: The summary output of the logistic regression based on the type of loan and the borrowing amount.

The logistic regression shows statistical significance at the 0.01 level for the amount variable and for the loan purposes of used car and radio/television (Figure 1). Thus, the fitted regression equation (on the log-odds scale) comes out to be:

log-odds(default) = -0.9321 + 0.0001330(amount) - 1.56(purpose is used car) - 0.6499(purpose is radio/television)
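To read that equation as a probability, the log-odds can be passed through the logistic function. A brief sketch using the coefficients above; the loan amount of 2,000 for a used-car loan is a made-up example value, not a case from the data:

## Converting the fitted log-odds to a default probability with the logistic function.
logOdds = -0.9321 + 0.0001330*2000 - 1.56*1 - 0.6499*0  # hypothetical used-car loan of 2,000
pDefault = 1/(1 + exp(-logOdds))                        # equivalently, plogis(logOdds)
pDefault                                                # roughly a 9.7% predicted chance of default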

4dbf2.PNG

Figure 2: The comparative output of the logistic regression prediction versus actual results.

When comparing the predictions to the actual values (Figure 2), the mean and minimum scores of both are similar, but the other summary values are not. When the prediction values are rounded to the nearest whole number, the prediction accuracy is 73%.

K-means classification, on the 3 continuous variables: duration, amount, and installment.

In K-means clustering, the data points are grouped by minimizing the Euclidean distance between each point and its cluster mean (Ahlemeyer-Stubbe & Coleman, 2014). In this exercise, there are two clusters, with cluster sizes of 825 (no defaults) and 175 (defaults), and the between-cluster sum of squares as a share of the total sum of squares is 69.78%. The matrix of cluster centers is shown below (Figure 3).

4dbf3

Figure 3: K-means cluster center values, per variable.

Cross-validation with k = 5 for the nearest neighbor.

K-nearest neighbor (k = 5) assigns a data point to a group by having its 5 nearest neighbors vote on that data point, and it is particularly useful when the outcome is binary or categorical (Berson, Smith, & Thearling, 1999). In this exercise, the percentage of correct classifications from the trained and predicted classification is 69%. However, logistic regression in this scenario produced a higher prediction rate of 73%; thus, for this exercise and this dataset, logistic regression was more useful in predicting the default rate than the k-nearest neighbor algorithm at k = 5.

Code

#
## The German credit data contains attributes and outcomes on 1,000 loan applications.
## Data source: https://archive.ics.uci.edu/ml/machine-learning-databases/statlog/german/german.data
## Metadata file: https://archive.ics.uci.edu/ml/machine-learning-databases/statlog/german/german.doc
#
## Reading the data from source and displaying the top entries.
credits = read.csv("https://archive.ics.uci.edu/ml/machine-learning-databases/statlog/german/german.data", header = F, sep = " ")
head(credits)
#
##
### ----------------------------------------------------------------------------------------------
## The two outcomes are success (defaulting on the loan) and failure (not defaulting).
## The explanatory variables in the logistic regression are both the type of loan and the borrowing amount.
### ----------------------------------------------------------------------------------------------
##
#
## Defining and re-leveling the variables (Taddy, n.d.)
default = credits$V21 - 1 # default is true (1) when V21 = 2
amount = credits$V5
purpose = factor(credits$V4, levels = c("A40","A41","A42","A43","A44","A45","A46","A48","A49","A410"))
levels(purpose) = c("newcar", "usedcar", "furniture/equip", "radio/TV", "apps", "repairs", "edu", "retraining", "biz", "other")
## Create a new matrix called "cred" with the three defined variables (Taddy, n.d.)
credits$default = default
credits$amount  = amount
credits$purpose = purpose
cred = credits[,c("default","amount","purpose")]
head(cred[,])
summary(cred[,])
## Create a design matrix, such that factor variables are turned into indicator variables
Xcred = model.matrix(default~., data=cred)[,-1]
Xcred[1:5,]
## Creating training and prediction datasets: select 900 rows for estimation and 100 for testing
set.seed(1)
train = sample(1:1000,900)
## Defining which x and y values in the design matrix will be for training and for testing
xtrain = Xcred[train,]
xtest = Xcred[-train,]
ytrain = cred$default[train]
ytest = cred$default[-train]
## Logistic regression
datas = data.frame(default=ytrain, xtrain)
creditglm = glm(default~., family=binomial, data=datas)
summary(creditglm)
## Predicting default from the test data (Alice, 2015; UCLA: Statistical Consulting Group, 2007)
testdata = data.frame(default=ytest, xtest)
testdata[1:5,]
testingdata = testdata[,2:11] # removing the variable default from the data matrix
testingdata$defaultPrediction = predict(creditglm, newdata=testdata, type = "response")
results = data.frame(ytest, testingdata$defaultPrediction)
summary(results)
head(results,10)
## Percentage of correct predictions when the predicted probabilities are rounded
percentOfCorrect = 100*(sum(ytest==round(testingdata$defaultPrediction))/100)
percentOfCorrect
#
##
### ----------------------------------------------------------------------------------------------
##  K-means classification, on the 3 continuous variables: duration, amount, and installment.
### ----------------------------------------------------------------------------------------------
##
#
install.packages("class")
library(class)
## Defining and re-leveling the variables (Taddy, n.d.)
default = credits$V21 - 1 # default is true (1) when V21 = 2
duration = credits$V2
amount = credits$V5
installment = credits$V8
## Create a new matrix called "creds" with the four defined variables (Taddy, n.d.)
credits$default = default
credits$amount  = amount
credits$installment = installment
credits$duration = duration
creds = credits[,c("duration","amount","installment","default")]
head(creds[,])
summary(creds[,])
## K-means clustering with two clusters (R, n.d.a.)
kmeansclass = cbind(creds$default, creds$duration, creds$amount, creds$installment)
kmeansresult = kmeans(kmeansclass, 2)
kmeansresult$cluster
kmeansresult$size
kmeansresult$centers
kmeansresult$betweenss/kmeansresult$totss # between-cluster sum of squares as a share of the total
#
##
### ----------------------------------------------------------------------------------------------
##  Cross-validation with k = 5 for the nearest neighbor.
### ----------------------------------------------------------------------------------------------
##
#
## Create a design matrix, such that factor variables are turned into indicator variables
Xcreds = model.matrix(default~., data=creds)[,-1]
Xcreds[1:5,]
## Creating training and prediction datasets: select 900 rows for estimation and 100 for testing
set.seed(1)
train = sample(1:1000,900)
## Defining which x and y values in the design matrix will be for training and for testing
xtrain = Xcreds[train,]
xtest = Xcreds[-train,]
ytrain = creds$default[train]
ytest = creds$default[-train]
## K-nearest neighbor classification (R, n.d.b.)
nearestFive = knn(train = xtrain[,2,drop=F], test=xtest[,2,drop=F], cl=ytrain, k=5)
knnresults = cbind(ytest+1, nearestFive) # 1 is added to ytest because cbind coerces the factor nearestFive to the codes 1 and 2
percentOfCorrect = 100*(sum(ytest==nearestFive)/100)

References

Adv Quant: Logistic Regression in R

Introduction

The German credit data contains attributes and outcomes on 1,000 loan applications. The data are available at this Web site, where datasets are provided for the machine learning community.

Results

IP3F1.PNG

Figure 1: Image shows the first six entries in the German credit data.

IP3F2.png

Figure 2: Matrix scatter plot, showing the pairwise relationships between all the variables within the German credit data.

IP3F3.png

Figure 3: A summary of the credit data with the variables of interest.

IP3F3.png

Figure 4: The entries in the design matrix that will be used for the logistic analysis.

IP3F4

Figure 5: Summarized logistic regression information based on the training data.

IP3F6.1.png IP3F6.2.png

Figure 6: The coefficients' 95% confidence intervals using log-likelihood values, with the standard-error-based intervals shown to the right.

IP3F7.png

Figure 7: Wald test statistics used to test the significance of each ranked (categorical) variable as a whole.

IP3F8.png

Figure 8: The odds ratio for each independent variable, along with the 95% confidence interval for each odds ratio.

IP3F9.png

Figure 9: Part of the summarized test data set for the logistic regression model.

IP3F10.png

Figure 10: The ROC curve, which illustrates the false positive rate versus the true positive rate of the prediction model.

Discussion

The results in Figure 1 show that the data need to be formatted before any analysis can be conducted, hence the lines of code that redefine the variables in the German credit data. Given the data output (Figure 1), the matrix scatter plot (Figure 2) shows that duration, amount, and age are continuous variables, while the other five variables are factor variables, which have categorized scatterplots. Even though installment and default show box-plot-style summaries (Figure 3), they weren't factored like history, purpose, or rent, so they don't show counts. From the count data (Figure 3), about 30% of the loans are for cars, whereas 28% are for TVs. In this German credit data, about 82% of those asking for credit do not rent, and about 53% of borrowers have an ok credit history, with 29.3% having a horrible credit history. The mean default rate is 30%.

Variables (Figure 5) that are statistically significant at the 0.10 level include duration, amount, installment, age, history (per category), rent, and some of the purpose categories. Though a larger difference between the null deviance and residual deviance would be preferred, there is still a difference. The 95% confidence intervals for the logistic regression coefficients don't show much spread from their central tendencies (Figure 6). Thus, the logistic regression model is (from Figure 5):

IP3F11.PNG

The odds ratio measures the constant strength of association between the independent and dependent variables (Huck, 2011; Smith, 2015), playing a role similar to the correlation coefficient (r) and coefficient of determination (r²) in linear regression. According to the UCLA: Statistical Consulting Group (2007), if the p-value is less than 0.05, then the overall effect of the ranked term is statistically significant (Figure 7), which is the case for the three main ranked terms here. The odds ratio data (Figure 8) are promising, as values close to one are desirable for this prediction model (Field, 2013). If the value of the odds ratio is greater than 1, then as the independent variable increases, the odds of the dependent variable (Y = n) occurring increase as well, and vice versa (Field, 2013).
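As a brief illustration of how a coefficient maps to an odds ratio (the coefficient value below is hypothetical, not read from Figure 5), exponentiating a logistic regression coefficient gives the multiplicative change in the odds per unit increase of that variable:

## Hypothetical example: converting a logistic regression coefficient into an odds ratio.
b_amount = 0.0001      # assumed log-odds change per additional unit of loan amount
exp(b_amount)          # odds ratio per unit increase, just above 1
exp(b_amount * 1000)   # odds ratio for a 1,000-unit increase: exp(0.1), about 1.105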

Moving into the testing phase of the logistic regression model, the 100-case test set is extracted, and whether or not there will be a default on each loan is predicted. Comparing the training and test data sets, the maximum values of loan duration and amount differ between the two, but all other variables and statistical distributions are similar between the training and test data. Thus, the random sampling algorithm in R was effective.

The area under the ROC curve (Figure 10) is 0.6994048, which is closer to 0.50 than to one, so this regression does better than pure chance but is far from perfect (Alice, 2015).

In conclusion, the regression model has an area under the ROC curve of 0.699, and the purpose, history, and rent ranked categorical variables were statistically significant as a whole. Therefore, the logistic regression on these eight variables shows more promise than pure chance in predicting who will and will not default on their loan.

Code

#
## The German credit data contains attributes and outcomes on 1,000 loan applications.
##    •   Use random selection for 900 cases to train the program; the other 100 cases will be used for testing.
##    •   Use duration, amount, installment, and age in this analysis, along with loan history, purpose, and rent.
### ----------------------------------------------------------------------------------------------
## Data source: https://archive.ics.uci.edu/ml/machine-learning-databases/statlog/german/german.data
## Metadata file: https://archive.ics.uci.edu/ml/machine-learning-databases/statlog/german/german.doc
#
#
## Reading the data from source and displaying the top six entries.
#
credits = read.csv("https://archive.ics.uci.edu/ml/machine-learning-databases/statlog/german/german.data", header = F, sep = " ")
head(credits)
#
## Defining the variables (Taddy, n.d.)
#
default = credits$V21 - 1 # default is true (1) when V21 = 2
duration = credits$V2
amount = credits$V5
installment = credits$V8
age = credits$V13
history = factor(credits$V3, levels = c("A30", "A31", "A32", "A33", "A34"))
purpose = factor(credits$V4, levels = c("A40","A41","A42","A43","A44","A45","A46","A48","A49","A410"))
rent = factor(credits$V15=="A151") # renting status only
# rent = factor(credits$V15, levels = c("A151","A152","A153")) # full property status
#
## Re-leveling the variables (Taddy, n.d.)
#
levels(history) = c("great", "good", "ok", "poor", "horrible")
levels(purpose) = c("newcar", "usedcar", "furniture/equip", "radio/TV", "apps", "repairs", "edu", "retraining", "biz", "other")
# levels(rent) = c("rent", "own", "free") # full property status
#
## Create a new matrix called "cred" with the 8 defined variables (Taddy, n.d.)
#
credits$default = default
credits$duration = duration
credits$amount  = amount
credits$installment = installment
credits$age     = age
credits$history = history
credits$purpose = purpose
credits$rent    = rent
cred = credits[,c("default","duration","amount","installment","age","history","purpose","rent")]
#
## Plotting and reading to make sure the data was transferred correctly into this dataset, and presenting summary stats (Taddy, n.d.)
#
plot(cred)
cred[1:3,]
summary(cred[,])
#
## Create a design matrix, such that factor variables are turned into indicator variables
#
Xcred = model.matrix(default~., data=cred)[,-1]
Xcred[1:3,]
#
## Creating training and prediction datasets: select 900 rows for estimation and 100 for testing
#
set.seed(1)
train = sample(1:1000,900)
## Defining which x and y values in the design matrix will be for training and for testing
xtrain = Xcred[train,]
xnew = Xcred[-train,]
ytrain = cred$default[train]
ynew = cred$default[-train]
#
## Logistic regression
#
datas = data.frame(default=ytrain, xtrain)
creditglm = glm(default~., family=binomial, data=datas)
summary(creditglm)
#
## Confidence intervals (UCLA: Statistical Consulting Group, 2007)
#
confint(creditglm)         # based on the profiled log-likelihood
confint.default(creditglm) # based on the standard errors
#
## Overall effect of each rank using the wald.test function from the aod library (UCLA: Statistical Consulting Group, 2007)
#
install.packages("aod")
library(aod)
wald.test(b=coef(creditglm), Sigma = vcov(creditglm), Terms = 6:9)   # for all ranked terms for history
wald.test(b=coef(creditglm), Sigma = vcov(creditglm), Terms = 10:18) # for all ranked terms for purpose
wald.test(b=coef(creditglm), Sigma = vcov(creditglm), Terms = 19)    # for the ranked term for rent
#
## Odds ratios for model analysis (UCLA: Statistical Consulting Group, 2007)
#
exp(coef(creditglm))
exp(cbind(OR=coef(creditglm), confint(creditglm))) # odds ratios next to the 95% confidence intervals for the odds ratios
#
## Predicting default from the test data (Alice, 2015; UCLA: Statistical Consulting Group, 2007)
#
newdatas = data.frame(default=ynew, xnew)
newestdata = newdatas[,2:19] # removing the variable default from the data matrix
newestdata$defaultPrediction = predict(creditglm, newdata=newestdata, type = "response")
summary(newdatas)
#
## Plotting the true positive rate against the false positive rate (ROC curve) (Alice, 2015)
#
install.packages("ROCR")
library(ROCR)
pr  = prediction(newestdata$defaultPrediction, newdatas$default)
prf = performance(pr, measure="tpr", x.measure="fpr")
plot(prf)
## Area under the ROC curve (Alice, 2015)
auc = performance(pr, measure = "auc")
auc = auc@y.values[[1]]
auc # the closer this value is to 1 (rather than 0.5), the better

 

 

References

Adv Quant: Polynomial Regression in R

Introduction

For this local polynomial regression, the "oldfaithful.csv" data from an open-source collection will be used. The eruption times (in minutes) and the waiting times to the next eruption (in minutes) of 272 eruptions of the Old Faithful geyser are provided.

Results

IP2F1.PNG

Figure 1: Density histograms of eruption times and eruption waiting times.

IP2F2.PNG

Figure 2: Smoothed density histogram from the local polynomial regression.

IP2F3.png

Figure 3: Comparison of linear regression (blue), LOWESS regression (red), and local polynomial regression (green) on the eruption data.

IP2F4

Figure 4: Residual plots for both linear and polynomial regression.

Discussion

The histogram plots (Figure 1) illustrate that both variables, eruption times and eruption waiting times, have bimodal distributions. Thus, a linear regression (Figure 3) would not fully capture the relationship between these two variables. A polynomial-smoothed version of the bimodal curve (Figure 2) shows that for low values of the geyser's eruption time there is a short wait until the next eruption, and vice versa. The smoothed density curve shows the estimated distribution of the geyser variables better than the bar histogram does.

LOCFIT (locally fitted regression) and LOWESS (locally weighted scatterplot smoothing) are assessed alongside the typical LM (linear regression). LOCFIT is based on LOWESS, which allows the end user to specify the smoothing parameter and neighborhood size, but LOCFIT affords the end user more control over other smoothing parameters (Futschik & Crompton, 2004). Both LOCFIT and LOWESS are nearest-neighbor-based regression methods (Field, 2013; Futschik & Crompton, 2004; Loader, 2013; Smith, 2015). This analysis looks at all three.

The goal is to see whether there is a relationship between the waiting time until the next eruption and the magnitude of the eruption (per eruption time). Through the linear regression algorithm, the linear model is eruptions = 0.075628(waiting) - 1.874016. The Pearson correlation coefficient is 0.9008112, so 81.14% of the variation can be explained by the linear regression model. The LOWESS regression appears not to capture the distribution of the data at smaller eruption times, but it is better than the linear regression model, since its correlation is 0.9809684 and it can explain 96.23% of the variation between the variables.

Finally, to evaluate the effectiveness of the linear and polynomial models, the residuals must be assessed (Figure 4). Neither residual plot shows a discernible pattern. However, the residuals are closer to zero in the polynomial regression, suggesting that it does a better job of explaining the variance between the eruption magnitude and the wait time until the next eruption. In conclusion, the best regression for this data set appears to be the local polynomial regression.

Code

#
## Use R to analyze the faithful dataset.
## This is a version of the eruption data from the "Old Faithful" geyser in Yellowstone National Park, Wyoming.
##  •     X (primary key)
##  •     eruptions (eruption time [mins])
##  •     waiting (wait time for this eruption [mins])
#
fateful = read.csv(file="https://raw.githubusercontent.com/vincentarelbundock/Rdatasets/master/csv/datasets/faithful.csv", header = TRUE, sep = ",")
head(fateful)
# Produce density histograms of eruption times and of waiting times.
hist(fateful$eruptions, freq=F, xlab = "eruptions time [mins]", main = "Histogram of the eruptions time")
hist(fateful$waiting, freq=F, xlab = "eruptions waiting time [mins]", main = "Histogram of the eruptions waiting time")
# Produce a smoothed density histogram from local polynomial regression.
install.packages("locfit")
library(locfit)
plot(locfit(~lp(fateful$eruptions), data=fateful), xlab = "eruptions time [mins]", main = "Histogram of the eruptions time")
plot(locfit(~lp(fateful$waiting), data=fateful), xlab = "eruptions waiting time [mins]", main = "Histogram of the eruptions waiting time")
# Compare local polynomial regression to regular regression.
lowessRegression = lowess(fateful$waiting, fateful$eruptions, f=2/3)
polynomialRegression = locfit(fateful$eruptions~lp(fateful$waiting))
linearRegression = lm(fateful$eruptions~fateful$waiting)
# Graphing the data
plot(fateful$waiting, fateful$eruptions, main = "Eruption Times", xlab="Waiting time to next eruption [min]", ylab = "Eruption time [min]")
lines(lowessRegression, col="red")
abline(linearRegression, col="blue")
lines(polynomialRegression, col="green")
# Summary of the linear regression
summary(linearRegression)
# Correlations for the regressions
cor(fateful$eruptions, fateful$waiting)
cor(lowessRegression$x, lowessRegression$y)
# Plotting residuals
plot(residuals(linearRegression), main = "residuals for the linear regression", ylab = "residuals")
plot(residuals(polynomialRegression), main = "residuals for the polynomial regression", ylab="residuals")

References

Adv Quant: Locally Weighted Scatterplot Smoothing (LOWESS) in R

The locally weighted scatterplot smoothing (LOWESS) method for multiple regression in a k-nearest-neighbor-based model is a regression with one or more independent variables that uses a non-parametric approach to create a smoothed surface/curve (Field, 2013; Smith, 2015). LOWESS aims not to impose a parametric model on the data, because doing so would require many more resources (Cleveland, 1979). Non-parametric tests have fewer assumptions than parametric tests; for example, there are no assumptions about the sampled variable's population distribution (Field, 2013; Huck, 2013; Schumacker, 2014; Smith, 2015).

Assumptions in parametric analysis, which is based on the normal distribution, include (1) additivity and linearity; (2) normality; (3) homoscedasticity/homogeneity of variance; and (4) independence (Field, 2013; Huck, 2013). However, the assumption of independence still exists in non-parametric analysis (Huck, 2013). Smith (2015) states that these non-parametric analyses are less powerful than parametric analysis. Field (2013) disagrees and says that they are powerful, but admits that there is a loss of information about the magnitude between the observed values. Huck (2013) states that when non-parametric analyses are used correctly, they carry similar power/weight to parametric analyses on data that meet the parametric assumptions. To conduct a non-parametric analysis, data values are ranked and arranged, so higher-valued data have higher-valued ranks and vice versa (Field, 2013; Huck, 2013; Smith, 2015). Cleveland (1979) describes how only a fraction of the data (the local neighbors) is considered at a time in order to minimize the weighting function. Thus, a LOWESS regression carried out on the ranked data helps eliminate the effects of outliers and irons out skewed distributions (Field, 2013; Smith, 2015).
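A small sketch of the local-neighborhood idea on simulated data (the data below are made up purely for illustration, not part of this analysis): the LOWESS curve follows the local structure that a single straight line misses.

## Sketch only: LOWESS versus ordinary linear regression on simulated, nonlinear data.
set.seed(42)
x = seq(0, 10, length.out = 200)
y = sin(x) + rnorm(200, sd = 0.3)          # a nonlinear signal with noise
plot(x, y, main = "LOWESS (red) vs. linear regression (blue)")
abline(lm(y ~ x), col = "blue")            # a single global straight line
lines(lowess(x, y, f = 1/3), col = "red")  # local neighborhoods: each fit uses 1/3 of the data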

Advantages and disadvantages

+ LOWESS doesn’t depend on an underlying population distribution (Field, 2013; Huck, 2013; Schumacker, 2014; Smith, 2015)

+ Looking at each point's local neighborhood creates a smoothing function, which visually enhances patterns (Cleveland, 1979)

– The LOWESS technique is not a substitute for parametric regression analysis (Huck, 2013). Thus, to justify using non-parametric tests, one must reject the null hypothesis that the data follow a defined distribution, against the alternative hypothesis that they do not (Field, 2013; Huck, 2013); see the sketch after this list.

– LOWESS is computationally heavy, especially depending on the weights chosen (Cleveland, 1979).

– Though the regression is easily represented visually as a smoothed curve, the regression formula may not be as cleanly written out (Cleveland, 1979).
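As one illustration of that decision step, a normality check such as the Shapiro-Wilk test can be applied; this is a generic sketch with simulated data, not part of the original analysis. If the null hypothesis of normality is rejected, a non-parametric approach such as LOWESS becomes more attractive.

## Sketch only: testing whether a variable plausibly follows a normal distribution.
set.seed(7)
x = rexp(100)    # skewed, clearly non-normal example data
shapiro.test(x)  # a small p-value rejects the null hypothesis that the data are normal,
                 # which supports falling back to a non-parametric method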

Multiple Regression Analysis

From the R dataset archive website (http://vincentarelbundock.github.io/Rdatasets/), the NOxEmissions.csv file was downloaded. This is the NOx air pollution data, and it has 5 variables: a primary key, the Julian calendar day (julday), the hourly mean of NOx concentrations in the air in parts per billion (LNOx), the hourly sum of NOx car emissions in parts per billion (LNOxEm), and the square root of the wind speed in meters per second (sqrtWS).

From this dataset, it is hypothesized that the wind speed combined with the sum of NOx from car emissions could contribute to the mean NOx concentration in the atmosphere. Given that there are multiple independent variables for one dependent variable, multiple regression analysis is best suited (Field, 2013; Huck, 2013; Schumacker, 2014; Smith, 2015).

IP1.51F1

Figure 1: Histogram of each of the variables in the data set.

IP1.51F2
Figure 2: Simple linear regressions between each of the independent variables and the dependent variable. For the image on the right, the regression formula is LNOx = -0.86443(sqrtWS) + 5.55885, with a correlation of -0.4300; for the image on the left, the regression formula is LNOx = 0.590066(LNOxEm) + 0.048645, with a correlation of 0.6399.

IP1.51F3

Figure 3: The summary output of the multiple linear regression, where the regression formula is LNOx = -1.018198(sqrtWS) + 0.641412(LNOxEm) + 1.061950, which explains 66.3% of the variation between these variables.

IP1.51F4.png

Figure 4: Normal quantile-quantile plot for the multiple linear regression described in Figure 3.

The histograms (Figure 1) are not convincing that this could be tested with a normal multiple linear regression analysis, but the normal quantile-quantile plot (Figure 4) shows approximate normality, justifying the results (Figure 3). To further the understanding of the multiple linear regression, the simple linear regressions per independent variable (Figure 2) show that neither independent variable alone explains the variance between the variables as well as the multiple regression analysis does.

IP1.51F5.png

Figure 5: Multiple LOWESS regression plot with varying smoothing span.

Even though there is approximate normality in the data, a LOWESS curve was still plotted, just to illustrate how differences in the smoothing factor can influence the result. The smoothing factor describes the size of the neighborhood used in the k-nearest-neighbor sense (Cleveland, 1979). The smaller the smoothing factor, the smaller the neighborhood; the blue line (f = 2/3) uses the default value in R (R, n.d.e.). The larger the smoothing factor, the bigger the neighborhood, which oversimplifies the result.

Code

NOxData = read.csv(file="https://raw.githubusercontent.com/vincentarelbundock/Rdatasets/master/csv/robustbase/NOxEmissions.csv", header = TRUE, sep = ",")
head(NOxData)
hist(NOxData$LNOx, freq=F, xlab = "hourly mean of NOx concentrations [ppb]", main = "Histogram of the hourly mean of NOx concentrations")
hist(NOxData$LNOxEm, freq=F, xlab = "hourly sum of NOx car emissions [ppb]", main = "Histogram of the hourly sum of NOx car emissions")
hist(NOxData$sqrtWS, freq=F, xlab = "square root of winds [m/s]", main = "Histogram of the square root of winds")
# Simple linear regressions of LNOx on each independent variable
## LNOxEm
plot(NOxData$LNOxEm, NOxData$LNOx)
abline(lm(NOxData$LNOx~NOxData$LNOxEm), col="red")
summary(lm(NOxData$LNOx~NOxData$LNOxEm))
cor(NOxData$LNOx, NOxData$LNOxEm)
## sqrtWS
plot(NOxData$sqrtWS, NOxData$LNOx)
abline(lm(NOxData$LNOx~NOxData$sqrtWS), col="red")
summary(lm(NOxData$LNOx~NOxData$sqrtWS))
cor(NOxData$LNOx, NOxData$sqrtWS)
# Multiple linear regression of LNOx on both LNOxEm and sqrtWS
RegressionModel = lm(NOxData$LNOx~ NOxData$LNOxEm + NOxData$sqrtWS)
summary(RegressionModel)
plot(RegressionModel)
# Pearson's correlation between the independent variables
cor(NOxData$LNOxEm, NOxData$sqrtWS)
# 95% confidence intervals on the regression model
confint(RegressionModel, level=0.95)
# LOWESS models with different smoothing spans
LowessModel  = lowess(NOxData$LNOx~ NOxData$LNOxEm + NOxData$sqrtWS, f=2/3)
LowessModel2 = lowess(NOxData$LNOx~ NOxData$LNOxEm + NOxData$sqrtWS, f=0.01)
LowessModel3 = lowess(NOxData$LNOx~ NOxData$LNOxEm + NOxData$sqrtWS, f=1)
plot(LowessModel, type="l", col="blue", main="LOWESS Regression: green is f=1, blue is f=2/3, & red is f=0.01")
lines(LowessModel2, col="red")
lines(LowessModel3, col="green")

References

 

Adv Quant: General Linear Regression Model in R

Introduction

The goal of this post is to convert the state.x77 dataset to a data frame for analysis and to perform a regression on it.

Results

IP1.5F1.png

Figure 1: Scatter plot matrix of the state.x77 data frame. The red box highlights the relationship selected for further analysis.

IP1.5F2.PNG

IP1.5F3.png

Figure 2: Scatter plot of murder rates versus illiteracy rates across the United States, with the linear regression function Illiteracy = 0.11607(Murder) + 0.31362 and a correlation of 0.729752.

Discussion

This post analyzes the state.x77 dataset (accessed after loading the MASS library), which was converted into a data frame (see the code section) before the analysis was conducted. To identify which variable relationship would be interesting for a regression, all the relationships within the data frame were plotted in a matrix (Figure 1). The relationship that seemed most interesting was the one between illiteracy and murder, so a simple linear regression was conducted on those variables. There is a positive correlation of 0.729752 in this data, and the relationship is defined by

Illiteracy = 0.11607(Murder) + 0.31362                                        (1)

The equation describing the relationship (Figure 2) between these variables explains 53.25% of the variance between them. Both the intercept and the regression weight are statistically significant at the 0.01 level, meaning there is less than a 1% chance that this relationship arose from pure random chance (R output between Figures 1 & 2). In conclusion, this data indicates that states with lower illiteracy rates tend to have lower murder rates, and vice versa.

Code

#
## Converting a dataset to a data frame for analysis.
#
library(MASS)              # Load the MASS library
library(nutshell)          # Load the nutshell library
data()                     # List all data sets available in the loaded packages
data(state)                # The data in question is located in state
head(state.x77)            # Print out the top entries of state.x77
df = data.frame(state.x77) # Convert the state.x77 data into a data frame
#
## Regression formulation
#
plot(df)                                           # Scatter plot matrix of all relationships between the variables in df
stateRegression = lm(Illiteracy~Murder, data= df)  # Select this relationship for further analysis
summary(stateRegression)                           # Print a summary of the regression
# Plotting a scatterplot from the data frame
plot(df$Murder, df$Illiteracy, type="p", main="Illiteracy rates vs Murder rates", xlab="Murder", ylab="Illiteracy")
abline(lm(Illiteracy~Murder, data= df), col="red") # Add a red regression line
cor(df$Murder, df$Illiteracy)                      # Pearson correlation between murder and illiteracy rates

References