Save On an Annual DataCamp Subscription (Less Than 2 Days Left)




DataCamp is now offering a discount on unlimited access to its course curriculum. Access over 170 courses in R, Python, SQL and more, taught by experts and thought leaders in data science such as Mine Cetinkaya-Rundel (RStudio), Hadley Wickham (RStudio), Max Kuhn (caret) and more. Check out this link to get the discount!

Below are some of the tracks available. You can choose a career track, which is a deep dive into a subject covering all the skills needed, or a skill track, which focuses on a specific subject.

Tidyverse Fundamentals (Skill Track)
Experience the whole data science pipeline from importing and tidying data to wrangling and visualizing data to modeling and communicating with data. Gain exposure to each component of this pipeline from a variety of different perspectives in this tidyverse R track.

Finance Basics with R (Skill Track) If you are just starting to learn about finance and are new to R, this is the right track to kick things off! In this track, you will learn the basics of R and apply your new knowledge directly to finance examples, start manipulating your first (financial) time series, and learn how to pull financial data from local files as well as from internet sources.

Data Scientist with R (Career Track)
A Data Scientist combines statistical and machine learning techniques with R programming to analyze and interpret complex data. This career track gives you exposure to the full data science toolbox.

Quantitative Analyst with R (Career Track)
In finance, quantitative analysts ensure portfolios are risk balanced, help find new trading opportunities, and evaluate asset prices using mathematical models. Interested? This track is for you.

And much more – the offer ends September 25th so don’t wait!

About DataCamp:
DataCamp is an online learning platform that uses high-quality video and interactive in-browser coding challenges to teach you data science using R, Python, SQL and more. All courses can be taken at your own pace. To date, over 2.5 million data science enthusiasts have taken one or more courses at DataCamp.

Simple Steps to Create Treemap in R

The following document details how to create a treemap in R using the treemap package.

What are they and when do we use them?

In the most basic terms, a treemap is used when we want to visualize proportions. It can be thought of as a pie chart in which the slices are replaced by rectangles. Pie charts are an excellent way to visualize proportions, but as the number of categories grows they become harder and harder to read. A treemap overcomes this issue by using a nested structure, which makes it ideal for displaying large amounts of hierarchical data. A treemap is a good choice when space is a constraint and we want an overview of a large amount of hierarchical data.
A treemap is a diagram representing hierarchical data in the form of nested rectangles, the area of each corresponding to its numerical value.

When not to use a treemap

A treemap should not be used when there is a big difference between the measure values or when the values are not comparable. Also, negative values cannot be displayed on a treemap.

Building a Treemap in R

To create a treemap we use one or more dimensions and a maximum of two measures. We will be using the treemap package in R. For this article we will use the Super Store data which is provided along with the article.

Step 1: Importing the data and installing the treemap package in R
## Set the working directory location to the file location##
>setwd("H:/R Treemap")

## Import the data file in R and view a sample of the data ##
>data = read.csv("data.csv", header = TRUE, sep = ",")
>View(data)
Once we get the data into R, we need to load the treemap package so that we can go ahead and create our plot.
## Installing the package and calling the package in R##
>install.packages("treemap")
>library(treemap)
The data that we are using is already reshaped, so we can go ahead and create our basic treemap, then build on it step by step.

Step 2: Creating a Treemap

The treemap() function is used to create a treemap.
## Creating the most basic treemap##
>treemap(data,index = c("Category"),vSize ="Sales")
The first argument in the call above is the name of the data set, which is "data" in our case. The index argument specifies the hierarchy that we are looking into, and the vSize argument tells R which variable determines the size of the boxes. Since we have only one index in our call, the command simply splits the entire tree into three parts, each representing its proportion of Sales. A first look at the figure shows that the proportions of Technology, Furniture and Office Supplies are roughly in the same range, with Technology being the highest. We can check this hunch right away by typing the following:
>aggregate(Sales ~ Category, data, sum)
And you will get the following result :

      Category               Sales
1     Furniture              741999.8
2     Office Supplies        719047.0
3     Technology             836154.0
We can see how close these sales figures are to each other (proportionately). Now that we have created our most basic treemap, let's go a bit further and see what happens when we list multiple variables in the index (i.e., create a hierarchy).
## Creating a treemap with Category and Subcategory as a hierarchy.
>treemap(data,index = c("Category","Sub.Category"),vSize = "Sales")
Here is what happened: the tree first splits at the Category level, and each category then splits further by Sub-Category. We can see from the treemap that the Technology category accounted for the most sales, and within Technology, Phones accounted for most of the sales (the size of the boxes is still determined by Sales). Let's go a step further and color the boxes by another measure, say Profit.
 
##Coloring the boxes by a measure##
>treemap(data,index = c("Category","Sub.Category"),vSize ="Sales",vColor = "Profit",type="value")
The treemap we get here is similar to the previous one, except that the box color now represents Profit instead. We can see that the most profitable sub-category was Copiers, while Tables were the most unprofitable. The vColor argument tells R which variable to use for the color, and the type argument defines whether that variable is treated as a value, an index or a categorical variable.
##Using a categorical variable as color##
>treemap(data,index = c("Category","Region"),vSize ="Sales",vColor = "Region",type="categorical")
Here we see that the tree is split into Categories first, and under each category the four regions are distinguished by individual colors.

Step 3: Enhancing our treemap

Let's make our treemap more readable. To do this we will add a title and change the font size of the labels for categories and sub-categories, keeping the category labels larger and the sub-category labels a bit smaller. Here's how to do it:
## Titles and font size of the labels##
>treemap(data,index = c("Category","Sub.Category"),vSize ="Sales",vColor = "Profit",type="value",title = "Sales Treemap For categories",fontsize.labels = c(15,10))
Notice how we have added a custom title to the treemap and changed the label sizes for categories and sub-categories. The title argument adds a title to our visual, while the fontsize.labels argument adjusts the size of the labels. How about positioning the labels? Say we want to keep the category labels centered and place the sub-category labels in the top left. This can be achieved with the align.labels argument, as follows:
## Aligning the labels##
>treemap(data,index = c("Category","Sub.Category"),vSize ="Sales",vColor = "Profit",type="value",title = "Sales Treemap For categories",fontsize.labels = c(15,10),align.labels = list(c("centre","centre"),c("left","top")))
There it is: our labels are now aligned beautifully. We can also choose a custom palette for the treemap using the palette argument, as follows:
>treemap(data,index = c("Category","Sub.Category"),vSize ="Sales",vColor = "Profit",type="value",palette="RdYlGn",range=c(-20000,60000),mapping=c(-20000,10000,60000),title = "Sales Treemap For categories",fontsize.labels = c(15,10),align.labels = list(c("centre","centre"),c("left","top")))
Here we have used the Red-Yellow-Green palette to see profit more clearly, with red marking the most unprofitable sub-categories and green the most profitable. In this article we looked at how to create a treemap in R and add aesthetics to the plot. There is much more that can be done using the arguments of the treemap() function; for a complete list of arguments and functionality, refer to the package documentation.

"Area-based visualizations have existed for decades. This idea was invented by professor Ben Shneiderman at the University of Maryland Human-Computer Interaction Lab in the early 1990s. Shneiderman and his collaborators then deepened the idea by introducing a variety of interactive techniques for filtering and adjusting treemaps. These early treemaps all used the simple 'slice-and-dice' tiling algorithm."

Treemaps are an efficient way to display a hierarchy and give a quick overview of its structure. They are also great at comparing the proportions between categories via their area sizes.

Author Bio:

This article was contributed by Perceptive Analytics. Rahul Singh, Chaitanya Sagar, Jyothirmayee Thondamallu and Saneesh Veetil contributed to this article. Perceptive Analytics provides data analytics, data visualization, business intelligence and reporting services to e-commerce, retail, healthcare and pharmaceutical industries. Our client roster includes Fortune 500 and NYSE listed companies in the USA and India.

Reproducible development with Rmarkdown and Github

I'm pretty sure most readers of this blog are already familiar with Rmarkdown and Github. In this post I don't pretend to reinvent the wheel, but rather give a quick run-down of how I set up and use these tools to produce high-quality, reproducible and scalable (in human time) data science code.

Github

While data science processes usually don't involve exactly the same workflows as software development (for which Git was originally intended), I think Git is actually very well suited to the iterative nature of data science tasks. When walking down different avenues of the exploration path, it's worthwhile to have them reside in different branches. That way, instead of jotting down general pointers about what you did along with some code snippets in a text file (or, God forbid, Word when you want to include images as well), you can go back to the relevant branch, see the different iterations and read a neat report with code and images. You can even revisit ideas that didn't make it into the master branch. Be sure to use informative branch names and commit messages!

Below is an illustration of how that process might look:



Using Github allows you to easily package your code, supporting files etc. (using repos) and share it with fellow researchers, who can in turn clone the repo, re-run the code and go through all the development iterations without a hassle.

Rmarkdown

Most people familiar with Rmarkdown know it's a great tool for writing neat reports in all sorts of formats (HTML, PDF and even Word!). One format that makes it a really great combo with Github is the github_document format. While you can't view HTML files on Github, the output of a github_document knit is an .md file, which renders perfectly well on Github and supports images, tables, math, a table of contents and much more. What some may not realize is that Rmarkdown is also a great development tool in itself. It behaves much like the popular Jupyter notebooks, with plots, tables and equations showing up next to the code that generated them. What's more, it has tons of cool features that really support reproducible development, such as:
  • The first R chunk (labelled "setup" in the RStudio template) always runs once when you execute code within any chunk following it (by pressing Ctrl+Enter). It's handy to load all the packages used in later chunks (I like installing missing ones too) in this chunk, so that whenever you run code in any of the chunks below it, the needed packages are already loaded. A minimal sketch of such a setup chunk appears after this list.
  • When running code from within a chunk (pressing Ctrl+Enter) the working directory will always be the one in which the .Rmd file is located. In short, this means no more worrying about setting the working directory – be it when working on several projects simultaneously or when cloning a repo from Github.
  • It has many handy code-execution tools, such as a button to run the code in all chunks up to the current one or all the code in the current chunk, and a green progress bar so you don't get lost.
  • If your script gets so long that scrolling around it becomes tedious, you can use a neat RStudio feature: when viewing Rmarkdown files you can open an interactive table of contents that lets you jump between sections (defined by # headers) in your code:
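A minimal sketch of the setup chunk mentioned in the first bullet above (the package names are placeholders for whatever your later chunks actually use):

## contents of the first chunk, labelled "setup", in the .Rmd
required_packages <- c("dplyr", "ggplot2")   # placeholders: the packages used in later chunks
missing_packages <- setdiff(required_packages, rownames(installed.packages()))
if (length(missing_packages) > 0) install.packages(missing_packages)
invisible(lapply(required_packages, library, character.only = TRUE))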

To summarize this section, I would highly recommend developing with Rmd files rather than R files.

A few set-up tips

  • Place a file "passwords.R" with all your passwords in the directory into which you clone repos and source it from the .Rmd. That way you don't accidentally publish your passwords to Github (a small sketch of this follows this list).
  • I like working with cache on all chunks in my Rmd. It's usually good practice to avoid uploading the cache files generated in the process to Github, so be sure to add the following file types to your .gitignore: *.RData, *.rdb, *.rdx, *.rds, *__packages
  • Github renders CSV files pretty nicely (and makes searching them convenient), so if you have some reference tables you want to include and there is a *.csv entry in your .gitignore file, you may want to add the entry !reference_table_which_renders_nicely_on_github.csv to your .gitignore to exclude that file from the exclusion list.
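A small sketch of the passwords idea (the file name, variable names and relative path below are all illustrative):

## passwords.R: kept one level above the cloned repos and never committed
db_user     <- "analyst"               # placeholder credentials
db_password <- "not-a-real-password"

## at the top of the .Rmd: since the working directory is the repo root,
## this reads the file without copying it into the repo
source(file.path("..", "passwords.R"))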

Sample Reproducible development repo

Feel free to clone the sample reproducible development repo below and get your reproducible project running ASAP!

https://github.com/IyarLin/boilerplate-script


Using Control Charts in R

I am sure you must have heard of Six Sigma quality standard or Six Sigma experts. But, what is Six Sigma?

Six Sigma is a set of techniques used by organizations to improve their processes and optimize operations. Six Sigma was popularized by manufacturing organizations, and Jack Welch, former CEO of GE, was one of its advocates. At the heart of Six Sigma lie core strategies to improve the quality of processes by identifying and removing the causes of defects and of variability in product quality and business processes. Six Sigma uses empirical and statistical quality management methods to carry out operational improvement and excellence projects in organizations.

Six Sigma projects follow methodologies called DMAIC and DMADV. The DMAIC methodology is used for projects aimed at improving existing business processes, while DMADV is used for projects that aim to create new processes. Since this article is about control charts, we will focus on the DMAIC methodology, of which control charts are a part. The DMAIC methodology has five phases:

  1. Define
  2. Measure
  3. Analyze
  4. Improve
  5. Control

Define

Defining the goals that you wish to achieve, i.e., identifying the problem statement you are trying to solve. In this stage, everyone involved in the project understands his/her role and responsibilities, and there should be clarity on why the project is being undertaken.

Measure

Understanding the ‘As-Is’ state of the process. Based on the goal defined in the ‘Define’ phase, you understand the process in detail and collect relevant data which is to be used in subsequent phases.

Analyze

By this phase, you know the goal that you are trying to achieve, you understand the entire process, and you have the relevant data to diagnose and analyze the problem. In this phase, make sure that your biases don't lead you to conclusions; it should be a completely fact-based and data-driven exercise to identify the root cause.

Improve

Now you are aware of the entire process and the causes behind the problems. In this phase, you need to find ways or methodologies to address the problem and improve the current processes. You have to think of new approaches, using techniques such as design of experiments, and set up pilot projects to test the ideas.

Control

You have implemented new processes and now, you have to ensure that any deviations in the optimized processes are corrected before they result in any defects. One of the techniques that can be used in control phase is statistical process control. Statistical process control can be used to monitor the processes and ensure that the desired quality level is maintained.

Control charts are the primary statistical process control tool used to monitor the performance of processes and ensure that they are operating within permissible limits. Let's understand what control charts are and how they are used in process improvement.

According to Wikipedia, “The data from measurements of variations at points on the process map is monitored using control charts. Control charts attempt to differentiate “assignable” (“special”) sources of variation from “common” sources. “Common” sources, because they are an expected part of the process, are of much less concern to the manufacturer than “assignable” sources. Using control charts is a continuous activity, ongoing over time.”

Let's take an example and understand it step by step using the definition above. You leave for office from your home every day at 9:00 AM. The average time it takes you to reach office is 35 minutes, and in most cases it takes 30 to 40 minutes. There is a variation of about 5 minutes either way because of slight traffic or because you catch all the traffic signals red on your way. However, one fine day you leave home and reach office in 60 minutes, because there was an accident on the way and the entire traffic was diverted, which caused an additional delay of around 20 minutes. Now, relating our example to the definition above:

Measurements: Time to reach office – the time taken on daily basis to reach office from home is measured to monitor the system/process.

Variations: Deviations from the average time of 35 minutes – these variations are due to inherent attributes in the system such as traffic or traffic signals on the route.

Common sources: Slight traffic or traffic signals on the route – these are usually part of the processes and are of much less concern while driving to office.

Excessive Variation: Accident – these are events which lead to variations in the process, resulting in defects in the outputs or delayed processes.

Summing up, control charts are graphical techniques to monitor the performance of a process over time. In a control chart, the performance of the process is monitored visually to identify any anomalies or variations from its usual behavior. Every control chart has control limits, or decision limits, which define the normal behavior of the process. Any movement outside those limits indicates variation in the process and needs to be corrected to prevent further damage.

In any control chart, there are three main attributes: the average line, the UCL and the LCL. The average line is the mean of all the observations from the process. The UCL and LCL are the upper control limit and lower control limit, respectively. These limits define the control or decision limits within which a process should always fall for efficient and optimized operation. All three values are determined from the process itself. If all the values lie within the control limits and there is no specific pattern in the values, the process is said to be "in control."
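As a rough sketch of the idea, the limits sit three standard deviations on either side of the average line (the commute times below are made up to match the example above, and dedicated SPC packages such as qcc estimate the process standard deviation more carefully than a plain sd()):

## illustrative only: centre line and 3-sigma limits for a vector of measurements
commute_minutes <- c(35, 33, 36, 38, 34, 37, 36, 32, 39, 35)   # made-up data, in minutes
centre <- mean(commute_minutes)
ucl <- centre + 3 * sd(commute_minutes)
lcl <- centre - 3 * sd(commute_minutes)
c(LCL = lcl, Centre = centre, UCL = ucl)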

The X-axis can show either time or the sample sequence, while the Y-axis can show individual values or deviations. There are different control charts depending on the type of data you have: continuous or variable data (height, weight, density, cost, temperature, age) or attribute data (e.g., the number of defective parts produced). You choose the control chart and control objective accordingly.

The following steps present a step-by-step approach to implementing a control chart:

  • What process needs to be controlled?

Answer to this question will come from the DMAIC process while implementing the entire project methodology.

  • Which system will provide the data to monitor?

Identifying the systems that will provide the data based on which control charts will be prepared and monitored.

  • Develop and monitor control charts

Develop the control charts by specifying the X-axis and Y-axis.

  • What actions to take based on control charts?

Once you have developed control charts, you need to monitor the processes and check for any special or excessive variations which may lead to defects in the processes.

By now, we have understood what control charts are and what information they provide. Let us look at further uses of control charts and what more information can be extracted from them. Apart from manufacturing, control charts find applications in the healthcare industry and a host of other industries.

  • Control charts provide a very simple and easy to understand methodology to understand the performance of processes.
  • It reduces the need for inspection – the need for inspection arises only when the process behavior is significantly different from the usual behavior.
  • If changes have been made to the process, control charts can help in understanding the impact of those changes on desired results.
  • The data collected in the process can be used for improvement in subsequent or follow-up projects.

Control Chart Rules

For a process that is 'in control', most of the points should lie near the average line, i.e. in Zone A, followed by Zone B and Zone C. Very few points should lie close to the control limits, and none of the points should fall beyond them. There are eight rules that help identify whether there are certain patterns or special causes of variation in the observations.

Rule 1: One or more points beyond the control limits

Rule 2: 8 out of 9 points on the same side of the center line (Average line)

Rule 3: 6 consecutive points increasing or decreasing (monotonic)

Rule 4: 14 consecutive points are alternating up and down

Rule 5: 2 out of 3 consecutive points in Zone C or beyond

Rule 6: 4 out of 5 consecutive points in Zone B or beyond

Rule 7: 15 consecutive points are in Zone A

Rule 8: 8 consecutive points on either side of the Average line but not in Zone A

Now that we have understood control charts, their attributes, applications and the associated rules, let's implement a small example in R.

Let's assume there is a company which manufactures cylindrical piston rings. For each ring manufactured, the diameter is measured 5 times to examine the within-piece variability. These five measurements for one piece form one sample, or sub-group, and measurements are taken for 25 pieces in the same way. Using the rnorm function in R, let's create the measurement values.
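One possible way to simulate such a data set is sketched below (the mean of 1.5 and standard deviation of 0.1 are assumptions chosen to roughly match the values shown; the seed behind the published numbers is unknown, so the exact values will differ):

## 25 sub-groups (rows), each with 5 diameter measurements
set.seed(123)                                  # any seed will do
obs <- as.data.frame(matrix(rnorm(25 * 5, mean = 1.5, sd = 0.1),
                            nrow = 25, ncol = 5))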

> obs
         V1       V2       V3       V4       V5
1  1.448786 1.555614 1.400382 1.451316 1.328760
2  1.748518 1.525284 1.552703 1.417736 1.420078
3  1.600783 1.409819 1.350917 1.521953 1.358915
4  1.529281 1.582439 1.544136 1.712162 1.553276
5  1.479104 1.343972 1.642736 1.589858 1.460230
6  1.685809 1.553799 1.493372 1.609255 1.471565
7  1.493397 1.373165 1.660502 1.535789 1.512498
8  1.483724 1.564052 1.415218 1.436863 1.578013
9  1.480014 1.446424 1.604218 1.565367 1.412440
10 1.530056 1.398036 1.469385 1.667835 1.384063
11 1.423609 1.419212 1.420791 1.347140 1.485413
12 1.508196 1.505683 1.642166 1.559233 1.332157
13 1.574303 1.595021 1.484574 1.375992 1.367742
14 1.491598 1.387324 1.486832 1.372965 1.444112
15 1.420711 1.479883 1.411519 1.377991 1.251022
16 1.407785 1.477150 1.671345 1.562293 1.617919
17 1.586156 1.555872 1.515936 1.498874 1.579370
18 1.700294 1.574875 1.710501 1.544640 1.660743
19 1.593655 1.691820 1.470600 1.479399 1.506595
20 1.338427 1.600721 1.434118 1.541265 1.602901
21 1.442494 1.825335 1.450115 1.493083 1.433342
22 1.499603 1.483825 1.479840 1.466675 1.465325
23 1.432389 1.533376 1.456744 1.460206 1.456417
24 1.395037 1.382133 1.460687 1.449885 1.305300
25 1.445672 1.607760 1.534657 1.422726 1.416209
> library(qcc)
> qq = qcc(obs, type = "R", nsigmas = 3)

In the R chart, we look for all the rules mentioned above. If any of them is violated, the R chart is out of control and we don't need to evaluate further; this indicates the presence of special cause variation. If the R chart appears to be in control, we then check the run rules against the X-bar chart. In the chart above, the R chart appears to be in control; hence, we move on to check the run rules against the X-bar chart.

> summary(qq) 
Call:
qcc(data = obs, type = "R", nsigmas = 3)

R chart for obs  

Summary of group statistics:    
Min.   1st Qu.    Median      Mean   3rd Qu.      Max.
0.0342775 0.1627947 0.2212205 0.2131489 0.2644740 0.3919933  

Group sample size:  5
Number of groups:  25
Center of group statistics:  0.2131489
Standard deviation:  0.09163753  
Control limits: 
LCL       UCL  
0 0.4506969 
> qq = qcc(obs, type = "xbar", nsigmas = 3)

In the above chart, one of the points lies outside the UCL, which implies that the process is out of control. The standard deviation reported with the chart is the one used to set the control limits for the sample means. If we look at sample 18, we see that its values are consistently higher than the values in the other samples.

> obs[18,]
         V1       V2       V3      V4       V5
18 1.700294 1.574875 1.710501 1.54464 1.660743

Now, let's check the process capability. With process capability analysis, we can check whether the control limits and the specification limits are in sync with each other. For instance, in our case, the client wants piston rings with a target diameter of 1.5 cm and a tolerance of +/- 0.1 cm. Process capability analysis will help us identify whether our system is capable of meeting the specified requirements. It is measured by the process capability index Cpk.

> process.capability(qq, spec.limits = c(1.4,1.6))
 
Process Capability Analysis
 
Call:
process.capability(object = qq, spec.limits = c(1.4, 1.6))
 
Number of obs = 125          Target = 1.5
       Center = 1.498           LSL = 1.4
       StdDev = 0.09164         USL = 1.6
 
Capability indices:
 
       Value    2.5%   97.5%
Cp    0.3638  0.3185  0.4089
Cp_l  0.3562  0.2947  0.4178
Cp_u  0.3713  0.3088  0.4338
Cp_k  0.3562  0.2829  0.4296
Cpm   0.3637  0.3186  0.4087
 
Exp<LSL 14%           Obs<LSL 15%
Exp>USL 13%          Obs>USL 16%

In the above plot, the red lines indicate the target value and the lower and upper specification limits. It can easily be inferred that the system is not capable of manufacturing products within the specified range. Also, for a capable process, the value of Cpk should be greater than or equal to 1.33. In the output above, the value is 0.356, which is far below the required value. This shows that the above process is neither stable nor capable.

I am sure after going through this article, you will be able to use and create control charts in multiple other cases in your work. We would love to hear your experience with creating control charts in different settings.

Author Bio:

This article was contributed by Perceptive Analytics. Jyothirmayee Thondamallu, Chaitanya Sagar and Saneesh Veetil contributed to this article.

Perceptive Analytics is a marketing analytics company and it also provides Tableau Consulting, data analytics, business intelligence and reporting services to e-commerce, retail, healthcare and pharmaceutical industries. Our client roster includes Fortune 500 and NYSE listed companies in the USA and India.

Advice to Young (and Old) Programmers: A Conversation with Hadley Wickham

I recently had the wonderful opportunity to chat with Hadley Wickham. He is an immensely prolific, yet humble guy who has not only contributed heavily to the advancement and development of R as a language and environment, but who also cares and has thought a lot about the process of doing data science the right way.

As a result, he has given many interviews on this “process” and his approach to data science and programming in R, mostly on the technical side of things. So, when I spoke with him, I wanted to frame the conversation for a broader audience, in large part due to the rapid expansion of people who are using R, most of whom engage very little with the programming wing of the community. Though this expansion of R users is a great thing on many dimensions, it has the potential to create a cohort of frustrated, self-taught programmers.

So, I am sharing Hadley’s responses to my questions in this blog for three main reasons: first and foremost, in an effort to offer a life-raft of sorts to people just getting started with (and frustrated by) R; second, to try and bridge the applied side of R users with the programming side of R users; and third, in the spirit of the open source foundation of R, to speak openly and plainly about the basics of approaching programming in R, and then how to keep going when you run into problems.

Importantly, though this conversation is intentionally non-technical and aimed at beginning programmers and R users, there are many great points and ideas for all programmers to consider, regardless of levels of proficiency and comfort with R. I have highlighted (via bold text) several of these particularly practical and valuable points throughout.

A final technical note: I edited a few places to make the conversation more amenable to reading and for the purpose of a blog post. I did not, however, change any of the substantive content of Hadley’s responses, as you will note from the conversational style of the responses, virtually all of which are verbatim. Enough of me. Here is some advice from Hadley Wickham to young (and old) programmers.

  • First, what is your role at RStudio?

I am the chief scientist at RStudio. I don't really know what that means, or what chief scientists do. But I basically lead the teams that look after the Tidyverse, which is a set of packages for doing data science in R. So, my teams have a mix of roles, but there is some research in thinking what we should be working on, a bunch of programming, and also a bunch of education, like helping people understand how things work, webinars, books, talks, and Tweets.

  • Why R? How did you come to it and why should other people be convinced?

When I started learning R, the reason was simple: it was the only open source programming language for statistics. That's obviously changed today, with programs and languages like Python, JavaScript, and Scala.

So why R today? When you talk about choosing programming languages, I always say you shouldn’t pick them based on technical merits, but rather pick them based on the community. And I think the R community is like really, really strong, vibrant, free, welcoming, and embraces a wide range of domains. So, if there are like people like you using R, then your life is going to be much easier. That’s the first reason.

And the second reason, which is both a huge strength of R and a bit of a weakness, is that R is not just a programming language. It was designed from day 1 to be an environment that can do data analysis. So, compared to the other options like Python, you can get up and running in R doing data science, learning much, much less about programming to get started. And that generally makes it like easier to get up and running if you don’t have formal training in computer science or software engineering.

  • Let’s transition to Tidyverse, as you just mentioned. First, could you explain a bit behind the approach of the Tidyverse, from processing and management, to analysis in R?

The Tidyverse is a collection of R packages with the goal being, once you’ve learned one package in the collection, learning the other packages should be much easier. And what that means is that there is a deep, underlying philosophy and unity where you can learn things in one package and apply the same ideas elsewhere. So, it just means that your naïve ideas about what a function is going to do or how to tackle a problem should be fairly good, because you can draw on your experiences. Things are designed consistently in such a way where your experiences with other functions apply to new functions. So, for example, one of the ideas that underlies many, many packages in the Tidyverse is this idea of “tidy data,” which is a really simple idea, where when you are dealing with data science-y kind of data, you want to make sure that every variable is in a column, and then naturally every observation or case becomes a row. And if you put your data in that format once when you are doing data tidying, then you don’t have to hassle with it multiple times throughout the process.

  • In an interview in 2014 at UseR, you said one of your main goals was streamlining the process of getting from raw data to visualization quickly and efficiently, with “tidying” up the data being a key aspect of that. Presumably, you were talking about development of the Tidyverse. Do you think you’re there on that goal? Is there more to be expected? Did you meet it?

Yeah, we have made a lot of progress towards that goal. And 2014 was well before the idea of the Tidyverse existed. And the biggest change in my thinking since then is thinking of the Tidyverse as a thing and not just individual packages, and then being consistent across packages. That’s playing off one of the things my team has been focusing on this year, and that’s consistency within the Tidyverse; not adding a bunch of new features or creating new packages. But just thinking, “how can we make sure everything fits together as well as it possibly can?”

So, we are making good progress and there's always more to do. And one of the things I find really rewarding is people sharing their experiences, like getting started with data and having the first really enjoyable experience when you go from a new dataset to some cool visualization with as little pain as possible. A neat illustration of that is the "TidyTuesday" hashtag [#tidytuesday] on Twitter. Every Tuesday they post a little data set and challenge, and then people tackle it with R and other tools and Tweet their results. It's really cool. [And anybody can do this, I guess?] Yeah exactly. Totally community run and driven.

  • Let’s shift and talk a little more conceptually about R and programming. There seems to be a ton of resources out there for R programmers given that its open source. This is naturally a wonderful thing. But I am wondering about beginning R programmers. Given that there is so much out there, how would you counsel a beginning programmer to sift through the resources to distinguish the signal from the noise?

So, I’m obviously biased in this recommendation, but I would say start with my book, “R for Data Science,” just because this is what it was designed for. It’s not going to teach you everything about R, and it’s by no means perfect, but I think it’s a really good way to get started. And it focuses relentlessly on giving you useful tools to help you understand data. It seems to be pretty popular and people seem to like it. And it’s free. You can buy a book if you want, but it’s free online.

After that, the other thing I would say is to try and find an R learning community. It's much easier to learn and stay motivated when you are working with other people. And I think there's lots of ways of doing that: look for a local meet up, like an R meet up or an R Ladies meet up in your area. There's also the RStudio community site.

Just find some way to find people like you who also are learning, because you can share your successes and your trials and your failures. It makes it much more likely that you will stick it out to the point where you will do something really useful.

  • I’ve noticed a theme in your work and how you approach package and resource development is addressing and fixing common problems in R. I’m curious how you hone in on these problems.

Yeah, I am curious too. I seem to be able to do it. I don’t really know exactly how. Part of it is I just talk to people and I travel a lot. I talk to people in different areas working on different problems, and I interact with a bunch of people on Twitter. And somehow that all feeds into my brain, and then ideas come out in a way I don’t fully understand. But it seems to work. I don’t want to break it. Also, I talk to people who are actually struggling with data analysis problems, and I also read a lot of other programming languages, computer science, and software engineering, because there’s basically nothing that I have done that’s not been done in some way, somewhere else before. So, it’s just finding a right idea that someone else has come up with and then applying it in a new domain; it’s tremendously valuable.

  • The use of R has seemed to explode within the past decade or so, moving far beyond the smaller world of computer programmers, spilling into many applied fields such as medicine, engineering, and even my own, political science. Having learned mostly on my own, I feel like there are two conversations going on, with a big gap, leaving the beginners to try and sift through complex worlds and tradeoffs. Specifically, in applied data analysis, the debate or tradeoff seems to be over using R versus another package like Stata. But from what I have observed, in the programming world, the debate seems to be over R versus other languages, such as Python, like you mentioned. So how should an applied analyst, someone who is not a programmer by training, navigate this tradeoff?

I think the tradeoff between Stata and R is: do you want a point-and-click interface, or do you want a programming interface? Point-and-click interfaces are great, because they lay out all of your options in front of you, and you don't have to remember anything. You can navigate through the set of pre-supplied options. And that's also its greatest weakness, because first of all, you are constrained into what the developer thought you should be able to do. And secondly, because your primary interaction is with a mouse, it's very difficult to record what you did. And I think that's a problem for science, because ideally you want to say how you actually got these results. And then simply do that reliably and have other people critique you on that. But it's also really hard when you are learning, because when you have a problem, how do you communicate that problem to someone else? You basically have to say, "I clicked here, then I clicked here, then I clicked here, and I did this." Or you make a screen cast, and it's just clunky.

So, the advantages of programming languages like R or Python, is that the primary mechanism for communicating with the computer is text. And that is scary because there's nothing like this blinking cursor in front of you; it doesn't tell you what to do next. But it means you are unconstrained, because you can do anything you can imagine. And you have all these advantages of text, where if you have a problem with your code, you can copy and paste it into an email, you can Google it, you can check it and put it on GitHub, or you can share it by Twitter. There's just so many advantages to the fact that the primary way you relate with a programming language is through code, which is just text. And so, as long as you are doing data analysis fairly regularly, I think all the advantages outweigh a point and click interface like Stata.

For R and Python, Python is first and foremost a programming language. And that has a lot of good features, but it tends to mean, that if you are going to do data science in Python, you have to first learn how to program in Python. Whereas I think you are going to get up and running faster with R, than with Python because there’s just a bunch more stuff built in and you don’t have to learn as many programming concepts. You can focus on being a great political scientist or whatever you do and learning enough R that you don’t have to become an expert programmer as well to get stuff done.

  • As people develop in programming, could you talk a little about the tradeoff between technical complexity and simplicity and usability?

That’s a big question. People naturally go through a few phases. When you start out, you don’t have many tips and techniques at your disposal. So, you are forced to do the simplest thing possible using the simplest ideas. And sometimes you face problems that are really hard to solve, because you don’t know quite the right techniques yet. So, the very earliest phase, you’ve got a few techniques that you understand really well, and you apply them everywhere because those are the techniques you know.

And the next stage that a lot of people go through, is that you learn more techniques, and more complex ways of solving problems, and then you get excited about them and start to apply them everywhere possible. So instead of using the simplest possible solution, you end up creating something that’s probably overly complex or uses some overly general formulation.

And then eventually you get past that and it's about understanding, "what are the techniques at my disposal? Which techniques fit this problem most naturally? How can I express myself as clearly as possible, so I can understand what I am doing, and so other people can understand what I am doing?" I talk about this a lot, but think explicitly about code as communication. You are obviously telling the computer what to do, but ideally you want to write code to express what it means or what it is trying to do as well, so that when others read it, and when you read it in the future, you can understand some of the reasoning.

  • Any parting words of wisdom for R programmers or the community?

It’s easy when you start out programming to get really frustrated and think, “Oh it’s me, I’m really stupid,” or, “I’m not made out to program.” But, that is absolutely not the case. Everyone gets frustrated. I still get frustrated occasionally when writing R code. It’s just a natural part of programming. So, it happens to everyone and gets less and less over time. Don’t blame yourself. Just take a break, do something fun, and then come back and try again later.

GitLab CI for R-package development

— A basic R-phile introduction to continuous integration on GitLab

I have been using GitLab repositories for some time, mainly for two reasons: I can have private projects at no monetary cost (I later came to realise that as an academic I can have the same on GitHub), and, most importantly, GitLab has so far gone under the radar of our IT department, meaning I can access it from my work computer. GitHub, on the other hand, is flagged as file sharing.

A simple CI config

Most of my time with R is spent trying to make heads and tails of various kinds of data, and I have so far authored just one R package. While I can see the benefits of a continuous integration (CI) workflow, I just never bothered to actually set it up. Now that I am putting together code in smaller packages for internal use, it seemed like the right time to learn a little.

The Internet gives a few pointers on how to set up CI on GitLab; one of the resources is the blog post Docker, GitLab CI and Developing R Packages by Mustafa Hasanbulli, who gives a simple .gitlab-ci.yml for testing packages. Mustafa's solution makes use of the rocker/tidyverse Docker image and installs the dependency packages before running check() from devtools. It's a good solution, and by combining it with the .gitlab-ci.yml shared as a gist on Github by Artem Klevtsov, I managed to get the coverage badge I thought nice to have. The .gitlab-ci.yml for a smaller package can be along the lines of:

image: rocker/tidyverse

stages:
  - check
  - coverage

check_pkg:
  stage: check
  script:
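    # list your package's dependencies inside c(), e.g. c('dplyr', 'testthat')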
    - R -e 'install.packages(c())'
    - R -e 'devtools::check()'

coverage:
   stage: coverage
   script:
     - R -e 'covr::package_coverage(type = c("tests", "examples"))'

To extract the coverage for the coverage badge, add Coverage: \d+.\d+%$ to the 'Test coverage parsing' section under Settings -> CI/CD -> General pipelines.

Introducing cache

For my package, each of the two stages took about 45 minutes to complete, and I realized that the vast majority of the time was spent downloading and especially installing packages. This was mainly due to the Bioconductor packages I rely on.

If only there were a way to pass the installed packages between the stages, or even between runs of the CI pipeline. There is: GitLab 9.0 introduced the option to specify a cache. The next problem is that the cache must be a subdirectory of the cloned project directory. Since R prefers to install packages into /usr/lib/R/library in the Docker images, the .libPaths() must be changed. In addition, you would have to remember to add any new package to the .gitlab-ci.yml, which I for one would always forget, and then painstakingly have to figure out which packages to add.
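A sketch of what changing the library path could look like (the ci_lib directory name is purely illustrative and is not part of the packrat-based setup described below):

## make R install into, and load from, a directory inside the cloned project,
## so that the GitLab cache can pick it up between stages and pipeline runs
dir.create("ci_lib", showWarnings = FALSE)
.libPaths(c(file.path(getwd(), "ci_lib"), .libPaths()))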

A much simpler solution is to use packrat – something you should consider using anyway. It also allows you to use the rocker/r-base image and install just the packages actually required for your CI. How much of a win rocker/r-base is over rocker/tidyverse in terms of traffic probably depends on the packages you have to add. A .gitlab-ci.yml that caches packages could look like this:

image: rocker/r-base

stages:
  - setup
  - test

cache:
  # Omit the key to use the same cache across all pipelines and branches
  key: "$CI_COMMIT_REF_SLUG"
  paths:
    - packrat/lib/

setup:
  stage: setup
  script:
    - R -e 'source("ci.R"); ci_setup()'

check:
  stage: test
  dependencies:
    - setup
  when: on_success
  script:
    - R -e 'source("ci.R"); ci_check()'

coverage:
  stage: test
  dependencies:
    - setup
  when: on_success
  only:
    - master
  script:
    - R -e 'source("ci.R"); ci_coverage()'

with the ci.R looking like this:

install_if_needed <- function(package_to_install){
  package_path <- find.package(package_to_install, quiet = TRUE)

  if(length(package_path) == 0){
    # Only install if not present
    install.packages(package_to_install)
  }
}

ci_setup <- function(){
  install_if_needed("packrat")
  packrat::restore()
}

ci_check <- function(){
  install_if_needed("devtools")
  devtools::check()
}

ci_coverage <- function(){
  install_if_needed("covr")
  covr::package_coverage(type = c("tests", "examples"))
}

The cache key $CI_COMMIT_REF_SLUG gives you the advantage of different cache for different branches. Using $CI_COMMIT_SHA will give you a separate cache for each commit.

Adding the packrat subdirectories src and lib* to the .gitignore will keep your repository small, and I find it quite useful to commit just the packrat.lock file whenever I add or remove a package. But then again, I am the only one working with my repositories, and there might be advantages I don't know of.

I have noticed that the stages after the setup stage sometimes fail in the first run. If this happens because of the cache, rerunning the failed stage makes everything well.

Using the above for my package, the first run of the pipeline took about 45 minutes, but the second run only about 8 minutes. A considerable reduction in time.

I hope the .gitlab-ci.yml and ci.R outlined here will help you get started with caching the packages in your R CI pipelines. The two modules are quite simple, and if you are looking for something more sophisticated, I can recommend looking at Matt Dowle's work on data.table and, of course, the GitLab Runner help pages.

Dealing with The Problem of Multicollinearity in R

Imagine a situation where you are asked to predict the tourism revenue for a country, let’s say India. In this case, your output or dependent or response variable will be total revenue earned (in USD) in a given year. But, what about independent or predictor variables?

You have been provided with two sets of predictor variables and you have to choose one of the sets to predict your output. The first set consists of three variables:

  • X1 = Total number of tourists visiting the country
  • X2 = Government spending on tourism marketing
  • X3 = a*X1 + b*X2 + c, where a, b and c are some constants

The second set also consists of three variables:

  • X1 = Total number of tourists visiting the country
  • X2 = Government spending on tourism marketing
  • X3 = Average currency exchange rate

Which of the two sets do you think provides us more information in predicting our output?

I am sure you will agree with me that the second set provides more information for predicting the output, because its three variables are different from each other and each of them provides different information (we can infer this intuitively at this point). Moreover, none of the three variables is directly derived from the other variables in the system. Alternatively, we can say that none of the variables is a linear combination of the other variables in the system.

In the first set of variables, only two variables provide relevant information; the third variable is nothing but a linear combination of the other two. If we were to develop a model using all three variables, the model would have to account for this redundant combination when estimating the coefficients.

This effect in the first set of variables is called multicollinearity. Variables in the first set are strongly correlated with each other (if not all of them, at least some variables are correlated with others). A model developed using the first set of variables may not provide results as accurate as one developed with the second set, because in the first set we are missing out on relevant variables/information. Therefore, it becomes important to study multicollinearity and the techniques to detect it and tackle its effect in regression models.

According to Wikipedia, “Collinearity is a linear association between two explanatory variables. Two variables are perfectly collinear if there is an exact linear relationship between them. For example, X1 and X2 are perfectly collinear if there exist parameters λ0 and λ1 such that, for all observations i, we have

X2i = λ0 + λ1 * X1i

Multicollinearity refers to a situation in which two or more explanatory variables in a multiple regression model are highly linearly related.”

We saw an example of exactly what the Wikipedia definition is describing.

Perfect multicollinearity occurs when one independent variable is an exact linear combination of other variables. For example, you already have X and Y as independent variables and you add another variable, Z = a*X + b*Y, to the set of independent variables. This new variable, Z, does not add any information beyond what X and Y already provide, and the regression can no longer attribute effects uniquely to X, Y and Z when determining the coefficients (R's lm(), for example, will report NA for one of them). A tiny illustration follows.
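A tiny illustration with simulated data (all the names and numbers below are made up):

## z is an exact linear combination of x and y, so lm() cannot estimate
## a separate coefficient for it and reports NA instead
set.seed(1)
x <- rnorm(50)
y <- rnorm(50)
z <- 2 * x + 3 * y
outcome <- 1 + 0.5 * x - 0.3 * y + rnorm(50)
coef(lm(outcome ~ x + y + z))   # the coefficient for z comes back as NA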

Multicollinearity may arise from several factors. The inclusion or incorrect use of dummy variables in the system may lead to multicollinearity. Another cause is the use of derived variables, i.e., a variable computed from other variables in the system, similar to the example at the beginning of the article. Yet another cause is including variables that are similar in nature, provide similar information, or have very high correlation with each other.

Multicollinearity may not pose a problem at an overall level, but it strongly impacts the individual variables and their predictive power. You may not be able to identify which variables in your model are statistically significant, and you will be working with a set of variables that provide similar or redundant information. Some of the consequences are listed below.

  • It becomes difficult to identify statistically significant variables. The model becomes very sensitive to the sample used to fit it, so different samples may show different statistically significant variables.
  • Because of multicollinearity, regression coefficients cannot be estimated precisely, because the standard errors tend to be very high (see the short simulation after this list). The value and even the sign of regression coefficients may change when different samples are chosen from the data.
  • The model becomes very sensitive to the addition or deletion of any independent variable: adding or removing a variable may produce completely different coefficient estimates.
  • Confidence intervals tend to become wider, because of which we may not be able to reject the null hypothesis that the true population coefficient is zero.
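A short simulation (made-up data) of the inflated standard errors mentioned in the list above:

## x2 is almost an exact copy of x1, so the coefficients of both are
## estimated very imprecisely even though y depends only on x1
set.seed(42)
x1 <- rnorm(100)
x2 <- x1 + rnorm(100, sd = 0.01)
y  <- 1 + 2 * x1 + rnorm(100)
summary(lm(y ~ x1 + x2))$coefficients   # note the very large standard errors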

Now, moving on to how to detect the presence of multicollinearity in the system.

There are multiple ways to detect the presence of multicollinearity among the independent or explanatory variables.

  • The first and most rudimentary way is to create a pair-wise correlation plot of the variables. In most cases the variables will have some correlation with each other, but a high correlation coefficient may be a point of concern for us: it may indicate the presence of multicollinearity.
  • Large variations in the regression coefficients on the addition or deletion of explanatory variables can indicate the presence of multicollinearity, as can significant changes in the coefficients from sample to sample; with different samples, different variables may come out as statistically significant.
  • Another method is to use the tolerance or the variance inflation factor (VIF).

VIF = 1 / Tolerance, where Tolerance = 1 – R-square and the R-square is obtained by regressing the given explanatory variable on all the other explanatory variables, so that

VIF = 1 / (1 – R-square)

A VIF above 10 indicates that a variable is highly correlated with the other explanatory variables; usually, a VIF value of less than 4 is considered good for a model. A small numerical sketch of this calculation follows this list.

  • The model may have a very high R-square value while most of the coefficients are not statistically significant. This kind of scenario may reflect multicollinearity in the system.
  • The Farrar-Glauber test is one of the statistical tests used to detect multicollinearity. It comprises three further tests. The first, a Chi-square test, examines whether multicollinearity is present in the system. The second, an F-test, determines which regressors or explanatory variables are collinear. The third, a t-test, determines the type or pattern of multicollinearity.
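A minimal numerical sketch of the VIF formula above (using the built-in mtcars data purely as a stand-in; the choice of predictors is arbitrary):

## VIF for disp given the other predictors hp and wt:
## regress disp on the others and plug the R-square into 1 / (1 - R-square)
r2_disp <- summary(lm(disp ~ hp + wt, data = mtcars))$r.squared
vif_disp <- 1 / (1 - r2_disp)
vif_disp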

We will now use some of these techniques and try their implementation in R.

We will use the CPS_85_Wages data, which consists of a random sample of 534 persons from the CPS (Current Population Survey). The data provides information on wages and other characteristics of the workers (link: http://lib.stat.cmu.edu/datasets/CPS_85_Wages). You can go through the data details at the link provided.

In this data, we will predict wages from other variables in the data.

> data1 = read.csv(file.choose(), header = T)
> head(data1)
  Education South Sex Experience Union  Wage Age Race Occupation Sector Marr
1         8     0   1         21     0  5.10  35    2          6      1    1
2         9     0   1         42     0  4.95  57    3          6      1    1
3        12     0   0          1     0  6.67  19    3          6      1    0
4        12     0   0          4     0  4.00  22    3          6      0    0
5        12     0   0         17     0  7.50  35    3          6      0    1
6        13     0   0          9     1 13.07  28    3          6      0    0
> str(data1)
'data.frame': 534 obs. of  11 variables:
 $ Education : int  8 9 12 12 12 13 10 12 16 12 ...
 $ South     : int  0 0 0 0 0 0 1 0 0 0 ...
 $ Sex       : int  1 1 0 0 0 0 0 0 0 0 ...
 $ Experience: int  21 42 1 4 17 9 27 9 11 9 ...
 $ Union     : int  0 0 0 0 0 1 0 0 0 0 ...
 $ Wage      : num  5.1 4.95 6.67 4 7.5 ...
 $ Age       : int  35 57 19 22 35 28 43 27 33 27 ...
 $ Race      : int  2 3 3 3 3 3 3 3 3 3 ...
 $ Occupation: int  6 6 6 6 6 6 6 6 6 6 ...
 $ Sector    : int  1 1 1 0 0 0 0 0 1 0 ...
 $ Marr      : int  1 1 0 0 1 0 0 0 1 0 ...

The above results show the sample view of data and the variables present in the data. Now, let’s fit the linear regression model and analyze the results.

> fit_model1 = lm(log(data1$Wage) ~ ., data = data1)
> summary(fit_model1)

Call:
lm(formula = log(data1$Wage) ~ ., data = data1)

Residuals:
     Min       1Q   Median       3Q      Max
-2.16246 -0.29163 -0.00469  0.29981  1.98248

Coefficients:
             Estimate Std. Error t value Pr(>|t|)
(Intercept)  1.078596   0.687514   1.569 0.117291
Education    0.179366   0.110756   1.619 0.105949
South       -0.102360   0.042823  -2.390 0.017187 *
Sex         -0.221997   0.039907  -5.563 4.24e-08 ***
Experience   0.095822   0.110799   0.865 0.387531
Union        0.200483   0.052475   3.821 0.000149 ***
Age         -0.085444   0.110730  -0.772 0.440671
Race         0.050406   0.028531   1.767 0.077865 .
Occupation  -0.007417   0.013109  -0.566 0.571761
Sector       0.091458   0.038736   2.361 0.018589 *
Marr         0.076611   0.041931   1.827 0.068259 .
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 0.4398 on 523 degrees of freedom
Multiple R-squared:  0.3185, Adjusted R-squared:  0.3054
F-statistic: 24.44 on 10 and 523 DF,  p-value: < 2.2e-16

The linear regression results show that the model is statisticallyically significant: the F-statistic is high and the model p-value is below 0.05. However, on closer examination we observe that four variables (Education, Experience, Age and Occupation) are not statistically significant, while two variables, Race and Marr (marital status), are significant only at the 10% level. Now, let's plot the model diagnostics to validate the assumptions of the model.


> plot(fit_model1)
Hit <Return> to see next plot:

(Pressing <Return> steps through the four standard lm diagnostic plots.)

The diagnostic plots also look fine. Let’s investigate further and look at pair-wise correlation among variables.

> library(corrplot)
> cor1 = cor(data1)
> corrplot.mixed(cor1, lower.col = "black", number.cex = .7)

The above correlation plot shows that there is high correlation between experience and age variables. This might be resulting in multicollinearity in the model.

Now, let's move a step further and apply the Farrar-Glauber test to investigate this. The 'mctest' package provides the Farrar-Glauber test in R.

install.packages('mctest')
library(mctest)

We will first use omcdiag function in mctest package. According to the package description, omcdiag (Overall Multicollinearity Diagnostics Measures) computes different overall measures of multicollinearity diagnostics for matrix of regressors.

> omcdiag(data1[, c(1:5, 7:11)], data1$Wage)

Call:
omcdiag(x = data1[, c(1:5, 7:11)], y = data1$Wage)

Overall Multicollinearity Diagnostics

                       MC Results detection
Determinant |X'X|:         0.0001         1
Farrar Chi-Square:      4833.5751         1
Red Indicator:             0.1983         0
Sum of Lambda Inverse: 10068.8439         1
Theil's Method:            1.2263         1
Condition Number:        739.7337         1

1 --> COLLINEARITY is detected by the test
0 --> COLLINEARITY is not detected by the test

The above output shows that multicollinearity is present in the model. Now, let's go a step further and run the F-test part of the Farrar-Glauber test.

> imcdiag(data1[, c(1:5, 7:11)], data1$Wage)

Call:
imcdiag(x = data1[, c(1:5, 7:11)], y = data1$Wage)

All Individual Multicollinearity Diagnostics Result

                 VIF    TOL          Wi          Fi Leamer      CVIF Klein
Education   231.1956 0.0043  13402.4982  15106.5849 0.0658  236.4725     1
South         1.0468 0.9553      2.7264      3.0731 0.9774    1.0707     0
Sex           1.0916 0.9161      5.3351      6.0135 0.9571    1.1165     0
Experience 5184.0939 0.0002 301771.2445 340140.5368 0.0139 5302.4188     1
Union         1.1209 0.8922      7.0368      7.9315 0.9445    1.1464     0
Age        4645.6650 0.0002 270422.7164 304806.1391 0.0147 4751.7005     1
Race          1.0371 0.9642      2.1622      2.4372 0.9819    1.0608     0
Occupation    1.2982 0.7703     17.3637     19.5715 0.8777    1.3279     0
Sector        1.1987 0.8343     11.5670     13.0378 0.9134    1.2260     0
Marr          1.0961 0.9123      5.5969      6.3085 0.9551    1.1211     0

1 --> COLLINEARITY is detected by the test
0 --> COLLINEARITY is not detected by the test

Education , South , Experience , Age , Race , Occupation , Sector , Marr , coefficient(s) are non-significant may be due to multicollinearity

R-square of y on all x: 0.2805

* use method argument to check which regressors may be the reason of collinearity
===================================

The above output shows that Education, Experience and Age suffer from multicollinearity, and the VIF values for these variables are very high. Finally, let's examine the pattern of multicollinearity by conducting t-tests on the partial correlation coefficients, using the pcor() function from the 'ppcor' package.

> library(ppcor)
> pcor(data1[, c(1:5, 7:11)], method = "pearson")
$estimate
              Education        South          Sex   Experience        Union          Age         Race   Occupation
Education   1.000000000 -0.031750193  0.051510483  -0.99756187 -0.007479144   0.99726160  0.017230877  0.029436911
South      -0.031750193  1.000000000 -0.030152499  -0.02231360 -0.097548621   0.02152507 -0.111197596  0.008430595
Sex         0.051510483 -0.030152499  1.000000000   0.05497703 -0.120087577  -0.05369785  0.020017315 -0.142750864
Experience -0.997561873 -0.022313605  0.054977034   1.00000000 -0.010244447   0.99987574  0.010888486  0.042058560
Union      -0.007479144 -0.097548621 -0.120087577  -0.01024445  1.000000000   0.01223890 -0.107706183  0.212996388
Age         0.997261601  0.021525073 -0.053697851   0.99987574  0.012238897   1.00000000 -0.010803310 -0.044140293
Race        0.017230877 -0.111197596  0.020017315   0.01088849 -0.107706183  -0.01080331  1.000000000  0.057539374
Occupation  0.029436911  0.008430595 -0.142750864   0.04205856  0.212996388  -0.04414029  0.057539374  1.000000000
Sector     -0.021253493 -0.021518760 -0.112146760  -0.01326166 -0.013531482   0.01456575  0.006412099  0.314746868
Marr       -0.040302967  0.030418218  0.004163264  -0.04097664  0.068918496   0.04509033  0.055645964 -0.018580965
                 Sector         Marr
Education  -0.021253493 -0.040302967
South      -0.021518760  0.030418218
Sex        -0.112146760  0.004163264
Experience -0.013261665 -0.040976643
Union      -0.013531482  0.068918496
Age         0.014565751  0.045090327
Race        0.006412099  0.055645964
Occupation  0.314746868 -0.018580965
Sector      1.000000000  0.036495494
Marr        0.036495494  1.000000000

$p.value
           Education      South         Sex Experience        Union       Age       Race   Occupation       Sector
Education  0.0000000 0.46745162 0.238259049  0.0000000 8.641246e-01 0.0000000 0.69337880 5.005235e-01 6.267278e-01
South      0.4674516 0.00000000 0.490162786  0.6096300 2.526916e-02 0.6223281 0.01070652 8.470400e-01 6.224302e-01
Sex        0.2382590 0.49016279 0.000000000  0.2080904 5.822656e-03 0.2188841 0.64692038 1.027137e-03 1.005138e-02
Experience 0.0000000 0.60962999 0.208090393  0.0000000 8.146741e-01 0.0000000 0.80325456 3.356824e-01 7.615531e-01
Union      0.8641246 0.02526916 0.005822656  0.8146741 0.000000e+00 0.7794483 0.01345383 8.220095e-07 7.568528e-01
Age        0.0000000 0.62232811 0.218884070  0.0000000 7.794483e-01 0.0000000 0.80476248 3.122902e-01 7.389200e-01
Race       0.6933788 0.01070652 0.646920379  0.8032546 1.345383e-02 0.8047625 0.00000000 1.876376e-01 8.833600e-01
Occupation 0.5005235 0.84704000 0.001027137  0.3356824 8.220095e-07 0.3122902 0.18763758 0.000000e+00 1.467261e-13
Sector     0.6267278 0.62243025 0.010051378  0.7615531 7.568528e-01 0.7389200 0.88336002 1.467261e-13 0.000000e+00
Marr       0.3562616 0.48634504 0.924111163  0.3482728 1.143954e-01 0.3019796 0.20260170 6.707116e-01 4.035489e-01
                Marr
Education  0.3562616
South      0.4863450
Sex        0.9241112
Experience 0.3482728
Union      0.1143954
Age        0.3019796
Race       0.2026017
Occupation 0.6707116
Sector     0.4035489
Marr       0.0000000

$statistic
              Education      South         Sex   Experience      Union          Age       Race Occupation     Sector
Education     0.0000000 -0.7271618  1.18069629 -327.2105031 -0.1712102  308.6803174  0.3944914  0.6741338 -0.4866246
South        -0.7271618  0.0000000 -0.69053623   -0.5109090 -2.2436907    0.4928456 -2.5613138  0.1929920 -0.4927010
Sex           1.1806963 -0.6905362  0.00000000    1.2603880 -2.7689685   -1.2309760  0.4583091 -3.3015287 -2.5834540
Experience -327.2105031 -0.5109090  1.26038801    0.0000000 -0.2345184 1451.9092015  0.2492636  0.9636171 -0.3036001
Union        -0.1712102 -2.2436907 -2.76896848   -0.2345184  0.0000000    0.2801822 -2.4799336  4.9902208 -0.3097781
Age         308.6803174  0.4928456 -1.23097601 1451.9092015  0.2801822    0.0000000 -0.2473135 -1.0114033  0.3334607
Race          0.3944914 -2.5613138  0.45830912    0.2492636 -2.4799336   -0.2473135  0.0000000  1.3193223  0.1467827
Occupation    0.6741338  0.1929920 -3.30152873    0.9636171  4.9902208   -1.0114033  1.3193223  0.0000000  7.5906763
Sector       -0.4866246 -0.4927010 -2.58345399   -0.3036001 -0.3097781    0.3334607  0.1467827  7.5906763  0.0000000
Marr         -0.9233273  0.6966272  0.09530228   -0.9387867  1.5813765    1.0332156  1.2757711 -0.4254112  0.8359769
                  Marr
Education  -0.92332727
South       0.69662719
Sex         0.09530228
Experience -0.93878671
Union       1.58137652
Age         1.03321563
Race        1.27577106
Occupation -0.42541117
Sector      0.83597695
Marr        0.00000000

$n
[1] 534

$gp
[1] 8

$method
[1] "pearson"

As we saw earlier in the correlation plot, the partial correlations between age and experience, age and education, and education and experience are statistically significant, and several other pairs are as well. The Farrar-Glauber test thus helps us identify the variables that are causing multicollinearity in the model.

There are multiple ways to overcome the problem of multicollinearity. You may use ridge regression, principal component regression or partial least squares regression. Alternatively, you can drop the variables that are causing the multicollinearity, for example those with a VIF above 10. In our case, since age and experience are highly correlated, you may drop one of these variables and build the model again (a sketch of such a refit follows below). Try rebuilding the model after removing Experience or Age and check whether you get better results. Share your experiences in the comments section below.
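For example, here is a minimal sketch of refitting the model without Age, assuming data1 and the mctest package are loaded as above (fit_model2 is just a placeholder name):

# refit the wage model without Age (highly collinear with Experience);
# predictors are written out explicitly instead of using the '.' shorthand
fit_model2 <- lm(log(Wage) ~ Education + South + Sex + Experience + Union +
                   Race + Occupation + Sector + Marr, data = data1)
summary(fit_model2)

# re-run the individual diagnostics without the Age column
# (in data1, column 6 is Wage and column 7 is Age, so both are left out here)
imcdiag(data1[, c(1:5, 8:11)], data1$Wage)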

Author Bio:

This article was contributed by Perceptive Analytics. Jyothirmayee Thondamallu, Chaitanya Sagar and Saneesh Veetil contributed to this article.

Perceptive Analytics is a marketing analytics company that also provides Tableau consulting, data analytics, business intelligence and reporting services to e-commerce, retail, healthcare and pharmaceutical industries. Our client roster includes Fortune 500 and NYSE-listed companies in the USA and India.

Longitudinal heat plots

During our research on the effect of prednisone consumption during pregnancy on health outcomes of the baby (Palmsten K, Rolland M, Hebert MF, et al., Patterns of prednisone use during pregnancy in women with rheumatoid arthritis: Daily and cumulative dose. Pharmacoepidemiol Drug Saf. 2018 Apr;27(4):430-438. https://www.ncbi.nlm.nih.gov/pubmed/29488292), we developed a custom plot to visualize, for each patient, her daily and cumulative consumption of prednisone during pregnancy. Since the publication these plots have raised some interest, so here is the code used to produce them.

The data needs to be in the following format: one line per patient, one column for the patient ID (named id), and then one column per unit of time (here, days) reporting the measure for that day.

To illustrate the type of data we dealt with, I first generate a random dataset containing 25 patients followed for n days (n different for each patient, randomly chosen between 50 and 200), with a daily consumption value randomly selected between 0 and a maximum dose (itself randomly determined for each patient, between 10 and 50). Then we compute the cumulative consumption, i.e. the sum over all previous days.
# str_c() below comes from the stringr package
library(stringr)

# initial parameters for simulating data
n_indiv <- 25
min_days <- 50
max_days <- 200
min_dose <- 10
max_dose <- 50

# list of ids
id_list <- str_c("i", 1:n_indiv)

# initializing empty table
my_data <- as.data.frame(matrix(NA, n_indiv, (max_days+1)))
colnames(my_data) <- c("id", str_c("d", 1:max_days))
my_data$id <- id_list

# daily simulated data
set.seed(113)
for (i in 1:nrow(my_data)) {
  # length of follow-up for this patient
  n_days <- round(runif(1, min_days, max_days))
  # this patient's maximum dose
  dose <- round(runif(1, min_dose, max_dose))
  # random daily values between 0 and this patient's maximum dose, NA after follow-up ends
  my_data[i, 2:ncol(my_data)] <- c(runif(n_days, 0, dose), rep(NA, (max_days - n_days)))
}

# cumulative simulated data
my_cum_data <- my_data
for (i in 3:ncol(my_cum_data)) {
  my_cum_data[[i]] <- my_cum_data[[i]] + my_cum_data[[i - 1]]
}
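The explicit loop above can also be replaced by a row-wise cumsum(); this is just an equivalent alternative (my_cum_data2 is a hypothetical name, not used below):

# row-wise cumulative sums over the daily columns; NAs propagate as in the loop
my_cum_data2 <- my_data
my_cum_data2[, -1] <- t(apply(my_data[, -1], 1, cumsum))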
Our plots use the legend.col function found here: https://aurelienmadouasse.wordpress.com/2012/01/13/legend-for-a-continuous-color-scale-in-r/ (a minimal stand-in is also sketched after the function below). Here is the longitudinal heat plot function, with the color legend drawn at the top:
long_heat_plot <- function(my_data, cutoff, xmax) {
  # my_data: longitudinal data with one line per individual, 1st column with id,
  #          then one column per unit of time
  # cutoff:  cutoff value for the color scale; all values above cutoff get the same color
  # xmax:    x axis maximum value
  n_lines <- nrow(my_data)
  # color scale: one color per unit value up to the cutoff
  COLS <- rev(heat.colors(cutoff))
  # plotting area with room on the right for the legend and on top for its title
  par(oma = c(1, 1, 4, 1), mar = c(2, 2, 2, 4), xpd = TRUE)
  # empty plot to draw into
  plot(1, 1, xlim = c(0, xmax), ylim = c(1, n_lines), pch = '.',
       ylab = 'Individual', xlab = 'Time unit', yaxt = 'n', cex.axis = 0.8)
  # plot one line per individual, one at a time
  for (i in 1:n_lines) {
    # get id
    id1 <- my_data$id[i]
    # get this individual's trajectory
    id_traj <- my_data[my_data$id == id1, 2:ncol(my_data)]
    # last observed day
    END <- max(which(!is.na(id_traj)))
    # dotted guide line across the full time axis
    lines(1:xmax, rep(i, xmax), lty = 3)
    for (j in 1:(ncol(my_data) - 1)) {
      # cap the value at the cutoff so it maps into the color scale
      val <- min(id_traj[[j]], cutoff)
      # skip missing days (after the end of follow-up)
      if (is.na(val)) next
      # plot the day, mapping the (possibly fractional) value to a color index >= 1
      points(j, i, col = COLS[max(1, ceiling(val))], pch = 20, cex = 1)
    }
    # mark the end of follow-up
    points((END + 1), i, pch = "|", cex = 0.9)
  }
  # add the continuous color legend (legend.col from the link above, or the sketch below)
  legend.col(col = COLS, lev = 1:cutoff)
  mtext(side = 3, line = 3, 'unit of measurement')
}
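If you prefer not to copy the function from the linked post, here is a minimal stand-in for legend.col, assuming the same interface (a vector of colors plus the vector of levels they represent). It simply draws a vertical color bar in the right margin and is only a sketch, not the original author's code:

# Minimal legend.col stand-in: vertical color bar in the right margin.
# col: vector of colors (low to high); lev: the values those colors represent.
legend.col <- function(col, lev) {
  n <- length(col)
  usr <- par("usr")                              # plot region limits: x1, x2, y1, y2
  bw <- (usr[2] - usr[1]) / 50                   # width of the color bar
  bx <- usr[2] + bw                              # left edge of the bar, just outside the plot
  by <- seq(usr[3], usr[4], length.out = n + 1)  # one horizontal slice per color
  for (i in 1:n) {
    rect(bx, by[i], bx + bw, by[i + 1], col = col[i], border = NA)
  }
  # label the bottom, middle and top of the scale
  labs <- round(c(min(lev), stats::median(lev), max(lev)))
  text(bx + 2.5 * bw, c(usr[3], mean(usr[3:4]), usr[4]), labels = labs, cex = 0.7)
}

This relies on xpd = TRUE (set inside long_heat_plot) so that drawing outside the plot region is not clipped.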
Then we generate the corresponding plots:
 long_heat_plot(my_data, 50, 200) 

 long_heat_plot(my_cum_data, 5000, 200) 
And here is what we did in our study:

Stencila – an office suite for reproducible research

Stencila launches the first version of its (open source) word processor and spreadsheet editor designed for researchers.

By Michael Aufreiter, Substance, and Aleksandra Pawlik and Nokome Bentley, Stencila

Stencila is an open source office suite designed for researchers. It allows the authoring of interactive, data-driven publications in visual interfaces, similar to those in conventional office suites, but is built from the ground up for reproducibility.

Stencila aims to make it easier for researchers with differing levels of computational skills to collaborate on the same research article. Researchers used to tools like Microsoft Word and Excel will find Stencila’s interfaces intuitive and familiar. And those who use tools such as Jupyter Notebook or R Markdown are still able to embed code for data analysis within their research articles. Once published, Stencila documents are self-contained, interactive and reusable, containing all the text, media, code and data needed to fully support the narrative of research discovery.


Source: https://stenci.la

The Stencila project aims to be part of the wider vision to enable the next generation of research article – all the way from authoring through to publication as a reproducible, self-contained webpage. A key limitation of the current research publishing process is that conventional document formats (e.g. Word, PDF and LaTeX) do not support the inclusion of reproducible research elements, nor do they produce content in the structured format used for science publishing and dissemination (XML). Stencila aims to remove the need for manual conversion of content from source documents to XML and web (HTML) publishing formats, whilst enabling the inclusion of source data and computational methods within the manuscript. We hope that establishing a digital-first, reproducible archive format for publications will facilitate research communication that is faster and more open, and which lowers the barrier for collaboration and reuse. The development of Stencila is driven by community needs and in coordination with the goals of the Reproducible Document Stack, an initiative started by eLife, Substance and Stencila.

A word processor for creating journal-ready scientific manuscripts

Stencila’s article editor builds on Texture, an open source editor built for visually editing JATS XML documents (a standard widely used by scientific journals). Supporting all elements of a standardised research article, the editor features semantic content-oriented editing that allows the user to focus on the research without worrying about layout information, which is normally stripped during the publishing process. While Texture implements all static elements (abstract, figures, references, citations and so on), Stencila extends Texture with code cells which enable computed, data-driven figures.

Spreadsheets for source data and analysis

In Stencila, datasets are an integral part of the publication. They live as individual spreadsheet documents holding structured data. This data can then be referenced from the research article to drive analysis and plots. As within Excel, cells can contain formulas and function calls to run computations directly in a spreadsheet. But not only can users enter simple expressions, they can also add and execute code in a variety of supported programming languages (at the moment R, Python, SQL and Javascript).

A walk-through of some of the features of Stencila, using this Stencila Article. Source: YouTube; video CC-BY Stencila.

Code evaluation in the browser and beyond

Stencila’s user interfaces build on modern web technology and run entirely in the browser – making them available on all major operating systems. The predefined functions available in Stencila use Javascript for execution so they can be run directly in the editor. For example, the plotly() function generates powerful, interactive visualizations solely using Plotly’s Javascript library.



Stencila can also connect to R, Python and SQL sessions, allowing more advanced data analysis and visualization capabilities. Stencila’s execution engine keeps track of the dependencies between code cells, enabling a reactive, spreadsheet-like programming experience both in Stencila Articles and Sheets.

An example of using R within a Stencila Sheet. Source: YouTube; video CC-BY Stencila.

Reproducible Document Archive (Dar)

Stencila stores projects in an open file archive format called Dar. A Dar is essentially a folder with a number of files encompassing the manuscript itself (usually one XML per document) and all associated media.



The Dar format is open source: inspect it and provide feedback at https://github.com/substance/dar

Dar uses existing standards when possible. For instance, articles are represented as JATS XML, the standard preferred by a number of major publishers. The Dar format is a separate effort from Stencila, and aims to establish a strict standard for representing self-contained reproducible publications, which can be submitted directly to publishers. Any other tool should be able to easily read and write such archives, either by supporting it directly or by implementing converters.

Interoperability with existing tools and workflows

Stencila is developed not to replace existing tools, but to complement them. Interoperability is at the heart of the project, with the goal of supporting seamless collaboration between users of Jupyter Notebooks, R Markdown and spreadsheet applications. We are working closely with the communities of existing open source tools to improve interoperability. For instance, we are working with the Jupyter team on tools to turn notebooks into journal submissions. We are also evaluating whether the Stencila editor could be used as another interface to edit Jupyter Notebooks or R Markdown files: we hope this could help researchers who use existing tools to collaborate with peers who are used to other office tools, such as Word and Excel, and thus encourage wider adoption of reproducible computational research practices.

State of development

Over the past two years, we’ve built Stencila from the ground up as a set of modular components that support community-driven open standards for publishing and computation. Stencila Desktop is our prototype of a ‘researcher’s office suite’, built by combining these components into an integrated application. During this beta phase of the project, we are working to address bugs and add missing features, and welcome your feedback and suggestions (see below).

One of our next priorities will be to develop a toolset for generating a web page from a reproducible article in the Dar format. Using progressive enhancement, the reader should be able to reproduce a scientific article right from the journal’s website in various forms, ranging from a traditional static representation of the manuscript and its figures to a fully interactive, executable publication.

We will continue working on Stencila’s various software components, such as the converter module and execution contexts for R and Python, towards improved integration and interoperability with other tools in the open science toolbox (e.g. Jupyter, RStudio and Binder).

Get involved

We’d love to get your input to help shape Stencila. Download Stencila Desktop and take it for a test drive. You could also try porting an existing manuscript over to Stencila using the Stencila command line tool. Give us your feedback and contribute ideas on our community forum or in our chat channel, or drop us an email at [email protected] or [email protected].

Acknowledgments

Development of Stencila has been generously supported by the Alfred P. Sloan Foundation and eLife.

This post was originally published on eLife Labs.

eLife welcomes comments, questions and feedback. Please annotate publicly on the article or contact us at innovation [at] elifesciences [dot] org.

Lyric Analysis with NLP and Machine Learning using R: Part One – Text Mining

June 22
By Debbie Liske

This is Part One of a three part tutorial series originally published on the DataCamp online learning platform in which you will use R to perform a variety of analytic tasks on a case study of musical lyrics by the legendary artist, Prince. The three tutorials cover the following:


Musical lyrics may represent an artist’s perspective, but popular songs reveal what society wants to hear. Lyric analysis is no easy task: because lyrics are often structured so differently from prose, they require caution with assumptions and a carefully discriminating choice of analytic techniques. Musical lyrics permeate our lives and influence our thoughts with subtle ubiquity. The concept of Predictive Lyrics is beginning to buzz and is increasingly prevalent as a subject of research papers and graduate theses. This case study will just touch on a few pieces of this emerging subject.



Prince: The Artist

To celebrate the inspiring and diverse body of work left behind by Prince, you will explore the sometimes obvious, but often hidden, messages in his lyrics. However, you don’t have to like Prince’s music to appreciate the influence he had on the development of many genres globally. Rolling Stone magazine listed Prince as the 18th best songwriter of all time, just behind the likes of Bob Dylan, John Lennon, Paul Simon, Joni Mitchell and Stevie Wonder. Lyric analysis is slowly finding its way into data science communities as the possibility of predicting “Hit Songs” approaches reality.

Prince was a man bursting with music – a wildly prolific songwriter, a virtuoso on guitars, keyboards and drums and a master architect of funk, rock, R&B and pop, even as his music defied genres. – Jon Pareles (NY Times)
In this tutorial, Part One of the series, you’ll utilize text mining techniques on a set of lyrics using the tidy text framework. Tidy datasets have a specific structure in which each variable is a column, each observation is a row, and each type of observational unit is a table. After cleaning and conditioning the dataset, you will create descriptive statistics and exploratory visualizations while looking at different aspects of Prince’s lyrics.
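To make the tidy structure concrete, here is a minimal sketch (not taken from the tutorial itself) that tokenizes a couple of placeholder lyric lines into the one-word-per-row format used throughout the series, using the tidytext and dplyr packages:

# one row per word: the tidy text format used in the tutorial series
library(dplyr)
library(tidytext)

lyrics <- tibble(line = 1:2,
                 text = c("this is a placeholder lyric line",
                          "another placeholder line about music"))

lyrics %>%
  unnest_tokens(word, text) %>%           # split each line into one word per row
  anti_join(stop_words, by = "word") %>%  # drop common stop words
  count(word, sort = TRUE)                # word frequencies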

Check out the article here!




(reprint by permission of DataCamp online learning platform)