Ruths.ai recently published a free template on the Ruths.ai Exchange that reads data from and writes data to MS Access. Under the covers, you’ll find two property controls and two data functions built on the RODBC package. Templates are good, but being able to replicate the work is better: users want to recreate that functionality in their own files. This post explains the code and how everything fits together so you can rebuild it in your own DXP files. Before reading any further, use this link to download a copy of the template and familiarize yourself with how it works.
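To show the shape of what the data functions do, here is a minimal sketch of the core RODBC calls. The file path and table names are made-up examples, not the template's actual inputs, and the Access-specific connection helpers require a Windows ODBC driver:

```r
# Sketch only: paths and table names are illustrative, not from the template
library(RODBC)

# Open a connection to an Access database (.accdb);
# use odbcConnectAccess() instead for older .mdb files
conn <- odbcConnectAccess2007("C:/Data/wells.accdb")

# Read a table into a data frame
wells <- sqlQuery(conn, "SELECT * FROM WellHeader")

# Write a data frame back out as a new Access table
sqlSave(conn, wells, tablename = "WellHeader_Copy", rownames = FALSE)

odbcClose(conn)
```

In the template, calls like these sit inside data functions, with the property controls feeding in the file path and table name.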
- Do you know how to check which version of TERR is installed on your Spotfire installation?
- Are you unsure which versions of R or RStudio to download so they line up with TERR?
- Did you have it all figured out, but now you’ve upgraded and lost all the answers?
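As a quick starting point, the standard R version queries also work from the TERR console (Tools > TERR Tools > Launch TERR Console), and TERR reports its own engine version through them:

```r
# Standard R calls for checking the engine version; TERR answers
# with its own version information when run in the TERR console
R.version.string   # one-line version string
sessionInfo()      # engine version plus loaded package versions
```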
Linear regression models are the simplest linear models in the statistical literature. While the assumptions of linearity and normality seem to restrict the model’s practical use, it is surprisingly successful at capturing basic relationships and predicting outcomes in many scenarios. The idea behind the model is to fit a line that mimics the relationship between a target variable and a combination of predictors (also called independent variables). Multiple regression refers to the case of one target variable and multiple predictors. These models are popular not only for prediction but also as model selection tools, helping to find the most important predictors and eliminate redundant variables from the analysis.
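A toy multiple regression in base R makes the idea concrete. The data here is simulated, not from the template; the point is that `lm()` recovers the relationship between one target and two predictors:

```r
# Simulate 100 rows with a known linear relationship plus small noise
set.seed(42)
df <- data.frame(x1 = runif(100), x2 = runif(100))
df$y <- 3 + 2 * df$x1 - 1.5 * df$x2 + rnorm(100, sd = 0.1)

# Fit a multiple regression: one target (y), two predictors (x1, x2)
fit <- lm(y ~ x1 + x2, data = df)

coef(fit)   # estimates land near the true values 3, 2, -1.5
```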
Two weeks ago, I published a Linear and Logistic Regression template on Exchange.ai that can be found here. When I built the template, my process was as follows:
- Add test and training data sets
- Build model on training data set
- Insert predicted column based on model in test data set
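The three steps above can be sketched in base R. This is my own illustration with invented column names, not the template's actual code; it uses `glm()` for the logistic case and mirrors the two inserted columns discussed below:

```r
# Simulated data standing in for the template's inputs; column names are made up
set.seed(1)
n <- 200
all_data <- data.frame(pressure = runif(n), rate = runif(n))
all_data$failed <- rbinom(n, 1, plogis(-2 + 4 * all_data$pressure))

# 1. Split into training and test data sets
train <- all_data[1:150, ]
test  <- all_data[151:n, ]

# 2. Build the model on the training data set
model <- glm(failed ~ pressure + rate, data = train, family = binomial)

# 3. Insert predicted columns, based on the model, into the test data set
test$ProbPrediction  <- predict(model, newdata = test, type = "response")
test$ClassPrediction <- ifelse(test$ProbPrediction > 0.5, 1, 0)
</imports>
```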
Following this process for the logistic regression model (a classification model) inserts two columns of data: ProbPrediction and ClassPrediction. ClassPrediction gives the predicted class, and ProbPrediction gives the probability behind it. I noticed that some records contained a value for ClassPrediction but not ProbPrediction, which seemed odd. This happened in records where one or more of my predictor columns were null; in that case, neither column should have been populated.
It turns out that this is a bug that can be fixed with the steps below.
- Go to the Tools menu and select TERR Tools
- Click the Launch TERR Console button
- Type getOption("repos")
- Type install.packages("SpotfireStats")
- Type q() to exit the program
- Close Spotfire and relaunch it
See below for a screen shot of the console.
After I relaunched Spotfire and reran the model, I saw consistent population of the ProbPrediction and ClassPrediction columns. If you have any questions, feel free to contact me at email@example.com.
Anna Smith is an Engineering Technician at Continental Resources up in Oklahoma. Today she will be sharing her journey creating average lines using TERR.
I had often been asked for average lines on line graphs: seeing the average of a data set compared to each individual line in that data set. I kept trying to figure it out with calculated columns and formatting tricks, but eventually came to the conclusion that Spotfire just doesn’t give us an easy or clean way to do this. So the idea of using TERR came into play. In my example, we wanted to compare production over time to the average over time for a certain well set, and we wanted this to be dynamic: if we change the selected well set, then our calculated average line needs to change too. Our TERR code, then, needed to subset each day, calculate an average for that day, and spit out a new value. An important note: the function given at the end of the article requires days or months as input, which means if you have a data set with just dates and production numbers, you need to normalize all those dates back to time zero.
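The subset-and-average step can be done in one line with base R's `aggregate()`. The tiny data frame below is my own illustration (column names invented), with the dates already normalized to day numbers:

```r
# Toy production data, already normalized to day numbers; names are illustrative
production <- data.frame(
  Day  = rep(1:3, each = 2),
  Well = rep(c("A", "B"), times = 3),
  Oil  = c(100, 120, 90, 110, 80, 100)
)

# Subset each day and return its average production
avg_line <- aggregate(Oil ~ Day, data = production, FUN = mean)
avg_line
#   Day Oil
# 1   1 110
# 2   2 100
# 3   3  90
```

Inside a data function, the averaged table comes back as a new column or table that the line chart can plot alongside the individual wells, so it recalculates whenever the selected well set changes.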
- Have you wanted to limit calculated columns to a smaller data set but been unable to because calculated columns always take the entire data set into account?
- Have you ever wanted IF statements to be more dynamic?
- Would you like to build filtering or data reduction into a workflow but still maintain the original data connections?
PCA (Principal Component Analysis) is a core data science technique: it not only reveals collinearity among the independent variables in a dataset, but can also provide a reduced-dimensional model by rotating your high-dimensional data into lower dimensions. Here’s some quick info on getting PCA in Spotfire. If you want more background on PCA, check out Wikipedia or the great interactive example on Setosa.
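Since TERR speaks R, a quick way to see the rotation at work is base R's `prcomp()` on a built-in dataset; this is a generic sketch, not the Spotfire tool's own code:

```r
# PCA on the built-in 4-variable USArrests data; scaling puts
# variables with different units on a comparable footing
pca <- prcomp(USArrests, scale. = TRUE)

# How much variance each rotated component explains
summary(pca)$importance["Proportion of Variance", ]

# The rotated coordinates: keeping the first two columns gives a
# 2-D representation of the original 4-D data
scores <- pca$x[, 1:2]
```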