In this blog, I’m going to show you how to create well sticks as a layer in your map chart visualization. The only table you will need is your well header table, which has both surface and bottom hole locations.
- Are you constantly updating pivot transformations when column names change?
- Do you use an interactive workflow that involves selecting columns from a property control, which then breaks your pivot transformation?
- Would you like to make your workflow more dynamic?
- Do you have WKB (well-known binary) data that you want to bring into Spotfire?
- Are you struggling with SQL geometry columns in Spotfire?
- Do you want to understand more about how Spotfire processes or handles spatial data?
- Do you want to import spatial data into Spotfire but don’t know how to configure the information link?
Howdy! I’m going to be looking at US air traffic delays during the holiday season, more specifically air travel trends in November and December. This is more of a for-fun analysis as well as my first real dive into the world of Spotfire, so if there’s anything wonky or weird about my analysis, just bear with me! I’ve posted this template on Exchange.ai, so feel free to download it here and follow along.
The data I’m using is from a few different sources which I’ve cited at the bottom of this post.
The data set that kicked off this idea was found on Kaggle, a site for data sets and data science competitions. The original set contained air traffic delays for the entire year of 2008, which then led me to an even larger data set with records going all the way back to 1987. My analysis only looks at November and December of 2006 and 2007, so the view is a little narrow. After concatenating the entire data set, the file was almost 11GB, which did produce some cool visualizations but was too large to make a template out of.
This data set has some fun properties to it. It allows you to get an idea of the different types of delays that occur, such as weather or security. In Spotfire, I joined another data set I found that contains the latitude and longitude of each airport. This allowed me to map every airport in the United States.
In the above visual, size represents the number of people traveling to each city, while color represents the average departure delay. Right off the bat, you can see the highest-traffic airports such as Atlanta, Chicago, and DFW. These guys have an absurd amount of traffic going through them. DFW, for example, has about 46k flights coming in, and Atlanta has a whopping 65k unique flights. For fun, I did some napkin math to get an idea of how many people are flying into just Atlanta over these two months. I used a rough average of about 200 seats per plane, and about 80-85% of seats are usually filled. At 83% full, that puts us at something like 166 people per flight, which means Atlanta handled somewhere around 10.8M people in these two months across both years combined. That’s a lot of people for a single airport, and they seem to do a pretty good job of handling it! The average overall delay is about 23 minutes. Chicago (ORD), on the other hand, is a little worse off with an average overall delay of about 40 minutes. The overall delay was calculated by simply adding the arrival delay and departure delay for each flight.
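If you want to check the napkin math yourself, here it is as a few lines of R. The flight count, seat count, and load factor are the rough figures quoted above, not exact values:

# Rough passenger estimate for Atlanta, Nov-Dec 2006 + 2007 combined
flights <- 65000                     # unique flights into ATL (approximate)
seats_per_plane <- 200               # rough average seats per plane
load_factor <- 0.83                  # ~83% of seats filled
passengers_per_flight <- seats_per_plane * load_factor    # ~166
total_passengers <- flights * passengers_per_flight       # ~10.8 million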
The above line graph shows the delay per day. It’s pretty obvious when the holidays occur and end, which was kind of a neat result of visualizing this. To me, the most interesting thing about this graph is the Half Dome peak right before Christmas Eve.
The similarities are striking! I also enjoy how the middle of December sees a giant spike in delays and then a big dip back to normality before the climb to the top.
Interested in digging more into this data? Download the template and play around for some fun visualizations and neat stats. One thing to note is that this template only contains a small subset of the actual data set I used. If you want the full set, you’ll have to go to this place and download the files. They’re compressed as bz2 archives, so you’ll need a program such as 7-Zip or WinRAR to open them. Unless you’re on Linux, in which case the ole bzip2 -dk in your data’s directory from the shell should be enough. For the longitudes and latitudes, I used this site, which provides a CSV for all airports, not just those in the USA. One thing I found limiting was that the airport data doesn’t contain an airport’s state for those in the United States. Fortunately, I found a site that has this information in table form, so you would just need to scrape the site for the relevant information and adjust the data table properties in Spotfire.
Hopefully this post was interesting to you or at the very least an insight into how busy Atlanta is. Thanks for reading and have some happy holidays!
Sources:
- http://stat-computing.org/dataexpo/2009/the-data.html
- https://openflights.org/data.html
- https://www.quora.com/What-is-the-average-amount-of-passengers-on-a-plane
- https://www.quora.com/How-many-empty-seats-are-there-on-the-average-US-domestic-flight
- https://www.kaggle.com/giovamata/airlinedelaycauses
- Are you aware that you can write expressions on the Y-Axis but have no idea how to do it?
- Is the syntax for writing expressions on the Y-Axis confusing?
- Would you like to expand your Spotfire calculation skills?
Data science in Oil and Gas is taking center stage as operators work in the new “lower for longer” price environment. Want to see what happens when you solve data science questions with the hottest new database and the powerful analytics of Spotfire? Read on to learn about our latest analytics module, the DCA Wrangler. If you want to see it in action, scroll down to watch the video.
Layering Data Science on General Purpose Data & Analytics
Ruths.ai is a startup focused on energy analytics and technical data science. We are both TIBCO and MongoDB partners, heavily leveraging these two platforms to solve real-world problems revolving around the application of data science at scale and within the enterprise environment. I started our plucky outfit a little under four years ago. We’ve done a lot of neat things with Spotfire, including analyzing seismic and well log data. Here, we’ll look at competitor/production data.
MongoDB provides a powerful and scalable general-purpose database system. TIBCO provides tested and forward-thinking general-purpose analytics platforms for both streaming data and data at rest. They also provide great infrastructure products, which aren’t the focus of this blog.
Ruths.ai provides the domain knowledge, infusing our proprietary algorithms and data structures for solving common analytics problems into products that leverage the TIBCO and MongoDB platforms.
We believe that these two platforms can be combined to solve innumerable problems in the technical industries represented by our readers. TIBCO provides the analytics and visualization while MongoDB provides the database. This is a powerful marriage for problems involving analytics, single view, or IoT.
In this blog, I want to dig into a specific and fundamental problem within oil and gas and how we leveraged TIBCO Spotfire and MongoDB to solve it — namely Autocasting.
What is Autocasting?
Oil reserves denote the amount of crude oil that can be technically recovered at a cost that is financially feasible at the present price of oil. Crude oil resides deep underground and must be extracted using wells and completion techniques. Horizontal wells can stretch two miles within a vertical window the height of most office floors.
For those with E&P experience, I’m going to elide some important details, like using “oil” for “hydrocarbons” and other technical nomenclature.
Because the geology of the subsurface cannot be examined directly, indirect techniques must be used to estimate the size and recoverability of the resource. One important indirect technique is called decline curve analysis (DCA), which is a mathematical model that we fit to historical production data to forecast reserves. DCA is so prevalent in oil and gas that we use it for auditing, booking, competitor analysis, workover screening, company growth and many other important tasks. With the rise of analytics, it has therefore become a central piece in any multi-variate workflow looking to find the key drivers for well and resource performance.
At the heart of any resource assessment model is a robust “autocasting” method. Autocasting is the automatic application of DCA to large ensembles of wells, rather than one at a time.
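The post doesn’t name a specific model, but the classic workhorse in DCA is the Arps decline curve, where rate declines as q(t) = qi / (1 + b * Di * t)^(1/b). As a hedged sketch of what autocasting boils down to, here is a few lines of R that fit that curve to every well in a production table; the table and column names are illustrative and this is not the DCA Wrangler’s actual code:

# Fit an Arps hyperbolic decline to one well's production history
fit_one_well <- function(w) {
  nls(Rate ~ qi / (1 + b * Di * Time)^(1 / b),
      data  = w,
      start = list(qi = max(w$Rate), Di = 0.1, b = 1.0))
}

# "Autocast" = apply the same fit across the whole ensemble of wells
# (a real implementation would add convergence safeguards and QC flags)
fits <- lapply(split(production, production$WellID), fit_one_well)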
But there’s a problem. Incumbent technologies make the retrieval of decline curves and their parameters very difficult. Decline curve models are complex mathematical forecasts with many components and variations, and retrieving them from a SQL database often requires parsing text expressions and interacting with many tables.
Further, with the rise of unconventionals, the fundamental workflow of resource assessment through decline curves is being challenged. Spotfire has become a popular tool for revamping and making next generation decline curve analysis solutions.
Autocasting in Action
What I am going to demonstrate is a new autocast workflow that would not be possible without the combined performance and capability of MongoDB and Spotfire. I’ll be demonstrating using our DCA Wrangler product – which is one of over 250 analytics workflows that we provide through a comprehensive subscription.
It’s important to note that software already exists to decline wells and store the results in a database. People have even declined wells in Spotfire before. What I hope you see in our new product is the step change in performance, ease of use, and enablement when you use MongoDB as the backend.
First, we have a home run solution for decline curves that requires a MongoDB backend. In the near future, more vendor companies will be leveraging Mongo as their backend database.
Second, I hope you see the value in MongoDB for storing and retrieving technical data and analytic results, especially within powerful tools like Spotfire. Plus, how easy it is to set up and use.
And lastly, I hope you get excited about the other problems that can be solved by marrying TIBCO with MongoDB – imagine using StreamBase as your IoT processor and MongoDB as your deposition environment. Or even storing models and sensor data in Mongo and using Spotfire to tweak model parameters and co-visualize the data.
If you’re interested in learning more about our subscription, get registered today.
Let’s make data great again.
This is the sixth and final part of a series on Spotfire Properties. In previous posts, I discussed Document Properties, Data Table Properties, Column Properties, Data Connection Properties, and Data Function Properties. This week, we’ll take a look at Visualization Properties.
To begin, each and every Spotfire visualization has its own visualization properties dialog controlling what is possible. Basically, if it’s not in visualization properties, it can’t be done. As I’m sure you have noticed, the dialog changes with each visualization based on the content and functionality of the vis. In the course of this post, I will explain which properties are common across all visualizations and provide a few “pro” tips.
Common Visualization Properties
When writing this blog post, I decided to create a matrix showing which submenus appear in each visualization properties dialog. This seemed like a good idea when I started. Halfway through the assembly, I started to question my motives and the utility of such a matrix. In the end, the result surprised me. You can download the DXP with this matrix, and I have posted a screenshot below.
As it turns out, only three menus are common across all visualization properties — General, Appearance, and Fonts. After these, Data, Legend, and Show/Hide Items are the most common.
Next, I promised a few pro tips.
- First, if you ever wonder what’s possible in a given visualization, consult this matrix. For example, if you want to put Labels on a visualization but don’t see a way to do that, check to see if there is a Labels menu. If you don’t see a Labels menu, you can’t put Labels on the visualization.
- Second, always check the Appearance menu for your visualizations, especially if they are new to you or you have gone through an upgrade. The Appearance menu usually contains little gems for beautifying visualizations. I have seen several new options appear there in the last few upgrades.
- Third, don’t apply formatting in the Formatting menu. Instead, format in Column Properties or Tools –> Options. Formatting via this menu is generally the most inefficient way to apply formatting, unless you have one-off needs.
- Fourth, if you aren’t familiar with these menus, I highly recommend checking them out. They are very useful. I have a blog post on using the Line Connection, and I’ll follow up with posts on Error Bars and Show/Hide soon.
- Line Connection — https://datashoptalk.com/8072-2/
- Error Bars — Error bars are used to indicate the estimated error in a measurement or the uncertainty in a value. Bar charts and line charts can display vertical errors, as indicated in the matrix.
- Show/Hide — Allows you to restrict content. For example, if you have a bar chart with wells on the X-Axis and production on the Y-Axis, you can ask Spotfire to show only the top 10 producers. Similarly, you could ask Spotfire to hide the bottom 10 producers.
- Lastly, the same is true for fonts. Don’t use the Fonts menu. Go through Themes.
In conclusion, I want to point out that a few visualizations also contain Settings menus. Settings menus are used when the vis has individual, configurable components. For example, the map chart properties contain a Settings menu for each layer, and graphical tables contain Settings menus for each element in the graphical table. A summary of such visualizations appears below.
- Maps — Layer Settings
- KPI Tiles — KPI Settings
- Graphical Tables — Icon/Bullet Graph/Sparkline/Calculated Value Settings
In order to wrap up the series, I want to revisit the original questions I posed in the beginning.
What do all of these properties menus do?
Where can I go to change <insert preference here>? I keep setting <insert preference here> over and over again. There must be a better or faster way.
The six-part series has addressed the first question. The second question can be answered with this post on user preferences and administration manager preferences. I hope you found the series useful.
Hey everyone! Here is a quick and dirty HTML code snippet for those of you who are working on your HTML skills. Earlier this week, I was building a template with a Text Area containing several buttons. I wanted to center the buttons in the Text Area, which is really easy to do with HTML. You can also do this with CSS and <style>, but I’m just going to show you the two HTML versions that I worked with.
HTML Snippet No. 1
In this case, I have wrapped the buttons with <p> and </p> and simply added the align attribute with the “center” value. Here’s what it looks like. Note, the buttons sit on their own lines because they are individually wrapped with <p> and </p>. <p> is a block-level element that always starts on a new line and takes up the full width available (stretching out to the left and right as far as it can in the Text Area). Compare this to the second code snippet below.
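The original snippet isn’t reproduced here, so here is a minimal reconstruction of the idea; the <button> elements are placeholders standing in for the Spotfire controls I was actually centering:

<!-- Each button wrapped in its own centered paragraph, so each sits on its own line -->
<p align="center"><button>Button 1</button></p>
<p align="center"><button>Button 2</button></p>
<p align="center"><button>Button 3</button></p>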
HTML Snippet No. 2
In this case, I have placed all the buttons inside a <div> container and added the align attribute with the “center” value. Now, <div> is also a block-level element, but this time the buttons are not on their own lines because they are all inside one container. They would be on their own lines if I had placed them in individual <div> containers, but I was just seeing what I could do with less code. Personally, I prefer the first look.
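Again, a minimal reconstruction with placeholder buttons:

<!-- All buttons inside a single centered <div>, so they share one line -->
<div align="center">
  <button>Button 1</button>
  <button>Button 2</button>
  <button>Button 3</button>
</div>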
I highly suggest reading up on the different block and inline elements. Understanding what space will be taken up with a given element will help you better design Text Areas. Please feel free to comment and provide suggestions for better code. I am still very much working on HTML and CSS. Thank you!
This post explains my struggle to convert strings to Date or Time with TERR. I recently spent so much time on this that I thought it deserved a blog post. Here’s the story…
I was recently working on a TERR data function that calls a publicly available API and brings all the data into a table. I used the as.data.frame function to parse out my row data. In that call, I used the stringsAsFactors = FALSE argument, and as a result (the desired result), all of my data came back as strings. This was fine because the API included column metadata with each column’s data type. As you can see in the script below, I planned on “sapplying” through the metadata with as.POSIXct and as.numeric. This worked just fine in RStudio, and it also worked for the numeric columns and the DateTime columns. However, it did not work for Date and Time columns. I tried different syntax, functions (as.Date didn’t work either), packages, etc. to get it to work, and NOTHING! The struggle was very real.
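The script itself isn’t shown here, so here is a hedged sketch of the conversion step being described; the data frame and metadata names are made up for illustration, and the real script may differ:

# df: data frame of strings returned by the API (stringsAsFactors = FALSE)
# meta$type: the API's declared data type for each column of df
df[] <- lapply(seq_along(df), function(i) {
  if (meta$type[i] == "number") {
    as.numeric(df[[i]])
  } else if (meta$type[i] %in% c("date", "time", "datetime")) {
    # POSIXct is the only class Spotfire maps back, and always as DateTime
    as.POSIXct(df[[i]], format = "%Y-%m-%dT%H:%M:%S")
  } else {
    df[[i]]
  }
})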
Finally, I Googled the right terms and came across a TIBCO knowledge base article with this information….
Spotfire data functions recognize TERR objects of class “POSIXct” as date/time information. As designed, the Spotfire/TERR data function interface for date/time information does the following:
– Converts a Spotfire value or column whose DataType is “Date”, “Time” or “DateTime” into a TERR object of class “POSIXct”.
– Converts a TERR object of class “POSIXct” into a Spotfire value or column with a DataType of “DateTime”, which can then be formatted in Spotfire to display only the date (or to display only the time) if needed.
This interface does not use any other TERR object classes (such as the “Date” class in TERR) to transfer date/time information between Spotfire and TERR.
That told me that all my effort was for naught, and it just wasn’t possible. I contacted TIBCO just to make sure there wasn’t some other solution out there that the article was not addressing. In the end, I just used a transformation on the Date and Time columns to change the data type. I hope that you, dear Reader, find this post before you spend hours on the same small problem. I did put in an enhancement request. Fingers crossed. Please let me know if you have a better method!
What is a Data Function?
Data Function Basics
- Create the script
- Create the parameters
- Run the script to map the parameters to the data in the DXP
- The script is the “meat” of the data function. Within the script, you’ll find at least one input and one output parameter. The simplest R script I’ve ever written is output <- input. Input is the input parameter, and…yeah, you can finish that sentence I bet.
- TERR (and R) are object-oriented languages, which means programmers can create objects within the code, assign values to the objects, and then reference the object later rather than repeating all the values. This makes programming easier. In the example above, input and output are both objects.
- Input and output parameters tell Spotfire what type of object to work with. The object could be a table, column, document property or another object.
- Running the script triggers the dialogs where you will map the parameters to the actual data in the DXP.
- Data functions can be connected to marking and filtering. For example, you can pass the results of marking or filtering to a new table.
- Users may create data functions from scratch in Spotfire, or users may import data functions from the Spotfire library or another file.
- By default, data functions embed within the analysis. However, users have the ability to save them to the library for reuse or sharing.
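For example, here is a one-line TERR script that flags duplicate rows based on two key columns; key1 and key2 would be mapped as Column inputs, and duplicate is the Column output (this is the script referenced again later in the post):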
duplicate <- duplicated(data.frame(key1, key2))
Data Function Properties Main Screen
- Create new data functions
- Edit existing data functions and their parameters
- Refresh data functions
- Delete data functions
- Save data functions to the library
- Export a data function
- The term “Register New” can be a bit confusing to new users. It really just means create a new data function. In the process, you’ll have the option to save it to (register it in) the library.
- Clicking the Edit Script button will let you modify the script or the input and output parameters.
- Clicking the Edit Parameters button allows you to change the mapping of data from the parameters to the DXP content.
Script & Parameters
Input and Output Parameters
- If your input or output is an entire table, choose Table. I use this option when I am simply passing a limited data set from my original table to a new table.
- If your input or output is a single column, choose Column. The script shown above for identifying duplicates uses a Column output. The data function creates a column called “duplicate”.
- If your input is a hard-coded value or a document property, choose Value.
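To make the three input types concrete, here is a hedged sketch of a small TERR script that uses one of each; all of the names are illustrative, not from a real analysis:

# wellTable: Table input (an entire data table)
# production: Column input (a numeric column from that table)
# cutoff: Value input (mapped to a document property)
aboveCutoff <- production > cutoff               # Column output: logical flag
topWells    <- wellTable[production > cutoff, ]  # Table output: filtered rows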
Run the Data Function
After you have entered the script, input parameters, and output parameters, the next step is clicking the Run button. If Spotfire asks if you want to save the data function to the library, you can say no. It will not impact your DXP. This is simply to give the option to save the data function to the library so others may access it. As an administrator, I ask users NOT to do this because it clutters up the library. It is also hard to know what a given data function is for or if it even works.
Anyway, this is the step in the process where you map the parameters to the content of the DXP. Let’s tackle the inputs first. I have intentionally added two unnecessary parameters to demonstrate that the options for input handlers depend on the type of input parameter — each type presents different options.
- For Column type, there are three options — Column, Expression, and None. The most common input handler is Column, which I have used in data functions that manipulate or calculate based on a specific column of data.
- For Value type, there are six options — Value, Document property, Data table property, Column property, Expression, and None. I most frequently use Document property.
- For Table type, there are three options — Columns, Expression, and None. You can tell Spotfire to work with a subset of the columns in the table by using the Select Columns button. Alternatively, typing “*” in “Search expression” will use all columns in a table. It’s not visible in the screenshot shown, but just below the “Search expression” section, you will also find options to connect the contents of the table to marking or filtering. This is explained in the TERR Basics post.
I do want to note that I have never used the None option in either input or output handlers. If someone has, please tell me about it in Comments.
Now, for outputs, it is also true that the options presented differ depending on the parameter type. As you can see, Column, Value, and Table all have different options.
- The Column and Table types have the same four options — Data table, Columns, Rows, and None. Use Data table if you are creating an entirely new table. Set the type to Columns if the output is a column that should be added to another table. Use Rows if you are adding rows to a table.
- The Value type offers Data table, Columns, Rows, Document property, Data table property, Column property, and None. The same advice applies here as for the corresponding inputs.
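For instance, here is a hedged sketch of a data function whose output is mapped as a new data table (a Table-type output handled as “Data table”; the table and column names are illustrative):

# inputTable: Table input (mapped to an existing data table)
# wellSummary: Table output (mapped as "Data table" to create a new table)
wellSummary <- aggregate(Production ~ WellID, data = inputTable, FUN = sum)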
As I was writing this, I realized that if I were creating a data function that output rows, I’m not sure which type I would use. The options for adding rows are part of both the Column and Table Type. Setting up a Column type to insert rows seems counter-intuitive. I just haven’t had to write this type of data function yet. If you know, please Comment!
Hopefully, explaining some of the common uses of the different types of input and output parameters will help you better understand TERR data functions and how to convert R code to TERR. Thanks!