Introduction

Rectangling is the art and craft of taking a deeply nested list (often sourced from wild-caught JSON or XML) and taming it into a tidy data set of rows and columns. There are three functions from tidyr that are particularly useful for rectangling:

- unnest_longer() takes each element of a list-column and makes a new row.
- unnest_wider() takes each element of a list-column and makes a new column.
- hoist() is similar to unnest_wider() but only plucks out selected components, and can reach down multiple levels.

A very large number of data rectangling problems can be solved by combining these functions with a splash of dplyr (largely eliminating prior approaches that combined mutate() with multiple purrr::map()s).

To illustrate these techniques, we’ll use the repurrrsive package, which provides a number of deeply nested lists originally captured from web APIs.

library(tidyr)
library(dplyr)
library(repurrrsive)

GitHub users

We’ll start with gh_users, a list which contains information about six GitHub users. To begin, we put the gh_users list into a data frame:

users <- tibble(user = gh_users)

This seems a bit counter-intuitive: why is the first step in making a list simpler to make it more complicated? But a data frame has a big advantage: it bundles together multiple vectors so that everything is tracked together in a single object.

Each user is a named list, where each element represents a column.

There are two ways to turn the list components into columns. unnest_wider() takes every component and makes a new column:
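For example, applied to the user column created above:

users %>% unnest_wider(user)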

But in this case, there are many components and we don’t need most of them so we can instead use hoist(). hoist() allows us to pull out selected components using the same syntax as purrr::pluck():
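A sketch of such a call, plucking out a handful of components (the specific components chosen here, like followers and html_url, are just for illustration):

users %>% hoist(user,
  followers = "followers",
  login = "login",
  url = "html_url"
)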

hoist() removes the named components from the user list-column, so you can think of it as moving components out of the inner list into the top-level data frame.

GitHub repos

We start off gh_repos similarly, by putting it in a tibble:
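For example, naming the list-column repo (the same name used in the unnest_auto() example below):

repos <- tibble(repo = gh_repos)
repos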

This time the elements of repo are lists of repositories that belong to that user. These are observations and should become new rows, so we use unnest_longer() rather than unnest_wider():
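Something like:

repos %>% unnest_longer(repo)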

Then we can use unnest_wider() or hoist():
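For instance, hoisting out a few fields of interest (the particular fields are illustrative):

repos %>%
  unnest_longer(repo) %>%
  hoist(repo,
    login = c("owner", "login"),
    name = "name",
    homepage = "homepage",
    watchers = "watchers_count"
  )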

Note the use of c("owner", "login"): this allows us to reach two levels deep inside of a list. An alternative approach would be to pull out just owner and then put each element of it in a column:
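That alternative might look like:

repos %>%
  unnest_longer(repo) %>%
  hoist(repo, owner = "owner") %>%
  unnest_wider(owner)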

Instead of looking at the list and carefully thinking about whether it needs to become rows or columns, you can use unnest_auto(). It uses a handful of heuristics to figure out whether unnest_longer() or unnest_wider() is appropriate, and tells you about its reasoning.

tibble(repo = gh_repos) %>% 
  unnest_auto(repo) %>% 
  unnest_auto(repo)
#> Using `unnest_longer(repo)`; no element has names
#> Using `unnest_wider(repo)`; elements have 68 names in common
#> # A tibble: 176 x 67
#>        id name  full_name owner private html_url description fork  url  
#>     <int> <chr> <chr>     <lis> <lgl>   <chr>    <chr>       <lgl> <chr>
#>  1 6.12e7 after gaborcsa… <nam… FALSE   https:/… Run Code i… FALSE http…
#>  2 4.05e7 argu… gaborcsa… <nam… FALSE   https:/… Declarativ… FALSE http…
#>  3 3.64e7 ask   gaborcsa… <nam… FALSE   https:/… Friendly C… FALSE http…
#>  4 3.49e7 base… gaborcsa… <nam… FALSE   https:/… Do we get … FALSE http…
#>  5 6.16e7 cite… gaborcsa… <nam… FALSE   https:/… Test R pac… TRUE  http…
#>  6 3.39e7 clis… gaborcsa… <nam… FALSE   https:/… Unicode sy… FALSE http…
#>  7 3.72e7 cmak… gaborcsa… <nam… FALSE   https:/… port of cm… TRUE  http…
#>  8 6.80e7 cmark gaborcsa… <nam… FALSE   https:/… CommonMark… TRUE  http…
#>  9 6.32e7 cond… gaborcsa… <nam… FALSE   https:/… <NA>        TRUE  http…
#> 10 2.43e7 cray… gaborcsa… <nam… FALSE   https:/… R package … FALSE http…
#> # … with 166 more rows, and 58 more variables: forks_url <chr>, keys_url <chr>,
#> #   collaborators_url <chr>, teams_url <chr>, hooks_url <chr>,
#> #   issue_events_url <chr>, events_url <chr>, assignees_url <chr>,
#> #   branches_url <chr>, tags_url <chr>, blobs_url <chr>, git_tags_url <chr>,
#> #   git_refs_url <chr>, trees_url <chr>, statuses_url <chr>,
#> #   languages_url <chr>, stargazers_url <chr>, contributors_url <chr>,
#> #   subscribers_url <chr>, subscription_url <chr>, commits_url <chr>,
#> #   git_commits_url <chr>, comments_url <chr>, issue_comment_url <chr>,
#> #   contents_url <chr>, compare_url <chr>, merges_url <chr>, archive_url <chr>,
#> #   downloads_url <chr>, issues_url <chr>, pulls_url <chr>,
#> #   milestones_url <chr>, notifications_url <chr>, labels_url <chr>,
#> #   releases_url <chr>, deployments_url <chr>, created_at <chr>,
#> #   updated_at <chr>, pushed_at <chr>, git_url <chr>, ssh_url <chr>,
#> #   clone_url <chr>, svn_url <chr>, size <int>, stargazers_count <int>,
#> #   watchers_count <int>, language <chr>, has_issues <lgl>,
#> #   has_downloads <lgl>, has_wiki <lgl>, has_pages <lgl>, forks_count <int>,
#> #   open_issues_count <int>, forks <int>, open_issues <int>, watchers <int>,
#> #   default_branch <chr>, homepage <chr>

Game of Thrones characters

got_chars has a similar structure to gh_users: it’s a list of named lists, where each element of the inner list describes some attribute of a GoT character. We start in the same way, first by creating a data frame and then by unnesting each component into a column:
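For example (the names chars and chars2 are just convenient labels):

chars <- tibble(char = got_chars)
chars2 <- chars %>% unnest_wider(char)
chars2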

This is more complex than gh_users because some components of char are themselves lists, giving us a collection of list-columns:
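One way to see just the list-columns (using dplyr’s where() selection helper):

chars2 %>% select(where(is.list))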

What you do next will depend on the purposes of the analysis. Maybe you want a row for every book and TV series that the character appears in:
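A sketch of that reshaping, assuming the relevant components are named books and tv_series (check names(got_chars[[1]]) for the exact names in your version of repurrrsive):

chars2 %>%
  select(name, books, tv_series) %>%
  pivot_longer(c(books, tv_series), names_to = "media", values_to = "value") %>%
  unnest_longer(value)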

Or maybe you want to build a table that lets you match title to name:
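For example:

chars2 %>%
  select(name, title = titles) %>%
  unnest_longer(title)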

(Note that the empty titles ("") are due to an infelicity in the input got_chars: ideally people without titles would have a title vector of length 0, not a title vector of length 1 containing an empty string.)

Again, we could rewrite using unnest_auto(). This is convenient for exploration, but I wouldn’t rely on it in the long term - unnest_auto() has the undesirable property that it will always succeed. That means if your data structure changes, unnest_auto() will continue to work, but might give very different output that causes cryptic failures from downstream functions.
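A rewrite of the title table with unnest_auto() might look like:

tibble(char = got_chars) %>%
  unnest_auto(char) %>%
  select(name, title = titles) %>%
  unnest_auto(title)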

Geocoding with Google

Next we’ll tackle a more complex form of data that comes from Google’s geocoding service. It’s against the terms of service to cache this data, so I first write a very simple wrapper around the API. This relies on having a Google Maps API key stored in an environment variable; if that’s not available, these code chunks won’t be run.
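A sketch of such a wrapper (the environment variable name GOOGLE_MAPS_API_KEY is an assumption; any variable holding a valid key would do):

has_key <- !identical(Sys.getenv("GOOGLE_MAPS_API_KEY"), "")

geocode <- function(address, api_key = Sys.getenv("GOOGLE_MAPS_API_KEY")) {
  # Build the request URL and parse the JSON response into a nested list
  url <- "https://maps.googleapis.com/maps/api/geocode/json"
  url <- paste0(url, "?address=", utils::URLencode(address), "&key=", api_key)
  jsonlite::read_json(url)
}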

The list that this function returns is quite complex:
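For example, looking at the structure for a single city:

str(geocode("Houston"))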

Fortunately, we can attack the problem step by step with tidyr functions. To make the problem a bit harder (!) and more realistic, I’ll start by geocoding a few cities:

city <- c("Houston", "LA", "New York", "Chicago", "Springfield")
city_geo <- purrr::map(city, geocode)

I’ll put these results in a tibble, next to the original city name:

loc <- tibble(city = city, json = city_geo)
loc

The first level contains components status and results, which we can reveal with unnest_wider():
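Building that up step by step:

loc %>%
  unnest_wider(json)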

Notice that results is a list of lists. Most of the cities have one element (representing a unique match from the geocoding API), but Springfield has two. We can pull these out into separate rows with unnest_longer():
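Adding that to the pipeline:

loc %>%
  unnest_wider(json) %>%
  unnest_longer(results)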

Now these all have the same components, as revealed by unnest_wider():

loc %>%
  unnest_wider(json) %>% 
  unnest_longer(results) %>% 
  unnest_wider(results)

We can find the lat and lng coordinates by unnesting geometry:

loc %>%
  unnest_wider(json) %>% 
  unnest_longer(results) %>% 
  unnest_wider(results) %>% 
  unnest_wider(geometry)

And then location:

loc %>%
  unnest_wider(json) %>%
  unnest_longer(results) %>%
  unnest_wider(results) %>%
  unnest_wider(geometry) %>%
  unnest_wider(location)

Again, unnest_auto() makes this simpler with the small risk of failing in unexpected ways if the input structure changes:

loc %>%
  unnest_auto(json) %>%
  unnest_auto(results) %>%
  unnest_auto(results) %>%
  unnest_auto(geometry) %>%
  unnest_auto(location)

We could also just look at the first address for each city:

loc %>%
  unnest_wider(json) %>%
  hoist(results, first_result = 1) %>%
  unnest_wider(first_result) %>%
  unnest_wider(geometry) %>%
  unnest_wider(location)

Or use hoist() to dive deeply to get directly to lat and lng:

loc %>%
  hoist(json,
    lat = list("results", 1, "geometry", "location", "lat"),
    lng = list("results", 1, "geometry", "location", "lng")
  )

Sharla Gelfand’s discography

We’ll finish off with the most complex list, from Sharla Gelfand’s discography. We’ll start the usual way: putting the list into a single-column data frame, and then widening so each component is a column. I also parse the date_added column into a real date-time.¹
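A sketch of that setup, assuming the repurrrsive dataset is called discog (the base-R date parsing is a stand-in for readr/lubridate, as noted in the footnote):

discs <- tibble(disc = discog) %>%
  unnest_wider(disc) %>%
  mutate(date_added = as.POSIXct(strptime(date_added, "%Y-%m-%dT%H:%M:%S")))
discs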

At this level, we see information about when each disc was added to Sharla’s discography, not any information about the disc itself. To see that, we need to widen the basic_information column:
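Naively, that would be:

discs %>% unnest_wider(basic_information)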

Unfortunately that fails because there’s an id column inside basic_information. We can quickly see what’s going on by setting names_repair = "unique":
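That is:

discs %>% unnest_wider(basic_information, names_repair = "unique")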

The problem is that basic_information repeats the id column that’s also stored at the top-level, so we can just drop that:
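For example:

discs %>%
  select(-id) %>%
  unnest_wider(basic_information)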

Alternatively, we could use hoist():
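A sketch of that, pulling out a few components (title and year are illustrative; the label and artist plucks match the description below):

discs %>%
  hoist(basic_information,
    title = "title",
    year = "year",
    label = list("labels", 1, "name"),
    artist = list("artists", 1, "name")
  )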

Here I quickly extract the name of the first label and artist by indexing deeply into the nested list.

A more systematic approach would be to create separate tables for artist and label:

discs %>% 
  hoist(basic_information, artist = "artists") %>% 
  select(disc_id = id, artist) %>% 
  unnest_longer(artist) %>% 
  unnest_wider(artist)
#> # A tibble: 167 x 8
#>     disc_id join  name          anv   tracks role  resource_url               id
#>       <int> <chr> <chr>         <chr> <chr>  <chr> <chr>                   <int>
#>  1  7496378 ""    Mollot        ""    ""     ""    https://api.discogs.c… 4.62e6
#>  2  4490852 ""    Una Bèstia I… ""    ""     ""    https://api.discogs.c… 3.19e6
#>  3  9827276 ""    S.H.I.T. (3)  ""    ""     ""    https://api.discogs.c… 2.77e6
#>  4  9769203 ""    Rata Negra    ""    ""     ""    https://api.discogs.c… 4.28e6
#>  5  7237138 ""    Ivy (18)      ""    ""     ""    https://api.discogs.c… 3.60e6
#>  6 13117042 ""    Tashme        ""    ""     ""    https://api.discogs.c… 5.21e6
#>  7  7113575 ""    Desgraciados  ""    ""     ""    https://api.discogs.c… 4.45e6
#>  8 10540713 ""    Phantom Head  ""    ""     ""    https://api.discogs.c… 4.27e6
#>  9 11260950 ""    Sub Space (2) ""    ""     ""    https://api.discogs.c… 5.69e6
#> 10 11726853 ""    Small Man (2) ""    ""     ""    https://api.discogs.c… 6.37e6
#> # … with 157 more rows

discs %>% 
  hoist(basic_information, format = "formats") %>% 
  select(disc_id = id, format) %>% 
  unnest_longer(format) %>% 
  unnest_wider(format) %>% 
  unnest_longer(descriptions)
#> # A tibble: 281 x 5
#>     disc_id descriptions text  name     qty  
#>       <int> <chr>        <chr> <chr>    <chr>
#>  1  7496378 "Numbered"   Black Cassette 1    
#>  2  4490852 "LP"         <NA>  Vinyl    1    
#>  3  9827276 "7\""        <NA>  Vinyl    1    
#>  4  9827276 "45 RPM"     <NA>  Vinyl    1    
#>  5  9827276 "EP"         <NA>  Vinyl    1    
#>  6  9769203 "LP"         <NA>  Vinyl    1    
#>  7  9769203 "Album"      <NA>  Vinyl    1    
#>  8  7237138 "7\""        <NA>  Vinyl    1    
#>  9  7237138 "45 RPM"     <NA>  Vinyl    1    
#> 10 13117042 "7\""        <NA>  Vinyl    1    
#> # … with 271 more rows

Then you could join these back on to the original dataset as needed.


  1. I’d normally use readr::parse_datetime() or lubridate::ymd_hms(), but I can’t here because it’s a vignette and I don’t want to add a dependency to tidyr just to simplify one example.