I was invited to a Korean data science podcast Dataholic (데이터홀릭) to talk about my experience presenting at the RStudio and useR conferences! Part 1, Part 2
diff --git a/docs/posts/posts.json b/docs/posts/posts.json
index ed84ee58..dab72502 100644
--- a/docs/posts/posts.json
+++ b/docs/posts/posts.json
@@ -15,7 +15,7 @@
],
"contents": "\r\n\r\nContents\r\nIntro\r\nWhat is an XY problem?\r\nThe question\r\nAttempt 1: after_stat()? I know that!\r\nAttempt 2: Hmm but why not after_scale()?\r\nAttempt 3: Oh. You just wanted a scale_fill_*()…\r\nReflections\r\nEnding on a fun aside - accidentally escaping an XY problem\r\n\r\nIntro\r\nA few months ago, over at the R4DS slack (http://r4ds.io/join), someone posted a ggplot question that was within my area of “expertise”. I got tagged in the thread, I went in, and it took me 3 tries to arrive at the correct solution that the poster was asking for.\r\nThe embarrassing part of the exchange was that I would write one solution, think about what I wrote for a bit, and then write a different solution after realizing that I had misunderstood the intent of the original question. In other words, I was consistently missing the point.\r\nThis is a microcosm of a bigger problem of mine that I’ve been noticing lately, as my role in the R community has shifted from mostly asking questions to mostly answering questions. By this point I’ve sort of pin-pointed the problem: I have a hard time recognizing that I’m stuck in an XY problem.\r\nI have a lot of thoughts on this and I want to document them for future me,1 so here goes a rant. I hope it’s useful to whoever is reading this too.\r\nWhat is an XY problem?\r\nAccording to Wikipedia:\r\n\r\nThe XY problem is a communication problem… where the question is about an end user’s attempted solution (Y) rather than the root problem itself (X).\r\n\r\nThe classic example of this is when a (novice) user asks how to extract the last 3 characters in a filename. There’s no good reason to blindly grab the last 3 characters, so what they probably meant to ask is how to get the file extension (which is not always 3 characters long, like .R or .Rproj).2\r\nAnother somewhat related cult-classic, copypasta3 example is the “Don’t use regex to parse HTML” answer on stackoverflow. 
Here, a user asks how to use regular expressions to match HTML tags, to which the top-voted answer is don’t (instead, you should use a dedicated parser). The delivery of this answer is a work of art, so I highly suggest giving it a read if you haven’t seen it already (the link is above for your amusement).\r\nAn example of an XY problem in R that might hit closer to home is when a user complains about the notorious Object of type 'closure' is not subsettable error. It’s often brought up as a cautionary tale for novice users (error messages can only tell you so much, so you must develop debugging strategies), but it has a special meaning for more experienced users who’ve been bitten by this multiple times. So for me, when I see novice users reporting this specific error, I usually ask them if they have a variable called data and whether they forgot to run the line assigning that variable. Of course, this answer does not explain what the error means,4 but oftentimes it’s the solution that the user is looking for.\r\n\r\n\r\n# Oops forgot to define `data`!\r\n# `data` is a function (in {base}), which is not subsettable\r\ndata$value\r\n\r\n Error in data$value: object of type 'closure' is not subsettable\r\n\r\nAs one last example, check out this lengthy exchange on splitting a string (Y) to parse JSON (X). I felt compelled to include this example because it does a good job capturing the degree of frustration (very high) that normally comes with XY problems.\r\nBut the thing about the XY problem is that it often prompts the lesson of asking good questions: don’t skip steps in your reasoning, make your goals/intentions clear, use a reprex,5 and so on. But insofar as it’s a communication problem involving both parties, I think we should also talk about what the person answering the question can do to recognize an XY problem and break out of it.\r\nEnter me, someone who really needs to do a better job of recognizing when I’m stuck in an XY problem. 
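To make the classic file-extension example concrete, here is a quick sketch of the Y (the last 3 characters) versus the X (the extension, via tools::file_ext(), the base R helper mentioned in the footnotes). The file names are made up:

```r
files <- c("analysis.R", "project.Rproj", "data.csv")

# Y: blindly grab the last 3 characters - wrong for .R and .Rproj
substr(files, nchar(files) - 2, nchar(files))
#> [1] "s.R" "roj" "csv"

# X: what the user actually wants - the file extension
tools::file_ext(files)
#> [1] "R"     "Rproj" "csv"
```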
So with the definition out of the way, let’s break down how I messed up.\r\nThe question\r\nThe question asks:\r\n\r\nDoes anyone know how to access the number of bars in a barplot? I’m looking for something that will return “15” for the following code, that can be used within ggplot, like after_stat()\r\n\r\nThe question comes with example code. Not exactly a reprex, but something to help understand the question:\r\n\r\n\r\np <- ggplot(mpg, aes(manufacturer, fill = manufacturer)) +\r\n geom_bar()\r\np\r\n\r\n\r\n\r\nThe key phrase in the question is “can be used within ggplot”. So the user isn’t looking for something like this even though it’s conceptually equivalent:\r\n\r\n\r\nlength(unique(mpg$manufacturer))\r\n\r\n [1] 15\r\n\r\nThe idea here is that ggplot knows that there are 15 bars, so this fact must be represented somewhere in the internals. The user wants to be able to access that value dynamically.\r\nAttempt 1: after_stat()? I know that!\r\nThe very last part of the question “… like after_stat()” triggered some alarms in the thread and got me called in. For those unfamiliar, after_stat() is part of the new and obscure family of delayed aesthetic evaluation functions introduced in ggplot2 3.3.0. It’s something that you normally don’t think about in ggplot, but it’s a topic that I’ve been obsessed with for the last 2 years or so: it has resulted in a paper, a package (ggtrace), blog posts, and talks (useR!, rstudio::conf, JSM).\r\nThe user asked about after_stat(), so naturally I came up with an after_stat() solution. 
In the after-stat stage of the bar layer’s data, the layer data looks like this:\r\n\r\n\r\n# remotes::install_github(\"yjunechoe/ggtrace\")\r\nlibrary(ggtrace)\r\n# Grab the state of the layer data in the after-stat\r\nlayer_after_stat(p)\r\n\r\n # A tibble: 15 × 8\r\n count prop x width flipped_aes fill PANEL group\r\n \r\n 1 18 1 1 0.9 FALSE audi 1 1\r\n 2 19 1 2 0.9 FALSE chevrolet 1 2\r\n 3 37 1 3 0.9 FALSE dodge 1 3\r\n 4 25 1 4 0.9 FALSE ford 1 4\r\n 5 9 1 5 0.9 FALSE honda 1 5\r\n 6 14 1 6 0.9 FALSE hyundai 1 6\r\n 7 8 1 7 0.9 FALSE jeep 1 7\r\n 8 4 1 8 0.9 FALSE land rover 1 8\r\n 9 3 1 9 0.9 FALSE lincoln 1 9\r\n 10 4 1 10 0.9 FALSE mercury 1 10\r\n 11 13 1 11 0.9 FALSE nissan 1 11\r\n 12 5 1 12 0.9 FALSE pontiac 1 12\r\n 13 14 1 13 0.9 FALSE subaru 1 13\r\n 14 34 1 14 0.9 FALSE toyota 1 14\r\n 15 27 1 15 0.9 FALSE volkswagen 1 15\r\n\r\nIt’s tidy data where each row represents a bar. So the number of bars is the length of any column in the after-stat data, but it’d be most principled to take the length of the group column in this case.6\r\nSo the after-stat expression that returns the desired value 15 is after_stat(length(group)), which essentially evaluates to the following:\r\n\r\n\r\nlength(layer_after_stat(p)$group)\r\n\r\n [1] 15\r\n\r\nFor example, you can use this inside the aes() to annotate the total number of bars on top of each bar:\r\n\r\n\r\nggplot(mpg, aes(manufacturer, fill = manufacturer)) +\r\n geom_bar() +\r\n geom_label(\r\n aes(label = after_stat(length(group))),\r\n fill = \"white\",\r\n stat = \"count\"\r\n )\r\n\r\n\r\n\r\nThe after_stat(length(group)) solution returns the number of bars using after_stat(), as the user asked. But as you can see, this is extremely useless: there are many technical constraints on what you can actually do with this information in the after-stat stage.\r\nI should have checked if they actually wanted an after_stat() solution first, before providing this answer. 
But I got distracted by the after_stat() keyword and got too excited by the prospect of someone else taking interest in the thing that I’m obsessed with. Alas this wasn’t the case - they were trying to do something practical - so I went back into the thread to figure out their goal for my second attempt.\r\nAttempt 2: Hmm but why not after_scale()?\r\nWhat I had neglected in my first attempt was the fact that the user talked more about their problem with someone else who got to the question before I did. That discussion turned out to include an important clue to the intent behind the original question: the user wanted the number of bars in order to interpolate the color of the bars.\r\nSo for example, a palette function like topo.colors() takes n to produce interpolated color values:\r\n\r\n\r\ntopo.colors(n = 16)\r\n\r\n [1] \"#4C00FF\" \"#0F00FF\" \"#002EFF\" \"#006BFF\" \"#00A8FF\" \"#00E5FF\" \"#00FF4D\"\r\n [8] \"#00FF00\" \"#4DFF00\" \"#99FF00\" \"#E6FF00\" \"#FFFF00\" \"#FFEA2D\" \"#FFDE59\"\r\n [15] \"#FFDB86\" \"#FFE0B3\"\r\n\r\nchroma::show_col(topo.colors(16))\r\n\r\n\r\n\r\nIf the intent is to use the number of bars to generate a vector of colors to assign to the bars, then a better place to do it would be in the after_scale(), where the state of the layer data in the after-scale looks like this:\r\n\r\n\r\nlayer_after_scale(p)\r\n\r\n # A tibble: 15 × 16\r\n fill y count prop x flipped_aes PANEL group ymin ymax xmin xmax \r\n \r\n 1 #F87… 18 18 1 1 FALSE 1 1 0 18 0.55 1.45\r\n 2 #E58… 19 19 1 2 FALSE 1 2 0 19 1.55 2.45\r\n 3 #C99… 37 37 1 3 FALSE 1 3 0 37 2.55 3.45\r\n 4 #A3A… 25 25 1 4 FALSE 1 4 0 25 3.55 4.45\r\n 5 #6BB… 9 9 1 5 FALSE 1 5 0 9 4.55 5.45\r\n 6 #00B… 14 14 1 6 FALSE 1 6 0 14 5.55 6.45\r\n 7 #00B… 8 8 1 7 FALSE 1 7 0 8 6.55 7.45\r\n 8 #00C… 4 4 1 8 FALSE 1 8 0 4 7.55 8.45\r\n 9 #00B… 3 3 1 9 FALSE 1 9 0 3 8.55 9.45\r\n 10 #00B… 4 4 1 10 FALSE 1 10 0 4 9.55 10.45\r\n 11 #619… 13 13 1 11 FALSE 1 11 0 13 10.55 11.45\r\n 12 #B98… 5 5 1 12 
FALSE 1 12 0 5 11.55 12.45\r\n 13 #E76… 14 14 1 13 FALSE 1 13 0 14 12.55 13.45\r\n 14 #FD6… 34 34 1 14 FALSE 1 14 0 34 13.55 14.45\r\n 15 #FF6… 27 27 1 15 FALSE 1 15 0 27 14.55 15.45\r\n # ℹ 4 more variables: colour , linewidth , linetype ,\r\n # alpha \r\n\r\nIt’s still tidy data where each row represents a bar. But the important distinction between the after-stat and the after-scale is that the after-scale data reflects the work of the (non-positional) scales. So the fill column here is now the actual hexadecimal color values for the bars:\r\n\r\n\r\nlayer_after_scale(p)$fill\r\n\r\n [1] \"#F8766D\" \"#E58700\" \"#C99800\" \"#A3A500\" \"#6BB100\" \"#00BA38\" \"#00BF7D\"\r\n [8] \"#00C0AF\" \"#00BCD8\" \"#00B0F6\" \"#619CFF\" \"#B983FF\" \"#E76BF3\" \"#FD61D1\"\r\n [15] \"#FF67A4\"\r\n\r\nchroma::show_col(layer_after_scale(p)$fill)\r\n\r\n\r\n\r\nWhat after_scale()/stage(after_scale = ) allows you to do is override these color values right before the layer data is sent off to be drawn. So we again use the same expression length(group) to grab the number of bars in the after-scale data, pass that value to a color palette function like topo.colors(), and re-map to the fill aesthetic.\r\n\r\n\r\nggplot(mpg, aes(manufacturer)) +\r\n geom_bar(aes(fill = stage(manufacturer, after_scale = topo.colors(length(group))))) +\r\n scale_fill_identity()\r\n\r\n\r\n\r\nSo this solution achieves the desired effect, but it’s needlessly complicated. You need complex staging of the fill aesthetic via stage() and you also need to pair this with scale_fill_identity() to let ggplot know that you’re directly supplying the fill values (otherwise you get errors and warnings).\r\nWait hold up - a fill scale? Did this user actually just want a custom fill scale? Ohhh…\r\nAttempt 3: Oh. You just wanted a scale_fill_*()…\r\nSo yeah. 
It turns out that they just wanted a custom scale that takes some set of colors and interpolates the colors across the bars in the plot.\r\nThe correct way to approach this problem is to create a new fill scale that wraps around discrete_scale(). The scale function should take a set of colors (cols) and pass discrete_scale() a palette function created via the function factory colorRampPalette().\r\n\r\n\r\nscale_fill_interpolate <- function(cols, ...) {\r\n discrete_scale(\r\n aesthetics = \"fill\",\r\n scale_name = \"interpolate\",\r\n palette = colorRampPalette(cols),\r\n ...\r\n )\r\n}\r\n\r\n\r\nOur new scale_fill_interpolate() function can now be added to the plot like any other scale:\r\n\r\n\r\np +\r\n scale_fill_interpolate(c(\"pink\", \"goldenrod\"))\r\n\r\n\r\n\r\n\r\n\r\np +\r\n scale_fill_interpolate(c(\"steelblue\", \"orange\", \"forestgreen\"))\r\n\r\n\r\n\r\n\r\n\r\nset.seed(123)\r\ncols <- sample(colors(), 5)\r\ncols\r\n\r\n [1] \"lightgoldenrodyellow\" \"mediumorchid1\" \"gray26\" \r\n [4] \"palevioletred2\" \"gray42\"\r\n\r\np +\r\n scale_fill_interpolate(cols)\r\n\r\n\r\n\r\nI sent (a variant of) this answer to the thread and the user marked it solved with a thanks, concluding my desperate spiral into finding the right solution to the intended question.\r\nReflections\r\nSo why was this so hard for me to get? The most immediate cause is that I quickly skimmed the wording of the question and extracted two key phrases:\r\n“access the number of bars in a barplot”\r\n“that can be used within ggplot, like after_stat()”\r\nBut neither of these turned out to be important (or even relevant) to the solution. The correct answer was just a clean custom fill scale, where you don’t have to think about the number of bars or accessing that in the internals. Simply extending discrete_scale() allows you to abstract away from those details entirely.\r\nSo in fairness, it was a very difficult XY problem to get out of. 
But the wording of the question wasn’t the root cause. I think the root cause is some combination of the following:\r\nThere are many ways to do the same thing in R so I automatically assume that my solution counts as a contribution as long as it gets the job done. But solutions should also be understandable for the person asking the question. Looking back, I was insane to even suggest my second attempt as the solution because it’s so contrived and borderline incomprehensible. It only sets the user up for more confusion and bugs in the future, so that was a bit irresponsible and selfish of me (it only scratches my itch).\r\nSolutions to (practical) problems are usually boring and I’m allergic to boring solutions. This is a bad attitude to have when offering to help people. I assumed that people share my excitement about ggplot internals, but actually most users don’t care (that’s why it’s called the internals and hidden from users). An important context that I miss as the person answering questions on the other end is that users post questions when they’re stuck and frustrated. Their goal is not to take a hard problem and turn it into a thinking exercise or a learning experience (that part happens organically, but is not the goal). If anything, that’s what I’m doing when I choose to take interest in other people’s (coding) problems.\r\nI imbue intent into questions that are clearly missing it. I don’t think that’s a categorically bad thing because it can sometimes land you in a shortcut out of an XY problem. But when you miss, it’s catastrophic and pulls you deeper into the problem. I think that was the case for me here - I conflated the X with the Y and assumed that after_stat() was relevant at face value because I personally know it to be a very powerful tool. 
I let my own history of treating after_stat() like the X (“How can I use after_stat() to solve/simplify this problem?”) guide my interpretation of the question, which is not good practice.\r\nOf course, there is likely more to this, but these are plenty for me to work on for now.\r\nLastly, I don’t want this to detract from the fact that the onus is on users to ask good questions. I don’t want to put question-answer-ers on the spot for their handling of XY problems. After all, most are volunteers who gain nothing from helping others besides status and some internet points.7 Just take this as me telling myself to be a better person.\r\nEnding on a fun aside - accidentally escaping an XY problem\r\nIt’s not my style to write serious blog posts. I think I deserve a break from many paragraphs of self-induced beat down.\r\nSo in that spirit I want to end on a funny anecdote where I escaped an XY problem by pure luck.\r\nI came across a relatively straightforward question which can be summarized as the following:\r\n\r\n\r\ninput <- \"a + c + d + e\"\r\noutput <- c(\"a\", \"c\", \"d\", \"e\")\r\n\r\n\r\nThere are many valid approaches to this and some were already posted to the thread:\r\n\r\n\r\nstrsplit(input, \" + \", TRUE)[[1]]\r\n\r\n [1] \"a\" \"c\" \"d\" \"e\"\r\n\r\nall.vars(parse(text = input))\r\n\r\n [1] \"a\" \"c\" \"d\" \"e\"\r\n\r\nMe, knowing too many useless things (and knowing that the user already has the best answers), suggested a quirky alternative:8\r\n\r\nThis is super off-label usage but you can also use R’s formula utilities to parse this:9\r\n\r\n\r\n\r\nattr(terms(reformulate(input)), \"term.labels\")\r\n\r\n [1] \"a\" \"c\" \"d\" \"e\"\r\n\r\nTo my surprise, the response I got was:\r\n\r\nLovely! 
These definitely originated from formula ages ago so it’s actually not far off-label at all 🙂\r\n\r\n\r\nEspecially before slack deletes the old messages.↩︎\r\nIn R, you can use tools::file_ext() or fs::path_ext().↩︎\r\nhttps://en.wikipedia.org/wiki/Copypasta↩︎\r\nGood luck trying to explain the actual error message. Especially closure, a kind of weird vocabulary in R (fun fact - the first edition of Advanced R used to have a section on closure which is absent in the second edition probably because “In R, almost every function is a closure”).↩︎\r\nParadoxically, XY problems sometimes arise when inexperienced users try to come up with a reprex. They might capture the error/problem too narrowly, such that the more important broader context is left out.↩︎\r\nOr the number of distinct combinations between PANEL and group, as in nlevels(interaction(PANEL, group, drop = TRUE)). But of course that’s overkill and only of interest for “theoretical purity”.↩︎\r\nAnd I like the R4DS slack because it doesn’t have “internet points.” There is status (moderator) though I don’t wear the badge (literally - it’s an emoji).↩︎\r\nActually I only thought of this because I’d been writing a statistical package that required some nasty metaprogramming with the formula object.↩︎\r\nThe significance of this solution building on top of R’s formula utilities is that it will also parse stuff like \"a*b\" as c(\"a\", \"b\", \"a:b\"). So given that the inputs originated as R formulas (as the user later clarifies), this is the principled approach.↩︎\r\n",
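Footnote 9’s point about \"a*b\" is easy to check for yourself. A quick sketch using the same base R formula utilities as the answer above (the input string here is my own toy example):

```r
input <- "a*b"
# reformulate() builds the one-sided formula ~a*b, and terms() expands
# the interaction into its constituent terms
attr(terms(reformulate(input)), "term.labels")
#> [1] "a"   "b"   "a:b"
```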
"preview": "posts/2023-07-09-x-y-problem/preview.png",
- "last_modified": "2023-07-10T17:24:43+09:00",
+ "last_modified": "2023-07-10T04:24:43-04:00",
"input_file": {},
"preview_width": 238,
"preview_height": 205
@@ -37,7 +37,7 @@
],
"contents": "\r\n\r\nContents\r\nIntro\r\nSpecial properties of dplyr::slice()\r\nBasic usage\r\nRe-imagining slice() with data-masking\r\nSpecial properties of slice()\r\n\r\nA gallery of row operations with slice()\r\nRepeat rows (in place)\r\nSubset a selection of rows + the following row\r\nSubset a selection of rows + multiple following rows\r\nFilter (and encode) neighboring rows\r\nWindowed min/max/median (etc.)\r\nEvenly distributed row shuffling of balanced categories\r\nInserting a new row at specific intervals\r\nEvenly distributed row shuffling of unequal categories\r\n\r\nConclusion\r\n\r\nIntro\r\nIn data wrangling, there are a handful of classes of operations on data frames that we think of as theoretically well-defined and tackling distinct problems. To name a few, these include subsetting, joins, split-apply-combine, pairwise operations, nested-column workflows, and so on.\r\nAgainst this rich backdrop, there’s one aspect of data wrangling that doesn’t receive as much attention: ordering of rows. This isn’t necessarily surprising - we often think of row order as an auxiliary attribute of data frames since it doesn’t speak to the content of the data, per se. I think we all share the intuition that two data frames that differ only in row order are practically the same for most analysis purposes.\r\nExcept when they aren’t.\r\nIn this blog post I want to talk about a few, somewhat esoteric cases of what I like to call row-relational operations. My goal is to try to motivate row-relational operations as a full-blown class of data wrangling operation that includes not only row ordering, but also sampling, shuffling, repeating, interweaving, and so on (I’ll go over all of these later).\r\nWithout spoiling too much, I believe that dplyr::slice() offers a powerful context for operations over row indices, even those that at first seem to lack a “tidy” solution. 
You may already know slice() as an indexing function, but my hope is to convince you that it can do so much more.\r\nLet’s start by first talking about some special properties of dplyr::slice(), and then see how we can use it for various row-relational operations.\r\nSpecial properties of dplyr::slice()\r\nBasic usage\r\nFor the following demonstration, I’ll use a small subset of the dplyr::starwars dataset:\r\n\r\n\r\nstarwars_sm <- dplyr::starwars[1:10, 1:3]\r\nstarwars_sm\r\n\r\n # A tibble: 10 × 3\r\n name height mass\r\n \r\n 1 Luke Skywalker 172 77\r\n 2 C-3PO 167 75\r\n 3 R2-D2 96 32\r\n 4 Darth Vader 202 136\r\n 5 Leia Organa 150 49\r\n 6 Owen Lars 178 120\r\n 7 Beru Whitesun lars 165 75\r\n 8 R5-D4 97 32\r\n 9 Biggs Darklighter 183 84\r\n 10 Obi-Wan Kenobi 182 77\r\n\r\n1) Row selection\r\nslice() is a row indexing verb - if you pass it a vector of integers, it subsets data frame rows:\r\n\r\n\r\nstarwars_sm |> \r\n slice(1:6) # First six rows\r\n\r\n # A tibble: 6 × 3\r\n name height mass\r\n \r\n 1 Luke Skywalker 172 77\r\n 2 C-3PO 167 75\r\n 3 R2-D2 96 32\r\n 4 Darth Vader 202 136\r\n 5 Leia Organa 150 49\r\n 6 Owen Lars 178 120\r\n\r\nLike other dplyr verbs with mutate-semantics, you can use context-dependent expressions inside slice(). For example, you can use n() to grab the last row (or last couple of rows):\r\n\r\n\r\nstarwars_sm |> \r\n slice( n() ) # Last row\r\n\r\n # A tibble: 1 × 3\r\n name height mass\r\n \r\n 1 Obi-Wan Kenobi 182 77\r\n\r\nstarwars_sm |> \r\n slice( n() - 2:0 ) # Last three rows\r\n\r\n # A tibble: 3 × 3\r\n name height mass\r\n \r\n 1 R5-D4 97 32\r\n 2 Biggs Darklighter 183 84\r\n 3 Obi-Wan Kenobi 182 77\r\n\r\nAnother context-dependent expression that comes in handy is row_number(), which returns all row indices. 
Using it inside slice() essentially performs an identity transformation:\r\n\r\n\r\nidentical(\r\n starwars_sm,\r\n starwars_sm |> slice( row_number() )\r\n)\r\n\r\n [1] TRUE\r\n\r\nLastly, similar to in select(), you can use - for negative indexing (to remove rows):\r\n\r\n\r\nidentical(\r\n starwars_sm |> slice(1:3), # First three rows\r\n starwars_sm |> slice(-(4:n())) # All rows except fourth row to last row\r\n)\r\n\r\n [1] TRUE\r\n\r\n2) Dynamic dots\r\nslice() supports dynamic dots. If you pass row indices into multiple argument positions, slice() will concatenate them for you:\r\n\r\n\r\nidentical(\r\n starwars_sm |> slice(1:6),\r\n starwars_sm |> slice(1, 2:4, 5, 6)\r\n)\r\n\r\n [1] TRUE\r\n\r\nIf you have a list() of row indices, you can use the splice operator !!! to spread them out:\r\n\r\n\r\nstarwars_sm |> \r\n slice( !!!list(1, 2:4, 5, 6) )\r\n\r\n # A tibble: 6 × 3\r\n name height mass\r\n \r\n 1 Luke Skywalker 172 77\r\n 2 C-3PO 167 75\r\n 3 R2-D2 96 32\r\n 4 Darth Vader 202 136\r\n 5 Leia Organa 150 49\r\n 6 Owen Lars 178 120\r\n\r\nThe above call to slice() evaluates to the following after splicing:\r\n\r\n\r\nrlang::expr( slice(!!!list(1, 2:4, 5, 6)) )\r\n\r\n slice(1, 2:4, 5, 6)\r\n\r\n3) Row ordering\r\nslice() respects the order in which you supplied the row indices:\r\n\r\n\r\nstarwars_sm |> \r\n slice(3, 1, 2, 5)\r\n\r\n # A tibble: 4 × 3\r\n name height mass\r\n \r\n 1 R2-D2 96 32\r\n 2 Luke Skywalker 172 77\r\n 3 C-3PO 167 75\r\n 4 Leia Organa 150 49\r\n\r\nThis means you can do stuff like random sampling with sample():\r\n\r\n\r\nstarwars_sm |> \r\n slice( sample(n()) )\r\n\r\n # A tibble: 10 × 3\r\n name height mass\r\n \r\n 1 Obi-Wan Kenobi 182 77\r\n 2 Owen Lars 178 120\r\n 3 Leia Organa 150 49\r\n 4 Darth Vader 202 136\r\n 5 Luke Skywalker 172 77\r\n 6 R5-D4 97 32\r\n 7 C-3PO 167 75\r\n 8 Beru Whitesun lars 165 75\r\n 9 Biggs Darklighter 183 84\r\n 10 R2-D2 96 32\r\n\r\nYou can also shuffle a subset of rows (ex: just the first 
five):\r\n\r\n\r\nstarwars_sm |> \r\n slice( sample(5), 6:n() )\r\n\r\n # A tibble: 10 × 3\r\n name height mass\r\n \r\n 1 C-3PO 167 75\r\n 2 Leia Organa 150 49\r\n 3 R2-D2 96 32\r\n 4 Darth Vader 202 136\r\n 5 Luke Skywalker 172 77\r\n 6 Owen Lars 178 120\r\n 7 Beru Whitesun lars 165 75\r\n 8 R5-D4 97 32\r\n 9 Biggs Darklighter 183 84\r\n 10 Obi-Wan Kenobi 182 77\r\n\r\nOr reorder all rows by their indices (ex: in reverse):\r\n\r\n\r\nstarwars_sm |> \r\n slice( rev(row_number()) )\r\n\r\n # A tibble: 10 × 3\r\n name height mass\r\n \r\n 1 Obi-Wan Kenobi 182 77\r\n 2 Biggs Darklighter 183 84\r\n 3 R5-D4 97 32\r\n 4 Beru Whitesun lars 165 75\r\n 5 Owen Lars 178 120\r\n 6 Leia Organa 150 49\r\n 7 Darth Vader 202 136\r\n 8 R2-D2 96 32\r\n 9 C-3PO 167 75\r\n 10 Luke Skywalker 172 77\r\n\r\n4) Out-of-bounds handling\r\nIf you pass a row index that’s out of bounds, slice() returns a 0-row data frame:\r\n\r\n\r\nstarwars_sm |> \r\n slice( n() + 1 ) # Select the row after the last row\r\n\r\n # A tibble: 0 × 3\r\n # ℹ 3 variables: name , height , mass \r\n\r\nWhen mixed with valid row indices, out-of-bounds indices are simply ignored (much 💜 for this behavior):\r\n\r\n\r\nstarwars_sm |> \r\n slice(\r\n 0, # 0th row - ignored\r\n 1:3, # first three rows\r\n n() + 1 # 1 after last row - ignored\r\n )\r\n\r\n # A tibble: 3 × 3\r\n name height mass\r\n \r\n 1 Luke Skywalker 172 77\r\n 2 C-3PO 167 75\r\n 3 R2-D2 96 32\r\n\r\nThis lets you do funky stuff like select all even numbered rows by passing slice() all row indices times 2:\r\n\r\n\r\nstarwars_sm |> \r\n slice( row_number() * 2 ) # Add `- 1` at the end for *odd* rows!\r\n\r\n # A tibble: 5 × 3\r\n name height mass\r\n \r\n 1 C-3PO 167 75\r\n 2 Darth Vader 202 136\r\n 3 Owen Lars 178 120\r\n 4 R5-D4 97 32\r\n 5 Obi-Wan Kenobi 182 77\r\n\r\nRe-imagining slice() with data-masking\r\nslice() is already pretty neat as it is, but that’s just the tip of the iceberg.\r\nThe really cool, under-rated feature of slice() is that 
it’s data-masked, meaning that you can reference column vectors as if they’re variables. Another way of describing this property of slice() is to say that it has mutate-semantics.\r\nAt a very basic level, this means that slice() can straightforwardly replicate the behavior of some dplyr verbs like arrange() and filter()!\r\nslice() as arrange()\r\nFrom our starwars_sm data, if we want to sort by height we can use arrange():\r\n\r\n\r\nstarwars_sm |> \r\n arrange(height)\r\n\r\n # A tibble: 10 × 3\r\n name height mass\r\n \r\n 1 R2-D2 96 32\r\n 2 R5-D4 97 32\r\n 3 Leia Organa 150 49\r\n 4 Beru Whitesun lars 165 75\r\n 5 C-3PO 167 75\r\n 6 Luke Skywalker 172 77\r\n 7 Owen Lars 178 120\r\n 8 Obi-Wan Kenobi 182 77\r\n 9 Biggs Darklighter 183 84\r\n 10 Darth Vader 202 136\r\n\r\nBut we can also do this with slice() to the same effect, using order():\r\n\r\n\r\nstarwars_sm |> \r\n slice( order(height) )\r\n\r\n # A tibble: 10 × 3\r\n name height mass\r\n \r\n 1 R2-D2 96 32\r\n 2 R5-D4 97 32\r\n 3 Leia Organa 150 49\r\n 4 Beru Whitesun lars 165 75\r\n 5 C-3PO 167 75\r\n 6 Luke Skywalker 172 77\r\n 7 Owen Lars 178 120\r\n 8 Obi-Wan Kenobi 182 77\r\n 9 Biggs Darklighter 183 84\r\n 10 Darth Vader 202 136\r\n\r\nThis is conceptually equivalent to combining the following 2-step process:\r\n\r\n\r\nordered_val_ind <- order(starwars_sm$height)\r\n ordered_val_ind\r\n\r\n [1] 3 8 5 7 2 1 6 10 9 4\r\n\r\n\r\n\r\nstarwars_sm |> \r\n slice( ordered_val_ind )\r\n\r\n # A tibble: 10 × 3\r\n name height mass\r\n \r\n 1 R2-D2 96 32\r\n 2 R5-D4 97 32\r\n 3 Leia Organa 150 49\r\n 4 Beru Whitesun lars 165 75\r\n 5 C-3PO 167 75\r\n 6 Luke Skywalker 172 77\r\n 7 Owen Lars 178 120\r\n 8 Obi-Wan Kenobi 182 77\r\n 9 Biggs Darklighter 183 84\r\n 10 Darth Vader 202 136\r\n\r\nslice() as filter()\r\nWe can also use slice() to filter(), using which():\r\n\r\n\r\nidentical(\r\n starwars_sm |> filter( height > 150 ),\r\n starwars_sm |> slice( which(height > 150) )\r\n)\r\n\r\n [1] TRUE\r\n\r\nThus, 
we can think of filter() and slice() as two sides of the same coin:\r\nfilter() takes a logical vector that’s the same length as the number of rows in the data frame\r\nslice() takes an integer vector that’s a (sub)set of a data frame’s row indices.\r\nTo put it more concretely, this logical vector was being passed to the above filter() call:\r\n\r\n\r\nstarwars_sm$height > 150\r\n\r\n [1] TRUE TRUE FALSE TRUE FALSE TRUE TRUE FALSE TRUE TRUE\r\n\r\nWhile this integer vector was being passed to the above slice() call, where which() returns the position of TRUE values, given a logical vector:\r\n\r\n\r\nwhich( starwars_sm$height > 150 )\r\n\r\n [1] 1 2 4 6 7 9 10\r\n\r\nSpecial properties of slice()\r\nThis re-imagined slice() that heavily exploits data-masking gives us two interesting properties:\r\nWe can work with sets of row indices that need not be the same length as the data frame (vs. filter()).\r\nWe can work with row indices as integers, which are amenable to arithmetic operations (ex: + and *)\r\nTo grok the significance of working with rows as integer sets, let’s work through some examples where slice() comes in very handy.\r\nA gallery of row operations with slice()\r\nRepeat rows (in place)\r\nIn {tidyr}, there’s a function called uncount() which does the opposite of dplyr::count():\r\n\r\n\r\nlibrary(tidyr)\r\n# Example from `tidyr::uncount()` docs\r\nuncount_df <- tibble(x = c(\"a\", \"b\"), n = c(1, 2))\r\nuncount_df\r\n\r\n # A tibble: 2 × 2\r\n x n\r\n \r\n 1 a 1\r\n 2 b 2\r\n\r\nuncount_df |> \r\n uncount(n)\r\n\r\n # A tibble: 3 × 1\r\n x \r\n \r\n 1 a \r\n 2 b \r\n 3 b\r\n\r\nWe can mimic this behavior with slice(), using rep(times = ...):\r\n\r\n\r\nrep(1:nrow(uncount_df), times = uncount_df$n)\r\n\r\n [1] 1 2 2\r\n\r\nuncount_df |> \r\n slice( rep(row_number(), times = n) ) |> \r\n select( -n )\r\n\r\n # A tibble: 3 × 1\r\n x \r\n \r\n 1 a \r\n 2 b \r\n 3 b\r\n\r\nWhat if instead of a whole column storing that information, we only have 
information about row position?\r\nLet’s say we want to duplicate the rows of starwars_sm at the repeat_at positions:\r\n\r\n\r\nrepeat_at <- sample(5, 2)\r\nrepeat_at\r\n\r\n [1] 4 5\r\n\r\nIn slice(), you’d just select all rows plus those additional rows, then sort the integer row indices:\r\n\r\n\r\nstarwars_sm |> \r\n slice( sort(c(row_number(), repeat_at)) )\r\n\r\n # A tibble: 12 × 3\r\n name height mass\r\n \r\n 1 Luke Skywalker 172 77\r\n 2 C-3PO 167 75\r\n 3 R2-D2 96 32\r\n 4 Darth Vader 202 136\r\n 5 Darth Vader 202 136\r\n 6 Leia Organa 150 49\r\n 7 Leia Organa 150 49\r\n 8 Owen Lars 178 120\r\n 9 Beru Whitesun lars 165 75\r\n 10 R5-D4 97 32\r\n 11 Biggs Darklighter 183 84\r\n 12 Obi-Wan Kenobi 182 77\r\n\r\nWhat if we also separately have information about how much to repeat those rows by?\r\n\r\n\r\nrepeat_by <- c(3, 4)\r\n\r\n\r\nYou can apply the same rep() method for just the subset of rows to repeat:\r\n\r\n\r\nstarwars_sm |> \r\n slice( sort(c(row_number(), rep(repeat_at, times = repeat_by - 1))) )\r\n\r\n # A tibble: 15 × 3\r\n name height mass\r\n \r\n 1 Luke Skywalker 172 77\r\n 2 C-3PO 167 75\r\n 3 R2-D2 96 32\r\n 4 Darth Vader 202 136\r\n 5 Darth Vader 202 136\r\n 6 Darth Vader 202 136\r\n 7 Leia Organa 150 49\r\n 8 Leia Organa 150 49\r\n 9 Leia Organa 150 49\r\n 10 Leia Organa 150 49\r\n 11 Owen Lars 178 120\r\n 12 Beru Whitesun lars 165 75\r\n 13 R5-D4 97 32\r\n 14 Biggs Darklighter 183 84\r\n 15 Obi-Wan Kenobi 182 77\r\n\r\nCircling back to uncount(), you could also initialize a vector of 1s and replace() where the rows should be repeated:\r\n\r\n\r\nstarwars_sm |> \r\n uncount( replace(rep(1, n()), repeat_at, repeat_by) )\r\n\r\n # A tibble: 15 × 3\r\n name height mass\r\n \r\n 1 Luke Skywalker 172 77\r\n 2 C-3PO 167 75\r\n 3 R2-D2 96 32\r\n 4 Darth Vader 202 136\r\n 5 Darth Vader 202 136\r\n 6 Darth Vader 202 136\r\n 7 Leia Organa 150 49\r\n 8 Leia Organa 150 49\r\n 9 Leia Organa 150 49\r\n 10 Leia Organa 150 49\r\n 11 Owen Lars 178 
120\r\n 12 Beru Whitesun lars 165 75\r\n 13 R5-D4 97 32\r\n 14 Biggs Darklighter 183 84\r\n 15 Obi-Wan Kenobi 182 77\r\n\r\nSubset a selection of rows + the following row\r\nRow order can sometimes encode a meaningful continuous measure, like time.\r\nTake for example this subset of the flights dataset in {nycflights13}:\r\n\r\n\r\nflights_df <- nycflights13::flights |> \r\n filter(month == 3, day == 3, origin == \"JFK\") |> \r\n select(dep_time, flight, carrier) |> \r\n slice(1:100) |> \r\n arrange(dep_time)\r\nflights_df\r\n\r\n # A tibble: 100 × 3\r\n dep_time flight carrier\r\n \r\n 1 535 1141 AA \r\n 2 551 5716 EV \r\n 3 555 145 B6 \r\n 4 556 208 B6 \r\n 5 556 79 B6 \r\n 6 601 501 B6 \r\n 7 604 725 B6 \r\n 8 606 135 B6 \r\n 9 606 600 UA \r\n 10 607 829 US \r\n # ℹ 90 more rows\r\n\r\nHere, the rows are ordered by dep_time, such that given a row, the next row is a data point for the next flight that departed from the airport.\r\nAnd let’s say we’re interested in flights that took off immediately after American Airlines (\"AA\") flights. Given what we just noted about the ordering of rows in the data frame, we can do this in slice() by adding 1 to the row index of AA flights:\r\n\r\n\r\nflights_df |> \r\n slice( which(carrier == \"AA\") + 1 )\r\n\r\n # A tibble: 14 × 3\r\n dep_time flight carrier\r\n \r\n 1 551 5716 EV \r\n 2 627 905 B6 \r\n 3 652 117 B6 \r\n 4 714 825 AA \r\n 5 717 987 B6 \r\n 6 724 11 VX \r\n 7 742 183 DL \r\n 8 802 655 AA \r\n 9 805 2143 DL \r\n 10 847 59 B6 \r\n 11 858 647 AA \r\n 12 859 120 DL \r\n 13 1031 179 AA \r\n 14 1036 641 B6\r\n\r\nWhat if we also want to keep observations for the preceding AA flights as well? 
We can just stick which(carrier == \"AA\") inside slice() too:\r\n\r\n\r\nflights_df |> \r\n slice(\r\n which(carrier == \"AA\"),\r\n which(carrier == \"AA\") + 1\r\n )\r\n\r\n # A tibble: 28 × 3\r\n dep_time flight carrier\r\n \r\n 1 535 1141 AA \r\n 2 626 413 AA \r\n 3 652 1815 AA \r\n 4 711 443 AA \r\n 5 714 825 AA \r\n 6 724 33 AA \r\n 7 739 59 AA \r\n 8 802 1838 AA \r\n 9 802 655 AA \r\n 10 843 1357 AA \r\n # ℹ 18 more rows\r\n\r\nBut now the rows are ordered such that all the AA flights come before the other flights! How can we preserve the original order of increasing dep_time?\r\nWe could reconstruct the initial row order by piping the result into arrange(dep_time) again, but the simplest solution would be to concatenate the set of row indices and sort() them, since the output of which() is already an integer vector!\r\n\r\n\r\nflights_df |> \r\n slice(\r\n sort(c(\r\n which(carrier == \"AA\"),\r\n which(carrier == \"AA\") + 1\r\n ))\r\n )\r\n\r\n # A tibble: 28 × 3\r\n dep_time flight carrier\r\n \r\n 1 535 1141 AA \r\n 2 551 5716 EV \r\n 3 626 413 AA \r\n 4 627 905 B6 \r\n 5 652 1815 AA \r\n 6 652 117 B6 \r\n 7 711 443 AA \r\n 8 714 825 AA \r\n 9 714 825 AA \r\n 10 717 987 B6 \r\n # ℹ 18 more rows\r\n\r\nNotice how the 8th and 9th rows are repeated here - that’s because 2 AA flights departed in a row (ha!). We can use unique() to remove duplicate rows in the same call to slice():\r\n\r\n\r\nflights_df |> \r\n slice(\r\n unique(sort(c(\r\n which(carrier == \"AA\"),\r\n which(carrier == \"AA\") + 1\r\n )))\r\n )\r\n\r\n # A tibble: 24 × 3\r\n dep_time flight carrier\r\n \r\n 1 535 1141 AA \r\n 2 551 5716 EV \r\n 3 626 413 AA \r\n 4 627 905 B6 \r\n 5 652 1815 AA \r\n 6 652 117 B6 \r\n 7 711 443 AA \r\n 8 714 825 AA \r\n 9 717 987 B6 \r\n 10 724 33 AA \r\n # ℹ 14 more rows\r\n\r\nImportantly, we can do all of this inside slice() because we’re working with integer sets. 
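To see those two aspects in isolation, here’s a minimal base-R sketch (the indices here are made up for illustration, not taken from flights_df):

```r
# Hypothetical row indices of rows we care about
idx <- c(2L, 7L, 8L)

# Integer arithmetic: shift every index forward by one row
idx + 1L
#> [1] 3 8 9

# Set operations: combine, order, and deduplicate the original and shifted indices
unique(sort(c(idx, idx + 1L)))
#> [1] 2 3 7 8 9
```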
The integer part allows us to do things like + 1 and sort(), while the set part allows us to combine with c() and remove duplicates with unique().\r\nSubset a selection of rows + multiple following rows\r\nIn this example, let’s problematize our previous solution’s repeated which() calls.\r\nImagine another scenario where we want to filter for all AA flights and three subsequent flights for each.\r\nDo we need to write the solution out like this? That’s a lot of repetition!\r\n\r\n\r\nflights_df |> \r\n slice(\r\n which(carrier == \"AA\"),\r\n which(carrier == \"AA\") + 1,\r\n which(carrier == \"AA\") + 2,\r\n which(carrier == \"AA\") + 3\r\n )\r\n\r\n\r\nYou might think we can get away with + 0:3, but it doesn’t work as we’d like. The + just forces 0:3 to be (partially) recycled to the same length as the which() output for element-wise addition:\r\n\r\n\r\nwhich(flights_df$carrier == \"AA\") + 0:3\r\n\r\n Warning in which(flights_df$carrier == \"AA\") + 0:3: longer object length is not\r\n a multiple of shorter object length\r\n [1] 1 14 20 27 25 28 34 40 38 62 66 68 91 93\r\n\r\nIf only we could get the outer sum of the two arrays, 0:3 and which(carrier == \"AA\") … Oh wait, we can - that’s what outer() does!\r\n\r\n\r\nouter(0:3, which(flights_df$carrier == \"AA\"), `+`)\r\n\r\n [,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [,9] [,10] [,11] [,12] [,13] [,14]\r\n [1,] 1 13 18 24 25 27 32 37 38 61 64 65 91 92\r\n [2,] 2 14 19 25 26 28 33 38 39 62 65 66 92 93\r\n [3,] 3 15 20 26 27 29 34 39 40 63 66 67 93 94\r\n [4,] 4 16 21 27 28 30 35 40 41 64 67 68 94 95\r\n\r\nThis is essentially the repeated which() vectors stacked on top of each other, but as a matrix:\r\n\r\n\r\nprint( which(flights_df$carrier == \"AA\") )\r\nprint( which(flights_df$carrier == \"AA\") + 1 )\r\nprint( which(flights_df$carrier == \"AA\") + 2 )\r\nprint( which(flights_df$carrier == \"AA\") + 3 )\r\n\r\n [1] 1 13 18 24 25 27 32 37 38 61 64 65 91 92\r\n [1] 2 14 19 25 26 28 33 38 39 62 65 66 
92 93\r\n [1] 3 15 20 26 27 29 34 39 40 63 66 67 93 94\r\n [1] 4 16 21 27 28 30 35 40 41 64 67 68 94 95\r\n\r\nThe fact that outer() returns all the relevant row indices inside a single matrix is nice because we can collect the indices column-by-column, preserving row order. Matrices, like data frames, are column-major, so coercing a matrix to a vector collapses it column-wise:\r\n\r\n\r\nas.integer( outer(0:3, which(flights_df$carrier == \"AA\"), `+`) )\r\n\r\n [1] 1 2 3 4 13 14 15 16 18 19 20 21 24 25 26 27 25 26 27 28 27 28 29 30 32\r\n [26] 33 34 35 37 38 39 40 38 39 40 41 61 62 63 64 64 65 66 67 65 66 67 68 91 92\r\n [51] 93 94 92 93 94 95\r\n\r\n\r\nOther ways to coerce matrix to vector\r\nThere are two other options for coercing a matrix to vector - c() and as.vector(). I like to stick with as.integer() because that enforces integer type (which makes sense for row indices), and c() can be nice because it’s less to type (although it’s off-label usage):\r\n\r\n\r\n# Not run, but equivalent to `as.integer()` method\r\nas.vector( outer(0:3, which(flights_df$carrier == \"AA\"), `+`) )\r\nc( outer(0:3, which(flights_df$carrier == \"AA\"), `+`) )\r\n\r\n\r\nSomewhat relatedly - and this only works inside the tidy-eval context of slice() - you can get a similar effect of “collapsing” a matrix using the splice operator !!!:\r\n\r\n\r\nseq_matrix <- matrix(1:9, byrow = TRUE, nrow = 3)\r\nas.integer(seq_matrix)\r\n\r\n [1] 1 4 7 2 5 8 3 6 9\r\n\r\nidentical(\r\n mtcars |> slice( as.vector(seq_matrix) ),\r\n mtcars |> slice( !!!seq_matrix )\r\n)\r\n\r\n [1] TRUE\r\n\r\nHere, the !!!seq_matrix was slotting each individual “cell” as an argument to slice():\r\n\r\n\r\nrlang::expr( slice(!!!seq_matrix) )\r\n\r\n slice(1L, 4L, 7L, 2L, 5L, 8L, 3L, 6L, 9L)\r\n\r\nA big difference in behavior between as.integer() and !!! 
is that the latter works for lists of indices too, by slotting each element of the list as an argument to slice():\r\n\r\n\r\nseq_list <- list(c(1, 4, 7, 2), c(5, 8, 3, 6, 9))\r\nrlang::expr( slice( !!!seq_list ) )\r\n\r\n slice(c(1, 4, 7, 2), c(5, 8, 3, 6, 9))\r\n\r\nHowever, as you may already know, as.integer() cannot flatten lists:\r\n\r\n\r\nas.integer(seq_list)\r\n\r\n Error in eval(expr, envir, enclos): 'list' object cannot be coerced to type 'integer'\r\n\r\nNote that as.vector() and c() leave lists as-is, which is another reason to prefer as.integer() for type-checking:\r\n\r\n\r\nidentical(seq_list, as.vector(seq_list))\r\nidentical(seq_list, c(seq_list))\r\n\r\n [1] TRUE\r\n [1] TRUE\r\n\r\nFinally, back in our !!!seq_matrix example, we could have applied asplit(MARGIN = 2) to chunk the splicing by matrix column, although the overall effect would be the same:\r\n\r\n\r\nrlang::expr(slice( !!!seq_matrix ))\r\n\r\n slice(1L, 4L, 7L, 2L, 5L, 8L, 3L, 6L, 9L)\r\n\r\nrlang::expr(slice( !!!asplit(seq_matrix, 2) ))\r\n\r\n slice(c(1L, 4L, 7L), c(2L, 5L, 8L), c(3L, 6L, 9L))\r\n\r\nThis lets us ask questions like: Which AA flights departed within 3 flights of another AA flight?\r\n\r\n\r\nflights_df |> \r\n slice( as.integer( outer(0:3, which(carrier == \"AA\"), `+`) ) ) |> \r\n filter( carrier == \"AA\", duplicated(flight) ) |> \r\n distinct(flight, carrier)\r\n\r\n # A tibble: 6 × 2\r\n flight carrier\r\n \r\n 1 825 AA \r\n 2 33 AA \r\n 3 655 AA \r\n 4 1 AA \r\n 5 647 AA \r\n 6 179 AA\r\n\r\n\r\nSlicing all the way down: Case 1\r\nWith the addition of the .by argument to slice() in dplyr v1.1.0, we can re-write the above code as three calls to slice() (+ a call to select()):\r\n\r\n\r\nflights_df |> \r\n slice( as.integer( outer(0:3, which(carrier == \"AA\"), `+`) ) ) |> \r\n slice( which(carrier == \"AA\" & duplicated(flight)) ) |> # filter()\r\n slice( 1, .by = c(flight, carrier) ) |> # distinct()\r\n select(flight, carrier)\r\n\r\n # A tibble: 6 × 2\r\n 
flight carrier\r\n \r\n 1 825 AA \r\n 2 33 AA \r\n 3 655 AA \r\n 4 1 AA \r\n 5 647 AA \r\n 6 179 AA\r\n\r\nThe next example will demonstrate another, perhaps more practical use case for outer() in slice().\r\nFilter (and encode) neighboring rows\r\nLet’s use a subset of the {gapminder} data set for this one. Here, we have data for each European country’s GDP-per-capita by year, from 1992 to 2007:\r\n\r\n\r\ngapminder_df <- gapminder::gapminder |> \r\n left_join(gapminder::country_codes, by = \"country\") |> # `multiple = \"all\"`\r\n filter(year >= 1992, continent == \"Europe\") |> \r\n select(country, country_code = iso_alpha, year, gdpPercap)\r\ngapminder_df\r\n\r\n # A tibble: 120 × 4\r\n country country_code year gdpPercap\r\n \r\n 1 Albania ALB 1992 2497.\r\n 2 Albania ALB 1997 3193.\r\n 3 Albania ALB 2002 4604.\r\n 4 Albania ALB 2007 5937.\r\n 5 Austria AUT 1992 27042.\r\n 6 Austria AUT 1997 29096.\r\n 7 Austria AUT 2002 32418.\r\n 8 Austria AUT 2007 36126.\r\n 9 Belgium BEL 1992 25576.\r\n 10 Belgium BEL 1997 27561.\r\n # ℹ 110 more rows\r\n\r\nThis time, let’s see the desired output (plot) first and build our way up. The goal is to plot the GDP growth of Germany over the years, and its yearly GDP neighbors side-by-side:\r\n\r\n\r\n\r\nFirst, let’s think about what a “GDP neighbor” means in row-relational terms. If you arranged the data by GDP, the GDP neighbors would be the rows that come immediately before and after the rows for Germany. 
You need to recalculate neighbors every year though, so this arrange() + slice() combo should happen by-year.\r\nWith that in mind, let’s set up a year grouping and arrange by gdpPercap within year:1\r\n\r\n\r\ngapminder_df |> \r\n group_by(year) |> \r\n arrange(gdpPercap, .by_group = TRUE)\r\n\r\n # A tibble: 120 × 4\r\n # Groups: year [4]\r\n country country_code year gdpPercap\r\n \r\n 1 Albania ALB 1992 2497.\r\n 2 Bosnia and Herzegovina BIH 1992 2547.\r\n 3 Turkey TUR 1992 5678.\r\n 4 Bulgaria BGR 1992 6303.\r\n 5 Romania ROU 1992 6598.\r\n 6 Montenegro MNE 1992 7003.\r\n 7 Poland POL 1992 7739.\r\n 8 Croatia HRV 1992 8448.\r\n 9 Serbia SRB 1992 9325.\r\n 10 Slovak Republic SVK 1992 9498.\r\n # ℹ 110 more rows\r\n\r\nNow within each year, we want to grab the row for Germany and its neighboring rows. We can do this by taking the outer() sum of -1:1 and the row indices for Germany:\r\n\r\n\r\ngapminder_df |> \r\n group_by(year) |> \r\n arrange(gdpPercap, .by_group = TRUE) |> \r\n slice( as.integer(outer( -1:1, which(country == \"Germany\"), `+` )) )\r\n\r\n # A tibble: 12 × 4\r\n # Groups: year [4]\r\n country country_code year gdpPercap\r\n \r\n 1 Denmark DNK 1992 26407.\r\n 2 Germany DEU 1992 26505.\r\n 3 Netherlands NLD 1992 26791.\r\n 4 Belgium BEL 1997 27561.\r\n 5 Germany DEU 1997 27789.\r\n 6 Iceland ISL 1997 28061.\r\n 7 United Kingdom GBR 2002 29479.\r\n 8 Germany DEU 2002 30036.\r\n 9 Belgium BEL 2002 30486.\r\n 10 France FRA 2007 30470.\r\n 11 Germany DEU 2007 32170.\r\n 12 United Kingdom GBR 2007 33203.\r\n\r\n\r\nSlicing all the way down: Case 2\r\nThe new .by argument in slice() comes in handy again here, allowing us to collapse the group_by() + arrange() combo into one slice() call:\r\n\r\n\r\ngapminder_df |> \r\n slice( order(gdpPercap), .by = year) |> \r\n slice( as.integer(outer( -1:1, which(country == \"Germany\"), `+` )) )\r\n\r\n # A tibble: 12 × 4\r\n country country_code year gdpPercap\r\n \r\n 1 Denmark DNK 1992 26407.\r\n 2 Germany DEU 
1992 26505.\r\n 3 Netherlands NLD 1992 26791.\r\n 4 Belgium BEL 1997 27561.\r\n 5 Germany DEU 1997 27789.\r\n 6 Iceland ISL 1997 28061.\r\n 7 United Kingdom GBR 2002 29479.\r\n 8 Germany DEU 2002 30036.\r\n 9 Belgium BEL 2002 30486.\r\n 10 France FRA 2007 30470.\r\n 11 Germany DEU 2007 32170.\r\n 12 United Kingdom GBR 2007 33203.\r\n\r\nFor our purposes here we actually want the grouping to persist for the following mutate() call, but there may be other cases where you’d want to use slice(.by = ) for temporary grouping.\r\nNow we’re already starting to see the shape of the data that we want! The last step is to encode the relationship of each row to Germany - does a row represent Germany itself, or a country that’s one GDP ranking below or above Germany?\r\nContinuing with our grouped context, we make a new column grp that assigns a factor value \"lo\"-\"is\"-\"hi\" (for “lower” than Germany, “is” Germany and “higher” than Germany) to each country trio by year. Notice the use of fct_inorder() below - this ensures that the factor levels are in the order of their occurrence (necessary for the correct ordering of bars in geom_col() later):\r\n\r\n\r\ngapminder_df |> \r\n group_by(year) |> \r\n arrange(gdpPercap) |> \r\n slice( as.integer(outer( -1:1, which(country == \"Germany\"), `+` )) ) |> \r\n mutate(grp = forcats::fct_inorder(c(\"lo\", \"is\", \"hi\")))\r\n\r\n # A tibble: 12 × 5\r\n # Groups: year [4]\r\n country country_code year gdpPercap grp \r\n \r\n 1 Denmark DNK 1992 26407. lo \r\n 2 Germany DEU 1992 26505. is \r\n 3 Netherlands NLD 1992 26791. hi \r\n 4 Belgium BEL 1997 27561. lo \r\n 5 Germany DEU 1997 27789. is \r\n 6 Iceland ISL 1997 28061. hi \r\n 7 United Kingdom GBR 2002 29479. lo \r\n 8 Germany DEU 2002 30036. is \r\n 9 Belgium BEL 2002 30486. hi \r\n 10 France FRA 2007 30470. lo \r\n 11 Germany DEU 2007 32170. is \r\n 12 United Kingdom GBR 2007 33203. 
hi\r\n\r\nWe now have everything that’s necessary to make our desired plot, so we ungroup(), write some {ggplot2} code, and voila!\r\n\r\n\r\ngapminder_df |> \r\n group_by(year) |> \r\n arrange(gdpPercap) |> \r\n slice( as.integer(outer( -1:1, which(country == \"Germany\"), `+` )) ) |> \r\n mutate(grp = forcats::fct_inorder(c(\"lo\", \"is\", \"hi\"))) |> \r\n # Ungroup and make ggplot\r\n ungroup() |> \r\n ggplot(aes(as.factor(year), gdpPercap, group = grp)) +\r\n geom_col(aes(fill = grp == \"is\"), position = position_dodge()) +\r\n geom_text(\r\n aes(label = country_code),\r\n vjust = 1.3,\r\n position = position_dodge(width = .9)\r\n ) +\r\n scale_fill_manual(\r\n values = c(\"grey75\", \"steelblue\"),\r\n guide = guide_none()\r\n ) +\r\n theme_classic() +\r\n labs(x = \"Year\", y = \"GDP per capita\")\r\n\r\n\r\n\r\n\r\nSolving the harder version of the problem\r\nThe solution presented above relies on the fragile assumption that Germany will always have a higher and a lower ranking GDP neighbor every year. But nothing about the problem description guarantees this, so how can we re-write our code to be more robust?\r\nFirst, let’s simulate data where Germany is the lowest ranking country in 2002 and the highest ranking in 2007. 
In other words, Germany only has one GDP neighbor in those years:\r\n\r\n\r\ngapminder_harder_df <- gapminder_df |> \r\n slice( order(gdpPercap), .by = year) |> \r\n slice( as.integer(outer( -1:1, which(country == \"Germany\"), `+` )) ) |> \r\n slice( -7, -12 )\r\ngapminder_harder_df\r\n\r\n # A tibble: 10 × 4\r\n country country_code year gdpPercap\r\n \r\n 1 Denmark DNK 1992 26407.\r\n 2 Germany DEU 1992 26505.\r\n 3 Netherlands NLD 1992 26791.\r\n 4 Belgium BEL 1997 27561.\r\n 5 Germany DEU 1997 27789.\r\n 6 Iceland ISL 1997 28061.\r\n 7 Germany DEU 2002 30036.\r\n 8 Belgium BEL 2002 30486.\r\n 9 France FRA 2007 30470.\r\n 10 Germany DEU 2007 32170.\r\n\r\nGiven this data, we cannot assign the full, length-3 lo-is-hi factor by group, because the groups for year 2002 and 2007 only have 2 observations:\r\n\r\n\r\ngapminder_harder_df |> \r\n group_by(year) |> \r\n mutate(grp = forcats::fct_inorder(c(\"lo\", \"is\", \"hi\")))\r\n\r\n Error in `mutate()`:\r\n ℹ In argument: `grp = forcats::fct_inorder(c(\"lo\", \"is\", \"hi\"))`.\r\n ℹ In group 3: `year = 2002`.\r\n Caused by error:\r\n ! `grp` must be size 2 or 1, not 3.\r\n\r\nThe trick here is to turn each group of rows into an integer sequence where Germany is “anchored” to 2, and then use that vector to subset the lo-is-hi factor:\r\n\r\n\r\ngapminder_harder_df |> \r\n group_by(year) |> \r\n mutate(\r\n Germany_anchored_to_2 = row_number() - which(country == \"Germany\") + 2,\r\n grp = forcats::fct_inorder(c(\"lo\", \"is\", \"hi\"))[Germany_anchored_to_2]\r\n )\r\n\r\n # A tibble: 10 × 6\r\n # Groups: year [4]\r\n country country_code year gdpPercap Germany_anchored_to_2 grp \r\n \r\n 1 Denmark DNK 1992 26407. 1 lo \r\n 2 Germany DEU 1992 26505. 2 is \r\n 3 Netherlands NLD 1992 26791. 3 hi \r\n 4 Belgium BEL 1997 27561. 1 lo \r\n 5 Germany DEU 1997 27789. 2 is \r\n 6 Iceland ISL 1997 28061. 3 hi \r\n 7 Germany DEU 2002 30036. 2 is \r\n 8 Belgium BEL 2002 30486. 3 hi \r\n 9 France FRA 2007 30470. 
1 lo \r\n 10 Germany DEU 2007 32170. 2 is\r\n\r\nWe find that the lessons of working with row indices from slice() translate to solving this complex mutate() problem - neat!\r\nWindowed min/max/median (etc.)\r\nLet’s say we have this small time series data set, and we want to calculate a lagged 3-window moving minimum for the val column:\r\n\r\n\r\nts_df <- tibble(\r\n time = 1:6,\r\n val = sample(1:6 * 10)\r\n)\r\nts_df\r\n\r\n # A tibble: 6 × 2\r\n time val\r\n \r\n 1 1 50\r\n 2 2 40\r\n 3 3 60\r\n 4 4 30\r\n 5 5 20\r\n 6 6 10\r\n\r\nIf you’re new to window functions, think of them as a special kind of group_by() + summarize() where groups are chunks of observations along a (typically unique) continuous measure like time, and observations can be shared between groups.\r\nThere are several packages implementing moving/sliding/rolling window functions. My current favorite is {r2c} (see a review of other implementations therein), but I also like {slider} for an implementation that follows familiar “tidy” design principles:\r\n\r\n\r\nlibrary(slider)\r\nts_df |> \r\n mutate(moving_min = slide_min(val, before = 2L, complete = TRUE))\r\n\r\n # A tibble: 6 × 3\r\n time val moving_min\r\n \r\n 1 1 50 NA\r\n 2 2 40 NA\r\n 3 3 60 40\r\n 4 4 30 30\r\n 5 5 20 20\r\n 6 6 10 10\r\n\r\nMoving windows are a general class of operations that encompass any arbitrary summary statistic - so not just min but other reducing functions like mean, standard deviation, etc. But what makes moving min (along with max, median, etc.) a particularly interesting case for our current discussion is that the value comes from an existing observation in the data. And if our time series is tidy, every observation makes up a row. See where I’m going with this?\r\nUsing outer() again, we can take the outer sum of all row indices of ts_df and -2:0. 
This gives us a matrix where each column represents a lagged size-3 moving window:\r\n\r\n\r\nwindows_3lag <- outer(-2:0, 1:nrow(ts_df), \"+\")\r\nwindows_3lag\r\n\r\n [,1] [,2] [,3] [,4] [,5] [,6]\r\n [1,] -1 0 1 2 3 4\r\n [2,] 0 1 2 3 4 5\r\n [3,] 1 2 3 4 5 6\r\n\r\nThe “lagged size-3” property of this moving window means that the first two windows are incomplete (consisting of fewer than 3 observations). We want to treat those as invalid, so we can drop the first two columns from our matrix:\r\n\r\n\r\nwindows_3lag[,-(1:2)]\r\n\r\n [,1] [,2] [,3] [,4]\r\n [1,] 1 2 3 4\r\n [2,] 2 3 4 5\r\n [3,] 3 4 5 6\r\n\r\nFor each remaining column, we want to grab the values of val at the corresponding row indices and find which row has the minimum val. In terms of code, we use apply() with MARGIN = 2L to apply a function column-wise, using which.min() to find the location of the minimum val and converting it back to a row index via subsetting:\r\n\r\n\r\nwindows_3lag[, -(1:2)] |> \r\n apply(MARGIN = 2L, \\(i) i[which.min(ts_df$val[i])])\r\n\r\n [1] 2 4 5 6\r\n\r\nNow let’s stick this inside slice(), exploiting the fact that it’s data-masked (ts_df$val can just be val) and exposes context-dependent expressions (1:nrow(ts_df) can just be row_number()):\r\n\r\n\r\nmoving_mins <- ts_df |> \r\n slice(\r\n outer(-2:0, row_number(), \"+\")[,-(1:2)] |> \r\n apply(MARGIN = 2L, \\(i) i[which.min(val[i])])\r\n )\r\nmoving_mins\r\n\r\n # A tibble: 4 × 2\r\n time val\r\n \r\n 1 2 40\r\n 2 4 30\r\n 3 5 20\r\n 4 6 10\r\n\r\nFrom here, we can grab the val column and pad it with NA to add our desired moving_min column to the original data frame:\r\n\r\n\r\nts_df |> \r\n mutate(moving_min = c(NA, NA, moving_mins$val))\r\n\r\n # A tibble: 6 × 3\r\n time val moving_min\r\n \r\n 1 1 50 NA\r\n 2 2 40 NA\r\n 3 3 60 40\r\n 4 4 30 30\r\n 5 5 20 20\r\n 6 6 10 10\r\n\r\nAt this point you might think that this is a very roundabout way of solving the same problem. 
But actually I think that it’s a faster route to solving a slightly more complicated problem - augmenting each observation of a data frame with information about comparison observations.\r\nFor example, our slice()-based solution sets us up nicely for also bringing along information about the time at which the moving_min occurred. After some rename()-ing and adding the original time information back in, we get back a relational data structure where time is a key shared with ts_df:\r\n\r\n\r\nmoving_mins2 <- moving_mins |> \r\n rename(moving_min_val = val, moving_min_time = time) |> \r\n mutate(time = ts_df$time[-(1:2)], .before = 1L)\r\nmoving_mins2\r\n\r\n # A tibble: 4 × 3\r\n time moving_min_time moving_min_val\r\n \r\n 1 3 2 40\r\n 2 4 4 30\r\n 3 5 5 20\r\n 4 6 6 10\r\n\r\nWe can then left-join this to the original data to augment it with information about both the value of the 3-window minimum and the time that the minimum occurred:\r\n\r\n\r\nleft_join(ts_df, moving_mins2, by = \"time\")\r\n\r\n # A tibble: 6 × 4\r\n time val moving_min_time moving_min_val\r\n \r\n 1 1 50 NA NA\r\n 2 2 40 NA NA\r\n 3 3 60 2 40\r\n 4 4 30 4 30\r\n 5 5 20 5 20\r\n 6 6 10 6 10\r\n\r\nThis is particularly useful if rows contain other useful information for comparison and you have memory to spare:\r\n\r\n\r\nts_wide_df <- ts_df |> \r\n mutate(\r\n col1 = rnorm(6),\r\n col2 = rnorm(6)\r\n )\r\nts_wide_df\r\n\r\n # A tibble: 6 × 4\r\n time val col1 col2\r\n \r\n 1 1 50 0.0183 0.00501\r\n 2 2 40 0.705 -0.0376 \r\n 3 3 60 -0.647 0.724 \r\n 4 4 30 0.868 -0.497 \r\n 5 5 20 0.376 0.0114 \r\n 6 6 10 0.310 0.00986\r\n\r\nThe below code augments each observation in the original ts_wide_df data with information about the corresponding 3-window moving min (columns prefixed with \"min3val_\")\r\n\r\n\r\nmoving_mins_wide <- ts_wide_df |> \r\n slice(\r\n outer(-2:0, row_number(), \"+\")[,-(1:2)] |> \r\n apply(MARGIN = 2L, \\(i) i[which.min(val[i])])\r\n ) |> \r\n rename_with(~ 
paste0(\"min3val_\", .x)) |> \r\n mutate(time = ts_wide_df$time[-(1:2)])\r\nleft_join(ts_wide_df, moving_mins_wide, by = \"time\")\r\n\r\n # A tibble: 6 × 8\r\n time val col1 col2 min3val_time min3val_val min3val_col1\r\n \r\n 1 1 50 0.0183 0.00501 NA NA NA \r\n 2 2 40 0.705 -0.0376 NA NA NA \r\n 3 3 60 -0.647 0.724 2 40 0.705\r\n 4 4 30 0.868 -0.497 4 30 0.868\r\n 5 5 20 0.376 0.0114 5 20 0.376\r\n 6 6 10 0.310 0.00986 6 10 0.310\r\n # ℹ 1 more variable: min3val_col2 \r\n\r\n\r\nEvenly distributed row shuffling of balanced categories\r\nSometimes the ordering of rows in a data frame can be meaningful for an external application.\r\nFor example, many experiment-building platforms for psychology research require researchers to specify the running order of trials in an experiment via a csv, where each row represents a trial and each column represents information about the trial.\r\nSo an experiment testing the classic Stroop effect may have the following template:\r\n\r\n\r\nmismatch_trials <- tibble(\r\n item_id = 1:5,\r\n trial = \"mismatch\",\r\n word = c(\"red\", \"green\", \"purple\", \"brown\", \"blue\"),\r\n color = c(\"brown\", \"red\", \"green\", \"blue\", \"purple\")\r\n)\r\nmismatch_trials\r\n\r\n # A tibble: 5 × 4\r\n item_id trial word color \r\n \r\n 1 1 mismatch red brown \r\n 2 2 mismatch green red \r\n 3 3 mismatch purple green \r\n 4 4 mismatch brown blue \r\n 5 5 mismatch blue purple\r\n\r\nWe probably also want to mix in some control trials where the word and color do match:\r\n\r\n\r\nmatch_trials <- mismatch_trials |> \r\n 
mutate(trial = \"match\", color = word)\r\nmatch_trials\r\n\r\n # A tibble: 5 × 4\r\n item_id trial word color \r\n \r\n 1 1 match red red \r\n 2 2 match green green \r\n 3 3 match purple purple\r\n 4 4 match brown brown \r\n 5 5 match blue blue\r\n\r\nNow that we have all materials for our experiment, we next want the running order to interleave the match and mismatch trials.\r\nWe first add them together into a longer data frame:\r\n\r\n\r\nstroop_trials <- bind_rows(mismatch_trials, match_trials)\r\nstroop_trials\r\n\r\n # A tibble: 10 × 4\r\n item_id trial word color \r\n \r\n 1 1 mismatch red brown \r\n 2 2 mismatch green red \r\n 3 3 mismatch purple green \r\n 4 4 mismatch brown blue \r\n 5 5 mismatch blue purple\r\n 6 1 match red red \r\n 7 2 match green green \r\n 8 3 match purple purple\r\n 9 4 match brown brown \r\n 10 5 match blue blue\r\n\r\nAnd from here we can exploit the fact that all mismatch items come before match items, and that they share the same length of 5:\r\n\r\n\r\nstroop_trials |> \r\n slice( as.integer(outer(c(0, 5), 1:5, \"+\")) )\r\n\r\n # A tibble: 10 × 4\r\n item_id trial word color \r\n \r\n 1 1 mismatch red brown \r\n 2 1 match red red \r\n 3 2 mismatch green red \r\n 4 2 match green green \r\n 5 3 mismatch purple green \r\n 6 3 match purple purple\r\n 7 4 mismatch brown blue \r\n 8 4 match brown brown \r\n 9 5 mismatch blue purple\r\n 10 5 match blue blue\r\n\r\nThis relies on strong assumptions about the row order in the original data, though. So a safer alternative is to represent the row indices for \"match\" and \"mismatch\" trials as rows of a matrix, and then collapse column-wise.\r\nLet’s try this outside of slice() first. 
We start with a call to sapply() to construct a matrix where the columns contain row indices for each unique category of trial:\r\n\r\n\r\nsapply(unique(stroop_trials$trial), \\(x) which(stroop_trials$trial == x))\r\n\r\n mismatch match\r\n [1,] 1 6\r\n [2,] 2 7\r\n [3,] 3 8\r\n [4,] 4 9\r\n [5,] 5 10\r\n\r\nThen we transpose the matrix with t(), which rotates it:\r\n\r\n\r\nt( sapply(unique(stroop_trials$trial), \\(x) which(stroop_trials$trial == x)) )\r\n\r\n [,1] [,2] [,3] [,4] [,5]\r\n mismatch 1 2 3 4 5\r\n match 6 7 8 9 10\r\n\r\nNow let’s stick that inside slice(), remembering to collapse the transposed matrix into a vector:\r\n\r\n\r\ninterleaved_stroop_trials <- stroop_trials |> \r\n slice( as.integer(t(sapply(unique(trial), \\(x) which(trial == x)))) )\r\ninterleaved_stroop_trials\r\n\r\n # A tibble: 10 × 4\r\n item_id trial word color \r\n \r\n 1 1 mismatch red brown \r\n 2 1 match red red \r\n 3 2 mismatch green red \r\n 4 2 match green green \r\n 5 3 mismatch purple green \r\n 6 3 match purple purple\r\n 7 4 mismatch brown blue \r\n 8 4 match brown brown \r\n 9 5 mismatch blue purple\r\n 10 5 match blue blue\r\n\r\nAt the moment, we have both “red” word trials showing up together, and then the “green”s, the “purple”s, and so on. 
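As a quick base-R aside (nothing specific to our Stroop data): sample(), when given a single vector, returns a random permutation of its elements, which is exactly the kind of shuffling we can apply to row indices:

```r
set.seed(1)  # for reproducibility
sample(c(10L, 20L, 30L, 40L, 50L))  # one random ordering of the five values
```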
If we want to introduce some randomness into the presentation order within each type of trial, we can wrap the row indices in sample() to shuffle them first:\r\n\r\n\r\nshuffled_stroop_trials <- stroop_trials |> \r\n slice( as.integer(t(sapply(unique(trial), \\(x) sample(which(trial == x))))) )\r\nshuffled_stroop_trials\r\n\r\n # A tibble: 10 × 4\r\n item_id trial word color \r\n \r\n 1 1 mismatch red brown \r\n 2 5 match blue blue \r\n 3 2 mismatch green red \r\n 4 4 match brown brown \r\n 5 3 mismatch purple green \r\n 6 1 match red red \r\n 7 4 mismatch brown blue \r\n 8 3 match purple purple\r\n 9 5 mismatch blue purple\r\n 10 2 match green green\r\n\r\n\r\nInserting a new row at specific intervals\r\nContinuing with our Stroop experiment template example, let’s say we want to give participants a break every two trials.\r\nIn a matrix representation, this means constructing this 2-row matrix of row indices:\r\n\r\n\r\nmatrix(1:nrow(shuffled_stroop_trials), nrow = 2)\r\n\r\n [,1] [,2] [,3] [,4] [,5]\r\n [1,] 1 3 5 7 9\r\n [2,] 2 4 6 8 10\r\n\r\nAnd adding a row that represents a separator/break, before collapsing column-wise:\r\n\r\n\r\nmatrix(1:nrow(shuffled_stroop_trials), nrow = 2) |> \r\n rbind(11)\r\n\r\n [,1] [,2] [,3] [,4] [,5]\r\n [1,] 1 3 5 7 9\r\n [2,] 2 4 6 8 10\r\n [3,] 11 11 11 11 11\r\n\r\nUsing slice(), this means first adding a row to the data representing a break trial, and then adding a row to the row index matrix pointing at that row:\r\n\r\n\r\nstroop_with_breaks <- shuffled_stroop_trials |> \r\n add_row(trial = \"BREAK\") |> \r\n slice(\r\n matrix(row_number()[-n()], nrow = 2) |> \r\n rbind(n()) |> \r\n as.integer()\r\n )\r\nstroop_with_breaks\r\n\r\n # A tibble: 15 × 4\r\n item_id trial word color \r\n \r\n 1 1 mismatch red brown \r\n 2 5 match blue blue \r\n 3 NA BREAK \r\n 4 2 mismatch green red \r\n 5 4 match brown brown \r\n 6 NA BREAK \r\n 7 3 mismatch purple green \r\n 8 1 match red red \r\n 9 NA BREAK \r\n 10 4 mismatch brown blue 
\r\n 11 3 match purple purple\r\n 12 NA BREAK \r\n 13 5 mismatch blue purple\r\n 14 2 match green green \r\n 15 NA BREAK \r\n\r\nIf we don’t want a break after the last trial, we can use negative indexing with slice(-n()):\r\n\r\n\r\nstroop_with_breaks |> \r\n slice(-n())\r\n\r\n # A tibble: 14 × 4\r\n item_id trial word color \r\n \r\n 1 1 mismatch red brown \r\n 2 5 match blue blue \r\n 3 NA BREAK \r\n 4 2 mismatch green red \r\n 5 4 match brown brown \r\n 6 NA BREAK \r\n 7 3 mismatch purple green \r\n 8 1 match red red \r\n 9 NA BREAK \r\n 10 4 mismatch brown blue \r\n 11 3 match purple purple\r\n 12 NA BREAK \r\n 13 5 mismatch blue purple\r\n 14 2 match green green\r\n\r\nWhat about after 3 trials, where the number of trials (10) is not divisible by 3? Can we still use a matrix?\r\nYes, you’d just need to explicitly fill in the “blanks”!\r\nConceptually, we want a matrix like this, where extra “cells” are padded with 0s (recall that 0s are ignored in slice()):\r\n\r\n\r\nmatrix(c(1:10, rep(0, 3 - 10 %% 3)), nrow = 3)\r\n\r\n [,1] [,2] [,3] [,4]\r\n [1,] 1 4 7 10\r\n [2,] 2 5 8 0\r\n [3,] 3 6 9 0\r\n\r\nAnd this is how that could be implemented inside slice(), minding the fact that adding the break trial increases the original row count by 1:\r\n\r\n\r\nshuffled_stroop_trials |> \r\n add_row(trial = \"BREAK\") |> \r\n slice(\r\n c(seq_len(n()-1), rep(0, 3 - (n()-1) %% 3)) |> \r\n matrix(nrow = 3) |> \r\n rbind(n()) |> \r\n as.integer()\r\n ) |> \r\n slice(-n())\r\n\r\n # A tibble: 13 × 4\r\n item_id trial word color \r\n \r\n 1 1 mismatch red brown \r\n 2 5 match blue blue \r\n 3 2 mismatch green red \r\n 4 NA BREAK \r\n 5 4 match brown brown \r\n 6 3 mismatch purple green \r\n 7 1 match red red \r\n 8 NA BREAK \r\n 9 4 mismatch brown blue \r\n 10 3 match purple purple\r\n 11 5 mismatch blue purple\r\n 12 NA BREAK \r\n 13 2 match green green\r\n\r\nHow about inserting a break trial after every \"purple\" word trial?\r\nConceptually, we want a matrix that binds 
these two vectors as rows before collapsing:\r\n\r\n\r\nprint( 1:nrow(shuffled_stroop_trials) )\r\nprint(\r\n replace(rep(0, nrow(shuffled_stroop_trials)),\r\n which(shuffled_stroop_trials$word == \"purple\"), 11)\r\n)\r\n\r\n [1] 1 2 3 4 5 6 7 8 9 10\r\n [1] 0 0 0 0 11 0 0 11 0 0\r\n\r\nAnd this is how you could do that inside slice():\r\n\r\n\r\nshuffled_stroop_trials |> \r\n add_row(trial = \"BREAK\") |> \r\n slice(\r\n c(seq_len(n()-1), replace(rep(0, n()-1), which(word == \"purple\"), n())) |>\r\n matrix(nrow = 2, byrow = TRUE) |> \r\n as.integer()\r\n )\r\n\r\n # A tibble: 12 × 4\r\n item_id trial word color \r\n \r\n 1 1 mismatch red brown \r\n 2 5 match blue blue \r\n 3 2 mismatch green red \r\n 4 4 match brown brown \r\n 5 3 mismatch purple green \r\n 6 NA BREAK \r\n 7 1 match red red \r\n 8 4 mismatch brown blue \r\n 9 3 match purple purple\r\n 10 NA BREAK \r\n 11 5 mismatch blue purple\r\n 12 2 match green green\r\n\r\nYou might protest that this is a pretty convoluted approach to a seemingly simple problem of inserting rows, and you’d be right!2 Not only is the code difficult to read, you can only insert the same single row over and over.\r\nIt turns out that these cases of row insertion actually fall under the broader class of interweaving unequal categories - let’s see this next.\r\nEvenly distributed row shuffling of unequal categories\r\nLet’s return to our solution for the initial “break every 2 trials” problem:\r\n\r\n\r\nshuffled_stroop_trials |> \r\n add_row(trial = \"BREAK\") |> \r\n slice(\r\n matrix(row_number()[-n()], nrow = 2) |> \r\n rbind(n()) |> \r\n as.integer()\r\n ) |> \r\n slice(-n())\r\n\r\n # A tibble: 14 × 4\r\n item_id trial word color \r\n \r\n 1 1 mismatch red brown \r\n 2 5 match blue blue \r\n 3 NA BREAK \r\n 4 2 mismatch green red \r\n 5 4 match brown brown \r\n 6 NA BREAK \r\n 7 3 mismatch purple green \r\n 8 1 match red red \r\n 9 NA BREAK \r\n 10 4 mismatch brown blue \r\n 11 3 match purple purple\r\n 12 NA BREAK \r\n 13 
5 mismatch blue purple\r\n 14 2 match green green\r\n\r\nHere, we were working with a matrix that looks like this, where 11 represents the new row we added representing a break trial:\r\n\r\n [,1] [,2] [,3] [,4] [,5]\r\n [1,] 1 3 5 7 9\r\n [2,] 2 4 6 8 10\r\n [3,] 11 11 11 11 11\r\n\r\nAnd recall that to insert every 3 rows, we needed to pad with 0 first to satisfy the matrix’s rectangle constraint:\r\n\r\n [,1] [,2] [,3] [,4]\r\n [1,] 1 4 7 10\r\n [2,] 2 5 8 0\r\n [3,] 3 6 9 0\r\n [4,] 11 11 11 11\r\n\r\nBut a better way of thinking about this is to have one matrix row representing all row indices, and then add a sparse row that represents breaks:\r\nBreak after every 2 trials:\r\n\r\n\r\nmatrix(c(1:10, rep_len(c(0, 11), 10)), nrow = 2, byrow = TRUE)\r\n\r\n [,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [,9] [,10]\r\n [1,] 1 2 3 4 5 6 7 8 9 10\r\n [2,] 0 11 0 11 0 11 0 11 0 11\r\n\r\nBreak after every 3 trials:\r\n\r\n\r\nmatrix(c(1:10, rep_len(c(0, 0, 11), 10)), nrow = 2, byrow = TRUE)\r\n\r\n [,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [,9] [,10]\r\n [1,] 1 2 3 4 5 6 7 8 9 10\r\n [2,] 0 0 11 0 0 11 0 0 11 0\r\n\r\nBreak after every 4 trials:\r\n\r\n\r\nmatrix(c(1:10, rep_len(c(0, 0, 0, 11), 10)), nrow = 2, byrow = TRUE)\r\n\r\n [,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [,9] [,10]\r\n [1,] 1 2 3 4 5 6 7 8 9 10\r\n [2,] 0 0 0 11 0 0 0 11 0 0\r\n\r\nAnd it turns out that this method generalizes to balanced shuffling across categories that are not equal in size!\r\nLet’s start with a really basic example - here we have three kinds of fruits with varying counts:\r\n\r\n\r\nfruits <- c(\"🍎\", \"🍋\", \"🍇\")[c(2,1,3,3,2,3,1,2,2,1,2,2,3,3,3)]\r\nfruits <- factor(fruits, levels = c(\"🍇\", \"🍋\", \"🍎\"))\r\ntable(fruits)\r\n\r\n fruits\r\n 🍇 🍋 🍎 \r\n 6 6 3\r\n\r\nTheir current order looks like this:\r\n\r\n\r\ncat(levels(fruits)[fruits])\r\n\r\n 🍋 🍎 🍇 🍇 🍋 🍇 🍎 🍋 🍋 🍎 🍋 🍋 🍇 🍇 🍇\r\n\r\nBut I want them to be ordered such that individuals of the same fruit kind are maximally apart from one 
another. This effectively re-orders the fruits to be distributed “evenly”:\r\n\r\n\r\ncat(levels(fruits)[fruits[c(3,1,2,4,5,0,6,8,10,13,9,0,14,11,7,15,12,0)]])\r\n\r\n 🍇 🍋 🍎 🍇 🍋 🍇 🍋 🍎 🍇 🍋 🍇 🍋 🍎 🍇 🍋\r\n\r\nWith our “build row-wise, collapse col-wise” approach, this takes the following steps:\r\nFind the most frequent category - that N-max becomes the number of columns in the matrix of row indices.\r\nIn this case it’s grapes and lemons, of which there are 6 each:\r\n\r\n\r\ngrape_rows <- which(fruits == \"🍇\")\r\n setNames(grape_rows, rep(\"🍇\", 6))\r\n\r\n 🍇 🍇 🍇 🍇 🍇 🍇 \r\n 3 4 6 13 14 15\r\n\r\n\r\n\r\nlemon_rows <- which(fruits == \"🍋\")\r\n setNames(lemon_rows, rep(\"🍋\", 6))\r\n\r\n 🍋 🍋 🍋 🍋 🍋 🍋 \r\n 1 5 8 9 11 12\r\n\r\nNormalize (“stretch”) all vectors to have the same length as N-max.\r\nIn this case we need to stretch the apples vector, which is currently only length-3:\r\n\r\n\r\napple_rows <- which(fruits == \"🍎\")\r\n apple_rows\r\n\r\n [1] 2 7 10\r\n\r\nThe desired “sparse” representation is something like this, where each instance of apple is equidistant, with 0s in between:\r\n\r\n\r\napple_rows_sparse <- c(2, 0, 7, 0, 10, 0)\r\n setNames(apple_rows_sparse, c(\"🍎\", \"\", \"🍎\", \"\", \"🍎\", \"\"))\r\n\r\n 🍎 🍎 🍎 \r\n 2 0 7 0 10 0\r\n\r\nThere are many ways to get at this, but one trick involves creating an evenly spaced float sequence from 1 to N-apple over N-max steps:\r\n\r\n\r\nseq(1, 3, length.out = 6)\r\n\r\n [1] 1.0 1.4 1.8 2.2 2.6 3.0\r\n\r\nFrom there, we round the numbers:\r\n\r\n\r\nround(seq(1, 3, length.out = 6))\r\n\r\n [1] 1 1 2 2 3 3\r\n\r\nThen mark the first occurrence of each number using !duplicated():\r\n\r\n\r\n!duplicated(round(seq(1, 3, length.out = 6)))\r\n\r\n [1] TRUE FALSE TRUE FALSE TRUE FALSE\r\n\r\nAnd lastly, we initialize a vector of 0s and replace() the TRUEs with apple indices:\r\n\r\n\r\nreplace(\r\n rep(0, 6),\r\n !duplicated(round(seq(1, 3, length.out = 6))),\r\n which(fruits == \"🍎\")\r\n )\r\n\r\n [1] 2 0 7 0 10 
0\r\n\r\nStack up the category vectors by row and collapse column-wise:\r\nManually, we would build the full matrix row-by-row like this:\r\n\r\n\r\nfruits_matrix <- matrix(\r\n c(grape_rows, lemon_rows, apple_rows_sparse),\r\n nrow = 3, byrow = TRUE\r\n )\r\n rownames(fruits_matrix) <- c(\"🍇\", \"🍋\", \"🍎\")\r\n fruits_matrix\r\n\r\n [,1] [,2] [,3] [,4] [,5] [,6]\r\n 🍇 3 4 6 13 14 15\r\n 🍋 1 5 8 9 11 12\r\n 🍎 2 0 7 0 10 0\r\n\r\nAnd dynamically we can use sapply() to fill the matrix column-by-column, and then t()-ing the output:\r\n\r\n\r\nfruits_distributed <- sapply(levels(fruits), \\(x) {\r\n n_max <- max(table(fruits))\r\n ind <- which(fruits == x)\r\n nums <- seq(1, length(ind), length.out = n_max)\r\n replace(rep(0, n_max), !duplicated(round(nums)), ind)\r\n }) |> \r\n t()\r\n fruits_distributed\r\n\r\n [,1] [,2] [,3] [,4] [,5] [,6]\r\n 🍇 3 4 6 13 14 15\r\n 🍋 1 5 8 9 11 12\r\n 🍎 2 0 7 0 10 0\r\n\r\nFinally, we collapse the vector and we see that it indeed distributed the fruits evenly!\r\n\r\n\r\nfruits[as.integer(fruits_distributed)]\r\n\r\n [1] 🍇 🍋 🍎 🍇 🍋 🍇 🍋 🍎 🍇 🍋 🍇 🍋 🍎 🍇 🍋\r\n Levels: 🍇 🍋 🍎\r\n\r\nWe can go even further and wrap the dynamic, sapply()-based solution into a function for use within slice(). 
Here, I also added an optional argument for shuffling within categories:\r\n\r\n\r\nrshuffle <- function(x, shuffle_within = FALSE) {\r\n categories <- as.factor(x)\r\n n_max <- max(table(categories))\r\n sapply(levels(categories), \\(lvl) {\r\n ind <- which(categories == lvl)\r\n if (shuffle_within) ind <- sample(ind)\r\n nums <- seq(1, length(ind), length.out = n_max)\r\n replace(rep(0, n_max), !duplicated(round(nums)), ind)\r\n }) |> \r\n t() |> \r\n as.integer()\r\n}\r\n\r\n\r\nReturning to our Stroop experiment template example, imagine we also had two filler trials, where no word is shown and just the color flashes on the screen:\r\n\r\n\r\nstroop_fillers <- tibble(\r\n item_id = 1:2,\r\n trial = \"filler\",\r\n word = NA,\r\n color = c(\"red\", \"blue\")\r\n)\r\nstroop_with_fillers <- bind_rows(stroop_fillers, stroop_trials) |> \r\n mutate(trial = factor(trial, c(\"match\", \"mismatch\", \"filler\")))\r\nstroop_with_fillers\r\n\r\n # A tibble: 12 × 4\r\n item_id trial word color \r\n \r\n 1 1 filler red \r\n 2 2 filler blue \r\n 3 1 mismatch red brown \r\n 4 2 mismatch green red \r\n 5 3 mismatch purple green \r\n 6 4 mismatch brown blue \r\n 7 5 mismatch blue purple\r\n 8 1 match red red \r\n 9 2 match green green \r\n 10 3 match purple purple\r\n 11 4 match brown brown \r\n 12 5 match blue blue\r\n\r\nWe can evenly shuffle between the unequal trial types with our new rshuffle() function:\r\n\r\n\r\nstroop_with_fillers |> \r\n slice( rshuffle(trial, shuffle_within = TRUE) )\r\n\r\n # A tibble: 12 × 4\r\n item_id trial word color \r\n \r\n 1 2 match green green \r\n 2 2 mismatch green red \r\n 3 1 filler red \r\n 4 1 match red red \r\n 5 4 mismatch brown blue \r\n 6 3 match purple purple\r\n 7 3 mismatch purple green \r\n 8 2 filler blue \r\n 9 4 match brown brown \r\n 10 5 mismatch blue purple\r\n 11 5 match blue blue \r\n 12 1 mismatch red brown\r\n\r\nConclusion\r\nWhen I started drafting this blog post, I thought I’d come up with a principled taxonomy 
of row-relational operations. Ha. This was a lot trickier to think through than I thought.\r\nBut I hope that this gallery of esoteric use-cases for slice() inspires you to use it more, and to think about “tidy” solutions to seemingly “untidy” problems.\r\n\r\nThe .by_group = TRUE is not strictly necessary here, but it’s good for visually inspecting the within-group ordering.↩︎\r\nAlthough row insertion is a generally tricky problem for column-major data frame structures, which is partly why dplyr’s row manipulation verbs have stayed experimental for quite some time.↩︎\r\n",
"preview": "posts/2023-06-11-row-relational-operations/preview.png",
- "last_modified": "2023-06-11T13:51:42+09:00",
+ "last_modified": "2023-06-11T00:51:42-04:00",
"input_file": {},
"preview_width": 1800,
"preview_height": 1080
@@ -62,7 +62,7 @@
],
"contents": "\r\n\r\nContents\r\nIntro\r\nTL;DR - Big takeaways\r\nSetup\r\nQuick example\r\nList of 💜s and 💔s\r\n1) 💜 The distinctness of the “grouped df” type\r\n2) 💜 The imperative -! variants\r\n3) 💔 Competition between Base.filter() and DataFrames.subset()\r\n4) 💜 The operation specification syntax is like {data.table}’s j on steroids\r\n5) 💜 Rowwise operations with ByRow() and eachrow()\r\n6) 💔 Confusingly, select() is more like dplyr::transmute() than dplyr::select()\r\n7) 💔 Selection helpers are not powered by boolean algebra\r\n8) 💜 groupby() has select-semantics\r\n9) 💔 No special marking of context-dependent expressions\r\n10) 💜 The op-spec syntax gives you dplyr::across()/c_across() for free\r\n\r\nConcluding thoughts\r\nOverall impression\r\nNext steps\r\n\r\n\r\nIntro\r\nDataFrames.jl is a Julia package for data wrangling.\r\nAs of this writing it is at v1.4.x - it’s a mature library that’s been in active development for over a decade.1\r\nFor some background, I comfortably switch between {dplyr} and {data.table}, having used both for nearly 5 years.\r\nI love digging into the implementational details of both - I really appreciate the thoughtfulness behind {dplyr}’s tidyeval/tidyselect semantics, as well as {data.table}’s conciseness and abstraction in the j.\r\nI have not been exposed to any other data wrangling frameworks but was recently compelled to learn Julia for independent reasons,2 so I decided why not pick up Julia-flavored data wrangling while I’m at it?\r\nThis blog post is a rough (and possibly evolving?) list of my first impressions of DataFrames.jl and “DataFrames.jl accessories”, namely Chain.jl and DataFramesMeta.jl.3\r\nIf you’re Julia-curious and/or just want to hear an R person talk about how another language does data wrangling differently, you’re the target audience!\r\nHowever, this blog post is NOT:\r\nMy first impressions of the Julia language or a pitch for why you should use Julia. 
If you want that from an R user’s perspective, check out Trang Le’s blog post and the Julia documentation on “Noteworthy differences from R”.\r\nA DataFrames.jl tutorial. But if you’re curious, aside from the docs I learned almost exclusively from Bogumił Kamiński’s JuliaCon 2022 workshop, the Julia Data Science book, and the Julia for Data Analysis book.4\r\nA {dplyr}/{data.table} to DataFrames.jl translation cheatsheet since those already exist, though I’ll be doing some of that myself when it helps illustrate a point.\r\nAll of this to say that I have no skin in the game and I don’t endorse or represent anything I write here.\r\nIn fact I’m a Julia noob myself (it’s only been like 3 months) so take everything with a grain of salt and please feel free to let me know if I did anything wrong or inefficiently!\r\nTL;DR - Big takeaways\r\nThe syntax mimics {dplyr} but works more like {data.table} under the hood. There’s a bit of unlearning to do for {dplyr} users.\r\nThere are not as many idiomatic ways of doing things as there are in {dplyr}. 
Whereas you can get very far in {dplyr} without thinking much about base R, learning DataFrames.jl requires a good amount of “base” Julia first (especially distinctions between data types, which R lacks).\r\nI love Chain.jl but I’m not that drawn to DataFramesMeta.jl because it feels like {dtplyr}5 - I’d personally rather just focus on learning the thing itself.\r\nSome aspects of DataFrames.jl are relatively underdeveloped IMO (e.g., context dependent expressions) but it’s in active development and I plan to stick around to see more.\r\nSetup\r\n\r\n\r\nR\r\n\r\n\r\n# R v4.2.1\r\nlibrary(dplyr) # v1.0.10\r\nlibrary(data.table) # v1.14.5\r\nmtcars_df <- mtcars |>\r\n as_tibble(rownames = \"model\") |>\r\n type.convert(as.is = TRUE)\r\nmtcars_dt <- as.data.table(mtcars_df)\r\n\r\n\r\n\r\n\r\nJulia\r\n\r\n# Julia v1.8.2\r\nusing DataFrames # (v1.4.3)\r\nusing DataFramesMeta # (v0.12.0) Also imports Chain.jl\r\n# using Chain.jl (v0.5.0)\r\nusing StatsBase # (v0.33.21) Like base R {stats}\r\nusing RDatasets # (v0.7.7) Self-explanatory; like the {Rdatasets} package\r\nmtcars = RDatasets.dataset(\"datasets\", \"mtcars\")\r\n 32×12 DataFrame\r\n Row │ Model MPG Cyl Disp HP DRat WT QS ⋯\r\n │ String31 Float64 Int64 Float64 Int64 Float64 Float64 Fl ⋯\r\n ─────┼──────────────────────────────────────────────────────────────────────────\r\n 1 │ Mazda RX4 21.0 6 160.0 110 3.9 2.62 ⋯\r\n 2 │ Mazda RX4 Wag 21.0 6 160.0 110 3.9 2.875\r\n 3 │ Datsun 710 22.8 4 108.0 93 3.85 2.32\r\n 4 │ Hornet 4 Drive 21.4 6 258.0 110 3.08 3.215\r\n 5 │ Hornet Sportabout 18.7 8 360.0 175 3.15 3.44 ⋯\r\n 6 │ Valiant 18.1 6 225.0 105 2.76 3.46\r\n 7 │ Duster 360 14.3 8 360.0 245 3.21 3.57\r\n 8 │ Merc 240D 24.4 4 146.7 62 3.69 3.19\r\n ⋮ │ ⋮ ⋮ ⋮ ⋮ ⋮ ⋮ ⋮ ⋱\r\n 26 │ Fiat X1-9 27.3 4 79.0 66 4.08 1.935 ⋯\r\n 27 │ Porsche 914-2 26.0 4 120.3 91 4.43 2.14\r\n 28 │ Lotus Europa 30.4 4 95.1 113 3.77 1.513\r\n 29 │ Ford Pantera L 15.8 8 351.0 264 4.22 3.17\r\n 30 │ Ferrari Dino 19.7 6 145.0 175 3.62 2.77 
⋯\r\n 31 │ Maserati Bora 15.0 8 301.0 335 3.54 3.57\r\n 32 │ Volvo 142E 21.4 4 121.0 109 4.11 2.78\r\n 5 columns and 17 rows omitted\r\n\r\n\r\n\r\nQuick example\r\nFrom mtcars…\r\nFilter for rows that represent \"Merc\"6 car models\r\nCalculate the average mpg by cyl\r\nReturn a new column called kmpg that converts miles to kilometers (1:1.61)\r\n\r\n\r\n{dplyr}\r\n\r\n\r\nmtcars_df |>\r\n filter(stringr::str_detect(model, \"^Merc \")) |>\r\n group_by(cyl) |>\r\n summarize(kmpg = mean(mpg) * 1.61)\r\n\r\n # A tibble: 3 × 2\r\n cyl kmpg\r\n