diff --git a/_posts/2024-07-21-enumerate-possible-options/enumerate-possible-options.Rmd b/_posts/2024-07-21-enumerate-possible-options/enumerate-possible-options.Rmd
index 82f9ac22..1cd8a5e4 100644
--- a/_posts/2024-07-21-enumerate-possible-options/enumerate-possible-options.Rmd
+++ b/_posts/2024-07-21-enumerate-possible-options/enumerate-possible-options.Rmd
@@ -10,7 +10,7 @@ author:
     affiliation: University of Pennsylvania Linguistics
     affiliation_url: https://live-sas-www-ling.pantheon.sas.upenn.edu/
     orcid_id: 0000-0002-0701-921X
-date: "`r Sys.Date()`"
+date: 07-21-2024
 output:
   distill::distill_article:
     include-after-body: "highlighting.html"
diff --git a/_posts/2024-07-21-enumerate-possible-options/enumerate-possible-options.html b/_posts/2024-07-21-enumerate-possible-options/enumerate-possible-options.html
index 800cd107..81700328 100644
--- a/_posts/2024-07-21-enumerate-possible-options/enumerate-possible-options.html
+++ b/_posts/2024-07-21-enumerate-possible-options/enumerate-possible-options.html
@@ -94,8 +94,8 @@
@@ -115,7 +115,7 @@
@@ -1524,7 +1524,7 @@
@@ -1547,7 +1547,7 @@
Compilation of some code-snippets, mostly for my own use
+Every so often I’ll have a link to some file on hand and want to read it in R without going out of my way to browse the web page, find a download link, download it somewhere onto my computer, grab the path to it, and then finally read it into R.
+Over the years I’ve accumulated some tricks to get data into R “straight from a url”, even if the url does not point to the raw file itself. The method varies between data sources though, and I have a hard time keeping track of them in my head, so I thought I’d write some of these down for my own reference.
+GitHub has a nice point-and-click interface for browsing repositories and previewing files. For example, you can navigate to the dplyr::starwars
dataset from tidyverse/dplyr, at https://github.com/tidyverse/dplyr/blob/main/data-raw/starwars.csv:
That url, despite ending in a .csv
, does not point to the raw data - instead, it’s a full html webpage:
rvest::read_html("https://github.com/tidyverse/dplyr/blob/main/data-raw/starwars.csv")
+ {html_document}
+ <html lang="en" data-color-mode="auto" data-light-theme="light" ...
+ [1] <head>\n<meta http-equiv="Content-Type" content="text/html; charset=UTF-8 ...
+ [2] <body class="logged-out env-production page-responsive" style="word-wrap: ...
+To actually point to the raw file, you want to click on the Raw button at the top-right corner of the preview:
+That gets you to the actual contents of the comma-separated values file, at https://raw.githubusercontent.com/tidyverse/dplyr/main/data-raw/starwars.csv:
+You can then read that URL starting with “raw.githubusercontent.com/…” with read.csv()
:
read.csv("https://raw.githubusercontent.com/tidyverse/dplyr/main/data-raw/starwars.csv") |>
+ dplyr::glimpse()
+ Rows: 87
+ Columns: 14
+ $ name <chr> "Luke Skywalker", "C-3PO", "R2-D2", "Darth Vader", "Leia Or…
+ $ height <int> 172, 167, 96, 202, 150, 178, 165, 97, 183, 182, 188, 180, 2…
+ $ mass <dbl> 77.0, 75.0, 32.0, 136.0, 49.0, 120.0, 75.0, 32.0, 84.0, 77.…
+ $ hair_color <chr> "blond", NA, NA, "none", "brown", "brown, grey", "brown", N…
+ $ skin_color <chr> "fair", "gold", "white, blue", "white", "light", "light", "…
+ $ eye_color <chr> "blue", "yellow", "red", "yellow", "brown", "blue", "blue",…
+ $ birth_year <dbl> 19.0, 112.0, 33.0, 41.9, 19.0, 52.0, 47.0, NA, 24.0, 57.0, …
+ $ sex <chr> "male", "none", "none", "male", "female", "male", "female",…
+ $ gender <chr> "masculine", "masculine", "masculine", "masculine", "femini…
+ $ homeworld <chr> "Tatooine", "Tatooine", "Naboo", "Tatooine", "Alderaan", "T…
+ $ species <chr> "Human", "Droid", "Droid", "Human", "Human", "Human", "Huma…
+ $ films <chr> "A New Hope, The Empire Strikes Back, Return of the Jedi, R…
+ $ vehicles <chr> "Snowspeeder, Imperial Speeder Bike", "", "", "", "Imperial…
+ $ starships <chr> "X-wing, Imperial shuttle", "", "", "TIE Advanced x1", "", …
+But note that this method of “click the Raw button to get the corresponding raw.githubusercontent.com/… url to the file contents” will not work for file formats that cannot be displayed in plain text (clicking the button will instead download the file via your browser). So sometimes (especially when you have a binary file) you have to construct this “remote-readable” url to the file manually.
+Fortunately, going from one link to the other is pretty formulaic. To use the starwars dataset example again:
+emphatic::hl_diff(
+ "https://github.com/tidyverse/dplyr/blob/main/data-raw/starwars.csv",
+ "https://raw.githubusercontent.com/tidyverse/dplyr/main/data-raw/starwars.csv"
+)
++[1] "https:// github .com/tidyverse/dplyr/blob/main/data-raw/starwars.csv"+
[1] "https://raw.githubusercontent.com/tidyverse/dplyr /main/data-raw/starwars.csv" +
It’s a similar idea with GitHub Gists (sometimes I like to store small datasets for demos as gists). For example, here’s a link to simulated data for a Stroop experiment, stroop.csv
: https://gist.github.com/yjunechoe/17b3787fb7aec108c19b33d71bc19bc6
The modified url that you can read the csv contents off of is https://gist.githubusercontent.com/yjunechoe/17b3787fb7aec108c19b33d71bc19bc6/raw/c643b9760126d92b8ac100860ac5b50ba492f316/stroop.csv, which you can again get to by clicking the Raw button at the top-right corner of the gist.
+But actually, that long link you get by default points specifically to the current commit. If you instead want the link to always track the most recent commit, you can remove the second hash (the commit sha) that comes after raw/
:
emphatic::hl_diff(
+ "https://gist.githubusercontent.com/yjunechoe/17b3787fb7aec108c19b33d71bc19bc6/raw/c643b9760126d92b8ac100860ac5b50ba492f316/stroop.csv",
+ "https://gist.githubusercontent.com/yjunechoe/17b3787fb7aec108c19b33d71bc19bc6/raw/stroop.csv"
+)
++[1] "https://gist.githubusercontent.com/yjunechoe/17b3787fb7aec108c19b33d71bc19bc6/raw/c643b9760126d92b8ac100860ac5b50ba492f316/stroop.csv"+
[1] "https://gist.githubusercontent.com/yjunechoe/17b3787fb7aec108c19b33d71bc19bc6/raw /stroop.csv" +
In practice, I don’t use gists to store replicability-sensitive data, so I prefer to just use the shorter link that’s not tied to a specific commit.
+read.csv("https://gist.githubusercontent.com/yjunechoe/17b3787fb7aec108c19b33d71bc19bc6/raw/stroop.csv") |>
+ dplyr::glimpse()
+ Rows: 240
+ Columns: 5
+ $ subj <chr> "S01", "S01", "S01", "S01", "S01", "S01", "S01", "S01", "S02…
+ $ word <chr> "blue", "blue", "green", "green", "red", "red", "yellow", "y…
+ $ condition <chr> "match", "mismatch", "match", "mismatch", "match", "mismatch…
+ $ accuracy <int> 1, 1, 1, 1, 1, 1, 1, 0, 1, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, …
+ $ RT <int> 400, 549, 576, 406, 296, 231, 433, 1548, 561, 1751, 286, 710…
+We now turn to the harder problem of accessing a file in a private GitHub repository. If you already have the GitHub webpage open and you’re signed in, you can follow the same step of copying the link that the Raw button redirects to.
+Except this time, you’ll see the url come with a “token”. This token is necessary to remotely access the data in a private repo. Once a token is generated, the file can be accessed using that token from anywhere, but it will expire at some point because GitHub refreshes these tokens periodically (so treat them as if they’re for single use).
+For a more robust approach, you can use the GitHub Contents API. If you have your credentials set up in {gh}
, you can request a token-tagged url to the private file using the syntax:
gh::gh("/repos/{user}/{repo}/contents/{path}")$download_url
+This is a general solution to getting a url to file contents. So for example, even without any credentials set up you can point to dplyr’s starwars.csv
since that’s publicly accessible. This produces the same “raw.githubusercontent.com/…” url we saw above:
gh::gh("/repos/tidyverse/dplyr/contents/data-raw/starwars.csv")$download_url
+ [1] "https://raw.githubusercontent.com/tidyverse/dplyr/main/data-raw/starwars.csv"
+For demonstration with a private repo, here is one of mine that you cannot access: https://github.com/yjunechoe/my-super-secret-repo. But because I set up my credentials in {gh}
, I can get a link to a file within that repo with the access token attached in the url (“?token=…”):
gh::gh("/repos/yjunechoe/my-super-secret-repo/contents/README.md")$download_url |>
+ # truncating...
+ substr(1, 100) |>
+ paste0("...")
+ [1] "https://raw.githubusercontent.com/yjunechoe/my-super-secret-repo/main/README.md?token=AMTCUR7TQUFUHE..."
+I can then use this url to read the private file:1
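A minimal sketch of such a read (assuming the README is a one-line text file that readLines() can handle):

gh::gh("/repos/yjunechoe/my-super-secret-repo/contents/README.md")$download_url |>
  readLines()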
+ [1] "Surprise!"
+Reading files off of OSF follows a similar strategy to fetching public files on GitHub. Consider, for example, the dyestuff.arrow
file in the OSF repository for MixedModels.jl. Browsing the repository through the point-and-click interface can get you to the page for the file at https://osf.io/9vztj/, where it shows:
The download button can be found inside the dropdown menubar:
+But instead of clicking on it (which will start a download via the browser), we can grab the link address that it redirects to, which is https://osf.io/download/9vztj/. That url can then be passed directly into a read function:
+arrow::read_feather("https://osf.io/download/9vztj/") |>
+ dplyr::glimpse()
+ Rows: 30
+ Columns: 2
+ $ batch <fct> A, A, A, A, A, B, B, B, B, B, C, C, C, C, C, D, D, D, D, D, E, E…
+ $ yield <int> 1545, 1440, 1440, 1520, 1580, 1540, 1555, 1490, 1560, 1495, 1595…
+You might have already caught on to this, but the pattern is simply to point to osf.io/download/
instead of osf.io/
.
This method works for view-only links to anonymized OSF projects as well. For example, this is an anonymized link to a csv file from one of my projects: https://osf.io/tr8qm?view_only=998ad87d86cc4049af4ec6c96a91d9ad. Navigating to this link will show a web preview of the csv file contents, just like in the GitHub example with dplyr::starwars
.
By inserting /download
into this url, we read the csv file contents directly:
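A sketch of that call (assuming read.csv() plus head() to keep the printout short):

read.csv("https://osf.io/download/tr8qm?view_only=998ad87d86cc4049af4ec6c96a91d9ad") |>
  head()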
Item plaus_bias trans_bias
+ 1 Awakened -0.29631221 -1.2200901
+ 2 Calmed 0.09877074 -0.4102332
+ 3 Choked 1.28401957 -1.4284905
+ 4 Dressed -0.59262442 -1.2087228
+ 5 Failed -0.98770736 0.1098839
+ 6 Groomed -1.08647810 0.9889550
+I think it’s severely underrated how base R has a readClipboard()
function and a collection of read.*()
functions which can also read directly from a "clipboard"
connection.2
I often do this for html/markdown summary tables that a website might display, or sometimes even for entire excel/googlesheets tables after doing a select-all. For such relatively small chunks of data that you just want to quickly get into R, you can lean on base R’s clipboard functionalities.
+For example, given this markdown table:
| cyl | mpg      |
|-----|----------|
|   4 | 26.66364 |
|   6 | 19.74286 |
|   8 | 15.10000 |
You can copy it and run the following code to get that back as an R data frame:
+read.delim("clipboard")
+# Or, `read.delim(text = readClipboard())`
+ cyl mpg
+ 1 4 26.66364
+ 2 6 19.74286
+ 3 8 15.10000
+If you’re instead copying something flat like a list of numbers or strings, you can use scan()
and specify the appropriate sep
to get that back as a vector:3
scan("clipboard", sep = ",")
+# Or, `scan(textConnection(readClipboard()), sep = ",")`
+ [1] 1 2 3 4 5 6 7 8 9 10
+It should be noted though that parsing clipboard contents is not a robust feature in base R. If you want a more principled approach to reading data from clipboard, you should use {datapasta}
. And for printing data for others to copy-paste into R, use {constructive}
. See also {clipr}
which extends clipboard read/write functionalities.
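For instance, {clipr} offers cross-platform equivalents of the base idioms above (a sketch, not taken from the original post):

clipr::read_clip()      # clipboard contents as a character vector
clipr::read_clip_tbl()  # clipboard contents parsed as a data frame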
Note that the API will actually generate a new token every time you send a request (and the tokens will expire with time).↩︎
The special value "clipboard"
works for most base-R read functions that take a file
or con
argument.↩︎
Thanks @coolbutuseless for pointing me to textConnection()
!↩︎