---
output: rmarkdown::github_document
editor_options:
  chunk_output_type: console
---
*** BREAKING CHANGES ***
# newsflash
Tools to Work with the Internet Archive and GDELT Television Explorer
## Description
Ref:
- <http://television.gdeltproject.org/cgi-bin/iatv_ftxtsearch/iatv_ftxtsearch>
- <https://archive.org/details/third-eye>
TV Explorer:
>_"In collaboration with the Internet Archive's Television News Archive, GDELT's Television Explorer allows you to keyword search the closed captioning streams of the Archive's 6 years of American television news and explore macro-level trends in how America's television news is shaping the conversation around key societal issues. Unlike the Archive's primary Television News interface, which returns results at the level of an hour or half-hour "show," the interface here reaches inside of those six years of programming and breaks the more than one million shows into individual sentences and counts how many of those sentences contain your keyword of interest. Instead of reporting that CNN had 24 hour-long shows yesterday that mentioned Donald Trump, the interface here will count how many sentences uttered on CNN yesterday mentioned his name - a vastly more accurate metric for assessing media attention."_
Third Eye:
>_"The TV News Archive's Third Eye project captures the chyrons–or narrative text–that appear on the lower third of TV news screens and turns them into downloadable data and a Twitter feed for research, journalism, online tools, and other projects. At project launch (September 2017) we are collecting chyrons from BBC News, CNN, Fox News, and MSNBC–more than four million collected over just two weeks."_
An advantage of using this package over the TV Explorer's interactive selector & downloader or the Third Eye API is that you get tidy tibbles, ready to use in R.
NOTE: While I don't claim that this alpha package is anywhere near perfect, the IA/GDELT TV API hiccups every so often, so if you hit critical errors, run the same query in their web interface before submitting an issue here. For example, searching all affiliate markets for the "mexican president" query kept returning errors, and the same query also errors on the web site when JSON is selected as the output format (it works fine there when an interactive browser visualization is chosen). Submit those errors to them, not here.
## What's Inside The Tin
The following functions are implemented:
- `list_chyrons`: Retrieve Third Eye chyron index
- `list_networks`: Helper function to identify station/network keywords and the corpus date range for a given market
- `newsflash`: Tools to Work with the Internet Archive and GDELT Television Explorer
- `query_tv`: Issue a query to the TV Explorer
- `read_chyrons`: Retrieve TV News Archive chyrons from the Internet Archive's Third Eye project
- `gd_top_trending`: Top Trending (GDELT)
- `iatv_top_trending`: Top Trending Topics (Internet Archive TV Archive)
- `word_cloud`: Retrieve top words that appear most frequently in clips matching your search
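Both the chyron helpers and the TV Explorer query functions return tidy tibbles, so a minimal quick-start looks like this (a sketch that mirrors the fuller examples in the Usage section below):
```{r eval=FALSE}
library(newsflash)
library(tidyverse)

# chyrons for a single day, as a tibble
ch <- read_chyrons("2018-04-13")

# a basic TV Explorer keyword query, also a tibble
comey <- query_tv("comey", start_date = "2018-04-01")

glimpse(comey)
```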
## Installation
```{r eval=FALSE}
devtools::install_github("hrbrmstr/newsflash")
```
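If you'd rather not install `devtools`, `remotes::install_github()` does the same job (an equivalent alternative, assuming the `remotes` package is installed):
```{r eval=FALSE}
# same install via the lighter-weight remotes package
remotes::install_github("hrbrmstr/newsflash")
```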
```{r message=FALSE, warning=FALSE, error=FALSE}
options(width=120)
```
## Usage
```{r message=FALSE, warning=FALSE, error=FALSE}
library(newsflash)
library(ggalt)
library(hrbrthemes)
library(tidyverse)
# current version
packageVersion("newsflash")
```
### "Third Eye" Chyrons are simpler so we'll start with them first:
```{r fig.width=8, fig.height=5, cache=TRUE}
list_chyrons()
ch <- read_chyrons("2018-04-13")
mutate(
ch,
hour = lubridate::hour(ts),
text = tolower(text),
mention = grepl("comey", text)
) %>%
filter(mention) %>%
count(hour, channel) %>%
ggplot(aes(hour, n)) +
geom_segment(aes(xend=hour, yend=0), color = "lightslategray", size=1) +
scale_x_continuous(name="Hour (GMT)", breaks=seq(0, 23, 6),
labels=sprintf("%02d:00", seq(0, 23, 6))) +
scale_y_continuous(name="# Chyrons", limits=c(0,20)) +
facet_wrap(~channel, scales="free") +
labs(title="Chyrons mentioning 'Comey' per hour per channel",
caption="Source: Internet Archive Third Eye project & <github.com/hrbrmstr/newsflash>") +
theme_ipsum_rc(grid="Y")
```
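If you want the tallies as a table rather than a plot, the same pipeline works without the plotting layers; a minimal sketch reusing the `ch` tibble and its `channel` and `text` columns from above:
```{r eval=FALSE}
# tally chyrons mentioning 'comey' per channel for the day
mutate(
  ch,
  text = tolower(text),
  mention = grepl("comey", text)
) %>%
  filter(mention) %>%
  count(channel, sort = TRUE)
```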
## Now for the TV Explorer:
### See what networks & associated corpus date ranges are available:
```{r}
list_networks(widget=FALSE)
```
### Basic search:
```{r fig.width=8, fig.height=7, cache=TRUE}
comey <- query_tv('comey', start_date = "2018-04-01")
comey
comey %>%
arrange(date) %>%
ggplot(aes(date, value, group=network)) +
ggalt::geom_xspline(aes(color=network)) +
ggthemes::scale_color_tableau(name=NULL) +
labs(x=NULL, y="Volume Metric", title="'Comey' Trends Across National Networks") +
facet_wrap(~network) +
theme_ipsum_rc(grid="XY") +
theme(legend.position="none")
```
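The `comey` tibble from above can also be summarised directly, e.g. total volume per network over the query window (a minimal sketch using the `network` and `value` columns returned by `query_tv()`):
```{r eval=FALSE}
# total volume metric per network across the query window
comey %>%
  group_by(network) %>%
  summarise(total_volume = sum(value, na.rm = TRUE)) %>%
  arrange(desc(total_volume))
```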
```{r cache=TRUE}
query_tv("comey Network:CNN", mode = "TimelineVol", start_date = "2018-01-01") %>%
arrange(date) %>%
ggplot(aes(date, value, group=network)) +
ggalt::geom_xspline(color="lightslategray") +
ggthemes::scale_color_tableau(name=NULL) +
labs(x=NULL, y="Volume Metric", title="'Comey' Trend on CNN") +
theme_ipsum_rc(grid="XY")
```
### Relative Network Attention To Syria since January 1, 2018
```{r cache=TRUE}
query_tv('syria Market:"National"', mode = "StationChart", start_date = "2018-01-01") %>%
arrange(desc(count)) %>%
knitr::kable("markdown")
```
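The same result can be plotted as a bar chart; note that `station` below is an assumed column name for the `StationChart` output (only `count` appears above), so check `names()` on the result first:
```{r eval=FALSE}
# relative attention to Syria by station, as a bar chart
# NOTE: `station` is an assumed column name; inspect the tibble first
query_tv('syria Market:"National"', mode = "StationChart", start_date = "2018-01-01") %>%
  ggplot(aes(reorder(station, count), count)) +
  geom_col(fill = "lightslategray") +
  coord_flip() +
  labs(x = NULL, y = "Count", title = "Relative Network Attention To Syria") +
  theme_ipsum_rc(grid = "X")
```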
### Video Clips
```{r cache=TRUE}
clips <- query_tv('comey Market:"National"', mode = "ClipGallery", start_date = "2018-01-01")
clips
```
`r clips$show_date[1]` | `r clips$station[1]` | `r clips$show[1]`
<a href="`r clips$preview_url[1]`"><img src="`r clips$preview_thumb[1]`"></a>
`r clips$snippet[1]`
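Beyond embedding a single clip, the `clips` tibble can be summarised as a table; this sketch reuses the columns referenced above (`show_date`, `station`, `show`, `snippet`):
```{r eval=FALSE}
# a quick markdown table of the first few matching clips
clips %>%
  select(show_date, station, show, snippet) %>%
  head(5) %>%
  knitr::kable("markdown")
```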
### "Word Cloud" (top associated words to the query)
```{r fig.height=8, fig.width=8, cache=TRUE}
wc <- query_tv('hannity Market:"National"', mode = "WordCloud", start_date = "2018-04-13")
ggplot(wc, aes(x=1, y=1)) +
ggrepel::geom_label_repel(aes(label=label, size=count), segment.colour="#00000000", segment.size=0) +
scale_size_continuous(trans="sqrt") +
labs(x=NULL, y=NULL) +
theme_ipsum_rc(grid="") +
theme(axis.text=element_blank()) +
theme(legend.position="none")
```
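If the label-repel layout gets crowded, the same `wc` tibble (with its `label` and `count` columns) also works as a plain top-n bar chart (a minimal sketch):
```{r eval=FALSE}
# top 15 words most associated with the query, as a bar chart
wc %>%
  arrange(desc(count)) %>%
  head(15) %>%
  ggplot(aes(reorder(label, count), count)) +
  geom_col(fill = "lightslategray") +
  coord_flip() +
  labs(x = NULL, y = "Count", title = "Top words associated with 'hannity'") +
  theme_ipsum_rc(grid = "X")
```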
### Last 15 Minutes Top Trending
```{r}
gd_top_trending()
```
### Top Overall Trending from the Internet Archive TV Archive (2017 and earlier)
```{r}
iatv_top_trending("2017-12-01 18:00", "2017-12-02 06:00")
```