The goal here is to outline, in a couple of paragraphs and a few lines of code, some simple ways in which we can use the Windsor.ai API and the windsoraiR R package to gain insights into marketing campaign performance across channels like Google and Facebook. The nice thing about Windsor.ai is that you can have all of your marketing channels aggregated in a single place and then access all the data at once using this package. Of course, once the data is in R you can do much more than the examples below, and work on analyses, predictions or dashboards.
After we create an account at Windsor.ai and obtain an API key, collecting our data from Windsor.ai into R is as easy as:
library(windsoraiR)
my_data <-
  windsor_fetch(
    api_key = "your api key",   # the API key from your Windsor.ai account
    date_preset = "last_7d",    # pull data for the last 7 days
    fields = c("source", "campaign", "clicks",
               "medium", "sessions", "spend")
  )
Let's take a peek at the data we just downloaded to get a better idea about the structure and type of information included.
str(my_data)
#> 'data.frame': 1677 obs. of 6 variables:
#> $ data.campaign: chr "Website visits - May 25, 2019" "(ID)<00_mat>[id-cat]{eb}: mattress" "(ID)<00_mat>[emma mattress]{eb}: emma mattress" "Kampanja #1" ...
#> $ data.clicks : int 1 0 0 0 0 0 0 0 0 0 ...
#> $ data.spend : chr "4" "0" "0" "0" ...
#> $ data.medium : chr "Unknown" "Unknown" "Unknown" "Unknown" ...
#> $ data.source : chr "linkedin" "google" "google" "google" ...
#> $ googlesheets : chr "'spreadsheet_id'" "'spreadsheet_id'" "'spreadsheet_id'" "'spreadsheet_id'" ...
First, let's try to find the campaigns with the most clicks. To do this, we'll filter only those rows that have clicks, then group the dataset by campaign (the data.campaign column) and sum up the click count per campaign.
library(dplyr)
#>
#> Attaching package: 'dplyr'
#> The following objects are masked from 'package:stats':
#>
#> filter, lag
#> The following objects are masked from 'package:base':
#>
#> intersect, setdiff, setequal, union
library(ggplot2)
top_10 <-
  my_data %>%
  filter(data.clicks > 0) %>%                 # keep only rows with at least one click
  group_by(data.campaign) %>%
  summarise(n_clicks = sum(data.clicks)) %>%  # total clicks per campaign
  ungroup() %>%
  arrange(desc(n_clicks)) %>%
  slice_head(n = 10)                          # top 10 campaigns by clicks
knitr::kable(top_10)
data.campaign | n_clicks |
---|---|
AO_Smart_Shopping_Running_Jogging | 56670 |
city_display_awareness_hyd_april_2021 | 54711 |
AO_Dynamic_Remarketing_Cart_Abandoners | 54089 |
AO_Dynamic_Remarketing_Cart_Abandoners_Website | 39969 |
AO_Smart_Shopping_Yoga | 27551 |
AO_Search_Brand_Exact | 24535 |
AO_UAC_app_install | 21958 |
AO_UAC_app_install_Hyd | 20155 |
AO_Shopping_Cycling | 18852 |
AO_UAC_app_install_chennai | 16905 |
Thereafter, we can quickly visualize our data using ggplot2.
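For example, a horizontal bar chart of clicks per campaign. This is a minimal sketch based on the top_10 data frame built above; the exact chart styling is of course up to you:
top_10 %>%
  ggplot(aes(x = n_clicks, y = data.campaign)) +  # campaigns on the y axis, clicks on the x axis
  geom_col()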
We can gain further insight by grouping by other variables and mapping them to the plot aesthetics. For example, in this case we might want to keep track of the data source (linkedin, google, …). So we'll modify our magrittr chain to add another grouping variable before tallying the clicks.
top_10 <-
  my_data %>%
  filter(data.clicks > 0) %>%
  group_by(data.source, data.campaign) %>%    # group by source as well as campaign
  summarise(n_clicks = sum(data.clicks)) %>%
  ungroup() %>%
  arrange(desc(n_clicks)) %>%
  slice_head(n = 10)
#> `summarise()` has grouped output by 'data.source'. You can override using the
#> `.groups` argument.
knitr::kable(top_10)
data.source | data.campaign | n_clicks |
---|---|---|
google | AO_Smart_Shopping_Running_Jogging | 56670 |
google | city_display_awareness_hyd_april_2021 | 54711 |
google | AO_Dynamic_Remarketing_Cart_Abandoners | 54089 |
google | AO_Dynamic_Remarketing_Cart_Abandoners_Website | 39969 |
google | AO_Smart_Shopping_Yoga | 27551 |
google | AO_Search_Brand_Exact | 24535 |
google | AO_UAC_app_install | 21958 |
google | AO_UAC_app_install_Hyd | 20155 |
google | AO_Shopping_Cycling | 18852 |
google | AO_UAC_app_install_chennai | 16905 |
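To put this on a chart, one option (again just a sketch, reusing the bar chart from before) is to map data.source to the fill aesthetic:
top_10 %>%
  ggplot(aes(x = n_clicks, y = data.campaign, fill = data.source)) +  # colour the bars by traffic source
  geom_col()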
In this case, the vast majority of the data (all of it for the top 10 ad campaigns) comes from Google, so only this source is labeled on the graph.
We can apply the same type of data manipulation and plotting to check the data.spend values. Note that data.spend comes through as character (see the str() output above), so we convert it with as.numeric() before summing.
my_data %>%
  filter(data.clicks > 0) %>%
  group_by(data.campaign) %>%
  summarise(sum_spend = sum(as.numeric(data.spend))) %>%  # data.spend is character, hence as.numeric()
  ungroup() %>%
  arrange(desc(sum_spend)) %>%
  slice_head(n = 10) %>%
  ggplot(aes(x = sum_spend, y = data.campaign)) +
  geom_col()
Finally, for a direct comparison, we can aggregate both the clicks and spending per ad campaign and plot them jointly:
library(tidyr)
my_data %>%
  filter(data.clicks > 0) %>%
  group_by(data.campaign) %>%
  summarise(n_clicks = sum(data.clicks),
            sum_spend = sum(as.numeric(data.spend))) %>%      # total clicks and spend per campaign
  arrange(desc(sum_spend)) %>%
  slice_head(n = 10) %>%
  pivot_longer(cols = c("n_clicks", "sum_spend"),
               names_to = "aggreg", values_to = "values") %>%  # long format: one row per campaign/metric
  ggplot(aes(x = values, y = data.campaign, fill = aggreg)) +
  geom_col() +
  facet_wrap("aggreg", ncol = 2) +
  theme(legend.position = "bottom")
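Since clicks and spend live on rather different scales, an optional tweak (not shown above) is to let each facet use its own x-axis range by passing ggplot2's scales argument, e.g. facet_wrap("aggreg", ncol = 2, scales = "free_x").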