This Python application collects social media data and appends the daily results to a Sheets file on Google Drive.
The scrape runs on an ExpressJS server with a daily cron job that fires at 9:00 AM ET.
Data is collected from Instagram, Facebook, and YouTube to automate how a marketing department tracks and collects social metrics.
The server is hosted on a Raspberry Pi.
For this application to run successfully, the following packages are required:
// Scraping
$ pip install requests beautifulsoup4
// Pathing
$ pip install python-dotenv
(pathlib ships with the Python 3 standard library and does not need to be installed separately)
// Google Drive Authentication
$ pip install --upgrade google-api-python-client google-auth-httplib2 google-auth-oauthlib
// Server
$ npm install express
// Cron and Email Notification
$ npm install node-cron shelljs nodemailer
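With requests and beautifulsoup4 installed, each scrape boils down to fetching a public profile page and parsing a count out of its markup. A minimal sketch of the parsing half (parse_follower_count is a hypothetical helper, and the og:description meta tag is an assumption about the page markup, which changes often):

```python
import re

def parse_follower_count(html):
    # BeautifulSoup comes from the beautifulsoup4 package installed above
    from bs4 import BeautifulSoup

    soup = BeautifulSoup(html, "html.parser")
    # public Instagram/Facebook profile pages have historically exposed a
    # summary like "1,234 Followers, 56 Following, 78 Posts" in this tag;
    # treat this as illustrative, not a stable contract
    tag = soup.find("meta", property="og:description")
    if tag is None:
        return None
    match = re.search(r"([\d,.]+[KM]?)\s+Followers", tag["content"])
    return match.group(1) if match else None
```

The HTML itself would come from something like requests.get(url, headers={"User-Agent": "..."}).text before being handed to the parser.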
A .env file holds the confidential keys:
USERNAME=****
PASSWORD=****
CLINICA_API_KEY=****
CLINICA_CHANNEL_ID=****
VOIR_API_KEY=****
VOIR_CHANNEL_ID=****
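These keys can be read into the environment at startup with python-dotenv, so no credentials are hard-coded in the scraper. A minimal sketch (load_secrets is a hypothetical helper name):

```python
import os

def load_secrets(env_path=".env"):
    # load_dotenv comes from the python-dotenv package; it copies the
    # KEY=value pairs from the .env file into os.environ without
    # overwriting variables that are already set
    from dotenv import load_dotenv
    load_dotenv(env_path)
    return {name: os.getenv(name) for name in (
        "USERNAME", "PASSWORD",
        "CLINICA_API_KEY", "CLINICA_CHANNEL_ID",
        "VOIR_API_KEY", "VOIR_CHANNEL_ID",
    )}
```

Keeping the .env file out of version control (e.g. via .gitignore) is what keeps these keys confidential.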
After creating service-account credentials for the Google Drive API, a JSON key file will be downloaded to your machine:
{
"type": "service_account",
"project_id": ****,
"private_key_id": ****,
"private_key": ****,
"client_email": ****,
"client_id": ****,
"auth_uri": ****,
"token_uri": ****,
"auth_provider_x509_cert_url": ****,
"client_x509_cert_url": ****
}
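With that key file in place, the daily results can be appended to the sheet through the Google Sheets API. A minimal sketch, assuming the spreadsheet has been shared with the service account's client_email (append_daily_row, the key-file path, and the spreadsheet ID are placeholders):

```python
SCOPES = ["https://www.googleapis.com/auth/spreadsheets"]

def append_daily_row(keyfile_path, spreadsheet_id, row):
    # both imports come from the google-auth / google-api-python-client
    # packages installed above
    from google.oauth2 import service_account
    from googleapiclient.discovery import build

    creds = service_account.Credentials.from_service_account_file(
        keyfile_path, scopes=SCOPES
    )
    sheets = build("sheets", "v4", credentials=creds)
    # append one row of today's metrics below the existing data
    sheets.spreadsheets().values().append(
        spreadsheetId=spreadsheet_id,
        range="Sheet1!A1",
        valueInputOption="USER_ENTERED",
        body={"values": [row]},
    ).execute()
```

If the sheet is not shared with the service account's client_email address, the append call fails with a permission error.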