Made as part of the interview assignment for an RA position at IIIT-Delhi MIDAS Labs. (I was selected, but due to some unfortunate circumstances at my then workplace, I wasn't able to continue.)
The data consists of audio recordings of quarterly earnings calls, where the stock performance of publicly traded companies is announced and predictions about future performance are made, along with the text transcripts of the same meetings. The target is the stock prices for the next 30 days.
- The audio and text data come from the dataset mentioned in the paper, consisting of a total of 575 meetings with their corresponding audio recordings and text transcripts.
- The paper mentions the use of CRSP data for the stock prices as the target, but I used data from the Yahoo Finance website; the adjusted closing price is used as the target (a minimal download sketch follows this list).
- The data contains audio files for each sentence in the transcript.
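A minimal sketch of pulling adjusted closing prices with the `yfinance` package; the ticker and date range are placeholders, and the repo's prepare_data.py may fetch the data differently.

```python
# Hypothetical sketch: fetching adjusted closing prices with yfinance.
# The ticker symbol and date range are placeholders, not from the dataset.
import yfinance as yf

def fetch_adj_close(ticker, start, end):
    """Download daily price history and return the adjusted closing prices."""
    history = yf.download(ticker, start=start, end=end, auto_adjust=False)
    return history["Adj Close"]

if __name__ == "__main__":
    prices = fetch_adj_close("AAPL", "2017-01-01", "2017-03-01")
    print(prices.head())
```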
The problem statement is a multimodal approach to stock price prediction, relying on the correlation between the text and audio data. The main idea is to take the sentiment cues from the audio and the contextual information from the text, and to learn a joint representation for accurate prediction of stock prices.
- To obtain speech features from the audio, I used a pretrained speech sentiment classifier and took the last hidden layer activations as the input feature representation for the audio. The speech sentiment classifier takes MFCC features as input and uses convolutional filters to learn the embeddings for speech.
- To obtain the word embeddings for the text data, I used a pretrained BERT model. Instead of using pretrained GloVe embeddings, I went for BERT because of its ability to capture contextual information in the embeddings.
- The sentences from the text transcript are padded to the same length and converted to sequences of BERT embeddings. These embedded sentence sequences serve as the input features for the text data.
- The text and audio input features are each passed through a BiLSTM layer with an attention head to produce the within-modality encodings.
- To learn the correlation between modalities, both encodings are concatenated and passed through a BiLSTM layer, which produces the combined encoding of both modalities.
- These encodings are then passed through a feed-forward layer with a dropout regularizer.
- Finally, this is passed through a regression layer that predicts the next 30 days of stock prices (a sketch of the full architecture follows this list).
- I compare the 3-day, 7-day, 15-day and 30-day windows for generating the scores.
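As a reference for the pipeline above, here is a minimal PyTorch sketch of the described architecture. The layer sizes, the audio feature dimension (128) and the pooling choices are assumptions rather than the exact configuration used in this repo, and the actual code may be written in a different framework.

```python
# A minimal sketch of the described architecture; dimensions are assumptions.
import torch
import torch.nn as nn

class AttentiveBiLSTM(nn.Module):
    """BiLSTM encoder with a simple additive attention head over time steps."""
    def __init__(self, input_dim, hidden_dim):
        super().__init__()
        self.lstm = nn.LSTM(input_dim, hidden_dim, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden_dim, 1)

    def forward(self, x):                                 # x: (batch, seq_len, input_dim)
        out, _ = self.lstm(x)                             # (batch, seq_len, 2*hidden)
        weights = torch.softmax(self.attn(out), dim=1)    # attention over time steps
        return out, (weights * out).sum(dim=1)            # sequence and pooled encodings

class MultimodalRegressor(nn.Module):
    """Text + audio BiLSTM encoders, a fusion BiLSTM, and a 30-day regression head."""
    def __init__(self, text_dim=768, audio_dim=128, hidden_dim=128, horizon=30):
        super().__init__()
        self.text_enc = AttentiveBiLSTM(text_dim, hidden_dim)
        self.audio_enc = AttentiveBiLSTM(audio_dim, hidden_dim)
        self.fusion = nn.LSTM(4 * hidden_dim, hidden_dim, batch_first=True, bidirectional=True)
        self.head = nn.Sequential(
            nn.Linear(2 * hidden_dim, hidden_dim),        # feed-forward layer
            nn.ReLU(),
            nn.Dropout(0.3),                              # dropout regularizer
            nn.Linear(hidden_dim, horizon),               # one output per predicted day
        )

    def forward(self, text_feats, audio_feats):
        text_seq, _ = self.text_enc(text_feats)           # within-modality text encodings
        audio_seq, _ = self.audio_enc(audio_feats)        # within-modality audio encodings
        fused, _ = self.fusion(torch.cat([text_seq, audio_seq], dim=-1))
        return self.head(fused.mean(dim=1))               # pool over sentences, predict 30 days

# Example with random tensors: 4 calls, 20 sentences each
model = MultimodalRegressor()
preds = model(torch.randn(4, 20, 768), torch.randn(4, 20, 128))
print(preds.shape)  # torch.Size([4, 30])
```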
The scores are generated according to the formula for stock volatility mentioned in the paper:

$$\text{MSE} = \frac{1}{M} \sum_{j=1}^{M} \left( \hat{v}^{(j)}_{[0,n]} - v^{(j)}_{[0,n]} \right)^2$$

where $\hat{v}_{[0,n]}$ is the predicted and $v_{[0,n]}$ the true average log volatility over the $n$ days following the meeting.

And volatility is defined as:

$$v_{[0,n]} = \ln \left( \sqrt{ \frac{\sum_{i=1}^{n} \left( r_i - \bar{r} \right)^2}{n} } \right)$$

where $\bar{r}$ is the mean of the return prices $r_i$ over the $n$-day window, and the return price is defined as:

$$r_i = \frac{P_i - P_{i-1}}{P_{i-1}}$$

where $P_i$ is the adjusted closing price on day $i$.
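A small NumPy sketch of how these targets and scores could be computed from the adjusted closing prices; the array `adj_close` and the toy price series below are placeholders, and the repo's actual scoring code may differ.

```python
# Sketch of the volatility target and MSE score, following the definitions above.
import numpy as np

def log_volatility(adj_close, n):
    """Average log volatility over the n days following the meeting.

    adj_close: 1-D array of adjusted closing prices, starting on the meeting day.
    """
    returns = (adj_close[1:n + 1] - adj_close[:n]) / adj_close[:n]      # r_i
    return np.log(np.sqrt(np.mean((returns - returns.mean()) ** 2)))    # v_[0,n]

def mse_score(predicted, true):
    """Mean squared error between predicted and true log volatilities."""
    predicted, true = np.asarray(predicted), np.asarray(true)
    return np.mean((predicted - true) ** 2)

# Example: compute the targets for the 3-, 7-, 15- and 30-day windows
prices = np.cumprod(1 + 0.01 * np.random.randn(31)) * 100   # toy price series
targets = {n: log_volatility(prices, n) for n in (3, 7, 15, 30)}
print(targets)
```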
- Instead of directly concatenating the audio and text encodings, use a fusion approach similar to the paper (a hypothetical cross-attention sketch follows this list).
- Use a better speech encoder such as PASE.
- The documents contain three different types of statements: general statements, past or current performance, and future ambitions and predictions. These statement types can be modelled separately to learn weighted features from each type, which can be used both to improve the prediction and to gain better insight into how the speech cues affect it.
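For the first point, a hypothetical sketch of what replacing the plain concatenation with a cross-modal attention fusion could look like. This is not the paper's exact mechanism, just an illustration using PyTorch's built-in multi-head attention.

```python
# Hypothetical cross-modal attention fusion (an illustration, not the paper's
# exact fusion layer): text queries attend over audio and vice versa.
import torch
import torch.nn as nn

class CrossModalFusion(nn.Module):
    def __init__(self, dim=256, heads=4):
        super().__init__()
        self.text_to_audio = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.audio_to_text = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, text_seq, audio_seq):
        # Each modality attends over the other; the attended results are
        # concatenated instead of concatenating the raw encodings directly.
        t2a, _ = self.text_to_audio(text_seq, audio_seq, audio_seq)
        a2t, _ = self.audio_to_text(audio_seq, text_seq, text_seq)
        return torch.cat([t2a, a2t], dim=-1)   # (batch, seq_len, 2*dim)

fusion = CrossModalFusion()
fused = fusion(torch.randn(4, 20, 256), torch.randn(4, 20, 256))
print(fused.shape)  # torch.Size([4, 20, 512])
```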
- Clone the repo.
- Make a data folder in the root.
- Download the data from the link mentioned in the paper.
- Extract the data and move it to the features folder in the data folder.
- Run the script prepare_data.py: it'll create train, validation and test folders, convert the mp3 audios to wav files (a conversion sketch follows this list), and download the Yahoo Finance data.
- Run the training using train.py
- To predict, pass the directory where the data is stored and run test.py
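A minimal sketch of the mp3-to-wav conversion step handled by prepare_data.py, assuming pydub (with ffmpeg installed) is used; the destination folder name is a placeholder and the actual script may rely on a different tool.

```python
# Hypothetical mp3-to-wav conversion, assuming pydub + ffmpeg are available.
from pathlib import Path
from pydub import AudioSegment

def convert_mp3_to_wav(src_dir, dst_dir):
    """Convert every .mp3 file under src_dir into a .wav file in dst_dir."""
    dst_dir = Path(dst_dir)
    dst_dir.mkdir(parents=True, exist_ok=True)
    for mp3_path in Path(src_dir).rglob("*.mp3"):
        wav_path = dst_dir / (mp3_path.stem + ".wav")
        AudioSegment.from_mp3(str(mp3_path)).export(str(wav_path), format="wav")

# Placeholder paths: the source folder matches the data layout described above.
convert_mp3_to_wav("data/features", "data/features_wav")
```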
MSE Scores

| Model | 3-day | 7-day | 15-day | 30-day |
| --- | --- | --- | --- | --- |
| Paper | 1.371 | 0.420 | 0.300 | 0.217 |
| Past Volatility | 1.389 | 0.517 | 0.292 | 0.254 |
| Text only | 1.879 | 0.503 | 0.373 | 0.279 |
| Audio only | 4.389 | 9.138 | 11.242 | 12.256 |
| Multimodal | TBD | TBD | TBD | TBD |