- Dataset link: DATASET
- Data visualization
- Select the model's target and feature columns
- Remove emoticons and special characters; limit the number of words per sentence
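A minimal text-cleaning sketch for this step, using Python's `re` module (the 50-word cap is an assumed value, not taken from the original):

```python
import re

MAX_WORDS = 50  # assumed sentence-length cap


def clean_text(text: str, max_words: int = MAX_WORDS) -> str:
    """Remove emoticons/special characters and cap the number of words."""
    # Keep only letters, digits and whitespace; this drops emoticons,
    # punctuation and other special symbols.
    text = re.sub(r"[^A-Za-z0-9\s]", " ", text)
    # Split on whitespace (also collapsing runs of spaces) and truncate.
    words = text.split()
    return " ".join(words[:max_words])
```

For example, `clean_text("hello :) world!!!")` yields `"hello world"`.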
- Standardize the training and validation data
- Convert the labels to one-hot encoded values
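One-hot encoding is commonly done with `keras.utils.to_categorical`; a plain NumPy equivalent, as a sketch:

```python
import numpy as np


def one_hot(labels, num_classes=None):
    """Convert integer class labels to one-hot encoded rows."""
    labels = np.asarray(labels, dtype=int)
    if num_classes is None:
        num_classes = labels.max() + 1
    out = np.zeros((labels.size, num_classes), dtype=np.float32)
    # Set a single 1.0 in each row at the position of that row's label.
    out[np.arange(labels.size), labels] = 1.0
    return out
```

For example, labels `[0, 2, 1]` become the rows `[1,0,0]`, `[0,0,1]`, `[0,1,0]`.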
- Model architecture
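A minimal sketch of one possible architecture, assuming a 64-dimensional embedding (matching the dimension reported below); the vocabulary size, sequence length, and number of classes are hypothetical placeholders, not values from the original:

```python
import tensorflow as tf
from tensorflow.keras import layers

VOCAB_SIZE = 10000   # assumed vocabulary size
EMBED_DIM = 64       # matches the 64-dimensional embeddings reported below
NUM_CLASSES = 2      # assumed binary sentiment labels

model = tf.keras.Sequential([
    # Map token ids to dense 64-d vectors.
    layers.Embedding(VOCAB_SIZE, EMBED_DIM),
    # Average the word vectors into one sentence vector.
    layers.GlobalAveragePooling1D(),
    layers.Dense(32, activation="relu"),
    # Softmax over the one-hot encoded classes.
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
```

`categorical_crossentropy` pairs with the one-hot labels produced in the preprocessing step above.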
- Plot loss and accuracy for the training and validation sets
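The history returned by Keras `model.fit` can be plotted with a small matplotlib helper; this sketch assumes the default Keras metric names (`loss`, `val_loss`, `accuracy`, `val_accuracy`):

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend; drop this in a notebook
import matplotlib.pyplot as plt


def plot_history(history):
    """Plot train vs. validation loss and accuracy from a Keras history dict."""
    fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
    ax1.plot(history["loss"], label="train loss")
    ax1.plot(history["val_loss"], label="val loss")
    ax1.set_xlabel("epoch")
    ax1.legend()
    ax2.plot(history["accuracy"], label="train acc")
    ax2.plot(history["val_accuracy"], label="val acc")
    ax2.set_xlabel("epoch")
    ax2.legend()
    return fig
```

Usage: `plot_history(model.fit(...).history)`.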
- Basic results and examples
- Results on the Binance test set
- Embedding vector visualization
- Points: 9999
- Dimension: 64
- You can explore it interactively via the TensorFlow link
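Assuming the TensorFlow link refers to the Embedding Projector, the vectors (e.g. the 9999 points of dimension 64 above) can be exported as two TSV files it accepts; a sketch, with hypothetical file names and a tiny example array:

```python
import numpy as np


def export_for_projector(vectors, words,
                         vec_path="vectors.tsv", meta_path="metadata.tsv"):
    """Write embeddings and their labels as TSV files for the Embedding Projector."""
    with open(vec_path, "w", encoding="utf-8") as vf, \
         open(meta_path, "w", encoding="utf-8") as mf:
        for word, vec in zip(words, vectors):
            # One tab-separated row of floats per point.
            vf.write("\t".join(f"{x:.5f}" for x in vec) + "\n")
            # One label per line, same order as the vectors.
            mf.write(word + "\n")


# Tiny illustration: 3 points of dimension 64 (the real export would be 9999 x 64).
vecs = np.random.rand(3, 64)
export_for_projector(vecs, ["a", "b", "c"])
```

The two files are then uploaded via the projector's "Load" dialog.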
- Word embeddings are powerful representations of words in a continuous vector space, capturing semantic relationships and improving performance on NLP tasks. They offer advantages such as semantic representation, dimensionality reduction, and transfer learning. However, they have limitations such as a fixed vocabulary, contextual ambiguity, and data bias.
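As a toy illustration of "capturing semantic relationships": semantically related words end up with high cosine similarity. The 3-dimensional vectors below are hypothetical, not taken from the trained model:

```python
import numpy as np


def cosine_similarity(u, v):
    """Cosine of the angle between two vectors; 1.0 means identical direction."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))


# Hypothetical toy embeddings for illustration only.
king = np.array([0.9, 0.8, 0.1])
queen = np.array([0.85, 0.82, 0.15])
apple = np.array([0.1, 0.2, 0.9])

# Related words are closer than unrelated ones.
assert cosine_similarity(king, queen) > cosine_similarity(king, apple)
```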
- Further development and applications
- Train the model on a larger dataset
- Model improvements (ensembling with other models, tuning hyperparameters, ...)