Our research article titled “Multi Class Review Rating Classification Using Deep Recurrent Neural Networks” was published in the international journal Neural Processing Letters (Impact Factor: 2.591) on 15 October 2019. In this tutorial, we briefly discuss the objectives, give a short summary, and present the key contributions and main findings of the article.
Short Summary – Abstract:
This paper presents a gated recurrent unit (GRU) based recurrent neural network (RNN) architecture, termed DSWE-GRNN, for the multi-class review rating classification problem. Our model incorporates domain-specific word embeddings and does not depend on reviewer information, because we usually do not have enough reviews from the same user to measure that user’s leniency towards a specific sentiment. The RNN-based architecture captures hidden contextual information from the domain-specific word embeddings to train the model effectively and efficiently for review rating classification. In this work, we also demonstrate that a downsampling technique for data balancing can be very effective for the model’s performance. We evaluated our model on two datasets, the IMDB dataset and the Hotel Reviews dataset. The results demonstrate that our model’s accuracy is comparable with, or even better than, that of the four baseline methods used for sentiment classification in the literature.
https://link.springer.com/article/10.1007/s11063-019-10125-6
Problem Statement:
Nowadays there are many online platforms, such as weblogs, Facebook, and Instagram, where users express their opinions or sentiments about products, services, applications, or other entities. The problem is that we as humans cannot analyze this large amount of data manually. Thus, there is a need to design and train a deep neural network-based model which can automatically classify textual data (reviews) into multiple classes, i.e., numbered ratings (1–5 stars). The model predicts star ratings from the textual reviews, so the rating system is reviewer-independent while still making accurate predictions.
Data Description:
The authors have used two benchmark datasets, the IMDB dataset and the Hotel Reviews dataset. The IMDB dataset consists of 50,000 data samples (reviews) divided into ten classes, whereas the Hotel Reviews dataset comprises 14,895 data samples (reviews) divided into five classes. A data sample is a movie review in the IMDB dataset and a hotel review in the Hotel Reviews dataset.
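The abstract notes that downsampling is used to balance the training data across rating classes. A minimal sketch of such class balancing in plain Python — the toy reviews and the `downsample` helper below are illustrative, not the paper’s actual pipeline:

```python
import random
from collections import defaultdict

def downsample(samples, seed=0):
    """Balance a labeled dataset by randomly downsampling every class
    to the size of the smallest class."""
    by_label = defaultdict(list)
    for text, label in samples:
        by_label[label].append((text, label))
    n_min = min(len(group) for group in by_label.values())
    rng = random.Random(seed)
    balanced = []
    for group in by_label.values():
        balanced.extend(rng.sample(group, n_min))  # keep n_min per class
    return balanced

# Hypothetical toy reviews: the 5-star class is over-represented.
reviews = [("great", 5)] * 6 + [("okay", 3)] * 3 + [("bad", 1)] * 2
balanced = downsample(reviews)
# Every class now contributes 2 samples (the size of the smallest class).
```

Balancing this way discards data from the majority classes, which the paper finds is outweighed by the benefit of the classifier no longer favoring the most frequent ratings.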
Proposed Model:
The authors have proposed a model termed Domain-Specific Word Embeddings with Gated Recurrent Neural Networks (DSWE-GRNN), which combines gated recurrent units (GRUs), domain-specific word embeddings (DSWE), reviewer/author independence, and a down-sampling technique for data balancing. The motivation behind using the GRNN architecture is that it can capture the contextual information of the given text while training the model on word embeddings. Moreover, the GRNN model is more time-efficient than other recurrent neural network architectures such as the Long Short-Term Memory network (LSTM). The presented model does not depend on reviewer-specific attributes.
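To make the gating concrete, here is a minimal single-step GRU cell in NumPy. The weight names and dimensions are illustrative, not the paper’s: a GRU uses only an update gate `z` and a reset gate `r` (versus the LSTM’s three gates and separate cell state), which is the source of the time-efficiency noted above.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x, h_prev, params):
    """One GRU step: x is the current word embedding, h_prev the previous
    hidden state. Only two gates (update z, reset r), vs. LSTM's three."""
    Wz, Uz, Wr, Ur, Wh, Uh = params
    z = sigmoid(Wz @ x + Uz @ h_prev)              # update gate
    r = sigmoid(Wr @ x + Ur @ h_prev)              # reset gate
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h_prev))  # candidate state
    return (1 - z) * h_prev + z * h_tilde          # interpolated new state

# Illustrative sizes: 8-dim word embeddings, 4-dim hidden state.
rng = np.random.default_rng(0)
emb_dim, hid_dim = 8, 4
params = tuple(rng.standard_normal(s) for s in
               [(hid_dim, emb_dim), (hid_dim, hid_dim)] * 3)
h = np.zeros(hid_dim)
for _ in range(5):                 # run over a hypothetical 5-word review
    h = gru_step(rng.standard_normal(emb_dim), h, params)
# h is the contextual encoding that a rating classifier layer would consume.
```

The final hidden state summarizes the whole word-embedding sequence, which is how the recurrent architecture captures contextual information from the review before classification.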
Baseline Methods For Comparison:
The following baseline methods are used for the evaluation of the proposed model.
- WE-SimpleNN: Word embeddings are used as input to a simple feed-forward neural network
- CNN: A convolutional neural network, considered a state-of-the-art composition architecture for text sentiment classification
- LSTM: A Long Short-Term Memory network, a type of recurrent neural network (RNN), was also implemented and its results compared with the proposed model
- CNN-LSTM: A combination of a convolutional neural network and an LSTM was also implemented and its results compared with the proposed model
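As an illustration of the composition idea behind the CNN baseline (a generic sketch in NumPy, not the paper’s configuration): a 1-D convolution slides each filter over windows of consecutive word embeddings, and max-pooling keeps the strongest response per filter, yielding a fixed-size feature vector regardless of review length.

```python
import numpy as np

def conv1d_maxpool(embeddings, filters, width=3):
    """Slide each filter over every window of `width` consecutive word
    embeddings, then max-pool over positions (one feature per filter)."""
    n_words, emb_dim = embeddings.shape
    feats = []
    for f in filters:                      # f has shape (width, emb_dim)
        scores = [np.sum(f * embeddings[i:i + width])
                  for i in range(n_words - width + 1)]
        feats.append(max(scores))          # max over window positions
    return np.array(feats)

rng = np.random.default_rng(1)
sentence = rng.standard_normal((10, 8))    # 10 words, 8-dim embeddings
filters = rng.standard_normal((4, 3, 8))   # 4 filters of width 3
features = conv1d_maxpool(sentence, filters)
# features has one entry per filter: a fixed-size input for a classifier.
```

This length-invariant feature vector is what makes CNNs convenient for variable-length reviews, though unlike the recurrent baselines they only see local word windows rather than long-range context.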
Results and Discussion:
The accuracy of each baseline method on each dataset, compared with the proposed model (DSWE-GRNN), shows some interesting patterns. WE-SimpleNN and the convolutional neural network (CNN) give similar accuracies on the IMDB dataset. The LSTM and CNN-LSTM methods give relatively poor accuracy on the Hotel Reviews dataset compared with the IMDB dataset, because the Hotel Reviews dataset contains fewer domain-specific keywords with which to train the network towards the possible decision classes. The proposed method outperforms all of the baseline methods on both datasets, as the table below shows.
| Method | IMDB Dataset (Accuracy) | Hotel Reviews Dataset (Accuracy) |
|---|---|---|
| WE-SimpleNN | 0.8675 | 0.8024 |
| CNN | 0.8645 | 0.7877 |
| LSTM | 0.8360 | 0.8092 |
| CNN-LSTM | 0.8268 | 0.7843 |
| DSWE-GRNN | **0.8780** | **0.8132** |
Conclusion:
The authors have introduced a gated recurrent neural network based model (DSWE-GRNN) with domain-specific word embeddings. The proposed model encodes data samples into domain-specific word embeddings, which act as feature vectors for training the gated recurrent neural network (GRNN). The proposed model is evaluated on two benchmark datasets, and the results are compared with four baseline methods. The results, as presented in the table above, clearly demonstrate that the proposed model achieves state-of-the-art performance on both datasets. Further analysis shows that:
- The gated recurrent neural network (GRNN) efficiently encodes the training samples by incorporating contextual information into the model
- Domain-specific word embeddings and the downsampling technique for data balancing dramatically boost the performance of the model
- The GRNN-based model is more time-efficient than other recurrent neural network models such as LSTM
- The proposed model (DSWE-GRNN) outperforms the baseline models on the multi-class review rating classification problem
- The proposed model will be helpful in building a more intelligent review rating system that depends on the machine and the data rather than on the reviewer/user