deep learning [8]. In "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding", Jacob Devlin et al. noted that BERT is conceptually simple and empirically powerful, obtaining new state-of-the-art results on eleven natural language processing tasks [9].
In October 2018, the Google AI research team proposed the BERT (Bidirectional Encoder Representations from Transformers) pre-training model [10]. Departing from traditional sentiment classification techniques, BERT achieved state-of-the-art results on many popular NLP tasks. The model not only adopts the bidirectional encoding mechanism associated with LSTM-based models, but also uses the Transformer employed in GPT for feature extraction; it therefore has a strong ability to extract text features and can learn the latent syntactic and semantic information in sentences.
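To make this bidirectional feature extraction concrete, the following minimal Python sketch shows how a pre-trained BERT encoder produces one context-dependent vector per token. The HuggingFace transformers library and the bert-base-chinese checkpoint used here are illustrative assumptions, not tools named in this paper.

# Minimal sketch: contextual features from a pre-trained BERT encoder.
import torch
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("bert-base-chinese")
model = BertModel.from_pretrained("bert-base-chinese")
model.eval()

inputs = tokenizer("这部电影真好看", return_tensors="pt")  # "This movie is great"
with torch.no_grad():
    outputs = model(**inputs)

# One 768-dimensional vector per token; each vector is conditioned on the
# whole sentence (left and right context), not just the preceding words.
token_features = outputs.last_hidden_state  # shape: [1, seq_len, 768]
sentence_feature = outputs.pooler_output    # shape: [1, 768]

Because self-attention lets every token attend to both its left and right context, each of these vectors encodes the whole sentence, which is precisely what unidirectional language models cannot provide.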
Bai Qingchun et al. proposed a position-gated recurrent neural network that dynamically integrates sentence-level global and local information to achieve aspect-based text sentiment classification [11]. Duan et al. proposed a Chinese short-text classification algorithm based on Bidirectional Encoder Representations from Transformers (BERT) [12]. To address the poor performance of multimodal fusion and its inability to fully exploit key emotional information within specific time periods and across multiple perspectives, a research team at Chengdu University of Information Technology proposed a multi-perspective, time-series multimodal sentiment classification model that extracts the key emotional information of a specific time period from multiple perspectives [13]. Soochow University proposed a few-shot sentiment classification method based on knowledge distillation from large and small teacher models, which reduces how often the large teacher model is queried, shortens the distillation time when training the student model, lowers resource consumption, and improves classification accuracy [14]. To improve existing deep-learning-based methods for Chinese comment sentiment classification and to raise both their accuracy and efficiency, Fan Anmin et al. improved the traditional BERT model on the TensorFlow framework; on the NLPCC2014 dataset [15], its scores are 1.30%, 0.54%, 2.32%, and 1.44% higher than those of the baseline BERT model on the respective evaluation metrics. Their research shows that the model performs well on sentiment classification of Chinese comments and surpasses earlier deep learning network models [16].
On this basis, to further address the "sentiment phenomena" in text, such as subjectivity, emotion, mood, attitude, and feeling [17], Lv Xueqiang, Peng Chen, et al. proposed a multi-label text classification (MLTC) method based on the TLA-BERT model, which integrates BERT with label semantic attention; unlike multi-class text classification, multi-label text classification can refine the focus of a text from the perspective of multiple labels [18]. Zheng Yangyu and Jiang Hongwei fully exploited the sentiment information implied in context by using local context with a gated convolutional network model [19]. Literature [20] proposed a multi-channel sentiment classification method that fuses part-of-speech and word-position feature vectors with attention mechanisms, achieving high accuracy on a crawled microblog dataset. Literature [21] added an attention mechanism to multi-channel CNN and BiGRU networks, and the resulting classifier outperformed single-channel network models. However, the word vectors used in the studies above are static word vectors, which cannot represent rich sentiment-bearing semantic information.
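The limitation of static vectors can be illustrated directly. In the following sketch (the two example sentences and the bert-base-chinese checkpoint are illustrative assumptions), a static embedding table would assign the polysemous token "苹" one fixed vector, whereas BERT returns different vectors for its fruit and company senses:

# Sketch: the same character receives context-dependent vectors from BERT.
import torch
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("bert-base-chinese")
model = BertModel.from_pretrained("bert-base-chinese")
model.eval()

def vector_of(sentence, char):
    ids = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**ids).last_hidden_state[0]
    pos = tokenizer.tokenize(sentence).index(char) + 1  # +1 skips [CLS]
    return hidden[pos]

v1 = vector_of("我喜欢吃苹果", "苹")      # "apple" as a fruit
v2 = vector_of("苹果发布了新手机", "苹")  # "Apple" the company
print(float(torch.cosine_similarity(v1, v2, dim=0)))  # below 1.0

A static word2vec table would return identical vectors in both calls (cosine similarity exactly 1), so any sentiment signal that depends on the sense of a word in context is lost.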
This paper analyzes users' Chinese sentiment with a sentiment classification technique based on a BERT-RCNN-ATT model, drawing inspiration from research on news text classification with an improved BERT-CNN model [22] and on medical information classification with a BERT-ATT-BiLSTM model [23]. By applying these techniques, studying the BERT model in combination with the Transformer, and completing the collection of datasets, we classify users' Chinese sentiment; this helps improve existing deep-learning-based methods for Chinese comment sentiment classification and raises both their accuracy and efficiency.
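The overall shape of such a model can be outlined as follows. This PyTorch sketch wires BERT features into a bidirectional LSTM (the recurrent part of the RCNN), pools them with an additive attention layer, and classifies the result; the hidden sizes, attention form, and two-class output are illustrative assumptions rather than the exact configuration used in our experiments.

# Sketch of one plausible BERT-RCNN-ATT wiring: BERT -> BiLSTM ->
# attention pooling -> softmax classifier.
import torch
import torch.nn as nn
from transformers import BertModel

class BertRcnnAtt(nn.Module):
    def __init__(self, num_classes=2, rnn_hidden=128):
        super().__init__()
        self.bert = BertModel.from_pretrained("bert-base-chinese")
        self.rnn = nn.LSTM(self.bert.config.hidden_size, rnn_hidden,
                           batch_first=True, bidirectional=True)
        self.att = nn.Linear(2 * rnn_hidden, 1)  # additive attention score
        self.fc = nn.Linear(2 * rnn_hidden, num_classes)

    def forward(self, input_ids, attention_mask):
        h = self.bert(input_ids=input_ids,
                      attention_mask=attention_mask).last_hidden_state
        h, _ = self.rnn(h)                            # [B, T, 2*rnn_hidden]
        scores = self.att(torch.tanh(h)).squeeze(-1)  # [B, T]
        scores = scores.masked_fill(attention_mask == 0, -1e9)
        alpha = torch.softmax(scores, dim=-1).unsqueeze(-1)
        pooled = (alpha * h).sum(dim=1)  # attention-weighted sum over tokens
        return self.fc(pooled)

The attention layer lets the classifier weight sentiment-bearing tokens more heavily than function words before the final decision is made.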
The BERT model absorbs design ideas from unsupervised models such as the autoencoder and word2vec, accounts for the fact that both word-level relationships within a sentence and relationships between sentences must be captured, and proposes new unsupervised objective functions for the Transformer, namely masked language modeling and next-sentence prediction. For this contribution, BERT deserves to be called the first pre-trained language representation model to capture the bidirectional relationships of text.
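The masked language-modeling objective can be demonstrated in a few lines. In this sketch (again assuming the HuggingFace transformers library and the bert-base-chinese checkpoint), a token is hidden and the bidirectional encoder predicts it from the left and right context simultaneously:

# Sketch of BERT's masked language-modeling objective.
import torch
from transformers import BertTokenizer, BertForMaskedLM

tokenizer = BertTokenizer.from_pretrained("bert-base-chinese")
model = BertForMaskedLM.from_pretrained("bert-base-chinese")
model.eval()

text = "这家餐厅的服务非常[MASK]。"  # "The service at this restaurant is very [MASK]."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Locate the masked position and read off the most likely fillers.
mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero()[0, 1]
top = logits[0, mask_pos].topk(3).indices
print(tokenizer.convert_ids_to_tokens(top.tolist()))  # plausible fillers, e.g. "好"

During pre-training, roughly 15% of tokens are masked in this way and the model is trained to recover them, which is what forces the learned representations to be bidirectional.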
2 Related Work
Existing approaches to Chinese sentiment analysis fall into three categories: