Article

Developing Novel Robust Loss Functions-Based Classification Layers for DLLSTM Neural Networks

Journal

IEEE ACCESS
Volume 11, Issue -, Pages 49863-49873

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/ACCESS.2023.3275964

Keywords

DNN; DLLSTM; loss function; mean absolute error; sum squared error


In this paper, we suggest improving the performance of developed activation function-based Deep Learning Long Short-Term Memory (DLLSTM) structures by employing robust loss functions such as Mean Absolute Error (MAE) and Sum Squared Error (SSE) to create new classification layers. The classification layer, where the loss function resides, is the last layer in any DLLSTM neural network structure. The LSTM is an improved recurrent neural network that mitigates the vanishing-gradient problem, among other issues. Fast convergence and optimum performance depend on the loss function. Three loss functions (the default Crossentropyex, MAE, and SSE), each computing the error between the actual and desired output, were used to examine the effectiveness of the suggested DLLSTM classifier in two distinct applications. The results show that the classifier with the SSE loss function outperforms the other loss functions and performs very well. The suggested activation functions Softsign, Modified-Elliott, Root-sig, Bi-tanh1, Bi-tanh2, Sech, and Wave are more accurate than the tanh function.
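The abstract compares loss functions that compute the error between the actual and desired output of the classification layer. The paper's own implementation is not published here; as a minimal sketch, the two named robust losses follow their standard textbook definitions, applied to a desired target vector d and a network output vector y:

```python
# Standard definitions of the two robust loss functions named in the
# abstract (not the authors' code): MAE averages absolute errors,
# SSE sums squared errors over the output vector.

def mae(d, y):
    """Mean Absolute Error: (1/n) * sum of |d_i - y_i|."""
    return sum(abs(di - yi) for di, yi in zip(d, y)) / len(d)

def sse(d, y):
    """Sum Squared Error: sum of (d_i - y_i)**2."""
    return sum((di - yi) ** 2 for di, yi in zip(d, y))

# Illustrative example: a one-hot desired output vs. a softmax-like
# actual output, as a classification layer would see them.
d = [0.0, 1.0, 0.0]
y = [0.1, 0.7, 0.2]
print(mae(d, y))  # → 0.2
print(sse(d, y))  # → 0.14 (within floating-point tolerance)
```

SSE penalizes large per-class errors quadratically while MAE weights all errors linearly, which is one reason their convergence behavior in a classifier can differ.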
