4.6 Article

Electric Power Audit Text Classification With Multi-Grained Pre-Trained Language Model

Related references

Note: Only a subset of the references is listed.
Proceedings Paper (Computer Science, Artificial Intelligence)

Masked Autoencoders Are Scalable Vision Learners

Kaiming He et al.

Summary: This paper presents a self-supervised learning method for computer vision based on masked autoencoders. By masking a large random subset of image patches and reconstructing the missing pixels, large models can be trained efficiently and effectively. The approach generalizes well and outperforms supervised pre-training on transfer learning tasks (a minimal sketch of the masking-and-reconstruction loop follows this entry).

IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR) (2022)
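
The masked-autoencoder recipe summarized above is compact enough to show end to end. The Python/PyTorch sketch below is an illustration of the idea, not the authors' implementation: it patchifies an image, hides a random 75% of the patches, encodes only the visible ones, and reconstructs pixels at the masked positions. All names and sizes (TinyMAE, PATCH, DIM, MASK_RATIO) are illustrative assumptions, and positional embeddings are omitted for brevity.

    # Minimal masked-autoencoder sketch (assumptions: toy sizes, no
    # positional embeddings). Not the official MAE code.
    import torch
    import torch.nn as nn

    PATCH, DIM, MASK_RATIO = 16, 128, 0.75

    def patchify(imgs):
        """(B, 3, H, W) -> (B, N, PATCH*PATCH*3) flat pixel patches."""
        p = PATCH
        B, C, H, W = imgs.shape
        x = imgs.reshape(B, C, H // p, p, W // p, p)
        return x.permute(0, 2, 4, 3, 5, 1).reshape(B, -1, p * p * C)

    class TinyMAE(nn.Module):
        def __init__(self):
            super().__init__()
            self.embed = nn.Linear(PATCH * PATCH * 3, DIM)
            enc_layer = nn.TransformerEncoderLayer(DIM, nhead=4, batch_first=True)
            self.encoder = nn.TransformerEncoder(enc_layer, num_layers=2)
            self.mask_token = nn.Parameter(torch.zeros(1, 1, DIM))
            dec_layer = nn.TransformerEncoderLayer(DIM, nhead=4, batch_first=True)
            self.decoder = nn.TransformerEncoder(dec_layer, num_layers=1)
            self.head = nn.Linear(DIM, PATCH * PATCH * 3)  # predict raw pixels

        def forward(self, imgs):
            patches = patchify(imgs)                      # (B, N, pixels)
            B, N, _ = patches.shape
            n_keep = int(N * (1 - MASK_RATIO))
            # Random per-sample shuffle; the first n_keep patches stay visible.
            noise = torch.rand(B, N, device=imgs.device)
            ids_shuffle = noise.argsort(dim=1)
            ids_restore = ids_shuffle.argsort(dim=1)      # inverse permutation
            ids_keep = ids_shuffle[:, :n_keep]
            visible = torch.gather(
                patches, 1,
                ids_keep.unsqueeze(-1).expand(-1, -1, patches.size(-1)))
            # Encode only the visible patches (the efficiency trick of MAE).
            enc = self.encoder(self.embed(visible))
            # Re-insert mask tokens at masked positions, unshuffle, decode.
            mask_tokens = self.mask_token.expand(B, N - n_keep, -1)
            full = torch.cat([enc, mask_tokens], dim=1)
            full = torch.gather(
                full, 1, ids_restore.unsqueeze(-1).expand(-1, -1, DIM))
            pred = self.head(self.decoder(full))
            # Mean-squared-error loss computed on the masked patches only.
            mask = torch.ones(B, N, device=imgs.device)
            mask[:, :n_keep] = 0
            mask = torch.gather(mask, 1, ids_restore)
            loss = ((pred - patches) ** 2).mean(-1)
            return (loss * mask).sum() / mask.sum()

    # Usage: one training step on a random batch of toy 32x32 "images".
    model = TinyMAE()
    loss = model(torch.randn(2, 3, 32, 32))
    loss.backward()

Encoding only the visible patches is what makes the pre-training cheap: with a 75% mask ratio the encoder processes a quarter of the tokens, while the lightweight decoder handles the full sequence only for reconstruction.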

Article (Computer Science, Artificial Intelligence)

SpanBERT: Improving Pre-training by Representing and Predicting Spans

Mandar Joshi et al.

TRANSACTIONS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (2020)

Article (Computer Science, Artificial Intelligence)

Bidirectional LSTM with attention mechanism and convolutional layer for text classification

Gang Liu et al.

NEUROCOMPUTING (2019)

Review (Computer Science, Information Systems)

Text Classification Algorithms: A Survey

Kamran Kowsari et al.

INFORMATION (2019)