Abstract

This study used the ASAP dataset from Kaggle and applied natural language processing (NLP) and Bidirectional Encoder Representations from Transformers (BERT) for corpus processing and feature extraction, together with a range of machine learning models, including both traditional classifiers and neural-network-based approaches. Six of the eight essay prompts were used; models were trained on each prompt separately and on the concatenated data, and the results of the different approaches were compared. Beyond routine NLP processing such as lemmatization, stemming, and n-grams, adding features such as readability scores computed with spaCy Textstat improved the prediction results. The neural network model trained on all prompt data, using NLP for corpus processing and feature extraction, outperformed the other models with an overall test quadratic weighted kappa (QWK) of 0.9724. It achieved the highest QWK of 0.859 on prompt 1 and an average QWK of 0.771 across all six prompts, making it the best-performing machine learning model.
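To illustrate the kind of pipeline the abstract describes, the following is a minimal sketch of adding readability features and scoring predictions with QWK. It assumes the Textstat library for readability metrics and scikit-learn's cohen_kappa_score for the quadratic weighted kappa; the specific feature set, data, and function names here are illustrative and not taken from the paper's actual implementation.

```python
# Illustrative sketch only: readability features plus QWK evaluation.
import textstat
from sklearn.metrics import cohen_kappa_score


def readability_features(essay: str) -> list[float]:
    """Return a small vector of readability scores for one essay."""
    return [
        textstat.flesch_reading_ease(essay),
        textstat.flesch_kincaid_grade(essay),
        textstat.gunning_fog(essay),
        textstat.automated_readability_index(essay),
    ]


# Quadratic weighted kappa between human scores and model predictions
# (toy example values, not results from the study).
y_true = [2, 3, 4, 4, 1]
y_pred = [2, 3, 3, 4, 1]
qwk = cohen_kappa_score(y_true, y_pred, weights="quadratic")
print(f"QWK = {qwk:.4f}")
```

In a setup like this, the readability vector would be concatenated with the NLP- or BERT-derived features before training, and QWK would be computed per prompt and averaged, as in the results reported above.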

DOI

https://doi.org/10.59863/DQIZ8440
