ORCID

H. Eren Suna: https://orcid.org/0000-0002-6874-7472

Mahmut Özer: https://orcid.org/0000-0001-8722-8670

Abstract

In recent years, artificial intelligence (AI) and machine learning (ML) algorithms have played an influential role in advancing educational assessment. Assessing students' learning deficiencies and providing personalized learning recommendations is considered an important means of improving equality of opportunity in education. Furthermore, big-data-based algorithms play a growing role in assessing students' cognitive and social-emotional development and in school-based monitoring research. On the one hand, these developments provide valuable feedback to both students and educational authorities on the assessment of student growth; on the other hand, they raise the issue of algorithmic bias, as numerous examples show AI and ML algorithms producing biased results and reproducing existing inequalities because of inadequacies in their training data. Studies indicate that such algorithms can lead to biased inferences about factors such as gender, race, ethnicity, socioeconomic status, and migration. For this reason, various approaches and methods have been developed, including sample weight adjustments, bias attenuation methods, fairness through unawareness, adversarial learning, and participatory management, so that AI and ML algorithms can deliver fairer and more valid outcomes. In this paper, we examine the challenges that lead AI and ML to produce biased results when evaluating students' cognitive and social-emotional skills, the recent methods and approaches for detecting and mitigating algorithmic bias, and the policies and precautions needed to achieve less biased and more valid results. We anticipate that the insights from this study will increase awareness of objective and valid assessment of educational skills via AI and ML algorithms.
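As an illustration of one of the mitigation families named above (sample weight adjustments), the following is a minimal sketch of a reweighing scheme in the spirit of Kamiran and Calders (2012); it is not taken from this paper, and the variable names, the toy data, and the group/label coding are assumptions chosen only for demonstration.

```python
import numpy as np

def reweighing_weights(group, label):
    """Compute per-sample weights so that group membership and the outcome
    label become statistically independent in the weighted sample
    (a standard reweighing scheme; illustrative sketch, not this paper's method)."""
    group = np.asarray(group)
    label = np.asarray(label)
    weights = np.empty(len(label), dtype=float)
    for g in np.unique(group):
        for y in np.unique(label):
            mask = (group == g) & (label == y)
            expected = (group == g).mean() * (label == y).mean()
            observed = mask.mean()
            # Up-weight under-represented (group, label) combinations,
            # down-weight over-represented ones.
            weights[mask] = expected / observed if observed > 0 else 0.0
    return weights

# Hypothetical toy assessment data: one group passes less often than the other.
rng = np.random.default_rng(0)
group = rng.integers(0, 2, size=1000)          # protected attribute (0/1)
label = (rng.random(1000) < np.where(group == 1, 0.7, 0.4)).astype(int)
w = reweighing_weights(group, label)
# The weights can then be passed to most classifiers that accept
# sample weights, e.g., model.fit(X, label, sample_weight=w).
```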

DOI

https://doi.org/10.59863/YPKL4338
