Leveraging AI-Generated Data for Factor Structure Simulation and Analytical Protocol Development of the Digital Change in Schools Scale (DCSS)

Abstract

The ongoing digital transformation of education necessitates robust tools to measure its scope and impact. However, the field lacks a thoroughly validated instrument to assess digital change in schools, hindering empirical research and evidence-based policy. Addressing this gap, this working paper presents a novel methodological simulation that leverages AI-generated synthetic data to develop and refine an analytical protocol for the Digital Change in Schools Scale (DCSS). The original Digital Transformation Scale was adapted to the Chinese context via AI-assisted translation and expert review. Using the AITurk platform, iterative synthetic datasets were generated (n₁=200, n₂=400, n₃=600) to simulate the entire psychometric workflow. Exploratory Factor Analysis on the initial dataset refined the scale from 17 to 10 items, revealing a three-factor structure (digitization, digitalization, digital transformation). Subsequent Confirmatory Factor Analyses on independent samples confirmed the model's acceptable fit and internal consistency, although the digitization subscale showed reliability concerns. Crucially, this simulation does not constitute validation but serves as a rigorous proof-of-concept and feasibility study. It provides a pre-tested analytical framework and a hypothesized factor structure, offering researchers an efficient and cost-effective pipeline for future empirical validation with human subjects, thereby accelerating research into digital change in educational contexts.

Keywords: Digital change in schools; digitization; digitalization; digital transformation; AI-generated data; synthetic data simulation

Over the past two decades, the rapid evolution and progress of digital technologies, such as the Internet of Things (IoT), cloud computing, smart devices, digital media, virtual reality (VR), augmented reality (AR), robotics, and artificial intelligence (AI), have made these technologies an integral and essential part of social life (Abiteboul et al., 2015; Kushwaha, 2021). To respond, contemporary organizations in all sectors are required to transform the traditional workplace environment to enable people to perform duties and tasks innovatively, efficiently, and effectively by using digital tools and technologies (Eurofound, 2021; Matt et al., 2015). Schools are no exception. The application and integration of digital technologies in school contexts have become increasingly common in recent years (Mhlanga, 2024). Inevitably, school systems across the globe have experienced digital change, including digitization, digitalization, and digital transformation, to various degrees (Mukul & Büyüközkan, 2023; Oliveira & de Souza, 2022).

In the literature, there are ongoing debates regarding the impacts of digital change on the quality of education. For instance, some researchers note that digital change, especially the integration of AR, VR, and AI into schooling, can personalize education, promote innovative learning, enhance learner engagement and motivation, and better equip children with the digital skills and literacy necessary for the digital economy (Haleem et al., 2022; Mukul & Büyüközkan, 2023; OECD, 2023). Therefore, they advocate digital change in schools, moving from what they call education 3.0 to education 4.0 as the future model of schooling, emphasizing personalized and adaptive learning, blended and hybrid learning, digital and immersive technologies, and data-driven education (Mukul & Büyüközkan, 2023; Oliveira & de Souza, 2022). In contrast, other researchers have noted the difficulties and challenges of achieving successful digital change in schools (Nadrljanski et al., 2022; OECD, 2023). This is because such change involves not only the application of digital technologies in teaching, learning, and school administration but also a transformation in infrastructure, policies, school culture, and teachers' competence in integrating digital technologies effectively into educational practices (Costa et al., 2021; Timotheou et al., 2022). Accordingly, education researchers (e.g., Razak et al., 2023; Timotheou et al., 2022; Wohlfart & Wagner, 2023) have noted that digital technologies may not necessarily improve the quality of schooling if schools and teachers merely employ new technologies to support existing work without paradigm shifts in teaching, learning, and administration.

Another ongoing debate concerns the impact of digital change on the teaching profession. Some researchers argue that digital change has negative impacts on teacher well-being. For example, Mirrlees and Alvi (2014) argue that digital change can deskill teachers because it allows duties to be completed easily with technological assistance or even to be taken over by technologies. Deskilled teachers become vulnerable to external surveillance, leading to increased demands that they perceive as meaningless, making them more prone to stress and burnout (O'Neill, 2023). However, other researchers disagree. For example, Bryant et al. (2020) reported that digital technologies, such as AI, can increase teacher work efficiency, saving them approximately 13 hours per week while allowing them to reallocate 20–40% of their time toward student-focused activities. Instead of deskilling, digitization and digitalization can reskill or upskill teachers by offering opportunities to acquire new competencies and advance existing skill sets (Padmaja & Mukul, 2021).

Despite these ongoing debates, consensus remains elusive, partly due to the absence of a validated instrument to measure digital change in schools. While theoretical propositions and qualitative insights abound (Razak et al., 2023; Wohlfart & Wagner, 2023), the field lacks empirical rigor in quantifying the scope, scale, and differential impacts of digital change across diverse educational contexts. This gap not only restricts our understanding of digital change in education but also impedes evidence-based policy-making and strategic leadership, effectively forcing administrators to spearhead digital initiatives without robust diagnostic instruments (Timotheou et al., 2022).

Thus, this working paper presents a novel methodological simulation aimed at developing and refining an analytical pipeline for the Digital Change in Schools Scale (DCSS). To bridge the gap between scale development and costly large-scale data collection, the study employed a multi-stage AI-assisted approach. First, the Digital Transformation Scale (Pettersson et al., 2024), which was used to assess digital change in schools in Sweden, was translated into Chinese using DeepSeek as a drafting tool, followed by expert evaluation by Chinese scholars to ensure cultural and conceptual relevance. Subsequently, the study utilized a synthetic data generation platform, AITurk, to produce iterative datasets (n=200 for EFA; n=400 and n=600 for sequential CFA). This platform was chosen because, according to Qin et al. (2024), AITurk can replicate the responses of real human participants on online crowdsourcing platforms with approximately 93.2% accuracy. This multi-stage AI-assisted approach allowed the study to simulate the entire psychometric workflow—from exploratory factor analysis to confirmatory modeling and criterion validity testing—in a controlled, computational environment. The results demonstrate a potential factor structure and provide a robustly tested analytical protocol. This simulation does not constitute validation with human subjects but serves as a critical proof-of-concept and a feasibility study. It provides a rigorously vetted hypothesis and a ready-made analytical framework for future researchers to efficiently test and validate the DCSS with empirical data from real educational contexts.

Digital Change in Schools: Digitization, Digitalization, and Digital Transformation

In school contexts, digital change is broadly defined as a process of reconfiguring social, cultural, organizational, and technological aspects of teaching and learning through the application and integration of digital technologies into teacher work (Pettersson et al., 2024). Nevertheless, Reis et al. (2020) and Vrana and Singh (2025) observe that the term can mean different things to different people. According to them, there are three common understandings of digital change in the literature, namely, digitization, digitalization, and digital transformation.

Digitization refers to the use of digital tools to support existing duties without fundamentally altering the nature of teacher work (Pettersson et al., 2024). For example, teachers ask students to submit homework via a digital platform and then grade it directly on the platform instead of using paper. Digitalization in education requires teachers to integrate digital technologies into their work, going beyond simply using these technologies, leading to qualitative changes in the mode of education (Pettersson et al., 2024). For example, a mathematics teacher is required to use an adaptive learning platform such as DreamBox to personalize teaching for each student by tailoring lessons to individual skill levels and offering targeted practice activities that address specific learning gaps. Digital transformation represents a deeper change in values, norms, and beliefs regarding digital initiatives in education, encouraging teachers to develop positive perceptions, feelings, and responses to digital initiatives (Gegenhuber et al., 2022). Accordingly, Pettersson et al. (2024) suggest that digitization, digitalization, and digital transformation represent three distinct but interconnected processes of digital change. Vrana and Singh (2025) view these processes as a continuum of digital change: digitization at one end focuses primarily on technological adoption, digitalization in the middle bridges the transition, and digital transformation at the other end encompasses broader sociocultural shifts in the work environment.

Understanding the distinctions between digitization, digitalization, and digital transformation is important for analyzing their differential impacts on education. For example, while all three processes may enhance the professional competence and well-being of teachers, they can operate through different mechanisms. Digitization can streamline teacher work through automation, freeing them to rest and pursue professional development (Bryant et al., 2020). Digitalization can foster reskilling (i.e., acquiring new skills and competence) and/or upskilling (i.e., enhancing existing skills or adding advanced competencies), leading to increased self-efficacy and professional identity (Padmaja & Mukul, 2021). Digital transformation can cultivate more positive attitudes toward digital tools, motivating teachers to learn and to integrate them, thus enhancing their confidence, commitment, and digital competences in teaching (Nworie, 2015). These conceptual distinctions are crucial for developing precise research frameworks to examine digital change in education, generating nuanced insights into their specific effects on the quality of education, and informing interventions that account for the unique characteristics and potential impacts of each process.

Digital Transformation Scale

Pettersson et al. (2024) recently developed the Digital Transformation Scale (DTS) to measure digital change in Swedish educational contexts. The DTS consists of 17 items measuring three dimensions, namely, digitization (5 items), digitalization (6 items), and digital transformation (6 items). According to their study, the internal consistency of all 17 items is 0.92, and the internal consistency of the three dimensions ranges between 0.87 and 0.91, implying that the reliability of the DTS is acceptable. Moreover, the authors argue for the construct validity of the DTS on the basis of exploratory factor analysis.

However, the DTS has a critical limitation in validation: it relies solely on exploratory factor analysis (EFA) without confirmatory factor analysis (CFA). As Brown (2023) emphasizes, EFA is typically used in early scale development to identify latent structures, whereas CFA is employed in later stages to test hypothesized models grounded in theory and prior evidence. Unlike EFA, which is a data-driven, exploratory method, CFA is a theory-driven approach that evaluates predefined relationships between observed indicators (e.g., items) and latent variables (Thompson, 2004). Specifically, CFA requires researchers to specify in advance the number of factors and the pattern of indicator–factor loadings, on the basis of theory, prior studies, and other parameters, such as those bearing on the independence or covariance of factors and indicator uniqueness (Harrington, 2009). Critically, CFA provides goodness-of-fit indices to assess how well the data align with the theoretical framework, whereas EFA cannot confirm whether the extracted factors are robust or artifacts of sampling bias (Koziol, 2023). According to Brown (2023), EFA alone cannot establish discriminant validity; for instance, it cannot show whether the three dimensions of the DTS are truly distinct or merely methodological artifacts. Thus, without CFA, the construct validity of the DTS remains insufficiently supported, limiting its utility for research and practice.

Study 1: Scale Adaptation and Initial Assessment of the DCSS

The purpose of Study 1 was to explore the factor structure of the DCSS and obtain initial evidence of construct validity.

AI-synthetic participants

A synthetic sample of 200 junior high school teachers from Zhuhai city, China, was generated using AITurk in Study 1. The sample was specified to be 34.2% male and 65.8% female, with an average age of 34.6 years (SD = 8.78). In terms of highest educational qualification, the majority held a bachelor's degree (69.8%), followed by a master's degree (18.8%), a professional diploma (8.4%), and a doctoral degree (3.0%).

Scale adaptation

The DCSS was developed on the basis of the 17-item Digital Transformation Scale developed by Pettersson et al. (2024). First, the items were translated into Chinese by the researcher with the assistance of DeepSeek. The Chinese version was subsequently evaluated by 10 experts with expertise in digital transformation within Chinese school settings to verify its face and content validity. On the basis of the expert feedback, all the items were revised to better fit the Chinese school context. Afterward, a panel of 50 active teachers in Zhuhai reviewed the measurement items, and the wording of some items was refined to improve readability on the basis of their feedback. The final DCSS contains 17 items with three dimensions, namely, digitization (5 items), digitalization (6 items), and digital transformation (6 items).

Data analysis

EFA was performed using SPSS 30 to examine the latent structure of the scale and achieve dimensionality reduction.

Results

The suitability of the data for factor analysis was first evaluated through the Kaiser-Meyer-Olkin (KMO) test and Bartlett's test of sphericity prior to conducting EFA. The results showed a KMO sampling adequacy value of 0.783 and a significant Bartlett's test of sphericity (χ² = 959.571, df = 136, p < 0.01). Accordingly, the data were deemed suitable for EFA (Watson, 2017).
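For researchers who wish to script this suitability check outside SPSS, Bartlett's statistic can be computed directly from the item correlation matrix. The sketch below is a minimal Python illustration using a hypothetical two-variable matrix, not the study's data:

```python
import numpy as np

def bartlett_sphericity(R, n):
    """Bartlett's test of sphericity: H0 is that the p x p correlation
    matrix R is an identity matrix, i.e., the items are uncorrelated
    and unsuitable for factor analysis. n is the number of observations."""
    p = R.shape[0]
    chi2 = -(n - 1 - (2 * p + 5) / 6) * np.log(np.linalg.det(R))
    df = p * (p - 1) // 2
    return chi2, df

# Hypothetical example: two correlated items, n = 200
R = np.array([[1.0, 0.6],
              [0.6, 1.0]])
chi2, df = bartlett_sphericity(R, 200)
print(round(chi2, 2), df)  # chi-square ≈ 88.14 with df = 1
```

A large, significant chi-square (as here) indicates the inter-item correlations are strong enough for EFA; note that for the 17-item scale, df = 17 × 16 / 2 = 136, which matches the reported Bartlett test.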

On the basis of the KMO and Bartlett tests, a principal component analysis with varimax rotation was conducted to identify the underlying structure of the 17 items of the Digital Change in Schools Scale (DCSS). The initial findings suggested removing 7 items due to cross-loadings and low primary loadings. After these items were removed, principal component analysis of the 10-item DCSS extracted three components meeting the eigenvalue > 1 criterion: Factor 1 (2.96, 29.59%), Factor 2 (1.41, 14.11%), and Factor 3 (1.10, 10.99%), collectively explaining 54.69% of the variance. Moreover, scree plot examination showed a clear inflection point at the fourth component (with subsequent flattening), supporting the three-dimensional structure of the 10-item DCSS.
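The variance-explained percentages follow directly from the eigenvalues of the item correlation matrix (each eigenvalue divided by the number of items). A short Python sketch illustrates the Kaiser-criterion arithmetic; the first three values are the reported Study 1 eigenvalues, while the remaining seven are hypothetical fillers chosen only so that the ten values sum to 10:

```python
import numpy as np

# First three values are the reported Study 1 eigenvalues; the rest are
# hypothetical, chosen so all ten sum to 10 (the number of items).
eigenvalues = np.array([2.96, 1.41, 1.10,
                        0.90, 0.80, 0.70, 0.60, 0.55, 0.50, 0.48])
p = len(eigenvalues)

retained = eigenvalues[eigenvalues > 1.0]   # Kaiser criterion: keep > 1
pct = retained / p * 100                    # % of total variance per factor
print(len(retained), round(pct.sum(), 2))   # 3 factors, ~54.7% cumulative
```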

The 10 retained items of the DCSS were then rotated using the varimax method. As Table 1 shows, all rotated factor loadings exceeded the 0.4 threshold. Based on their constituent items, the factors were thematically labeled digital transformation (Factor 1), digitalization (Factor 2), and digitization (Factor 3). Notably, D1 cross-loaded on Factors 1 and 3, but it was retained on Factor 3 because of its theoretical relevance to digitization, despite the marginal cross-loading (λ = 0.467). This decision was supported by expert review.

Table 1. Factor loadings from exploratory factor analysis

Illustrations are not included in the reading sample

Study 2: Establishing Construct Validity of the DCSS

Study 2 aimed to provide stronger evidence of the construct validity of the refined DCSS using an independent sample.

AI-synthetic participants

For Study 2, a synthetic sample of 400 junior high school teachers from Guangzhou city, China, was generated using AITurk. The sample was specified to be 24.7% male and 75.3% female, with an average age of 37.2 years (SD = 10.96). The majority held a bachelor's degree (70.2%), followed by a master's degree (20.2%), a professional diploma (7.3%), and a doctoral degree (2.4%).

Instrument

The 10 items of the refined DCSS were used to assess digital change in schools, measuring three dimensions, namely, digitization (3 items), digitalization (4 items), and digital transformation (3 items).

Data analysis

Confirmatory factor analysis (CFA), Cronbach’s α, and correlation were applied to examine the construct validity and internal consistency of the refined DCSS by using SPSS 30 and AMOS 30.

Results

To further examine the construct validity of the DCSS, CFA was conducted. The results showed that the fit indices were acceptable (χ² = 105.503, df = 32, p < .001, χ²/df = 3.297, RMSEA = 0.080, GFI = 0.943, AGFI = 0.902), indicating that the 10-item DCSS fit the independent sample acceptably. In addition, the CFA results showed that all items loaded moderately to highly on their corresponding factors, with loadings ranging from 0.45 to 0.75 (Chen, 2007; Hu & Bentler, 1999).
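These indices were obtained from AMOS; for transparency, the normed chi-square and the conventional RMSEA point estimate can be recomputed from χ², df, and the sample size. The sketch below uses the standard textbook formula; software implementations vary slightly in the denominator, so the result may differ marginally from reported values:

```python
import math

def rmsea(chi2, df, n):
    """Conventional RMSEA point estimate:
    sqrt(max(chi2 - df, 0) / (df * (n - 1)))."""
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

# Study 2 values: chi2 = 105.503, df = 32, n = 400
chi2, df, n = 105.503, 32, 400
normed = chi2 / df                    # normed chi-square ≈ 3.297
print(round(normed, 3))
print(round(rmsea(chi2, df, n), 3))
```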

The overall internal reliability (Cronbach's α) of the DCSS was 0.791. Two of the three subscales had moderate Cronbach's α values, whereas digitization was low (Cronbach's α = 0.542). The correlations between the three factors ranged from 0.353 to 0.551 (Table 3), implying that the factors share common variance but still differ in significant ways. Convergent validity was assessed using average variance extracted (AVE) and composite reliability (CR): the AVEs of the three factors were 0.487, 0.281, and 0.389, and the CRs were 0.740, 0.744, and 0.441.
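AVE and CR are both simple functions of the standardized factor loadings (Fornell–Larcker). The sketch below shows the arithmetic with hypothetical loadings for a three-item subscale, not the study's estimates:

```python
def ave_cr(loadings):
    """Average variance extracted (AVE) and composite reliability (CR)
    from the standardized factor loadings of one factor."""
    lam2 = [l * l for l in loadings]
    ave = sum(lam2) / len(loadings)               # mean squared loading
    s = sum(loadings)
    cr = s * s / (s * s + sum(1 - l2 for l2 in lam2))
    return ave, cr

# Hypothetical standardized loadings for a three-item subscale
ave, cr = ave_cr([0.70, 0.65, 0.75])
print(round(ave, 3), round(cr, 3))  # ≈ 0.492 and 0.743
```

An AVE below the 0.5 threshold alongside a CR above 0.7, as in this hypothetical case, mirrors the pattern reported for the strongest DCSS factor (AVE = 0.487, CR = 0.740).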

Table 2. Factor loadings from confirmatory factor analysis

Illustrations are not included in the reading sample

Goodness-of-fit indices: χ² = 101.01, df = 32, p = 0.000, NFI = 0.90, RMSEA = 0.08, GFI = 0.95, AGFI = 0.92

Table 3. Means, standard deviations, and correlations between digital transformation, digitalization, and digitization in Study 2

Illustrations are not included in the reading sample

p < 0.001 (two-tailed)

Study 3: Further Evaluating the Measurement Model of DCSS

Study 3 aimed to further examine the construct validity of the DCSS by testing alternative measurement models.

AI-synthetic participants

Study 3 utilized a synthetically generated sample of 600 junior high school teachers from Beijing, China. The sample demographics, generated by AITurk, were 28.3% male and 71.7% female, with an average age of 36.2 years (SD = 10.25). The majority held a bachelor's degree (69.5%), followed by a master's degree (20.0%), a professional diploma (7.8%), and a doctoral degree (2.7%).

Data analysis

CFA, Cronbach’s α, and correlation were applied to examine the construct validity and internal consistency of the refined DCSS by using SPSS 30 and AMOS 30.

Instrument

The 10 items of the refined DCSS were used to assess digital change in schools, measuring three dimensions, namely, digitization (3 items), digitalization (4 items), and digital transformation (3 items).

Results

First, a first-order one-factor model (Model 1) with all 10 items was tested by CFA. The results showed that Model 1 had significant misfit and was thus not acceptable (χ² = 306.16, df = 35, p = .000, χ²/df = 8.747, RMSEA = 0.114, GFI = 0.900, AGFI = 0.842). Then, a first-order three-factor model (Model 2) was tested by CFA to cross-check the results of Study 2 (Table 4). The results indicated that Model 2 was acceptable (χ² = 142.095, df = 32, p = .000, χ²/df = 4.440, RMSEA = 0.076, GFI = 0.958, AGFI = 0.958). Accordingly, the three-factor model fit better than the one-factor model. In the three-factor model, the factor loadings ranged from 0.60 to 0.76 for digital transformation, from 0.48 to 0.68 for digitalization, and from 0.41 to 0.63 for digitization.

The overall ten-item DCSS demonstrated good internal reliability (α = 0.773). However, two of the three subscales had only modest internal reliability (α = 0.617 and 0.654, respectively), and digitization was lower still (α = 0.518). The correlations between the three factors ranged from 0.313 to 0.490 (Table 5), implying that the factors share common variance but still differ in significant ways. The validity of the DCSS was assessed using AVE and CR. The results showed that the AVEs of the three factors were 0.445, 0.359, and 0.275, and the CRs were 0.598, 0.688, and 0.431.
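The Cronbach's α values reported throughout are the standard ratio of summed item variances to total-score variance. A minimal Python sketch with toy data (not the study's responses) shows the computation:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total scores
    return k / (k - 1) * (1 - item_vars / total_var)

# Toy data: five respondents answering a three-item subscale
scores = [[4, 4, 5],
          [3, 3, 4],
          [5, 4, 5],
          [2, 3, 2],
          [4, 5, 4]]
print(round(cronbach_alpha(scores), 3))  # ≈ 0.867
```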

Table 4 . Factor loadings from confirmatory factor analysis

Illustrations are not included in the reading sample

Goodness-of-fit indices: χ² = 142.095, df = 32, p = .000, χ²/df = 4.440, RMSEA = 0.076, GFI = 0.958, AGFI = 0.958

Table 5. Means, standard deviations, and correlations between digital transformation, digitalization, and digitization in Study 3

Illustrations are not included in the reading sample

Conclusion

The growing integration of digital technologies such as smart devices, digital media, VR, AR, AI, and robotics has led education systems to undergo varying degrees of digital change, spanning digitization, digitalization, and digital transformation (Pettersson et al., 2024; Vrana & Singh, 2025). However, the impact of these changes on educational quality, including teaching, learning, and school administration, remains contested. Proponents argue that digital transformation enhances education by enabling personalized learning, fostering student motivation and digital competence (Haleem et al., 2022; Mukul & Büyüközkan, 2023; OECD, 2023), and improving teacher efficiency and professional growth through upskilling (Bryant et al., 2020; Padmaja & Mukul, 2021). Conversely, critics caution that these changes may fail to improve educational quality or may even harm it due to factors such as inadequate school leadership, insufficient teacher digital literacy (Haleem et al., 2022; Razak et al., 2023; Timotheou et al., 2022; Wohlfart & Wagner, 2023), and the potential deskilling of teachers, which can erode morale and professionalism (Crowston & Bolici, 2024; Feenberg, 2001; Hughes, 2021; Mirrlees & Alvi, 2014; O'Neill, 2023).

Despite this debate, no consensus exists on the effects of digital change in school contexts, partly due to the lack of a validated instrument to measure such changes quantitatively. Without a robust scale, researchers cannot empirically assess the patterns, degrees, or outcomes of digital transformation in schools or examine their relationships with other variables. While some instruments, such as the DTS (Pettersson et al., 2024), have been proposed, their validity remains limited by the absence of CFA.

Therefore, this paper sets out to address this gap in the study of digital change in education by developing and testing an analytical protocol for the Digital Change in Schools Scale (DCSS) using AI-generated synthetic data. The multi-stage simulation, leveraging the AITurk platform, successfully identified a refined 10-item, three-factor structure (digitization, digitalization, and digital transformation) that aligns with established theoretical frameworks (Pettersson et al., 2024; Vrana & Singh, 2025). The iterative process of EFA and CFA across sequentially generated datasets demonstrated a stable factor structure with acceptable model fit indices, providing a tested analytical framework for future research.

Nevertheless, the findings suggest that the DCSS encounters several limitations. First, its construct validity received limited support. While the model fit indices from the CFA were acceptable, other key metrics for establishing construct validity were weak. For instance, the moderate to strong correlations between the three factors (ranging from 0.313 to 0.551) suggest that while they are distinct, they may not be sufficiently independent. This raises questions about whether respondents can clearly differentiate between the concepts of digitization, digitalization, and transformation in practice. Moreover, the AVE values for the factors were below the recommended threshold of 0.5 in most cases (e.g., as low as 0.275 in Study 3). This indicates that the items within each subscale share less common variance than the variance attributable to measurement error, weakening the evidence that they are all measuring the same underlying latent construct.

Second, internal reliability, especially that of the digitization subscale, was problematic and inconsistent across studies. In Study 2, the Cronbach's α of digitization was unacceptably low (0.542), and in Study 3 it remained questionable (0.518). This suggests that the three items measuring digitization may not reliably capture a unified construct in practice or may be perceived ambiguously by respondents. This instability indicates a need to revisit the item formulation for this dimension in future validations with human data.

More importantly, it is imperative to reiterate the fundamental limitation of this work: the simulation does not constitute a validation of the DCSS. The findings are based on synthetic data, which, while highly correlated with human responses (Qin et al., 2024), cannot replicate the full spectrum of human cognition, bias, and lived experience (Hradec et al., 2024). The responses are simulations based on patterns in the AI's training data, meaning the demonstrated factor structure, reliability, and validity are computational proofs-of-concept rather than empirical evidence. The scale's performance must be confirmed with data from actual teachers to establish its true psychometric properties in real-world settings.

Despite these limitations, the study makes some methodological contributions. It presents a novel, cost-effective, and efficient approach to the preliminary stages of psychometric scale development. By utilizing AI-generated data, as Qin et al. (2024) suggest, this approach has produced a rigorously vetted hypothesis regarding the DCSS's factor structure and a pre-tested analytical protocol. This allows future researchers to deploy these tools with greater confidence when collecting empirical data from human subjects, thereby accelerating the validation process and reducing resource expenditure. Furthermore, the moderate inter-factor correlations support the conceptualization of digitization, digitalization, and digital transformation as distinct yet interrelated dimensions on a continuum of digital change, offering a nuanced tool for measuring these complex constructs.

Therefore, the critical next step is the empirical validation of the DCSS with data from actual educators. Future research must:

1. Collect large-scale data from diverse samples of teachers across different regions, school levels, and socioeconomic contexts in mainland China to test the generalizability of the factor structure identified here.
2. Establish stronger psychometric properties, including criterion and discriminant validity, by correlating DCSS scores with relevant variables such as digital leadership, teacher self-efficacy, and student outcomes.
3. Explore cross-cultural applicability by translating and validating the scale in other educational systems to understand digital changes in a global context.

In conclusion, this working paper provides a foundational, rather than a final, step toward a validated instrument. It offers a promising proof-of-concept for leveraging AI in methodological prototyping and delivers a well-tested analytical framework and a hypothesized model ready for empirical confirmation. By providing this toolkit, the paper hopes to facilitate more efficient and rigorous research into the critical phenomenon of digital change in schools, ultimately contributing to more informed policy and practice.

References

Abiteboul, S., André, B., & Kaplan, D. (2015). Managing your digital life. Communications of the ACM, 58 (5), 32-35. https://doi.org/10.1145/2670528

Brown, T. A. (2023). Confirmatory factor analysis. In R. H. Hoyle (Ed.), Handbook of structural equation modeling (2nd ed., pp. 261-276). The Guilford Press.

Bryant, J., Heitz, C., Sanghvi, S., & Wagle, D. (2020). How artificial intelligence will impact K–12 teachers. McKinsey & Company.

Chen, F. F. (2007). Sensitivity of goodness of fit indexes to lack of measurement invariance. Structural Equation Modeling: A Multidisciplinary Journal, 14 (3), 464-504. https://doi.org/10.1080/10705510701301834

Costa, P., Castaño-Muñoz, J., & Kampylis, P. (2021). Capturing schools' digital capacity: Psychometric analyses of the SELFIE self-reflection tool. Computers & Education, 162, 104080. https://doi.org/10.1016/j.compedu.2020.104080

Crowston, K., & Bolici, F. (2024). Deskilling and upskilling with generative AI systems. Retrieved 24 December 2024, from https://crowston.syr.edu/node/1681

Eurofound. (2021). Anticipating and managing the impact of change: Digitisation in the workplace. Publications Office of the European Union.

Feenberg, A. (2001). Whither educational technology? International Journal of Technology and Design Education, 11, 83-91. https://doi.org/10.1023/A:1011225903766

Gegenhuber, T., Logue, D., Hinings, C. R. B., & Barrett, M. (2022). Institutional perspectives on digital transformation. In T. Gegenhuber, D. Logue, & C. R. B. Hinings (Eds.), Digital transformation and institutional theory (pp. 1-32). Emerald.

Haleem, A., Javaid, M., Qadri, M. A., & Suman, R. (2022). Understanding the role of digital technologies in education: A review. Sustainable Operations and Computers, 3, 272-285. https://doi.org/10.1016/j.susoc.2022.05.004

Harrington, D. (2009). Confirmatory factor analysis. Oxford University Press.

Hradec, J., Di Leo, M., & A., K. (2024). AI generated synthetic data in policy applications. Retrieved 10 September 2025 from https://publications.jrc.ec.europa.eu/repository/handle/JRC138521

Hu, L. T., & Bentler, P. M. (1999). Cutoff criteria for fit indexes in covariance structure analysis: Conventional criteria versus new alternatives. Structural Equation Modeling, 6, 1-55.

Hughes, J. (2021). The deskilling of teaching and the case for intelligent tutoring systems. Journal of Ethics and Emerging Technologies, 31 (2), 1-16. https://doi.org/10.55613/jeet.v31i2.90

Koziol, N. A. (2023). Confirmatory measurement models for dichotomous and ordered polytomous indicators. In R. H. Hoyle (Ed.), Handbook of structural equation modeling (2nd ed., pp. 296-315). Guilford Press.

Kushwaha, B. P. (2021). Paradigm shift in traditional lifestyle to digital lifestyle in Gen Z. International Journal of Web Based Communities, 17 (4), 305-320. https://doi.org/10.1504/IJWBC.2021.119472

Matt, C., Hess, T., & Benlian, A. (2015). Digital transformation strategies. Business & Information Systems Engineering, 57, 339-343. https://doi.org/10.1007/s12599-015-0401-5

Mhlanga, D. (2024). Digital transformation of education, the limitations and prospects of introducing the fourth industrial revolution asynchronous online learning in emerging markets. Discover Education, 3, 32. https://doi.org/10.1007/s44217-024-00115-9

Mirrlees, T., & Alvi, S. (2014). Taylorizing academia, deskilling professors and automating higher education: The recent role of MOOCs. Journal for Critical Education Policy Studies, 12 (2), 45-73.

Mukul, E., & Büyüközkan, G. (2023). Digital transformation in education: A systematic review of education 4.0. Technological Forecasting & Social Change, 194, 122664. https://doi.org/10.1016/j.techfore.2023.122664

Nadrljanski, D., Nadrljanski, M., & Pavlinović, M. (2022). Digitalization of education. In M. Ivanović, A. Klašnja-Milićević, & L. C. Jain (Eds.), Handbook on intelligent techniques in the educational process (pp. 17–39). Springer.

Nworie, J. (2015). Institutionalization of teaching and learning gains in higher education. Educational Technology, 55 (5), 21-28.

O'Neill, J. (2023). The degradation of teachers' work, loss of teachable moments, demise of democracy and ascendancy of surveillance capitalism in schooling. New Zealand Journal of Teachers' Work, 20 (2), 197-189. https://doi.org/10.24135/teacherswork.v20i2.607

OECD. (2023). OECD digital education outlook 2023: Towards an effective digital education ecosystem. OECD.

Oliveira, K. K. S., & de Souza, R. A. C. (2022). Digital transformation towards education 4.0. Informatics in Education, 21 (2), 283-309. https://doi.org/10.15388/infedu.2022.13

Padmaja, V., & Mukul, K. (2021). Upskilling and reskilling in the digital age. In S. L. Gupta, N. Kishor, N. Mishra, S. Mathur, & U. Gupta (Eds.), Transforming higher education through digitalization: Insights, tools, and techniques (pp. 255-275). CRC Press.

Pettersson, F., Siljebo, J., Wolming, S., & Ferry, M. (2024). A validated questionnaire for measuring digitalization as sociocultural change in educational contexts. The International Journal of Information and Learning Technology, 41 (4), 359-370. https://doi.org/10.1108/IJILT-08-2023-0149

Qin, X., Huang, M., & Ding, J. (2024). AITurk: Using ChatGPT for social science research. SSRN Preprints.

Razak, N. A., Rasli, R. M., Subhan, S., Ahmad, N. A., & Malik, S. (2023). Systematic review on digital transformation among teachers in public schools. International Journal of Evaluation and Research in Education, 12 (2), 1059-1078. https://doi.org/10.11591/ijere.v12i2.24498

Reis, J., Amorim, M., Melão, N., Cohen, Y., & Rodrigues, M. (2020). Digitalization: A literature review and research agenda. In Z. Anisic, B. Lalic, & D. Gracanin (Eds.), Proceedings on 25th International Joint Conference on Industrial Engineering and Operations Management – IJCIEOM. Springer.

Thompson, B. (2004). Exploratory and confirmatory factor analysis: Understanding concepts and applications. American Psychological Association.

Timotheou, S., Miliou, O., Dimitriadis, Y., Villagrá Sobrino, S., Giannoutsou, N., Cachia, R., . . . Ioannou, A. (2022). Impacts of digital technologies on education and factors influencing schools' digital capacity and transformation: A literature review. Education and Information Technologies, 28, 6695–6726. https://doi.org/10.1007/s10639-022-11431-8

Vrana, J., & Singh, R. R. (2025). Digitization, digitalization, digital transformation, and beyond. In N. Meyendorf, N. Ida, R. Singh, & J. Vrana (Eds.), Handbook of nondestructive evaluation 4.0 (pp. 1-26). Springer.

Watson, J. C. (2017). Establishing evidence for internal structure using exploratory factor analysis. Measurement and Evaluation in Counseling and Development, 50 (4), 232-238. https://doi.org/10.1080/07481756.2017.1336931

Wohlfart, O., & Wagner, I. (2023). Teachers' role in digitalizing education: An umbrella review. Educational Technology Research and Development, 71, 339-365. https://doi.org/10.1007/s11423-022-10166-0

Appendix 1: Items of Digital Change in Schools Scale

Digital Transformation

DT1. In our school, the application of information technology helps improve teaching quality. (在我校,信息技术应用助力提升教学质量)

DT2. In our school, information technology supports innovative teaching organization methods. (在我校,信息技术支持教学组织形式创新)

DT3. In our school, the use of information technology has become an important part of teaching and research activities. (在我校,信息技术应用成为教研活动的重要内容)

Digitalization

DL1. In our school, information technology has transformed traditional methods of assigning and grading homework. (在我校,信息技术改变了传统的作业布置和批改方式)

DL2. In our school, the application of information technology has given rise to new teaching methods. (在我校,信息技术应用催生了新的教学方式方法)

DL3. In our school, teachers regularly share their experiences in applying information technology to teaching. (在我校,教师会定期交流信息技术在教学中的应用心得)

DL4. In our school, a positive atmosphere has been fostered for actively exploring the integration of information technology and teaching. (在我校,形成了积极探索信息技术与教学融合的良好氛围)

Digitization

D1. In our school, we use digital systems to record student academic performance and growth data. (在我校,我们使用电子化系统记录学生学业表现和成长数据)

D2. In our school, we use educational resource-sharing platforms to access teaching reference materials. (在我校,我们使用教育资源共享平台获取教学参考资料)

D3. In our school, information technology helps teachers better understand student learning progress. (在我校,信息技术帮助教师更好地掌握学生学习情况)
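The three subscales above can be scored and checked for internal consistency; the reliability concern noted for the digitization subscale refers to Cronbach's alpha. A minimal pure-Python sketch of the alpha computation follows, with the item groupings taken from Appendix 1 and the respondent data invented purely to illustrate the call:

```python
from statistics import pvariance

# Item groupings as listed in Appendix 1 of the DCSS.
SUBSCALES = {
    "digital_transformation": ["DT1", "DT2", "DT3"],
    "digitalization": ["DL1", "DL2", "DL3", "DL4"],
    "digitization": ["D1", "D2", "D3"],
}

def cronbach_alpha(item_columns):
    """Cronbach's alpha: item_columns is a list of per-item score lists,
    all of equal length (one entry per respondent)."""
    k = len(item_columns)
    item_var_sum = sum(pvariance(col) for col in item_columns)
    totals = [sum(scores) for scores in zip(*item_columns)]
    return k / (k - 1) * (1 - item_var_sum / pvariance(totals))

# Made-up 5-point Likert responses (rows = items D1-D3, columns =
# respondents), for illustration only; real estimates require empirical data.
digitization_scores = [
    [4, 3, 5, 2, 4],  # D1
    [4, 2, 5, 3, 4],  # D2
    [3, 3, 4, 2, 5],  # D3
]
print(round(cronbach_alpha(digitization_scores), 2))
```

With only three items, alpha is sensitive to any weakly intercorrelated item, which is one plausible reason the digitization subscale underperformed in the simulation.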

[...]

Excerpt out of 14 pages


Title: Leveraging AI-Generated Data for Factor Structure Simulation and Analytical Protocol Development of the Digital Change in Schools Scale (DCSS)

Scientific Study, 2025, 14 Pages

Author: Kwok Kuen Tsang

Pedagogy - General

Details

Title
Leveraging AI-Generated Data for Factor Structure Simulation and Analytical Protocol Development of the Digital Change in Schools Scale (DCSS)
College
Education University of Hong Kong
Author
Kwok Kuen Tsang (Author)
Publication Year
2025
Pages
14
Catalog Number
V1617328
ISBN (PDF)
9783389150665
ISBN (Book)
9783389150672
Language
English
Tags
Digital change in schools, digitization, digitalization, digital transformation, AI-generated data, synthetic data, simulation
Product Safety
GRIN Publishing GmbH
Quote paper
Kwok Kuen Tsang (Author), 2025, Leveraging AI-Generated Data for Factor Structure Simulation and Analytical Protocol Development of the Digital Change in Schools Scale (DCSS), Munich, GRIN Verlag, https://www.grin.com/document/1617328