Student participation in classroom activities is considered significant because it serves as an indicator of the progress of students' learning. Participation in class also signals students' interest in the courses in which they are enrolled, which in turn points towards better academic progress. Thus, in the majority of educational institutions such as schools and colleges, student participation is highly encouraged in order to assess the quality of the course, the teaching method, and the knowledge imparted. Through active participation in class, students also become part of the wider education system and help institutions determine how successful that system is. Two commonly used tools for evaluating student participation and the quality of their performance in classroom activities are RUFDATA and KIRKPATRICK. Both tools come with their own merits and demerits, and they are implemented at different levels of performance evaluation. While the RUFDATA tool is used at the initial stage of organisational planning to determine the effectiveness of a learning module or course on students and their subsequent performance, the KIRKPATRICK model is used at a later stage to assess the quality of student performance, gauging how far the knowledge imparted was understood and how effective the teaching style has been. Hence, educators should take an integrated approach, applying a combination of both tools when evaluating student participation in classroom activities.
Keywords: student participation, performance evaluation, Higher Education, models of evaluation, RUFDATA, KIRKPATRICK
Classroom participation often refers to students speaking up in class by raising questions or engaging in discussions. It is one of the significant aspects of the teaching-learning process, aiming to develop an environment of student engagement (Czekański & Wolf 2013). Issues related to classroom participation still persist despite the application of various techniques, technologies and policies. One problem affecting student participation is that large class sizes obstruct interaction between students and lecturers, leaving students demotivated to engage in classroom activities and to complete their assessments on time.
Seating arrangement is another factor influencing student participation (Rocca 2010). Traditional row-and-column seating often disadvantages students sitting at the back of the classroom, because these students tend to be less engaged with the lecturers, which results in weakened interest in participating in classroom processes. Timing, social isolation, time to graduation and busy schedules are further factors that weaken the interest of students in part-time, evening or weekly classes in participating in classroom activities. One of the demerits of studying part-time is that students lack social contact with their peers, which deprives them of personal benefits such as building networks.
In addition, course policies and the effect of participation on students' final grades are known to influence student participation. The type of course also determines participation, because students participate less in arts and social science courses than in natural science and business management. A purely traditional form of teaching, in which the lecturer uses hardly any technology or media, also discourages students from participating in the classroom. Students' confidence likewise serves as an essential factor determining participation in the classroom (Rocca 2010).
Therefore, in such a scenario, this study aims to assess how the evaluation of progression rates for widening-participation students in higher educational institutions can be improved using the RUFDATA and KIRKPATRICK models, through a critical review of academic works by the concerned faculties and institutional authorities. Such an evaluative study will enable its targeted readers, such as lecturers, personal tutors and academic staff, to apply the relevant models, thereby evaluating the progression rate and its consequent improvement.
2.0 Understanding the Purpose of Evaluation
Evaluation is the process of analysing and understanding the values and limitations of a specific performance against certain set standards (International Center for Alcohol Policies 2012), and of systematically assessing and improving the 'planning, implementation and effectiveness of programmes' (Rivza et al. 2015: p. 646). In a way, evaluation of programmes and practices such as student participation helps ensure accountability, productivity and quality in the higher education system (Brence & Rivza 2012). Besides, such a systematic and accurate evaluation process also provides opportunities for students and lecturers to develop the quality of teaching and improve the quality of knowledge imparted to specific students by identifying their respective strengths and weaknesses (Zomorrodian & Mate¡ 2010; Brusoni et al. 2014).
The specific evaluation aim of this research is to understand the progression rate of student participation. The problem statement of the study is to find out whether students enjoy participating in classroom delivery. Knowing this is essential for the purported study because it will help in understanding how to improve their participation rate in the education process and its consequent progression, as noted by UNESCO (1998), Kuh et al. (2006) and the Information Policy Team (2012). According to these cited studies, active student participation in higher education leads to progress in students' ability to express their inner thoughts and 'exercise their intellectual capacity' (UNESCO 1998), develops social responsibility and accountability, and helps in identifying and addressing issues that eventually lead to communal, national and global development.
Throughout the paper, an analysis is made based on a comparative study of the RUFDATA and KIRKPATRICK models, with the aim of understanding which evaluation tool is better suited to increasing student participation rates and on-time completion. As part of the analysis, some theories regarding the evaluation of participation rates and an empirical review of similar topics are presented. An elaborate methodology section follows, presenting the theoretical framework of the study. In the final sections, the data are analysed through critical examination of the findings with the purpose of accomplishing the aim of the paper.
3.0 Situating Section: An Evaluation Approach
Student participation in classroom processes and on-time completion result in equal empowerment of students within their educational institution. Students also become part of the institution's quality evaluation and preservation process (Dawood 2007). The process further serves as an evaluator of students' academic performance, developing their critical thinking and problem-solving skills besides enhancing their academic knowledge (Hill 2007). Thus, since the present study purports to evaluate the significance of evaluation processes applied to record progression in student participation rates, their improvement and on-time completion, a critical assessment of the RUFDATA and Kirkpatrick evaluation approaches has been undertaken.
The objective of the study lies in understanding the effectiveness of the evaluation processes applied by various bodies within higher education, such as post-compulsory education providers, private higher institutions, coaching and mentoring facilitators, and lecturers, taking the RUFDATA and Kirkpatrick models into purview and thereby assisting them in implementing enhanced models of evaluation. While the primary significance of RUFDATA lies in systematic and concise evaluation for the quality assurance process (Sherman 2016), Kirkpatrick should help evaluators or trainers in assessing at which level the process of evaluation should be applied when analysing student performance (Phillips 1996; Watkins et al. 1998). Since evaluation models applied in the higher education system are under examination, progression in student participation rates has been taken as the core focus of the study, owing to its emerging importance in making the teaching and learning process meaningful, innovative and interesting. As part of the study, secondary data have been used, with evidence collected from existing sources comprising evaluation policies, programmes, frameworks and research studies, keeping in purview a target audience comprising institutions of higher education and associated academics.
4.0. Class Participation and Performance
In discussing the benefits of student participation in education, Willms (2000) suggests that participation in classroom activities is integral because it encourages students to engage willingly in educational processes, improving their learning and developing their critical thinking. Similarly, Trowler (2010) indicates that student participation in classroom activities is correlated with learning skills. Purposeful participation fosters greater learning because it enhances students' engagement in the class and in the learning process in which they are enrolled.
Educational institutions are expected to provide a safe, secure and supportive learning environment so that student participation can be maximised. The benefit of progress in student participation is not limited to making students comfortable with their course and enhancing their learning skills; it also leads to greater classroom efficiency and positive, considerable changes in the effectiveness of academic courses. Progress in student participation results in improving the overall quality of schools insofar as teaching style, curriculum content, school environment and other such factors are academically managed and maintained. This is crucial, as it has a major impact on the QAA reports of the institution.
Thus, student participation can be considered an evaluating tool that assesses the way in which lecturers and non-academic members approach learning and teaching in the attempt to improve the education process (Fletcher 2003). Moreover, raising student participation rates in an institution is an indication of good pedagogic practice, because it assists the school management in yielding more accurate information on the areas of strength and weakness of all students, so that adequate actions and policies can be implemented to eliminate those weaknesses and reinforce the strengths (New Jersey Department of Education 2015).
Classroom participation is often encouraged by lecturers and mentors as an assessment tool for analysing the extent of students' learning and for increasing their ability to think critically and reflect on issues and problems related to the class and the lessons taught (UNSW Australia 2016). However, teaching and learning sessions are time-bound. Thus, the type of teaching a lecturer adopts, and the style in which classroom participation is allowed, determine whether a curriculum is completed on time (Logan & Geltner 2000; Pitchforth et al. 2012; Razafi et al. 2009). Therefore, in order to design a course curriculum that will be completed on time despite enriched participation, it is crucial to focus upon the principles underlying the assessment of performance in such classrooms. Assessment of performance should be based on clearly defined tasks rather than on the quantity of students' contributions to active learning.
Mandates should be developed for such participation-based classrooms, depending on factors such as the expectations placed on students for class participation and the criteria for evaluating individual members' participation in group or team performance, so that students know which sorts of participation will be rewarded (UNSW Australia 2016). In addition, in order to make the best use of the time allotted for course completion while simultaneously encouraging student participation, lecturers must make wise choices of the most effective instructional strategies, design the classroom curriculum so that it is clear and time-bound yet facilitates effective learning, and make optimum use of classroom management techniques (Evertson et al. 2006).
5.0 Evaluation Theories
Icek Ajzen proposed the Theory of Planned Behaviour to recognise and forecast an array of human behaviours across various age groups. This theory suggests that human behaviour depends upon factors such as attitudes, norms, perceived behavioural control and intentions. The theory finds wide application in educational institutions for various purposes (Weng et al. 2014). For instance, it also serves as an evaluation tool that helps students assess the utility of a course in relation to their career development (Ingram et al. 2000).
Discrete Choice Model, on the other hand, focuses on the institution attendance decision of students (Cabus & De Witte 2016). The model is popularly applied in institutions to estimate aspects such as the attendance of students in school (Reis 2013) and their social interaction with their peers as well as lecturers (Soetevent & Kooreman 2007).
Additionally, the Constructivism Theory of Learning emphasises the idea of learning by doing as an instrumental strategy for effective learning (Kirschner et al. 2006; Tobias & Duffy 2009). This theory is frequently used as an evaluation tool for teaching and learning in educational institutions. According to this theory, students are considered the focal object, while lecturers serve as facilitators who ask questions to assess students' learning outcomes (The University of Sydney 2016). Thus, lecturers using this theory to guide students through educational processes are given the opportunity to interpret how far students have understood a course. This interpretation, in turn, informs the teaching method to be used in future (Devlin 2010).
Lastly, Vincent Tinto proposed Tinto's integration framework, which aims at understanding the reasons students drop out of school. The major purpose of this model is to increase students' sense of belonging to their educational institution (Brunsden et al. 2000). Through the Tinto model, educational institutions analyse student retention rates and reasons for dropout (Towles et al. 1993; Rendón et al. 2000) and evaluate students' academic success (Kuh et al. 2006).
The theories mentioned above are thus used in academic institutions for evaluating student performance, participation and the effectiveness of teaching. Background information on these established theories is expected to assist in understanding the significance of the RUFDATA and KIRKPATRICK evaluation tools as instruments for evaluating participation in relation to these theories.
This paper focuses on academic sources dealing with the evaluation policies, programmes and processes used to check the progression of student participation rates. Hence, secondary data relevant to the issue at hand have been collected by exploring online libraries and search platforms. Various research studies and documents of policies and programmes on evaluating students' participation rates have been explored and critically appraised against the study topic. The findings from each study have then been discussed in an unbiased manner.
A subjectivism paradigm within an ontological philosophical approach has been chosen as the research philosophy for this assignment. The study emphasises the phenomenon (participation) created by student and lecturer perception and their consequent actions (evaluation methods and implementation of policies). It is believed that perceptions of the progression of student participation rates within the teaching-learning environment of higher education, and of the usage of the RUFDATA and Kirkpatrick evaluation models, are based on the participants' (teachers', students' and other associates') interaction with that environment; the subjectivism paradigm therefore shaped the course of the study along with the observations made. Further, an inductive approach has been taken, as it considers information collected from secondary sources relevant to the study directly, without further probing (Thomas 2006).
The table below summarises some studies that have been systematically collected and reviewed for the purpose of this study.
[Figure not included in this reading sample]
Ethical considerations are important in academic research for keeping research ethically fair (Cooper et al. 2006; Emanuel et al. 2004). To maintain these ethical standards, all direct and indirect quotes in the research have been cited academically. An elaborate bibliography has also been provided at the end of the research, with due acknowledgement of dates of publication.
Since this assignment aims at understanding why evaluation tools are significant in evaluating student performance, and which of RUFDATA and Kirkpatrick is the better tool for the purpose, a qualitative research approach has been applied, as it satisfies the conditions of finding out the what, when, how and where of a phenomenon or happening (Attride-Stirling 2001; Bogdan & Biklen 1998; Denzin & Lincoln 2011).
Lastly, an analysis of the secondary data has been conducted through critical and systematic examination of studies on the evaluation of participation, in order to understand the comparative significance of the RUFDATA and Kirkpatrick models within the academic environment as methods for evaluating progression rates for widening student participation and improving on-time completion.
Therefore, attention has been given to selecting the most relevant studies and reports on the application of the RUFDATA and KIRKPATRICK models in academic institutions. The quality of the information obtained from these studies has been analysed to ensure that the findings align with the proposed topic. Finally, the findings from these selected sources have been brought together so that a balanced and unbiased summary can be presented.
7.0 Data Analysis
The purpose of this section is to analyse the data collected so far on student participation in classroom activities and on-time completion of teaching-learning sessions, and to compare the RUFDATA and KIRKPATRICK models for assessing student participation.
Scholars have analysed student participation in classroom activities and the significance of performance evaluation from different perspectives. The research findings of Dawood (2007) suggest that student participation and on-time completion are significant because these processes contribute towards positive reforms in the education system and quality enhancement, along with student empowerment.
The research of Aquario (2009) on student participation agrees with the findings of Dawood (2007), stating that students are considered members of an educational system; their participation in classroom processes is therefore essential in influencing the policies of the educational institution they belong to and the content of the course in which they are enrolled. The research findings of Bembenutty (2009) also confirm that student participation develops students' academic and critical thinking skills. However, the scholar also emphasises the role of evaluation as an important factor in assessing the quality of student performance, suggesting that the major significance of evaluation in academics is that it serves as a predictor of student performance. Thus, the grades or scores obtained by students through the evaluation process help in assessing their academic achievement in a particular course.
According to Rocca (2010), student participation in the classroom has a number of benefits. For instance, taking part in classroom discussions makes students active members of the educational process. When they take part in such sessions, their doubts are cleared more readily, which helps them learn better, become better critical thinkers and develop self-confidence, thereby giving them more skills for their assessments. However, the scholar strongly suggests that lecturers should emphasise the evaluation of student performance and view participation through multidisciplinary activities that would provide a clear idea of why students do or do not participate in classroom activities. The research of Zomorrodian and Mate¡ (2010) affirms, with Rocca (2010), that student participation benefits students' progress and skill development. However, the researchers also highlight the role of performance evaluation, suggesting that its purpose is to improve the quality of the programme and of the knowledge disclosed to students. Evaluation of performance is therefore essential for identifying students' strengths and weaknesses, so that weaknesses can be eliminated and strengths enhanced further. The research findings of Wright (2014) also agree with those of Rocca that student participation leads to active engagement of students in classroom processes, which in turn leads to student success, especially in the academic sphere.
Such participation strategies are developed by lecturers to develop and improve the specific skills of individual students. However, the scholar additionally suggests that, along with developing strategies for increasing classroom participation, it is essential for lecturers to recognise the merit of performance evaluation for assessing the quality of student performance and facilitating it. Thus, the scholarly findings affirm that student participation has a positive relationship with students' career prospects and critical thinking abilities. Nevertheless, assessment of the quality of their performance is also necessary in order to interpret whether the knowledge and learning outcomes follow the intended objectives. A report published by the International Center for Alcohol Policies (2012) adds further to the merit of student performance and the significance of evaluation by presenting evaluation as a process that helps in comparing, analysing and interpreting the advantages and disadvantages of participation performance, or the merit of a subject, against certain set standards. Hence, alongside encouraging student participation, evaluating the progression of participation is important because it helps in understanding the degree of achievement of a subject or an individual with respect to certain fixed aims, objectives and expected results.
In the context of the importance of evaluating student participation and its consequent performance, two significant and commonly used evaluation models, KIRKPATRICK and RUFDATA, have been critically evaluated and analysed using a mixture of sources, reports and studies. The next section therefore presents an overview of KIRKPATRICK and RUFDATA, their respective significance and their comparative merits and demerits, so that the utility of an integrative approach can be established in the study.
7.1 KIRKPATRICK Model
The KIRKPATRICK model is used for evaluating the quality of training outcomes with respect to four levels: reaction, learning, behaviour, and results.
THE KIRKPATRICK MODEL

Level 1 (Reaction): the degree to which participants react favourably to the learning event.
Level 2 (Learning): the degree to which participants acquire the intended knowledge, skills and attitudes based on their participation in the learning event.
Level 3 (Behaviour): the degree to which participants apply what they learned during training when they are back on the job.
Level 4 (Results): the degree to which targeted outcomes occur as a result of the learning event(s) and subsequent reinforcement.

©2010-2013 Kirkpatrick Partners, LLC. All rights reserved. Used with permission; visit kirkpatrickpartners.com for more information.

Kirkpatrick Model of Evaluation
Source: Donald Kirkpatrick (1996)
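The four levels above form a simple hierarchy, which can be sketched as a small data structure. The sketch below is illustrative only: the level names and guiding questions paraphrase the model as described above, while the `describe` helper is a hypothetical convenience function, not part of any Kirkpatrick instrument.

```python
# Illustrative sketch of the Kirkpatrick model's four levels of training
# outcome. Level names follow the model; the helper function is hypothetical.
KIRKPATRICK_LEVELS = [
    {"level": 1, "name": "Reaction",
     "asks": "Did participants react favourably to the learning event?"},
    {"level": 2, "name": "Learning",
     "asks": "Did participants acquire the intended knowledge, skills and attitudes?"},
    {"level": 3, "name": "Behaviour",
     "asks": "Do participants apply what they learned back on the job?"},
    {"level": 4, "name": "Results",
     "asks": "Did the targeted outcomes occur as a result of the learning event?"},
]

def describe(level_number):
    """Return the name and guiding question for a given Kirkpatrick level."""
    for level in KIRKPATRICK_LEVELS:
        if level["level"] == level_number:
            return f'Level {level_number} ({level["name"]}): {level["asks"]}'
    raise ValueError(f"Unknown Kirkpatrick level: {level_number}")

print(describe(4))
```

Representing the levels this way makes the model's sequential character explicit: an evaluator works upward from level 1 to level 4, with each level asking a progressively more outcome-oriented question.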
There are multiple benefits of this model (Bates 2004b; Tamkin et al. 2002). Based on this evaluation framework, Bates (2004c) suggests some advantages of the KIRKPATRICK model over other models of evaluation such as RUFDATA. The scholar indicates that this model addresses the needs of the individuals to be trained in a highly personalised manner, giving a clear and systematic understanding of the most applicable training evaluation.
Secondly, the model holds that level four is the most important, because information about level-four outcomes is the most valuable and descriptive as far as the effectiveness of training programmes is concerned. Specifically, information collected on training outcomes at level four helps educational institutions draw together the results of their professional activities, which in turn contributes to organisational success.
Nonetheless, the most important benefit of the KIRKPATRICK model is that it simplifies the complex process of training evaluation by providing trainers with a straightforward guide to the kinds of questions that may be asked of trainees or attendees. The model also eliminates the need to measure a complex network of factors connected with a training process. However, Reiser and Dempsey (2011) critique the KIRKPATRICK model, stating that its major disadvantage is its incompleteness. The oversimplified perspective on training effectiveness proposed by the model overlooks the significance of individual and contextual influences in evaluating training outcomes. The model does not take into account factors such as organisational, individual, and training design and delivery parameters that influence the effectiveness of a performance-based training programme before, during or after its implementation.
Thirdly, the model assumes that the data obtained at each level of training outcome are more informative than those of the previous level. This idea arises because the model places most emphasis on level four and considers that the most useful information about the effectiveness of participation-enhancing training programmes is obtained at this level. The reality is different, because the weak theoretical relations in this model result in data that never seem to provide an adequate basis for this assumption. The research findings of Taylor et al. (2016) add to the commentary of Reiser and Dempsey by indicating another limitation of the model.
According to these scholars, even though the general principle of KIRKPATRICK evaluation is to reduce bias in the selection or targeting of the intervention group, in practice this never happens, owing to its impracticality. Rather, there is always some form of bias in the selection of the intervention group for evaluation. Sherman (2016), meanwhile, emphasises the applicability of the RUFDATA and KIRKPATRICK models in order to bring out their differences. According to the scholar, the difference between the two models lies in the manner in which evaluators use them for various evaluation-based tasks. The RUFDATA model is used mainly where quality analysis of academic processes and decisions regarding course content and related issues need to be made. The KIRKPATRICK model, on the other hand, is used for programme evaluation and prediction of the outcome of an activity. In short, the scholar indicates that while RUFDATA is generally implemented at the planning stage of the evaluation process, the KIRKPATRICK model is used at a mature stage for assessing the effect of a phenomenon.
During an evaluation, evaluators develop a system that involves spontaneous questioning. The aim of these questions is to address key practical dimensions of the evaluation and probe the major aspects of the evaluation design. This series of questions forms the acronym RUFDATA (Reasons and purposes, Users, Foci, Data and evidence, Audience, Timing and Agency). Thus, the basic characteristic of RUFDATA is that it takes the form of a questionnaire comprising a series of selected questions that assist in the preliminary planning of the evaluation process (Saunders 2000). While discussing the merits of RUFDATA in the article 'Beginning an Evaluation with RUFDATA: Theorising a Practical Approach to Evaluation Planning', Saunders (2000) states that the model helps in forming a series of decisions by framing evaluation at two levels.
RUFDATA Framework of Evaluation
Source: Saunders (2000)
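Because RUFDATA is essentially a planning questionnaire, it can be sketched as a checklist data structure. In the sketch below, the seven category names follow the acronym as expanded above, but the prompt wordings and the `plan_evaluation` helper are hypothetical illustrations, not Saunders' own instrument.

```python
# Illustrative RUFDATA planning checklist. The seven category names come
# from the acronym (Saunders 2000); the prompt phrasings are hypothetical.
RUFDATA_QUESTIONS = {
    "Reasons and purposes": "Why is this evaluation being undertaken?",
    "Users": "Who will use the evaluation's outputs?",
    "Foci": "Which activities or aspects will the evaluation concentrate on?",
    "Data and evidence": "What data will be collected, and how?",
    "Audience": "To whom will the findings be reported?",
    "Timing": "When should the evaluation take place?",
    "Agency": "Who will carry out the evaluation?",
}

def plan_evaluation(answers):
    """Check that every RUFDATA question has been answered before the
    evaluation plan is considered complete; report any gaps."""
    missing = [q for q in RUFDATA_QUESTIONS if not answers.get(q)]
    return {"complete": not missing, "missing": missing}

# A draft plan that has answered only two of the seven questions.
draft = {
    "Reasons and purposes": "Assess progression in participation rates",
    "Users": "Lecturers and personal tutors",
}
print(plan_evaluation(draft)["missing"])
```

The checklist framing mirrors RUFDATA's role as a preliminary planning device: the evaluation design is not considered ready until each of the seven dimensions has been addressed.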
At one level, policy statements are developed on the approach that is to be adopted for the overall progression of student participation evaluation. At the second level, a specific activity is evaluated. This dual level of assessment helps evaluators generate implicit decisions regarding the planning of, and thinking about, the evaluation. Saunders (2011) further emphasises the merits of RUFDATA while evaluating the quality of student participation in higher education in the book Setting the Scene: The Four Domains of Evaluative Practice in Higher Education, where the scholar indicates that this evaluation tool is crucial for making procedural decisions regarding evaluation planning. Through this tool, evaluators are able to identify the reason and purpose of implementing specific evaluative practices, the users of these practices, the central focus of each evaluative process, the audience who are supposed to benefit from these practices, and the time span over which these evaluative practices should be implemented to obtain the best output.
Therefore, as the scholar remarks, RUFDATA is a highly interrogatory tool for evaluating activities and outcomes. It is an established model compared to KIRKPATRICK or other evaluative tools because it is an example of reification derived from the combined practices of a group of practised evaluators.
In essence, the scholar aims to point out that RUFDATA scores over other evaluation tools by making evaluators aware of the knowledge-based practices that are fundamental to the initial planning of a successful evaluation process that is free of shortcomings and precisely targeted.
Further, Asensio et al. (2006) also state that the tool helps in designing questionnaires in such a way that participants are taken through a series of pre-planned evaluation activities that align fully with the basic values of the project to be accomplished and the specific objectives to be achieved. They also remark that implementing RUFDATA during the preliminary stages of evaluation helps improve plans that have already been taken through stages such as 'Enabling, Processing and Outcome', so that the project meets its objectives successfully. The tool further helps an educational institution become more reflective in its approach and respond more appropriately to the needs of students and lecturers. Moreover, as each stage is planned separately once RUFDATA is implemented, evaluators attain flexibility and can immediately respond to any changes and issues that might arise at any stage. Taylor et al. (2016) highlight the typical structure of RUFDATA analysis put forward by Saunders in 'Beginning an Evaluation with RUFDATA: Theorising a Practical Approach to Evaluation Planning' to indicate some of its fundamental benefits. The scholars state that the first benefit of RUFDATA is that it focuses on asking formative questions that are to the point, precise and highly contextual. As a result, evaluators become aware of the case-specific better practices or implementation processes that can be executed once the evaluation is done. If this advantage of RUFDATA is compared with the KIRKPATRICK model, it will be found that in the KIRKPATRICK model the questions asked at the various levels of evaluation are summative in nature; as a result, they are likely to be modified when applied to a particular case.
The next benefit of RUFDATA is that it helps in understanding the reasons behind a specific type of outcome. Because the main cause of an outcome becomes easy to identify through RUFDATA evaluation, evaluators have the opportunity to modify that cause in order to make their preferred alterations to the outcome. This opportunity is not available in the case of KIRKPATRICK evaluation.
Thirdly, the design of RUFDATA evaluation is much simpler and more flexible than the KIRKPATRICK model, because with KIRKPATRICK the evaluator must follow firmer design-framework requirements in order to generate valid and standardised outcome-evaluation conclusions from the questions asked.
However, Taylor et al. (2016) also critique RUFDATA on a few aspects. For instance, they find that one of its major limitations is that, even though it can identify the reason for an outcome, it cannot establish or predict whether certain processes will actually produce the desired outcome. Secondly, although RUFDATA analysis is thought to be easier to perform and less complicated than KIRKPATRICK, in reality this is not the case; rather, multiple components need to be considered and captured by the evaluator during the implementation of an intervention.
This paper has made a comparative study of the significance of the RUFDATA and KIRKPATRICK evaluation tools in educational institutions. While conducting the assessment, a brief background was developed by emphasising the significance of student participation and performance evaluation, for the purpose of identifying the specific 'object' of the research.
In the next part, the study was situated by briefing the reader on student participation, the role of evaluation, and the significance of KIRKPATRICK and RUFDATA. Next, situating the paper within the research literature was undertaken by analysing scholarly discussions on student participation in education, progress in participation rates and the on-time completion of classroom teaching and learning, along with ways of improving participation.
A mixture of theories relating to the evaluation of participation rates has also been discussed in this part of the assessment. After elaborating on the chosen methodological framework, a critical analysis of scholarly perceptions of the significance of student participation and of evaluation has been presented. An elaborate critical analysis of scholarly commentary on the respective benefits and limitations of the KIRKPATRICK and RUFDATA tools has also been highlighted.
Through the study, it has been found that RUFDATA and KIRKPATRICK are completely different tools that execute entirely different types of evaluation. Both tools have their own sets of merits and limitations, and they differ in their applicability as well. Therefore, it is recommended that evaluators take an integrated approach and concentrate on applying both tools to evaluate the progression of student participation in higher education and on-time completion. To be more specific, as this paper clearly demonstrates with scholarly arguments, RUFDATA should be implemented at the initial stages of evaluation planning and the KIRKPATRICK model should be implemented for evaluation of the outcome; the expectation is that this study will help lecturers, mentors, academics and evaluators obtain the best results from the evaluation process if this specific sequence is followed.
- Aquario, D., 2009. The active participation of students in teaching and evaluation processes within universities, INTECH Open Access Publisher.
- Asensio, M., Hodgson, V. & Saunders, M., 2006. Developing an inclusive approach to the evaluation of networked learning: the ELAC experience. In Proceedings of the Fifth International Conference on Networked Learning, pp. 10-12.
- Attride-Stirling, J., 2001. Thematic networks: an analytic tool for qualitative research. Qualitative Research, 1(3), pp.385-405.
- Bates, R., 2004. A critical analysis of evaluation practice: the Kirkpatrick model and the principle of beneficence. Evaluation and Program Planning, 27, pp.241-247.
- Bembenutty, H., 2009. Teaching Effectiveness, Course Evaluation, and Academic Performance: The Role of Academic Delay of Gratification. Journal of Advanced Academics, 20(2), pp.326-355.
- Bogdan, R. & Biklen, S., 1998. Qualitative research in education: An introduction to theory and methods, Needham Heights, MA: Allyn & Bacon; A Viacom Company.
- Brunsden, V. et al., 2000. Why do HE students drop out? A test of Tinto's model. Journal of Further and Higher Education, 24(3), pp.301-310.
- Cabus, S.J. & De Witte, K., 2016. Why do students leave education early? Theory and evidence on high school dropout rates,
- Cooper, D.R., Schindler, P.S. & Sun, J., 2006. Business Research Methods, McGraw Hill International.
- Czekański, K.E. & Wolf, Z.R., 2013. Encouraging and Evaluating Class Participation. Journal of University Teaching & Learning Practice, 10(1), p.15.
- Dawood, S.S., 2007. Student Participation in Quality Enhancement: Tools for Student Empowerment, Kilakarai, Tamilnadu, India.
- Denzin, N.K. & Lincoln, Y.S., 2011. The SAGE Handbook of Qualitative Research, SAGE Publications.
- Department of Training and Education, 2016. Student Participation. School Policy & Advisory Guide; Victoria State Government.
- Devlin, B.M., 2010. Effects of Students' Multiple Intelligences on Participation Rate of Course Components in a Blended Secondary Family and Consumer Sciences Course. Iowa State University.
- Emanuel, E.J. et al., 2004. What makes clinical research in developing countries ethical? The benchmarks of ethical research. The Journal of Infectious Diseases, 189(5), pp.930-937.
- Evertson, C.M., Emmer, E.T. & Worsham, M.E., 2006. Classroom Management for Elementary Teachers, Seventh Edition, Boston, USA: Pearson Education.
- Fletcher, A., 2003. Meaningful Student Involvement: A Guide to Inclusive School Change, Washington.
- Fuller, W.C., Manski, C.F. & Wise, D.A., 1982. New Evidence on the Economic Determinants of Post-secondary Schooling Choices. Journal of Human Resources, pp.477-498.
- Gialdino, I.V. de, 2009. Ontological and Epistemological Foundations of Qualitative Research. Qualitative Social Research, 10(2).
- Golafshani, N., 2003. Understanding Reliability and Validity in Qualitative Research. The Qualitative Report, 8(4), pp.597-606. Available at: http://nsuworks.nova.edu/tqr/vol8/iss4/6 [Accessed May 7, 2016].
- Hill, T.M., 2007. Classroom Participation, West Point, NY.
- Holden, M.T. & Lynch, P., 2004. Choosing the Appropriate Methodology: Understanding Research Philosophy. The Marketing Review, 4(4), pp.397-409.
- Ingram, K.L. et al., 2000. Applying to Graduate School: A Test of the Theory of Planned Behavior. Journal of Social Behavior and Personality, 15(2), p.215.
- International Center for Alcohol Policies, 2012. What is evaluation,
- Kirschner, P.A., Sweller, J. & Clark, R.E., 2006. Why minimal guidance during instruction does not work: an analysis of the failure of constructivist, discovery, problem-based, experiential, and inquiry-based teaching. Educational Psychologist, 41(2), pp.71-86.
- Kuh, G.D. et al., 2006. What matters to student success: A review of the literature,
- Logan, R. & Geltner, P., 2000. The Influence of Session Length on Student Success,
- New Jersey Department of Education, 2015. Meeting Participation Targets for New Jersey State Assessments: Action Plan Development Guide, New Jersey.
- Phillips, J., 1996. How much is the training worth? Training and Development, 50(4), pp.20-24.
- Pitchforth, J. et al., 2012. Factors affecting timely completion of a PhD: a complex systems approach. Journal of the Scholarship of Teaching and Learning, 12(4), pp.124-135.
- Razafi, J. et al., 2009. Analysis of the Factors that Explain the Non-Completion of the Curriculum: A study of the Teaching Time in Primary Schools in Madagascar. Journal of International Cooperation in Education, 12(1), pp.89-105.
- Reis, H., 2013. Girls' school attendance: A Dynamic discrete choice structural approach,
- Reiser, R.A. & Dempsey, J. V., 2011. Trends and issues in instructional design and technology, Upper Saddle River, NJ: Pearson Merrill Prentice Hall.
- Rendón, L.I., Jalomo, R.E. & Nora, A., 2000. Theoretical considerations in the study of minority student retention in higher education. Reworking the student departure puzzle, 1, pp.127-156.
- Rocca, K.A., 2010. Student Participation in the College Classroom: An Extended Multidisciplinary Literature Review. Communication Education, 59(2), pp.185-213.
- Saunders, M., 2000. Beginning an Evaluation with RUFDATA: Theorizing a Practical Approach to Evaluation Planning. Evaluation, 6(1), pp.7-21.
- Saunders, M., 2011. Setting the scene: the four domains of evaluative practice in higher education,
- Sherman, P.D., 2016. Using RUFDATA to guide a logic model for a quality assurance process in an undergraduate university program. Evaluation and Program Planning, 55, pp.112-119.
- Soetevent, A.R. & Kooreman, P., 2007. A discrete-choice model with social interactions: with an application to high school teen behavior. Journal of Applied Econometrics, 22(3), pp.599-624.
- Tamkin, P., Yarnall, J. & Kerrin, M., 2002. Kirkpatrick and Beyond: A review of models of training evaluation,
- Taylor, C. et al., 2016. Widening Access to Higher Education Evaluation Guidance,
- The University of Sydney, 2016. The University of Sydney, Faculty of Education and Social Work,
- Thomas, D.R., 2006. A General Inductive Approach for Analyzing Qualitative Evaluation Data. American Journal of Evaluation, 27(2), pp.237-246.
- Tobias, S. & Duffy, T.M., 2009. Constructivist instruction: Success or failure?, Routledge.
- Towles, D., Towles, D.E. & Spencer, J., 1993. The Tinto Model As A Guide to Adult Education Retention Policy. Community Services CATALYST, 23(4).
- Train, K.E. & Winston, C., 2007. Vehicle choice behavior and the declining market share of US automakers. International Economic Review, 48(4), pp.1469-1496.
- Trowler, V., 2010. Student engagement literature review,
- UNSW Australia, 2016. Grading Class Participation,
- Watkins, R. et al., 1998. Kirkpatrick plus: Evaluation and continuous improvement with a community focus. Educational Technology Research and Development, 46(4), pp.90-96.
- Weng, C., Weng, A. & Tsai, K., 2014. Online Teaching Evaluation for Higher Quality Education: Strategies to Increase University Students' Participation. Turkish Online Journal of Educational Technology-TOJET, 13(4), pp.105-114.
- Willms, J.D., 2000. Student Engagement at School: A Sense of Belonging and Participation. Results from PISA 2000,
- Wright, J., 2014. Teaching Innovation Projects Participation in the classroom: Classification and Assessment. Teaching Innovation Projects, 4(1).
- Zomorrodian, A. & Matei, L., 2010. Program Evaluation: Its Significance and Priority for Shaping and Modification of Public Policies: A Comparative Analysis. In ASBBS 17th Annual Conference.
- Ghanshym Koolmon (Author), 2016, An evaluation of how progression rates for widening participation students can be improved using RUFDATA and Kirkpatrick models to improve on time completion, Munich, GRIN Verlag, https://www.grin.com/document/442576