AAQEP Accreditation 2022
Appendix E: Evidence of Data Quality
Multiple Subject Credential Program
Quantitative Data Measure: Midterm & Final Fieldwork Evaluation (CREATe Rubric used formatively) | |
Description of Measure | CREATe is a locally developed observation tool that provides a common language for preservice teachers, coaches, faculty, mentors, and district administrators to orient their feedback in an actionable manner. Through synthesis of existing district observation tools, and by aligning this synthesis to the new Teacher Performance Expectations (2016 CCTC adopted TPEs), the CREATe Rubric consists of 14 items organized within the following four domains: 1) Positive Environment, 2) Instructional Design and Implementation, 3) Rigorous and Appropriate Content, and 4) Reflection-In-Action. Each of the 14 items is rated along a seven-point developmental continuum with the following rating categories: Unobserved, Attempting, Exploring, Emerging, Developing, Skillful, and Masterful. Each rating category has an anchor descriptor that operationalizes each of the 14 items with action-oriented, observable “look-fors.” |
Evidence (or plans) regarding validity | Rubric development began with close examination of the new state standards to ensure
that the rubric would measure the skills required for program completion. A team with
university and district representatives analyzed the six standards and 45 substandards
to identify those that are critical for novice teachers and could be directly observed
in a classroom setting. The selected observable standards were compared and synthesized,
with nearly 100% consensus across all constituents regarding which observable standards
should be represented on the rubric as a form of face validity. From this analysis,
14 rubric items were developed, aligned to 17 preservice substandards. Next, extant
district inservice rubrics were synthesized and incorporated into CREATe so that it
could be aligned and used across districts as a continuum of development from novice
to expert teacher. The districts’ inservice rubrics varied in the complexity and explicitness
with which inservice standards were operationalized. Three district inservice rubrics
were coded and synthesized to develop evidence-based language for common “look-fors”
that serve as descriptive anchors in the more advanced performance categories
of the CREATe Rubric. Based on the analysis and synthesis of the three inservice district
rubrics, seven performance categories were developed for CREATe: five (unobserved,
attempting, exploring, emerging, developing) spanning the expected developmental trajectory
of preservice teachers and two (skillful, masterful) extending teacher development
into inservice practice. Integration of the district inservice rubrics into CREATe
explicitly bridges the instrument from preservice to inservice. CREATe 2019 Validation Study 1-page overview |
Evidence (or plans) regarding reliability |
All of our MS coaches have participated in CREATe Rubric training. 60% of the MS coaches (N=15) are fully reliable, and seven more are partially reliable, in that they are halfway through the process and on their way to being fully reliable. Three Faculty in Residence and three Teachers in Residence, along with additional leadership team members from our district partners and Fresno State’s Continuous Improvement Lead, participated in CREATe training. Each residency hosted the trainings, which also included two live 20-minute observations in which observers placed the teacher on the continuum, followed by immediate discussion and reflection using the rubric after visiting the classroom. This model of training became a monthly community of practice to build connections and support among the three residencies (Fresno Unified, Sanger Unified, Central Unified). We also ensure interrater reliability using a rigorous observer training protocol in which observers must pass a paired observation based on reliability criteria before collecting live data in the field. Once we established interrater reliability, we collaborated with a partner school district’s New Teacher Induction Program to examine the validity of the CREATe Rubric. For concurrent validity, we used The New Teacher Project (TNTP) Core Rubric, which has established validity and is widely used. Pairs of calibrated CREATe and TNTP Core observers were randomly assigned to simultaneously observe in pre-selected classrooms. A strong partnership with the partner district’s New Teacher Induction program was the foundation for initiating and completing this validation study; the induction program staff helped us identify a sample of 28 first-year teachers. The 28 participating teachers included nine graduates of the focal STaR Residency EPP and 15 non-residency graduates. 
The 15 non-residency teachers were a mix of candidates from other EPPs as well as candidates from the focal EPP who were not in the STaR Residency program (they completed other pathways). 24 teachers were successfully observed during the observation window; 4 teachers were excluded due to scheduling/observation conflicts. Observer data were compiled and merged to enable analysis, to compare performance across measures to determine validity, and to provide information regarding changes and revisions needed for continuous improvement in the STaR Residency program. For the current validation study, we used a paired observational design to observe 24 first-year teachers over a six-week period in the Spring 2019 semester using two independent observation tools: CREATe (Yun & Bennett, 2018) and the Core Teaching Rubric (TNTP, 2017). The TNTP Core Rubric was chosen because its four dimensions are conceptually well aligned with the four dimensions of CREATe. Data collected by the observers included observation notes and rating scores. Notes and scores were collected either on paper or electronically on a laptop. Notes for the TNTP Core Rubric were sent to TNTP for compilation, and a score report was generated for each of the observed teachers. Notes and ratings for CREATe were collected using a CREATe Score Sheet; ratings were manually entered into a spreadsheet. Data sources also include communications between the research team and district personnel as well as project documents such as timelines, calendars, and meeting notes. Analyses include confirmatory factor analysis, Pearson correlation, document analysis, and qualitative coding. Document analysis also suggests that CREATe has face validity when compared with Core. 
The four dimensions of CREATe were well aligned with the four TNTP dimensions: Culture of Learning - Positive Environment; Academic Ownership - Instructional Design and Implementation; Essential Content - Rigorous and Appropriate Content; Demonstration of Learning - Reflection-in-Action (Core - CREATe) (see Figure 1). Regarding concurrent validity, CREATe and Core measure performance similarly; average performance on the Core rubric was correlated with average performance on CREATe (r=0.35, p<0.10). This correlation suggests that mean performance was consistent across the two instruments. When the teachers were ranked according to average Core and CREATe scores, the rankings aligned 100%. Confirmatory factor analysis (CFA) of CREATe suggests that the four proposed dimensions of the rubric hold up relatively well, with the exception of Items 12 (Content Accessibility) and 13 (Interdisciplinary Integration). CREATe scores teachers on 14 indicators distributed across four dimensions. With the exception of the third dimension (which includes Items 12 and 13), a CFA and an analysis of internal consistency both suggest that the other dimensions of the rubric hold together as outlined in the original design of the rubric (CFA, p<0.01; factor loadings excluding Items 12/13; α>0.8). Overall, findings suggest that CREATe differentiates performance trends across three of its four predetermined dimensions (internal consistency/reliability), and that overall performance trends on CREATe are consistent with those on a different observation instrument, suggesting that CREATe measures teacher performance relatively well. Qualitative coding indicated that qualitative feedback for CREATe provides more evidence about the lesson itself, while qualitative feedback for Core provides more evidence about the scoring decision. This finding suggests fidelity in use of CREATe, as the protocol calls for time-stamped scripted notes of classroom activities. |
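The concurrent-validity statistic reported above is a Pearson correlation between teachers' average scores on the two rubrics. A minimal sketch of that computation follows; the paired mean scores below are hypothetical illustrations, not the study data.

```python
import math

def pearson_r(x, y):
    """Pearson correlation between paired mean observation scores."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical mean scores for five teachers on the two instruments.
create_means = [3.1, 2.4, 4.0, 3.5, 2.8]  # CREATe (7-point continuum)
core_means = [2.9, 2.2, 3.8, 3.6, 2.5]    # TNTP Core

r = pearson_r(create_means, core_means)
```

With real study data, this is the r value reported for the 24 observed teachers; ranking teachers by each list of means would likewise reproduce the rank-alignment check described above.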
Evidence (or plans) regarding fairness |
Survey and interview data from various stakeholders engaged with the CREATe Rubric have been collected to better understand their experience with the measure. The CREATe Rubric has been well received by district leadership and candidates. Coaches and faculty have been split in their reception of the CREATe Rubric. Some of their concerns center on the length and cognitive load of the tool, as well as disagreements about who is responsible for introducing the rubric to the credential students. When the faculty who were the lead developers of CREATe moved on from the program and coaches continued to question the utility of the CREATe Rubric, an assessment of all of the available rubrics was conducted. Additionally, presentations were given and surveys administered to gather coach, faculty, and district feedback. Based on this input, a decision was made in December 2020 to transition to the New Teacher Project (TNTP) Core rubric, which Chico State adapted to align with the CTC Standards. A Rubric Advisory Board was formed, consisting of representatives from all three basic credential programs, and an implementation timeline plus additional adaptations were made. However, by March 2020 California had moved into its first COVID-19 lockdown, putting the new formative rubric implementation timeline on hold. Of all the stakeholders, the candidates themselves have demonstrated the most positive responses. Candidates appreciate the explicit teacher behaviors in CREATe and their alignment to the TPEs. This helps candidates make connections between the TPEs/CSTPs and their enacted practice. It also helps candidates operationalize the concepts of developmentally appropriate practices, universal design for learning, culturally and linguistically sustaining practices, and inquiry. CREATe helps candidates construct what teaching looks like and focus on specific moves they can practice and improve. The quotes below demonstrate candidate perceptions of CREATe. 
“The Create Rubric worked well for me in my clinical practice. It was a great guideline in what I need to be striving for and if I was meeting my goals as a teacher. In my opinion, without a rubric, we wouldn't have anything to base our teaching on, or have anything to strive for. It also sets a great goal and framework as teachers in different categories.” -Teacher Candidate Spring 2018 “The use of the CREATe Rubric as a planning tool for preservice teachers is a great resource because it helps us guide our lesson planning and implementation of the different aspects of what as teachers we should be doing every single day.” -Teacher Candidate Fall 2018 One thing is evident: ALL candidates responded well to a formative rubric that provided common language to make the skills of teaching more visible so they could receive actionable feedback from their coaches. This makes moving toward initial implementation of the new rubric all the more pressing. |
Quantitative Data Measure: Fresno Assessment of Student Teachers II (FAST II) | |
Description of Measure | FAST II consists of two projects: the Site Visitation Project (SVP) is completed during initial student teaching (EHD 178) and the Teaching Sample Project (TSP) is completed during final student teaching (EHD 170). The SVP assesses teacher candidates’ ability to plan, implement, and evaluate instruction. The three parts of the project include (1) Planning: planning documentation for a single lesson incorporating state-adopted content standards and English language development, (2) Implementation: an in-person observation and videotaping of the teaching of the lesson, (3) Reflection: a review of the entire video, selection of a 3- to 5-minute video segment, and a written evaluation of the lesson. (TPE 1.1, 1.3, 1.5, 1.8, 2.2, 2.6, 3.1, 3.2, 3.3, 3.5, 4.1, 4.2, 4.7, 6.1). The Teaching Sample Project assesses teacher candidates’ ability to (a) identify the context of the classroom, (b) plan and teach a series of at least five cohesive lessons with a focus on content knowledge and literacy, (c) assess students’ learning related to the unit, (d) document their teaching and their students’ learning, and (e) reflect on the effectiveness of their teaching. Teacher candidates document how they are addressing the needs of all their students in the planning, teaching, and assessing of the content. (TPE 1.5, 1.6, 1.8, 2.1, 2.3, 2.6, 3.1, 3.2, 3.3, 4.1, 4.3, 4.4, 4.7, 5.1, 5.2, 5.5, 5.8, 6.1, 6.3, 6.5). |
Evidence (or plans) regarding validity | The SVP assesses the candidate’s ability to plan, implement, and reflect upon instruction. Each of these abilities is assessed with a performance task: the lesson plan (planning), teaching the lesson (implementation), and self-evaluation of the lesson (reflection). In order to assess the Teaching Performance Expectations (TPEs), each task has a rubric, and the rubrics share the same categories: subject-specific pedagogy, applying knowledge of students, and student engagement. The categories are rated on a 4-point scale (1 - does not meet expectations, 2 - meets expectations, 3 - meets expectations at a high level, 4 - exceeds expectations). The wording in the rubrics is adapted to each of the three specific tasks. Data from the FAST indicate that students are developing the competencies that are essential to effective classroom teaching practice. |
Evidence (or plans) regarding reliability | Every two years, a psychometric analysis of the Site Visitation Project (SVP) is performed. In our most recent analysis, 15% of SVPs were double scored; on 70% of these the two scorers gave identical scores, 100% of scores were within ±1 point, and scorers agreed on the pass/fail determination for 94.7%. |
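The double-scoring statistics above (exact agreement, agreement within ±1 point, and pass/fail agreement) can be sketched as follows. The score pairs and the pass cut-point are hypothetical illustrations, not the actual SVP data.

```python
def agreement_stats(pairs, pass_cut=2):
    """Exact, adjacent (within +/-1), and pass/fail agreement rates
    for double-scored projects. Each pair is (score_1, score_2) on
    the 4-point SVP scale; pass_cut=2 ('meets expectations') is an
    assumed passing threshold, not the program's documented one."""
    n = len(pairs)
    exact = sum(a == b for a, b in pairs) / n
    adjacent = sum(abs(a - b) <= 1 for a, b in pairs) / n
    pass_fail = sum((a >= pass_cut) == (b >= pass_cut) for a, b in pairs) / n
    return exact, adjacent, pass_fail

# Hypothetical double-scored ratings from five paired readings.
pairs = [(2, 2), (3, 3), (2, 3), (4, 4), (1, 2)]
exact, adjacent, pass_fail = agreement_stats(pairs)
```

Applied to the full set of double-scored SVPs, these three rates correspond to the 70%, 100%, and 94.7% figures reported above.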
Evidence (or plans) regarding fairness | To monitor equity, the three subtests and the final score were examined as part of our psychometric analysis for differences based on students’ ethnicity, gender, whether the student’s first language was English, the student’s self-rated degree of English language fluency on a 5-point Likert scale, and self-reported disability. To examine scoring equity, a series of non-parametric statistical tests was calculated to determine whether significant differences in scoring corresponded to students’ demographic characteristics. Across the three subtests, only one comparison showed a statistically significant difference: the self-rated degree of English language fluency on the observation task. Statistical analyses for disability were not conducted because of the very small sample size of 2 students self-reporting a disability; their scores were tabulated and inspected, and all were passing. |
Evidence regarding Trustworthiness | Developed over a number of years with the support of the Renaissance Group and a Title II grant, the FAST addresses each of California’s TPEs. Each assessment is scored by at least two faculty members, including the university coach assigned to mentor the teacher candidate. Mandatory calibration sessions are held annually, and all scorers must participate in the norming process each year. The inter-rater reliability is higher than the norm for such assessments. Moreover, students who fail the assessment have the opportunity to revise and resubmit. |
Quantitative Data Measure: Journey Mapping | |
Description of Measure | Journey maps help us understand candidates’ experiences, and they provide unique insights because they document individual candidates’ experiences over time by anchoring them to memorable emotional highs and lows during their time in the program (Rains, 2017). The Journey Map is collected as a reflective, in-class activity facilitated by the instructor of record or a member of the program research team. As a result, journey maps offer opportunities for reflection at multiple levels: at the individual candidate level; at the individual instructor level, where instructors can reflect on their class of candidates as a whole; and at the program level, where trends can be identified across time. |
Evidence (or plans) regarding validity | The journey map measures what it is intended to measure, in that it captures students' experiences of program milestones and their own memorable emotional highs and lows in the program in a way that privileges a qualitative approach. |
Evidence (or plans) regarding reliability | n/a |
Evidence (or plans) regarding fairness | The journey maps are collected during the last two weeks of the final phase of the program offering all students the opportunity to participate in providing feedback on their lived experience in the program. It can be administered either face to face or virtually. |
Evidence regarding Trustworthiness | Journey maps are coded to identify emergent themes pointing to potential areas for program improvement. After reading through a sample of the journey maps to become familiar with the data, a focused coding scheme was developed to help us look for similar information across all of the candidates’ maps while also noting new themes as they emerged. The focused coding scheme also allowed us to engage in interrater reliability practices. Two coders first meet to discuss what they see in the same journey map; they then analyze the map using the scheme and discuss where their analyses confirm or disconfirm each other in order to determine how to interpret the scheme moving forward. The coders then analyze a set of five journey maps from the same cohort and meet again to compare and discuss their analyses. Once the coders see alignment in their coding practices, they each code a full cohort. Then a third coder, who has also been trained to use the coding scheme, codes 10% of the other coders’ maps to compare the analyses, as one way to strengthen the trustworthiness of the journey map data. The focused coding scheme allowed us to calculate counts of candidates' experiences while we also paid close attention to, and documented, their experiences from a nuanced, qualitative perspective. Events were coded as either negative or positive, and program highlights were also identified. |
Quantitative Data Measure: CSU Educator Quality Center Completer Survey | |
Description of Measure | The California State University’s Education Quality Center (EdQ) oversees the administration of a completer survey of exiting candidates of all CSU teacher-preparation programs. The survey is available year-round, and campuses are encouraged to make completion of the survey a component of graduates’ final paperwork. The survey contains items asking about candidates’ perceptions of various aspects of the preparation program and the field placement experience. Campuses have access to annual results from the survey through the EdQ Dashboard. Results can be disaggregated by various measures including campus, year of completion, respondent race/ethnicity, and type of credential. Note: the CTC also distributes a Credential Program Completer Survey, which gives an overall view of CA Educator Preparation Programs. |
Evidence (or plans) regarding validity | Used systemwide, the survey serves as a valid measure of program completers’ perceptions of the teacher preparation program because it asks questions directly aligned with the California Teacher Performance Expectations and California Standards for the Teaching Profession. Additionally, the survey’s content is tailored to the type of program each respondent completed, making the content valid for each individual. For example, the survey for a Single Subject English program completer contains an item about how well the program prepared them to develop students' understanding and use of academic language and vocabulary whereas the survey for a Single Subject Social Science program completer contains an item about how well the program prepared them to develop students' Historical Interpretation skills. All program completers respond to items asking about their preparation of general pedagogical skills, such as their perception of how well the program prepared them to differentiate instruction in the classroom. In this way, the survey is a valid measure of completers’ perceptions of the program. |
Evidence (or plans) regarding reliability | Uncertainty about evaluation findings comes from two principal sources: the number of evaluation participants and the extent of their concurrence with each other. The evaluation findings become increasingly certain to the extent that the questions are answered by increasing numbers of program completers and their employment supervisors. Each year the data set yields the percent of respondents who gave specified answers to each item and includes reliability estimates in the form of confidence intervals based on the number of respondents and the concurrence or homogeneity of responses. The CSU Deans of Education grouped together questions into "composites" (e.g., Preparing for Equity and Diversity in Education) for a more reliable interpretation. The reliability for the composite scores for the system and the individual campuses generally range from 0 to 2 percentage points at the 90% confidence level. |
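The reliability estimate described above is a confidence interval whose width depends on the number of respondents and the homogeneity of their answers. A minimal sketch using a normal approximation follows; the proportion, sample size, and 90% level (z = 1.645) are illustrative assumptions, not the EdQ report's published method or data.

```python
import math

def proportion_ci(p_hat, n, z=1.645):
    """Normal-approximation confidence interval for a survey proportion.
    z = 1.645 corresponds to the 90% confidence level; the half-width
    shrinks as n grows or as responses become more homogeneous."""
    half_width = z * math.sqrt(p_hat * (1 - p_hat) / n)
    return max(0.0, p_hat - half_width), min(1.0, p_hat + half_width)

# Hypothetical composite: 85% favorable responses from 400 completers.
lo, hi = proportion_ci(0.85, 400)
```

This illustrates the two sources of uncertainty named above: a larger n or a p_hat closer to 0 or 1 both narrow the interval, which is why composites with many respondents and homogeneous answers carry intervals of only a few percentage points.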
Evidence (or plans) regarding fairness |
The existence of this CSU-wide service allows each campus to track the effects of program changes designed to improve performance. Because the instrument was designed and is implemented systemwide with graduates throughout the state, we believe it is a fair and trustworthy measure. Fresno State has initiated a college-wide data summit to consider the findings of this statewide survey and triangulate them with campus data, including the percentage of First Generation students, access to resources like scholarships, and the culture and context of the cohorts in which prospective teachers are placed. Through this triangulation process, we are able to determine the alignment of the findings from the survey with our other measures, further assuring us of the survey’s trustworthiness as an instrument. In the process, we are also able to assess the impact of program changes on our own students with respect to the unique diversity of culture and needs in the Central Valley. |
Quantitative Data Measure: CCTC Employer Survey | |
Description of Measure |
Beginning in 2019, the CTC sends an annual survey to employers of recent completers of all educator preparation programs in the state. The goal of the survey is to compile evidence statewide about the extent to which K-12 educators are prepared for their most important responsibilities. The survey includes 11 items. The first five of these ask basic demographic details about where the new educator was prepared and about the responding employer. The remaining six items are aligned with the California Teaching Performance Expectations. The results of the survey are disaggregated for CSUs, UCs, Private institutions, and Local Education Agencies. 79% of respondents are principals. Of the 766 employers who responded to the survey in 2018-2019 (the last year for which data are available), 53% employed recent graduates of a CSU teacher preparation program. |
Evidence (or plans) regarding validity |
Used statewide, the survey serves as a valid measure of employers' perceptions of how well educator preparation programs prepared alumni for their first year. Additionally, items included in the survey are directly aligned with the California Teacher Performance Expectations. All employers respond to items asking about the preparation of new teachers in general pedagogical skills, such as their perception of how well the program prepared new teachers to differentiate instruction in the classroom. In this way, the survey is a valid measure of employers’ perceptions of the program. |
Evidence (or plans) regarding reliability |
Uncertainty about evaluation findings comes from two principal sources: the number of evaluation participants and the extent of their concurrence with each other. The evaluation findings become increasingly certain to the extent that the questions are answered by increasing numbers of program completers and their employment supervisors. Each year the data set yields the percent of respondents who gave specified answers to each item and includes reliability estimates in the form of confidence intervals based on the number of respondents and the concurrence or homogeneity of responses. |
Evidence (or plans) regarding fairness |
Surveys are sent to all administrators within the state, giving all the opportunity to share their perspective on the educator preparation program their new teachers attended. In this way, the survey does not discriminate in who is invited to respond and whose voice is heard. Additionally, employers have a three-month window (October 1-December 31 annually) in which to complete the survey, providing ample opportunity to respond. The existence of this CCTC-wide service allows each preparation program to track the effects of program changes designed to improve performance. Because the instrument was designed and is implemented to employers throughout the state, we believe it is a fair and trustworthy measure. |
Quantitative Data Measure: CSU Year One Completer Survey | |
Description of Measure |
The California State University’s Education Quality Center (EdQ) oversees the administration of a survey, administered after the first year on the job, of all individuals who completed a CSU teacher-preparation program. The survey is administered annually April through July. In April, the EdQ Center emails an initial survey invitation to all completers of MS-SSES Credential Programs serving as first-year teachers in public schools, charter schools, or private schools in all locations. Follow-up reminders are sent every two weeks throughout the duration of the survey window. In addition to asking questions about the completer’s demographics and educational background, the survey also contains items to capture data about the school where the completer is employed. Additionally, the survey includes items asking about candidates’ perceptions of various aspects of the preparation program and the field placement experience. Campuses have access to annual results from the survey through the EdQ Dashboard. Results can be disaggregated by various measures including campus, year of completion, respondent race/ethnicity, and type of credential. Note: the CTC also distributes a Credential Program Completer Survey, which gives an overall view of CA Educator Preparation Programs. |
Evidence (or plans) regarding validity |
Used systemwide, the survey serves as a valid measure of graduates' perceptions of how well the teacher preparation program prepared them for their first year of teaching because it asks questions directly aligned with the California Teacher Performance Expectations and California Standards for the Teaching Profession. Additionally, the survey’s content is tailored to the type of program each respondent completed, making the content valid for each individual. For example, the survey for a Single Subject English teacher contains an item about how well the program prepared them to develop students' understanding and use of academic language and vocabulary, whereas the survey for a Single Subject Social Science teacher contains an item about how well the program prepared them to develop students' Historical Interpretation skills. Similarly, surveys sent to teachers with Multiple Subjects credentials or Educational Specialist credentials contain items directly aligned to the standards associated with their credentials. All graduates respond to items asking about their preparation in general pedagogical skills, such as their perception of how well the program prepared them to differentiate instruction in the classroom. In this way, the survey is a valid measure of completers’ perceptions of the program. |
Evidence (or plans) regarding reliability |
Uncertainty about evaluation findings comes from two principal sources: the number of evaluation participants and the extent of their concurrence with each other. The evaluation findings become increasingly certain to the extent that the questions are answered by increasing numbers of program completers and their employment supervisors. Each year the data set yields the percent of respondents who gave specified answers to each item and includes reliability estimates in the form of confidence intervals based on the number of respondents and the concurrence or homogeneity of responses. The CSU Deans of Education grouped together questions into "composites" (e.g., Preparing for Equity and Diversity in Education) for a more reliable interpretation. The reliability for the composite scores for the system and the individual campuses generally range from 0 to 2 percentage points at the 90% confidence level. |
Evidence (or plans) regarding fairness |
The data were not constructed with bias; they show positive predictive value (statistical parity) among groups and support equalized odds. The existence of this CSU-wide service allows each campus to track the effects of program changes designed to improve performance. Because the instrument was designed and is implemented systemwide with completers throughout the state, we believe it is a fair and trustworthy measure. Fresno State has initiated a college-wide data summit to consider the findings of this statewide survey and triangulate them with campus data, including the percentage of First Generation students, access to resources like scholarships, and the culture and context of the cohorts in which prospective teachers are placed. Through this triangulation process, we are able to determine the alignment of the findings from the survey with our other measures, further assuring us of the survey’s trustworthiness as an instrument. In the process, we are also able to assess the impact of program changes on our own students with respect to the unique diversity of culture and needs in the Central Valley. |
Quantitative Data Measure: Pre and Post Dispositions Survey | |
Description of Measure | Complementing the CSU Completer Exit Survey of graduates, the Pre-Post Dispositions Survey is administered at the beginning and end of each fieldwork course (EHD 178 and EHD 170). The California Commission on Teacher Credentialing (CCTC) requires all candidates to demonstrate personality and character traits that satisfy the standards of the teaching profession, assessed here through a 9-item measure of these traits. Thus, Fresno State developed this Teacher Commitment Statement, consisting of six professional dispositions, which teacher candidates complete as part of their entrance requirements and again when they complete their credential program. |
Evidence (or plans) regarding validity | Items included within the commitment statement align with the dispositions the CTC requires of candidates credentialed to teach within the state. It should be noted that commitment statements rely upon self-reported dispositions, which can be inaccurately represented. |
Evidence (or plans) regarding reliability | Reliability of the candidate’s commitment is dependent upon self-reported dispositions, which can be inaccurately represented. |
Evidence (or plans) regarding fairness | The commitment statement is intended to reinforce the value of fairness among the candidates, as well as the expectation that candidates hold non-biased dispositions toward students of all backgrounds, languages, cultures, and experiences. When incoming candidates or exiting graduates respond aberrantly on this commitment statement, they are identified, counseled, and advised about their pursuit of the profession. In the rare instance that a candidate does not agree with the necessary commitments, the candidate may resubmit. |
Quantitative Data Measure: Course grades/Candidate Performance in Courses | |
Description of Measure | Credential candidates must maintain a 3.00 GPA in all credential courses with no individual grade lower than a “C”. Any grade listed as “I”, “IC”, “WU”, “NC”, “D”, or “F” does not meet Fresno State’s credential program requirements. |
Evidence (or plans) regarding validity | Course grades of ‘C’ or ‘Credit’ are required for program completion. Therefore, with few exceptions, all candidates must complete and receive a ‘C’ grade or better in the articulated courses. Faculty hired to teach within the programs are considered experts in their specific fields, and all have relevant experience with the content of the courses they teach. The CTC requires a 3.0 GPA for the preliminary credential. |
Evidence (or plans) regarding reliability | Requirements in courses stay relatively consistent over time because they are aligned with the CCTC program standards. In addition, the courses are staffed, on the whole, by the same faculty. |
Evidence (or plans) regarding fairness | Course grades are required at the end of the course. With few exceptions, all program completers must complete and receive a grade for courses taken. For each course, specific details are provided within course syllabi about requirements for course assignments and to earn a passing grade. |
Quantitative Data Measure: Sample Narrative Community Context Assignment | |
Description of Measure | For this assignment, teacher candidates conduct a thorough examination of the community, the school, and the specific classroom in which they are completing their fieldwork. Teacher candidates are required to research the community in which they serve to better understand and support the students in the classroom in which they complete their fieldwork experience. Candidates are also required to develop an understanding of the school in which they work: its intricacies, its demographics, and the aspects that make it special. Finally, teacher candidates get to know the students in their classroom on a deeper level and determine how best to meet the needs of all of their students. Students are instructed to use asset-based perspectives when referring to each aspect of the context. |
Evidence (or plans) regarding validity | The data source measures the content it intends to measure through a predetermined coding scheme focused on asset-based, strengths-driven perspectives (Green & Haines, 2011) and on the specific cultural capital that students, families, and communities contribute to the classroom and community (Yosso, 2005). The coders are instructors who teach the inquiry and puzzle of practice series and have deep knowledge of the literature used to develop the focused coding scheme. The coders review their codes together to norm the coding process, which helps increase the overall validity of the process. |
Evidence (or plans) regarding reliability | This is a new measure and reliability will be assessed to determine whether the focused coding scheme is applied consistently by the trained raters over time. |
Evidence (or plans) regarding fairness | Student work from phases 2 and 3 of the inquiry and puzzle of practice series is randomly selected for the sample. |
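The coder-norming and reliability plans above are often quantified with Cohen's kappa, which corrects raw percent agreement between two trained raters for chance agreement. A minimal, hypothetical sketch; the coder names and codes below are invented for illustration and are not program data:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters' categorical codes on the same artifacts."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: fraction of artifacts coded identically.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: product of each rater's marginal code frequencies.
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    categories = set(rater_a) | set(rater_b)
    expected = sum((counts_a[c] / n) * (counts_b[c] / n) for c in categories)
    return (observed - expected) / (1 - expected)

# Two hypothetical coders applying asset-based codes to ten work samples.
coder_1 = ["asset", "asset", "deficit", "asset", "capital",
           "asset", "deficit", "capital", "asset", "asset"]
coder_2 = ["asset", "asset", "deficit", "capital", "capital",
           "asset", "deficit", "capital", "asset", "deficit"]
print(f"{cohens_kappa(coder_1, coder_2):.4f}")  # → 0.6875
```

Values near 1 indicate agreement well beyond chance; values near 0 indicate agreement no better than chance, which is why kappa is preferred over raw percent agreement for checking whether a coding scheme is applied consistently.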
Quantitative Data Measure: New Teacher Goals for Continuous Improvement | |
Description of Measure | New teachers are introduced to and provided support in using new tools to help them set goals that focus their continuous improvement efforts. This new teacher development practice is part of the teacher evaluation process, which is recognized as a desirable method to achieve the improvement of instruction, to identify skills and abilities that contribute to the success of the educational program, and to redirect skills and abilities that do not result in optimum student growth. The goals of the evaluation are as follows: |
Evidence (or plans) regarding validity | District Instructional Coaches are trained to support new teachers in developing an evaluation plan in which they set goals for continuous improvement during the first six weeks of the academic year. Over time, the district instructional coaches introduce the new teachers to tools aimed at helping them improve their instructional practice and log how often a new teacher uses the various tools. |
Evidence (or plans) regarding reliability | District induction coaches are trained to use the observation rubric. Norming and calibration are consistently checked during induction coach meetings. |
Evidence (or plans) regarding fairness | The process is governed by the new teacher’s collective bargaining agreement which offers the teachers protections and clearly defined processes. |
Quantitative Data Measure: Formative & Summative New Teacher Evaluation | |
Description of Measure |
Fresno Unified is the third-largest school district in California, educating just over 73,000 K-12 students in the 2019-2020 academic year (CDE, n.d.). Of those students, 68% identified as Latinx, 10% identified as Asian, 9% identified as White, 8% identified as Black, and 2% identified with two or more races, and 60% of those children are dual-language learners (CDE Dataquest, n.d.). Additionally, 85.9% of the students in the local school district receive free or reduced-price meals (CDE, n.d.). Fresno Unified also consistently places the largest number of Multiple Subject Credential Candidates with experienced mentor teachers in their district for their clinical experience (N = 73; 26%). Moreover, Fresno Unified hires the majority of our MS program completers, with 60-70 new hires each year. For these reasons, we launched a plan in March 2020 under which Fresno Unified agreed to strengthen their data collection systems so that we could receive their teacher induction program data. The Formative & Summative New Teacher Evaluation is recognized as a desirable method to achieve the improvement of instruction, to identify skills and abilities that contribute to the success of the educational program, and to redirect skills and abilities that do not result in optimum student growth. The goals of the evaluation are as follows:
District Instructional Coaches are trained and calibrated to use the district’s formative and summative evaluation rubric, which is aligned with the California Standards for the Teaching Profession (CSTP) and, in turn, with the Teaching Performance Expectations that guide our program’s curriculum and clinical experiences. New teachers are formally observed twice per year. The first formal observation takes place by the end of November and is formative in nature. A full lesson is observed and followed up by a debrief within five days of the observation. The second formal observation takes place by the end of May and is summative in nature. |
Evidence (or plans) regarding validity | Tool developed by the New Teachers Center that measures employees’ strengths and areas of improvement over time. |
Evidence (or plans) regarding reliability | District induction coaches are trained to use the observation rubric. Norming and calibration are consistently checked during induction coach meetings. |
Evidence (or plans) regarding fairness | The process is governed by the new teacher’s collective bargaining agreement which offers the teachers protections and clearly defined processes. |
Quantitative Data Measure: New Teacher Professional Learning Participation | |
Description of Measure | Fresno Unified is the third-largest school district in California, educating just over 73,000 K-12 students in the 2019-2020 academic year (CDE, n.d.). Of those students, 68% identified as Latinx, 10% identified as Asian, 9% identified as White, 8% identified as Black, and 2% identified with two or more races, and 60% of those children are dual-language learners (CDE Dataquest, n.d.). Additionally, 85.9% of the students in the local school district receive free or reduced-price meals (CDE, n.d.). Fresno Unified also consistently places the largest number of Multiple Subject Credential Candidates with experienced mentor teachers in their district for their clinical experience (N = 73; 26%). Moreover, Fresno Unified hires the majority of our MS program completers, with 60-70 new hires each year. For these reasons, we launched a plan in March 2020 under which Fresno Unified agreed to strengthen their data collection systems so that we could receive their teacher induction program data. New teachers are invited to participate in four-hour professional learning sessions twice a month on Saturdays. During these sessions, new teachers work collaboratively with their colleagues to examine their practice and discuss how to apply new practices in their classrooms. New teacher attendance at the professional learning sessions is an indicator of their ability to collaborate with various colleagues, learn from one another, and share knowledge and resources. |
Evidence (or plans) regarding validity | Attendance data measure the frequency with which new teachers participate in Saturday PL, which is what we intend to measure with these data. |
Evidence (or plans) regarding reliability | New teachers are reminded to sign in through an electronic form, which then links to the Teacher Development Database. Teachers are accustomed to the routine of signing in to verify their participation in PL. |
Evidence (or plans) regarding fairness | All new teachers are invited to participate at no cost. |
Bilingual Authorization Program
Quantitative Data Measure: BAP completer survey | |
Description of Measure | The BAP completer survey has four sections: past education, experience in the credential program, experience in the BAP, and employment. The main purpose of this survey is to obtain feedback from recent graduates regarding their experiences in the BAP and credential programs. |
Evidence (or plans) regarding validity | The first draft of this survey was created and disseminated in spring 2021. Plans to achieve validity for this survey include workshopping it with the BAP advisory committee so that all stakeholders, who represent district partners, clinical practice coaches, counselors, the multiple subject credential program coordinator, the liberal studies chair, and faculty, can help ensure that state bilingual authorization standards, AAQEP standards, teacher performance expectations (TPEs), and other related standards and measures used to assess bilingual authorization programs are met. |
Evidence (or plans) regarding reliability | The plan to achieve reliability for this survey includes solidifying a final draft to disseminate every year. During the 2021-2022 academic year, the BAP advisory committee will work on several drafts of this survey to ensure that everything, from the questions asked to their wording, is agreed upon by all stakeholders. It will also be workshopped for feedback among non-advisory stakeholders, like current students, to make sure that respondents understand the questions clearly. This will ensure that the survey results do not vary too drastically each time data are collected. In addition, once this survey is disseminated, reliability for this instrument will be confirmed by analyzing the number of respondents and the extent to which responses are answered similarly. For example, the percentage of specified answers to each question will be analyzed in order to estimate reliability using confidence intervals. |
Evidence (or plans) regarding fairness | Fairness for this survey will be established because it will be disseminated among program graduates who have completed all the same requirements as indicated by state bilingual authorization standards. Moreover, the rigorous engagement of the program advisory committee in creating and finalizing this survey further strengthens its fairness. |
Quantitative Data Measure: BAP employer survey | |
Description of Measure | The current rough draft of the employer survey has two sections: district context and teacher performance. The main purpose of this survey is to obtain feedback from employers that have hired BAP graduates in their districts/schools. The focus is on graduate preparedness as indicated by AAQEP standards 1a-f. |
Evidence (or plans) regarding validity | The plan to achieve validity for this draft survey includes workshopping it with the BAP advisory committee so that all stakeholders, who represent district partners, clinical practice coaches, counselors, the multiple subject credential program coordinator, the liberal studies chair, and faculty, can help ensure that state bilingual authorization standards, AAQEP standards, teacher performance expectations (TPEs), and other related standards and measures used to assess bilingual authorization programs are met. |
Evidence (or plans) regarding reliability | The plan to achieve reliability for this survey includes solidifying a final draft to disseminate every year. During the 2021-2022 academic year, the BAP advisory committee will work on several drafts of this survey to ensure that everything, from the questions asked to their wording, is agreed upon by all stakeholders. It will also be workshopped for feedback among non-advisory stakeholders, like current students, to make sure that respondents understand the questions clearly. This will ensure that the survey results do not vary too drastically each time data are collected. In addition, once this survey is disseminated, reliability for this instrument will be confirmed by analyzing the number of respondents and the extent to which responses are answered similarly. For example, the percentage of specified answers to each question will be analyzed in order to estimate reliability using confidence intervals. |
Evidence (or plans) regarding fairness | Fairness for this survey will be established because it will be disseminated among district partners and additional known districts that hire BAP graduates. Moreover, the rigorous engagement of the program advisory committee, which includes employing districts, in creating and finalizing this survey further strengthens its fairness. |
Quantitative Data Measure: BAP alumni survey | |
Description of Measure | The current rough draft of the alumni survey has two sections: employment, and professional competence and growth. The main purpose of this survey is to follow up with BAP graduates one or more years after completing the program to determine whether they are employed as bilingual/dual immersion teachers, as well as to assess the maintenance of AAQEP professional standards 1a-f. |
Evidence (or plans) regarding validity | The plan to achieve validity for this draft survey includes workshopping it with the BAP advisory committee so that all stakeholders, who represent district partners, clinical practice coaches, counselors, the multiple subject credential program coordinator, the liberal studies chair, and faculty, can help ensure that state bilingual authorization standards, AAQEP standards, teacher performance expectations (TPEs), and other related standards and measures used to assess bilingual authorization programs are met. |
Evidence (or plans) regarding reliability | The plan to achieve reliability for this survey includes solidifying a final draft to disseminate every year. During the 2021-2022 academic year, the BAP advisory committee will work on several drafts of this survey to ensure that everything, from the questions asked to their wording, is agreed upon by all stakeholders. It will also be workshopped for feedback among non-advisory stakeholders, like current students, to make sure that respondents understand the questions clearly. This will ensure that the survey results do not vary too drastically each time data are collected. In addition, once this survey is disseminated, reliability for this instrument will be confirmed by analyzing the number of respondents and the extent to which responses are answered similarly. For example, the percentage of specified answers to each question will be analyzed in order to estimate reliability using confidence intervals. |
Evidence (or plans) regarding fairness | Fairness for this survey will be established because it will be disseminated among program graduates who have completed all the same requirements as indicated by state bilingual authorization standards. Moreover, the rigorous engagement of the program advisory committee in creating and finalizing this survey further strengthens its fairness. |
Quantitative Data Measure: LEE 136 Spanish Lesson Plan | |
Description of Measure | A key assignment from the LEE 136 course for the Spanish BAP pathway is the creation of a Spanish lesson plan that encompasses the tenets of culturally and linguistically sustaining pedagogy around a central instructional focus or theme and at least one children’s text. The central focus takes into account knowledge of students’ language development, backgrounds, interests, and learning levels that might further influence students’ thinking and learning. This lesson assignment also embodies the learning goals of the course, which include: 1) the impact of language and culture on teaching and learning in the elementary school, 2) language acquisition theory, the socio-cultural context of teaching, and instructional strategies for Emergent Bilinguals in the classroom, and 3) strategies to promote student success, including achievement of Common Core state-adopted content and English Language Development (ELD) standards. |
Evidence (or plans) regarding validity | The lesson plan rubric demonstrates alignment among the CCSS, the language objectives, and the learning tasks and assessments, all of which relate to an identifiable theme or topic. |
Evidence (or plans) regarding reliability | The assignment and rubric were created by program faculty. All program faculty who teach this course use this assignment and rubric. Moving forward, program faculty will meet to calibrate the rubric to ensure all faculty use it to assess this assignment in the same way. |
Evidence (or plans) regarding fairness | All students have access to the assignment requirements and rubric ahead of time. |
Quantitative Data Measure: LEE 136 Literacy, Language, and Culture Story/Project | |
Description of Measure | To further develop their learning about learners, in LEE 136, candidates complete a literacy, language, and culture project with a focal student. In this assignment, candidates identify one focal student to observe and get to know, preferably a child who is labeled Emergent Bilingual (also called English Language Learner (ELL)). Candidates work alongside their focal student and learn about her/his interests, literacy, language, and culture. They are required to take detailed notes about their conversations after all interactions and then discuss the student’s profile during class time. At the end of the semester, candidates prepare a shadowbox with their student and submit a 1-page, single-spaced narrative about the student. |
Evidence (or plans) regarding validity | The focus of this course is teaching content in Spanish. We believe this assignment is valid for this focus because it asks the candidate to work with an emergent bilingual student, speaking with the student in Spanish to build a relationship, learn about the student’s interests, and create the project together. |
Evidence (or plans) regarding reliability | The assignment and rubric were created by program faculty. All program faculty who teach this course use this assignment and rubric. Moving forward, program faculty will meet to calibrate the rubric to ensure all faculty use it to assess this assignment in the same way. |
Evidence (or plans) regarding fairness | All students have access to the assignment requirements and rubric ahead of time. |
Single Subject Credential Program
Quantitative Data Measure: Fresno Assessment of Student Teachers II (FAST II) | |
Description of Measure | FAST II consists of two projects: the Site Visitation Project (SVP) is completed during initial student teaching (EHD 178) and the Teaching Sample Project (TSP) is completed during final student teaching (EHD 170). The SVP assesses teacher candidates’ ability to plan, implement, and evaluate instruction. The three parts of the project include (1) Planning: planning documentation for a single lesson incorporating state-adopted content standards and English language development, (2) Implementation: an in-person observation and videotaping of the teaching of the lesson, (3) Reflection: a review of the entire video, selection of a 3- to 5-minute video segment, and a written evaluation of the lesson. (TPE 1.1, 1.3, 1.5, 1.8, 2.2, 2.6, 3.1, 3.2, 3.3, 3.5, 4.1, 4.2, 4.7, 6.1). The Teaching Sample Project assesses teacher candidates’ ability to (a) identify the context of the classroom, (b) plan and teach a series of at least five cohesive lessons with a focus on content knowledge and literacy, (c) assess students’ learning related to the unit, (d) document their teaching and their students’ learning, and (e) reflect on the effectiveness of their teaching. Teacher candidates document how they are addressing the needs of all their students in the planning, teaching, and assessing of the content. (TPE 1.5, 1.6, 1.8, 2.1, 2.3, 2.6, 3.1, 3.2, 3.3, 4.1, 4.3, 4.4, 4.7, 5.1, 5.2, 5.5, 5.8, 6.1, 6.3, 6.5). |
Evidence (or plans) regarding validity | The SVP assesses the candidate’s ability to plan, implement, and reflect upon instruction. Each of these abilities is assessed with a performance task: the lesson plan (planning), teaching the lesson (implementation), and self-evaluation of the lesson (reflection). In order to assess the Teaching Performance Expectations (TPEs), the tasks each have a rubric, and the rubrics share the same categories: subject-specific pedagogy, applying knowledge of students, and student engagement. The categories are rated on a 4-point scale (1 - does not meet expectations, 2 - meets expectations, 3 - meets expectations at a high level, 4 - exceeds expectations). The wording in the rubrics is adapted to each of the three specific tasks. Data from the FAST indicate that students are developing the competencies that are essential to effective classroom teaching practice. |
Evidence (or plans) regarding reliability | Every two years, a psychometric analysis of the Site Visitation Project (SVP) is performed. In our most recent analysis, 15% of SVPs were double scored; the two scorers gave the same score 70% of the time, 100% of paired scores were within one point of each other, and scorers agreed 94.7% of the time on whether the SVP should pass. |
Evidence (or plans) regarding fairness | To monitor equity, the three subtests and the final score were examined as part of our psychometric analysis with regard to differences based on students’ ethnicity, gender, whether the student’s first language was English, the students’ self-rated degree of English language fluency on a 5-point Likert scale, and self-reported disability. To examine scoring equity, a series of non-parametric statistical tests was calculated to determine whether significant differences in scoring corresponded to students’ demographic characteristics. Across the three subtests, only one comparison showed statistically significant differences: the self-rated degree of English language fluency on the observation task. The statistical analyses for disability were not conducted because of a very small sample size, with only 2 students self-reporting a disability; their scores were tabulated and inspected, and all were passing. |
Evidence regarding Trustworthiness | Developed over a number of years with the support of the Renaissance Group and a Title II grant, the FAST addresses each of California’s TPEs. Each assessment is scored by at least two faculty members, including the university coach assigned to mentor the teacher candidate. Mandatory calibration sessions are held annually, and all scorers must participate in the norming process each year. The inter-rater reliability is higher than the norm for such assessments. Moreover, students who fail the assessment have the opportunity to revise and resubmit. |
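The double-scoring agreement rates reported above (exact agreement and agreement within one rubric point) amount to simple proportions over paired scores. A minimal sketch of that arithmetic; the score pairs below are invented for illustration and merely chosen to mirror figures like those reported:

```python
def agreement_rates(scores_a, scores_b):
    """Exact and within-one-point agreement for two scorers' paired rubric scores."""
    n = len(scores_a)
    exact = sum(a == b for a, b in zip(scores_a, scores_b)) / n
    within_one = sum(abs(a - b) <= 1 for a, b in zip(scores_a, scores_b)) / n
    return exact, within_one

# Hypothetical pairs of 4-point rubric scores from two independent scorers.
first = [2, 3, 2, 4, 1, 3, 2, 2, 3, 2]
second = [2, 3, 3, 4, 1, 2, 2, 2, 3, 3]
exact, within_one = agreement_rates(first, second)
print(f"exact: {exact:.0%}, within one point: {within_one:.0%}")
# → exact: 70%, within one point: 100%
```

The pass/fail agreement rate reported for the SVP would be computed the same way, over each scorer's pass/fail determination rather than the raw scores.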
Quantitative Data Measure: CSU Educator Quality Center Completer Survey | |
Description of Measure | The California State University’s Education Quality Center (EdQ) oversees the administration of a completer survey to exiting candidates of all CSU teacher-preparation programs. The survey is available year-round, and campuses are encouraged to make its completion a component of graduates’ final paperwork. The survey contains items asking about candidates’ perceptions of various aspects of the preparation program and the field placement experience. Campuses have access to annual results from the survey through the EdQ Dashboard. Results can be disaggregated by various measures, including campus, year of completion, respondent race/ethnicity, and type of credential. |
Evidence (or plans) regarding validity | Used systemwide, the survey serves as a valid measure of program completers’ perceptions of the teacher preparation program because it asks questions directly aligned with the California Teacher Performance Expectations and California Standards for the Teaching Profession. |
Evidence (or plans) regarding reliability | Uncertainty about evaluation findings comes from two principal sources: the number of evaluation participants and the extent of their concurrence with each other. The evaluation findings become increasingly certain as the questions are answered by increasing numbers of program completers and their employment supervisors. Each year the data set yields the percentage of respondents who gave specified answers to each item and includes reliability estimates in the form of confidence intervals based on the number of respondents and the concurrence or homogeneity of responses. The CSU Deans of Education grouped questions into "composites" (e.g., Preparing for Equity and Diversity in Education) for more reliable interpretation. The reliability estimates for the composite scores for the system and the individual campuses generally range from 0 to 2 percentage points at the 90% confidence level. |
Evidence (or plans) regarding fairness/trustworthiness | The existence of this CSU-wide service allows each campus to track the effects of program changes designed to improve performance. Because the instrument was designed and is implemented systemwide with graduates throughout the state, we believe it is a fair and trustworthy measure. Fresno State has initiated a college-wide data summit to consider the findings of this statewide survey and triangulate them with campus data, including the percentage of First Generation students, access to resources like scholarships, and the culture and context of the cohorts in which prospective teachers are placed. Through this triangulation process, we are able to determine the alignment of the findings from the survey with our other measures, further assuring us of the survey’s trustworthiness as an instrument. In the process, we are also able to assess the impact of program changes on our own students with respect to the unique diversity of cultures and needs in the Central Valley. |
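The EdQ reliability estimates described for this survey take the form of confidence intervals around the percentage of respondents giving a specified answer. A hedged sketch using the standard normal approximation for a proportion; the respondent counts are hypothetical, and the EdQ Center's actual computation may differ:

```python
import math

def proportion_ci(successes, n, z=1.645):
    """Normal-approximation confidence interval for a response proportion.
    z = 1.645 corresponds to the 90% confidence level cited in the report."""
    p = successes / n
    margin = z * math.sqrt(p * (1 - p) / n)
    return p - margin, p + margin

# Hypothetical: 180 of 200 completers chose "well prepared" on an item.
low, high = proportion_ci(180, 200)
print(f"{low:.1%} to {high:.1%}")  # → 86.5% to 93.5%
```

This illustrates why certainty grows with participation: the margin shrinks with the square root of the number of respondents, so larger response pools yield the narrow 0-2 percentage-point intervals described for the composite scores.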
Quantitative Data Measure: CSU Educator Quality Center Employer Survey | |
Description of Measure | Until 2018, the CSU distributed an employer survey to employers of recent graduates of CSU teacher preparation programs. Like the CSU completer survey, the employer survey items were tailored to the type of preparation program the new teacher completed (Multiple Subject, Single Subject-Math, Single Subject-English, Education Specialist, etc.). Survey items target the following areas: Engaging and Supporting All Students in Learning, Creating and Maintaining Effective Environments for Student Learning, Understanding and Organizing Subject Matter for Student Learning, Planning Instruction and Designing Learning Experiences for All Students, Assessing Students for Learning, Developing as a Professional Educator, and an overall assessment of how well prepared graduates of the institution are to be teachers. The results are disaggregated by campus and allow campuses to evaluate their effectiveness from the perspective of employers of completers. |
Evidence (or plans) regarding validity | Used systemwide, the survey serves as a measure of employers' perceptions of how well programs prepared their completers for their first year of teaching. Items on the survey are aligned with the California Teacher Performance Expectations and California Standards for the Teaching Profession. All employers respond to items asking about the new teachers’ general pedagogical skills, such as their perception of how well the program prepared the new teachers to differentiate instruction in the classroom. In this way, the survey is a valid measure of employers’ perceptions of the program. |
Evidence (or plans) regarding reliability | Uncertainty about evaluation findings comes from two principal sources: the number of evaluation participants and the extent of their concurrence with each other. The evaluation findings become increasingly certain as the questions are answered by increasing numbers of program completers and their employment supervisors. Each year the data set yields the percentage of respondents who gave specified answers to each item and includes reliability estimates in the form of confidence intervals based on the number of respondents and the concurrence or homogeneity of responses. The CSU Deans of Education grouped questions into "composites" (e.g., Preparing for Equity and Diversity in Education) for more reliable interpretation. The reliability estimates for the composite scores for the system and the individual campuses generally range from 0 to 2 percentage points at the 90% confidence level. |
Evidence (or plans) regarding fairness/trustworthiness | Data were not constructed with bias, and analyses show comparable positive predictive value across groups (predictive parity) and support equalized odds. The existence of this CSU-wide service allows each campus to track the effects of program changes designed to improve performance. Because the instrument was designed and is implemented systemwide with completers throughout the state, we believe it is a fair and trustworthy measure. Fresno State has initiated a college-wide data summit to consider the findings of this statewide survey and triangulate them with campus data, including the percentage of First Generation students, access to resources like scholarships, and the culture and context of the cohorts in which prospective teachers are placed. Through this triangulation process, we are able to determine the alignment of the findings from the survey with our other measures, further assuring us of the survey’s trustworthiness as an instrument. In the process, we are also able to assess the impact of program changes on our own students with respect to the unique diversity of cultures and needs in the Central Valley. |
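The group-fairness properties claimed above (comparable positive predictive value across groups and equalized odds) can be checked from grouped outcome data. A hypothetical sketch; the groups, pass decisions, and later-outcome labels below are invented for illustration, not survey or program data, and each group is assumed to contain both outcome types:

```python
def fairness_rates(records):
    """records: (group, predicted_pass, actual_success) per candidate.
    Returns, per group: pass rate (statistical parity), positive predictive
    value (predictive parity), and true/false positive rates (equalized odds)."""
    by_group = {}
    for group, pred, actual in records:
        by_group.setdefault(group, []).append((pred, actual))
    rates = {}
    for group, rows in by_group.items():
        predicted_pos = [actual for pred, actual in rows if pred]
        actual_pos = [pred for pred, actual in rows if actual]
        actual_neg = [pred for pred, actual in rows if not actual]
        rates[group] = {
            "pass_rate": sum(pred for pred, _ in rows) / len(rows),
            "ppv": sum(predicted_pos) / len(predicted_pos),
            "tpr": sum(actual_pos) / len(actual_pos),
            "fpr": sum(actual_neg) / len(actual_neg),
        }
    return rates

# Invented outcomes: (demographic group, passed assessment, succeeded later).
records = [
    ("A", True, True), ("A", True, True), ("A", False, False), ("A", True, False),
    ("B", True, True), ("B", False, True), ("B", True, False), ("B", False, False),
]
for group, r in sorted(fairness_rates(records).items()):
    print(group, {k: round(v, 2) for k, v in r.items()})
```

Predictive parity holds when `ppv` is similar across groups; equalized odds holds when both `tpr` and `fpr` are similar across groups. Statistical parity, a distinct criterion, compares `pass_rate` directly.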
Quantitative Data Measure: CSU Educator Quality Center Year One Completer Survey | |
Description of Measure | The California State University’s Education Quality Center (EdQ) oversees the administration of a survey of all individuals who completed a CSU teacher-preparation program after their first year on the job. The survey is administered annually from April through July. In April, the EdQ Center emails an initial survey invitation to all completers of MS-SSES Credential Programs serving as first-year teachers in public schools, charter schools, or private schools in all locations. Follow-up reminders are sent every two weeks throughout the duration of the survey window. In addition to asking questions about the completer’s demographics and educational background, the survey contains items to capture data about the school where the completer is employed. Additionally, the survey includes items asking about candidates’ perceptions of various aspects of the preparation program and the field placement experience. Campuses have access to annual results from the survey through the EdQ Dashboard. Results can be disaggregated by various measures, including campus, year of completion, respondent race/ethnicity, and type of credential. Note: the CTC also distributes a Credential Program Completer Survey, which gives an overall view of CA Educator Preparation Programs. |
Evidence (or plans) regarding validity | Used systemwide, the survey serves as a valid measure of graduates' perceptions of how well the teacher preparation program prepared them for their first year of teaching because it asks questions directly aligned with the California Teacher Performance Expectations and the California Standards for the Teaching Profession. Additionally, the survey's content is tailored to the type of program each respondent completed, making the content valid for each individual. For example, the survey for a Single Subject English teacher contains an item about how well the program prepared them to develop students' understanding and use of academic language and vocabulary, whereas the survey for a Single Subject Social Science teacher contains an item about how well the program prepared them to develop students' historical interpretation skills. Similarly, teachers with Multiple Subject or Educational Specialist credentials respond to items directly aligned with the standards associated with their credentials. All graduates respond to items about their preparation in general pedagogical skills, such as how well the program prepared them to differentiate instruction in the classroom. In this way, the survey is a valid measure of completers' perceptions of the program. |
Evidence (or plans) regarding reliability | Uncertainty about evaluation findings comes from two principal sources: the number of evaluation participants and the extent of their concurrence with one another. The findings become more certain as the questions are answered by larger numbers of program completers and their employment supervisors. Each year the data set yields the percent of respondents who gave specified answers to each item and includes reliability estimates in the form of confidence intervals based on the number of respondents and the homogeneity of their responses. The CSU Deans of Education grouped questions into "composites" (e.g., Preparing for Equity and Diversity in Education) for more reliable interpretation. The margins of error for composite scores, for the system and for individual campuses, generally range from 0 to 2 percentage points at the 90% confidence level. |
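The confidence-interval logic described above can be illustrated with a normal-approximation interval for a single item's response proportion. This is a sketch only: the respondent count and proportion below are hypothetical, and the EdQ Center's exact computation is not specified here.

```python
from math import sqrt

def margin_of_error(p_hat: float, n: int, z: float = 1.645) -> float:
    """Half-width of a normal-approximation confidence interval for a
    response proportion; z = 1.645 corresponds to the 90% level."""
    return z * sqrt(p_hat * (1 - p_hat) / n)

# Hypothetical item: 80% of 400 respondents answered "well prepared".
moe = margin_of_error(0.80, 400)
print(f"90% interval: 80.0% +/- {moe * 100:.1f} percentage points")
```

Note that a larger respondent count and more homogeneous responses both shrink the interval, which matches the two sources of certainty the text identifies.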
Evidence (or plans) regarding fairness/trustworthiness | The data were not constructed with bias; the data show parity in positive predictive value across groups and support equalized odds. The existence of this CSU-wide service allows each campus to track the effects of program changes designed to improve performance. Because the instrument was designed and is implemented systemwide with completers throughout the state, we believe it is a fair and trustworthy measure. Fresno State has initiated a college-wide data summit to consider the findings of this statewide survey and triangulate them with campus data, including the percentage of first-generation students, access to resources such as scholarships, and the culture and context of the cohorts in which prospective teachers are placed. Through this triangulation process, we are able to determine how well the survey's findings align with our other measures, further assuring us of the survey's trustworthiness as an instrument. In the process, we are also able to assess the impact of program changes on our own students with respect to the unique diversity of cultures and needs in the Central Valley. |
Quantitative Data Measure: CCTC Program Completer Survey | |
Description of Measure | Beginning in 2018, the CTC has administered a survey of completers of all credential programs. The survey, administered between September 1 and December 31, examines the effectiveness of individual educator preparation programs approved to operate in California. In 2019-2020, 97% of the 4,717 individuals who completed a Single Subject Credential program in California responded to the survey. Of those who responded, 37.7% completed their credential at a California State University. |
Evidence (or plans) regarding validity | Used throughout the state, the survey serves as a valid measure of completers' perceptions of how well the teacher preparation program prepared them. Items included on the survey are directly aligned with the California Teacher Performance Expectations and the California Standards for the Teaching Profession. Additionally, the survey's content is tailored to the type of program each respondent completed, making the content valid for each individual. For example, the survey for a Single Subject English teacher contains an item about how well the program prepared them to develop students' understanding and use of academic language and vocabulary, whereas the survey for a Single Subject Social Science teacher contains an item about how well the program prepared them to develop students' historical interpretation skills. Similarly, teachers with Multiple Subject or Educational Specialist credentials respond to items directly aligned with the standards associated with their credentials. All completers respond to items about their preparation in general pedagogical skills, such as how well the program prepared them to differentiate instruction in the classroom. In this way, the survey is a valid measure of completers' perceptions of the program. |
Evidence (or plans) regarding reliability | Uncertainty about evaluation findings comes from two principal sources: the number of evaluation participants and the extent of their concurrence with one another. The findings become more certain as the questions are answered by larger numbers of program completers and their employment supervisors. Each year the data set yields the percent of respondents who gave specified answers to each item and includes reliability estimates in the form of confidence intervals based on the number of respondents and the homogeneity of their responses. The CSU Deans of Education grouped questions into "composites" (e.g., Preparing for Equity and Diversity in Education) for more reliable interpretation. The margins of error for composite scores, for the system and for individual campuses, generally range from 0 to 2 percentage points at the 90% confidence level. |
Evidence (or plans) regarding fairness | The data were not constructed with bias; the data show parity in positive predictive value across groups and support equalized odds. The existence of this statewide service allows each program to track the effects of program changes designed to improve performance. Because the instrument was designed and is implemented with completers throughout the state, we believe it is a fair and trustworthy measure. Fresno State has initiated a college-wide data summit to consider the findings of this statewide survey and triangulate them with campus data, including the percentage of first-generation students, access to resources such as scholarships, and the culture and context of the cohorts in which prospective teachers are placed. Through this triangulation process, we are able to determine how well the survey's findings align with our other measures, further assuring us of the survey's trustworthiness as an instrument. In the process, we are also able to assess the impact of program changes on our own students with respect to the unique diversity of cultures and needs in the Central Valley. |
Quantitative Data Measure: Pre and Post Dispositions Survey | |
Description of Measure | Complementing the CSU Completer Exit Survey of graduates, the Teacher Candidate Commitment is an instrument administered at the beginning and end of each fieldwork course (EHD 155A and EHD 155B). The California Commission on Teacher Credentialing (CCTC) requires all candidates to demonstrate personality and character traits that satisfy the standards of the teaching profession; this 9-item instrument measures those traits. Fresno State therefore developed this commitment statement, which students complete as part of their entrance requirements and again when they complete their credential program. |
Evidence (or plans) regarding validity | Items included within the commitment statement align with the dispositions the CTC requires of candidates credentialed to teach within the state. It should be noted that commitment statements rely upon self-reported dispositions, which can be inaccurately represented. |
Evidence (or plans) regarding reliability | The reliability of the candidate's commitment depends upon self-reported dispositions, which can be inaccurately represented. |
Evidence (or plans) regarding fairness | The commitment statement is intended to reinforce the values of fairness among the candidates, as well as an expectation that candidates hold unbiased dispositions toward students of all backgrounds, languages, cultures, and experiences. When incoming candidates or exiting graduates perform aberrantly on this commitment statement, they are identified, counseled, and advised about their pursuit of the profession. In the rare instance that a candidate does not agree to the necessary commitments, the candidate may resubmit. |
Quantitative Data Measure: Midterm and Final Fieldwork Evaluations | |
Description of Measure | The Midterm and Final Evaluation rubrics are locally developed observation tools that provide a common language for preservice teachers, coaches, and mentors to orient their feedback in an actionable manner. Each rubric is aligned to the CCTC-adopted Teacher Performance Expectations (TPEs). The TPEs are divided among four rubrics: the EHD 155A Midterm Evaluation, the EHD 155A Final Evaluation, the EHD 155B Midterm Evaluation, and the EHD 155B Final Evaluation. Coaches and mentor teachers work together to evaluate the teacher candidate's performance. Teacher candidates are rated on a 4-point Likert scale: 1 = Does Not Meet Expectations, 2 = Meets Expectations, 3 = Meets Expectations at a High Level, 4 = Exceeds Expectations. In addition to the assessments related to the TPEs, every evaluation assesses the teacher candidate on six professional competencies related to professional behaviors at the school site. The coach and the mentor teacher have the opportunity to give qualitative feedback in written form, and the teacher candidate has the opportunity to respond. As a last step, the university coach assesses whether the teacher candidate should continue in the program (at midterm) or complete the program (at the final evaluation). |
Evidence (or plans) regarding validity | Rubric development began with close examination of the TPEs to ensure that the rubric would measure the skills required for program completion. By aligning the evaluation rubrics directly with the TPEs, our assessments reflect the standards identified by the Commission on Teacher Credentialing as essential for new teachers to possess. Content validity of the measures used to evaluate the success of credential program candidates is established through connections to the California Teaching Performance Expectations and supported by faculty-developed coursework rubrics that address similar, if not the same, content. Content validity is also established through the FAST, a state-adopted teacher certification exam developed by Fresno State and approved by the State of California. Both the in-house measures and the state-adopted measure ensure that our candidates have sufficient content knowledge to be effective in the classroom once they earn their preliminary teaching credential. |
Evidence (or plans) regarding reliability | Throughout California, the TPEs are the standard measure for teacher candidates. Within Fresno State, all university coaches who supervise student teaching participate in an orientation session and in regular meetings with the program coordinator. Reliability is further established by soliciting judgments about the program from its graduates and from P-12 schools (cooperating teachers, school administrators, mentor teachers, and induction programs). |
Evidence (or plans) regarding fairness | This observation rubric focuses on the TPEs to help observers and teacher candidates adopt principles of good teaching. We believe this simple but comprehensive tool better serves the needs of our teacher candidates, creating opportunities for specific feedback that is more easily digested and internalized. Fairness is ensured through the inclusion and equitable treatment of all individuals and through the equal allocation of time, resources, and materials to all those involved. |
Evidence regarding Trustworthiness | The TPEs all address areas essential to high-quality instruction. The language used within the rubric is clear and direct and provides effective feedback for teacher candidates. Trustworthiness is established through coursework audit trails, which document every step of data analysis in order to provide a rationale for the programmatic decisions made by coursework faculty. On occasion, coursework faculty also ask a faculty member teaching a similar course to perform an inquiry audit to ensure that the qualitative findings are consistent and could be repeated. |
Quantitative Data Measure: Course grades/Candidate Performance in Courses (CI 152 Learning Theories Application; CI 152 Learning Theories Description; SpEd 158 Universal Design for Learning Assignment; LEE 156 Discussions) | |
Description of Measure | Credential candidates must maintain a 3.00 GPA in all credential courses with no individual grade lower than a “C”. Any grade listed as “I”, “IC”, “WU”, “NC”, “D”, or “F” does not meet Fresno State’s credential program requirements. |
Evidence (or plans) regarding validity | The content of all key assignments aligns with course foci and with CCTC program standards. Faculty hired to teach within the programs are considered experts in their specific field and all have relevant experience for the content of the courses which they are teaching. Course grades of ‘C’ or ‘Credit’ are required for program completion. Therefore, with few exceptions, all candidates must complete and receive a ‘C’ grade or better for the articulated courses. |
Evidence (or plans) regarding reliability | Requirements in courses stay relatively consistent over time since they are aligned with the CCTC program standards. In addition, the courses are staffed by the same faculty, on the whole. |
Evidence (or plans) regarding fairness | Course grades are assigned at the end of each course, and, with few exceptions, all program completers must complete and receive a grade for the courses taken. For each course, specific details are provided within the syllabus about the requirements for course assignments and for earning a passing grade. To ensure fairness, program faculty will analyze assignment instructions to make sure expectations are clear. Where discrepancies exist between what we intend candidates to do and what they understand, we will revise instructions to ensure they are clear to all. We will also be sure that clear details are provided about how each assignment will be assessed, including specific rubrics and, whenever possible, samples of previous students' successful work. |
Agricultural Specialist Credential Program
Quantitative Data Measure: Midterm and Final Fieldwork Evaluations | |
Description of Measure | The Midterm and Final Evaluation rubrics are locally developed observation tools that provide a common language for preservice teachers, coaches, and mentors to orient their feedback in an actionable manner. Each rubric is aligned to the CCTC-adopted Teacher Performance Expectations (TPEs). The TPEs are divided among four rubrics: the EHD 155A Midterm Evaluation, the EHD 155A Final Evaluation, the EHD 155B Midterm Evaluation, and the EHD 155B Final Evaluation. Coaches and mentor teachers work together to evaluate the teacher candidate's performance. Teacher candidates are rated on a 4-point Likert scale: 1 = Does Not Meet Expectations, 2 = Meets Expectations, 3 = Meets Expectations at a High Level, 4 = Exceeds Expectations. In addition to the assessments related to the TPEs, every evaluation assesses the teacher candidate on six professional competencies related to professional behaviors at the school site. The coach and the mentor teacher have the opportunity to give qualitative feedback in written form, and the teacher candidate has the opportunity to respond. As a last step, the university coach assesses whether the teacher candidate should continue in the program (at midterm) or complete the program (at the final evaluation). |
Evidence (or plans) regarding validity | Rubric development began with close examination of the TPEs to ensure that the rubric would measure the skills required for program completion. By aligning the evaluation rubrics directly with the TPEs, our assessments reflect the standards identified by the Commission on Teacher Credentialing as essential for new teachers to possess. Content validity of the measures used to evaluate the success of Single Subject Credential Program candidates is established through connections to the California Teaching Performance Expectations and supported by faculty-developed coursework rubrics that address similar, if not the same, content. Content validity is also established through the FAST, a state-adopted teacher certification exam developed by Fresno State and approved by the State of California. Both the in-house measures and the state-adopted measure ensure that our candidates have sufficient content knowledge to be effective in the classroom once they earn their preliminary teaching credential. |
Evidence (or plans) regarding reliability | Throughout California, the TPEs are the standard measure for teacher candidates. Within Fresno State, all university coaches who supervise student teaching participate in an orientation session and in regular meetings with the Coordinator of the Single Subject Program. Reliability is further established by soliciting judgments about the program from its graduates and from P-12 schools (cooperating teachers, school administrators, mentor teachers, and induction programs). |
Evidence (or plans) regarding fairness | This observation rubric focuses on the TPEs to help observers and teacher candidates adopt principles of good teaching. We believe this simple but comprehensive tool better serves the needs of our teacher candidates, creating opportunities for specific feedback that is more easily digested and internalized. Fairness is ensured through the inclusion and equitable treatment of all individuals and through the equal allocation of time, resources, and materials to all those involved. |
Evidence regarding Trustworthiness | The TPEs all address areas essential to high-quality instruction. The language used within the rubric is clear and direct and provides effective feedback for teacher candidates. Trustworthiness is established through coursework audit trails, which document every step of data analysis in order to provide a rationale for the programmatic decisions made by coursework faculty. On occasion, coursework faculty also ask a faculty member teaching a similar course to perform an inquiry audit to ensure that the qualitative findings are consistent and could be repeated. |
Quantitative Data Measure: CI 161 Curriculum Project Scores | |
Description of Measure | For a chosen agricultural course (other than the Ag Core Curriculum), students develop a Course Outline, a Unit Outline, and at least three consecutive lesson plans. |
Evidence (or plans) regarding validity | Scores are calculated based on students’ successful completion and the quality of all the required assignment components. Because the assignment is specifically linked to course content, students’ grades serve as a valid measure of mastery of this content. |
Evidence (or plans) regarding reliability | This assignment is evaluated and scored by one instructor each semester using the same scoring criteria as in previous years. To help ensure the reliability of scores, the instructor refers to benchmark examples from previous years prior to scoring the assignment. |
Evidence (or plans) regarding fairness | To ensure fairness, we will analyze assignment instructions to make sure expectations are clear. Where discrepancies exist between what we intend candidates to do and what they understand, we will revise instructions to ensure they are clear to all. We will also be sure that clear details are provided about how the assignment will be assessed, including specific rubrics and, whenever possible, samples of previous students' successful work. |
Quantitative Data Measure: Occupational Experience Form (T-14) | |
Description of Measure | On the Occupational Experience Form (T-14) each candidate is required to record their agricultural occupational experience hours. Candidates are required to have at least 3,000 hours to qualify for the Ag Specialist credential. |
Evidence (or plans) regarding validity | Each candidate submits their completed form to the California Dept. of Education Ag. Education State Staff representative for the San Joaquin Region who maintains an office on our campus. The Regional State Staff representative reviews the Occupational Experience Forms while interviewing each candidate. The purpose of the interview is to verify the validity and accuracy of the form data. |
Evidence (or plans) regarding reliability | Each Occupational Experience Form is reviewed and interviews are conducted by the same person who is a qualified representative from the Calif. Department of Education’s Ag. Education Division. |
Evidence (or plans) regarding fairness | The form covers all the various segments of the agricultural industry, and candidates are also provided an "other" category to ensure that all candidates' experiences can be recorded and counted toward the 3,000-hour requirement. |
Quantitative Data Measure: CI 161 MicroTeaching Presentation Rubric | |
Description of Measure | In CI 161, candidates complete a micro-teaching assignment in which they plan a lesson and then teach it to their peers. The rubric used to score the micro-teaching was derived from the teaching evaluation rubric previously used to evaluate our student teachers, a form developed by the Kremen School of Education. The rubric includes criteria related to the lesson presentation, teacher/student interaction, classroom management, and the overall quality of the presentation. |
Evidence (or plans) regarding validity | The instrument/rubric was originally developed by faculty in the Kremen School of Education and later reviewed by faculty in Agricultural Education to ensure it serves as a valid measure of students' performance in their micro-teaching. |
Evidence (or plans) regarding reliability | A post-hoc analysis of the 2020 data was conducted to document the reliability of the 19-item rubric. The analysis yielded a Cronbach's alpha coefficient of .96, leading us to conclude that the instrument is highly reliable. |
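Cronbach's alpha, reported here and for the surveys below, estimates internal consistency by comparing item-level variances with the variance of total scores. The sketch below implements the standard formula on hypothetical rubric ratings; it is an illustration of the statistic only, not the program's actual analysis script.

```python
def cronbach_alpha(item_scores):
    """Cronbach's alpha from rows of item scores (one row per respondent)."""
    k = len(item_scores[0])  # number of rubric items

    def variance(values):  # population variance
        mean = sum(values) / len(values)
        return sum((v - mean) ** 2 for v in values) / len(values)

    item_vars = [variance([row[i] for row in item_scores]) for i in range(k)]
    total_var = variance([sum(row) for row in item_scores])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Hypothetical 4-point ratings: four respondents by three items.
ratings = [[1, 1, 2], [2, 2, 2], [3, 3, 4], [4, 4, 4]]
print(round(cronbach_alpha(ratings), 3))  # -> 0.975
```

Values near 1 (such as the .96 reported above) indicate that the rubric items vary together, i.e., that they measure a common construct consistently.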
Evidence (or plans) regarding fairness | To ensure fairness, our faculty have reviewed the rubric to ensure it aligns with the course outcomes and fairly measures students’ performance. |
Quantitative Data Measure: AGRI 281 Project Scores | |
Description of Measure | Students complete a study of a selected problem in agricultural education, either during the student teaching assignment or in another setting. The special problem should be relevant to the community and to the agricultural education program. Students define the problem, delimit its scope, and prepare the material so that it is suitable for publication, for partial fulfillment of graduate credit in a master's program, and/or for continued study in a thesis. |
Evidence (or plans) regarding validity | Scores are calculated based on students’ successful completion and the quality of all the required assignment components. Because the assignment is specifically linked to course content, students’ grades serve as a valid measure of mastery of this content. |
Evidence (or plans) regarding reliability | This assignment is evaluated and scored by the instructor each semester using the same scoring criteria as in previous years. To help ensure the reliability of scores, the instructor refers to benchmark examples from previous years prior to scoring the assignment. |
Evidence (or plans) regarding fairness | To ensure fairness, we will analyze assignment instructions to make sure expectations are clear. Where discrepancies exist between what we intend candidates to do and what they understand, we will revise instructions to ensure they are clear to all. We will also be sure that clear details are provided about how the assignment will be assessed, including specific rubrics and, whenever possible, samples of previous students' successful work. |
Quantitative Data Measure: Program Completer Follow Up Survey | |
Description of Measure | Every five years a survey is conducted of program completers from the previous five cohorts to evaluate their perceptions of the program, specifically their level of technical and professional preparation. |
Evidence (or plans) regarding validity | The instrument is aligned with the CCTC Subject Matter and Professional standards areas. Prior to administering the instrument, it is reviewed by a panel of experts (Ag. Education faculty) to ensure face and content validity. |
Evidence (or plans) regarding reliability | A post-hoc analysis of the 2021 data was conducted to document the reliability of the 17-item technical and professional knowledge/skill scale, which is the primary measurement tool. The analysis yielded a Cronbach's alpha coefficient of .88, leading us to conclude that the instrument is highly reliable. |
Evidence (or plans) regarding fairness | The instrument is aligned with the CCTC Subject Matter and Professional standards areas. Prior to administering the instrument, it is reviewed by a panel of experts (Ag. Education faculty) to ensure fairness and inclusivity of all participants. |
Quantitative Data Measure: Employer Follow Up Survey | |
Description of Measure | Every five years a survey is administered to the school site administrators of completers from our previous five cohorts to evaluate the administrator’s perceptions of the completer’s level of technical and professional preparation. |
Evidence (or plans) regarding validity | The instrument is aligned with the CCTC Subject Matter and Professional standards areas. Prior to administering the instrument, it is reviewed by a panel of experts (Ag. Education faculty) to ensure face and content validity. |
Evidence (or plans) regarding reliability | A post-hoc analysis of the 2021 data was conducted to document the reliability of the 15-item technical and professional preparation scale, which is the primary measurement tool. The analysis yielded a Cronbach's alpha coefficient of .96, leading us to conclude that the instrument is highly reliable. |
Evidence (or plans) regarding fairness | The instrument is aligned with the CCTC Subject Matter and Professional standards areas. Prior to administering the instrument, it is reviewed by a panel of experts (Ag. Education faculty) to ensure fairness and inclusivity of all participants. |
Quantitative Data Measure: California Agricultural Teachers’ Induction Program (CATIP) Individual Induction Plan (IDP) Self-Assessment | |
Description of Measure | The California Agricultural Teachers’ Induction Program (CATIP) Individual Induction Plan (IDP) Self-Assessment is administered to new teachers beginning the CATIP Induction program. The assessment serves as a baseline measurement for new teachers to identify their strengths and develop a plan for improvement in weak areas. |
Evidence (or plans) regarding validity | The instrument is aligned with the technical and professional standards areas in agricultural education. Prior to administering the instrument, it is reviewed by a panel of experts (Ag. Education faculty) to ensure face and content validity. |
Evidence (or plans) regarding reliability | A post-hoc analysis of the 2018-2020 data was conducted to document the reliability of the 18-item technical and professional knowledge/skill scale, which is the primary measurement tool. The analysis yielded a Cronbach's alpha coefficient of .92, leading us to conclude that the instrument is highly reliable. |
Evidence (or plans) regarding fairness | This instrument has been reviewed by a panel of experts representing a variety of agricultural education programs/institutions to ensure that it serves as a valid and fair assessment. |
Quantitative Data Measure: EHD 155A Professional Competencies & EHD 155B Exit Evaluation of Professional Objectives | |
Description of Measure | The EHD 155A Professional Competencies and EHD 155B Exit Evaluation of Professional Objectives contain professional competencies required of candidates for the Agriculture Specialist Credential. As each objective is accomplished, the approximate date of accomplishment is filled in and initialed for verification by someone in a position to evaluate the achievement of that objective. The only people who may verify the accomplishment of these objectives are California State University, Fresno faculty, cooperating master teachers, and administrators of the cooperating schools. |
Evidence (or plans) regarding validity | The instrument is aligned with the CCTC Ag Specialist Professional standards areas. Prior to administering the instrument, it is reviewed by a panel of experts (Ag. Education faculty and advisory committee members) to ensure face and content validity. |
Evidence (or plans) regarding reliability | This measurement is a count of each student's completed objectives, making scoring completely objective, removing nearly all threat of scoring error, and yielding a highly reliable measurement. |
Evidence (or plans) regarding fairness | The instrument is aligned with the CCTC Ag Specialist Professional standards areas. Prior to administering the instrument, it is reviewed by a panel of experts (Ag. Education faculty and advisory committee members) to ensure it is a valid and fair measurement. |
Quantitative Data Measure: Fresno Assessment of Student Teachers II (FAST II) | |
Description of Measure | FAST II consists of two projects: the Site Visitation Project (SVP) is completed during initial student teaching (EHD 155A) and the Teaching Sample Project (TSP) is completed during final student teaching (EHD 155B). The SVP assesses teacher candidates’ ability to plan, implement, and evaluate instruction. The three parts of the project include (1) Planning: planning documentation for a single lesson incorporating state-adopted content standards and English language development, (2) Implementation: an in-person observation and videotaping of the teaching of the lesson, (3) Reflection: a review of the entire video, selection of a 3- to 5-minute video segment, and a written evaluation of the lesson. (TPE 1.1, 1.3, 1.5, 1.8, 2.2, 2.6, 3.1, 3.2, 3.3, 3.5, 4.1, 4.2, 4.7, 6.1). The Teaching Sample Project assesses teacher candidates’ ability to (a) identify the context of the classroom, (b) plan and teach a series of at least five cohesive lessons with a focus on content knowledge and literacy, (c) assess students’ learning related to the unit, (d) document their teaching and their students’ learning, and (e) reflect on the effectiveness of their teaching. Teacher candidates document how they are addressing the needs of all their students in the planning, teaching, and assessing of the content. (TPE 1.5, 1.6, 1.8, 2.1, 2.3, 2.6, 3.1, 3.2, 3.3, 4.1, 4.3, 4.4, 4.7, 5.1, 5.2, 5.5, 5.8, 6.1, 6.3, 6.5). |
Evidence (or plans) regarding validity | The SVP assesses the candidate’s ability to plan, implement, and reflect upon instruction. Each of these abilities is assessed with a performance task: the lesson plan (planning), teaching the lesson (implementation), and self-evaluation of the lesson (reflection). To assess the Teaching Performance Expectations (TPEs), each task has a rubric, and the rubrics share the same categories: subject-specific pedagogy, applying knowledge of students, and student engagement. The categories are rated on a 4-point scale (1-does not meet expectations, 2-meets expectations, 3-meets expectations at a high level, 4-exceeds expectations). The wording in the rubrics is adapted to each of the three specific tasks. Data from the FAST indicate that students are developing the competencies that are essential to effective classroom teaching practice. |
Evidence (or plans) regarding reliability | Every 2 years, a psychometric analysis of the Site Visitation Project (SVP) is performed. In our most recent analysis, 15% of SVPs were double scored; the two scorers gave the same score on 70% of those projects, were within +/-1 point on 100%, and agreed on the pass/fail determination for 94.7%. |
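The double-scoring agreement rates reported above are straightforward to compute from paired scorer totals. The sketch below uses hypothetical score pairs and a hypothetical passing cutoff (the report does not state the actual cut score); it only illustrates the arithmetic behind exact agreement, within-one-point agreement, and pass/fail agreement.

```python
# Illustrative sketch with hypothetical data: computing double-scoring
# agreement rates like those reported for the SVP.
# Each pair is (scorer_1_rating, scorer_2_rating) for one double-scored SVP.
pairs = [(3, 3), (2, 3), (3, 3), (4, 3), (2, 2),
         (3, 2), (3, 3), (4, 4), (2, 3), (3, 3)]

# Proportion of projects where both scorers gave the identical rating.
exact = sum(1 for a, b in pairs if a == b) / len(pairs)

# Proportion where the two ratings differ by at most one point.
within_one = sum(1 for a, b in pairs if abs(a - b) <= 1) / len(pairs)

# Proportion where both scorers reach the same pass/fail determination.
PASS_CUT = 2  # hypothetical passing rating on the 4-point scale
pass_agree = sum(1 for a, b in pairs
                 if (a >= PASS_CUT) == (b >= PASS_CUT)) / len(pairs)

print(f"exact agreement:     {exact:.1%}")
print(f"within +/-1 point:   {within_one:.1%}")
print(f"pass/fail agreement: {pass_agree:.1%}")
```

With these invented pairs, exact agreement is 60% while within-one and pass/fail agreement are both 100%, mirroring the pattern (exact agreement lowest, adjacent agreement near-perfect) typical of rubric double scoring.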
Evidence (or plans) regarding fairness | To monitor equity, the three subtests and the final score were examined as part of our psychometric analysis with regard to differences based on students’ ethnicity, gender, whether the student’s first language was English, the students’ self-rated degree of English language fluency on a 5-point Likert scale, and self-reported disability. To examine scoring equity, a series of non-parametric statistical tests was calculated to determine whether significant differences in scoring corresponded to students’ demographic characteristics. When examining the three subtests, only one comparison showed a statistically significant difference: the self-rated degree of English language fluency on the observation task. The statistical analyses for disability were not conducted because of a very small sample size (2 students self-reporting a disability); instead, those scores were tabulated and inspected, and all were passing. |
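The report does not name the specific non-parametric tests used; a rank-based comparison such as the Mann-Whitney U test is one common choice for comparing 4-point rubric scores between two demographic groups. The sketch below is a minimal self-contained implementation (normal approximation, no tie correction) applied to hypothetical observation-task scores, purely to illustrate the kind of test the passage describes.

```python
import math

def mann_whitney_u(x, y):
    """Two-sided Mann-Whitney U test via the normal approximation
    (no tie correction) -- a minimal sketch, not a full implementation."""
    # Rank the pooled sample, assigning average ranks to ties.
    combined = sorted((v, i) for i, v in enumerate(x + y))
    ranks = [0.0] * len(combined)
    i = 0
    while i < len(combined):
        j = i
        while j + 1 < len(combined) and combined[j + 1][0] == combined[i][0]:
            j += 1
        avg_rank = (i + j) / 2 + 1  # average rank for a run of tied values
        for k in range(i, j + 1):
            ranks[combined[k][1]] = avg_rank
        i = j + 1
    n1, n2 = len(x), len(y)
    r1 = sum(ranks[:n1])               # rank sum of the first group
    u1 = r1 - n1 * (n1 + 1) / 2        # U statistic for group 1
    mu = n1 * n2 / 2                   # mean of U under H0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (u1 - mu) / sigma
    p = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value
    return u1, p

# Hypothetical observation-task scores for two fluency groups.
group_a = [3, 4, 3, 3, 4, 3]
group_b = [3, 3, 2, 3, 3, 2]
u, p = mann_whitney_u(group_a, group_b)
print(f"U = {u}, two-sided p = {p:.3f}")
```

A significant p-value for a demographic grouping, as found here only for self-rated English fluency on the observation task, would flag that comparison for closer review.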
Evidence regarding Trustworthiness | Developed over a number of years with the support of the Renaissance Group and a Title II grant, the FAST addresses each of California’s TPEs. Each assessment is scored by at least two faculty members, including the university coach assigned to mentor the teacher candidate. Mandatory calibration sessions are held annually, and all scorers must participate in the norming process each year. The inter-rater reliability is higher than the norm for such assessments. Moreover, students who fail the assessment have the opportunity to revise and resubmit. |
Quantitative Data Measure: Pre and Post Dispositions Survey | |
Description of Measure | Complementing the systemwide CSU Completer Exit Survey of graduates, the Teacher Candidate Commitment is an instrument administered at the beginning and end of each fieldwork course (EHD 155A and EHD 155B). The California Commission on Teacher Credentialing (CCTC) requires all candidates to demonstrate personality and character traits that satisfy the standards of the teaching profession; this 9-item measure addresses those traits. Fresno State therefore developed this commitment statement, which students complete as part of their entrance requirements and again when they complete their credential program. |
Evidence (or plans) regarding validity | Items included within the commitment statement align with the dispositions the CTC requires of candidates credentialed to teach within the state. It should be noted that commitment statements rely upon self-reported dispositions, which can be inaccurately represented. |
Evidence (or plans) regarding reliability | Reliability of the candidate’s commitment is dependent upon self-reported dispositions, which can be inaccurately represented. |
Evidence (or plans) regarding fairness | The commitment statement is intended to reinforce the values of fairness among the candidates as well as an expectation of non-biased dispositions of the candidates toward students of all backgrounds, languages, cultures, and experiences. In the instance when incoming candidates or exiting graduates perform aberrantly on this commitment statement, they are identified, counseled, and advised about their pursuit of the profession. In the rare instance of candidates not agreeing with the necessary commitments, they may resubmit. |
Quantitative Data Measure: Multicultural/International & Plant 105 Course grades | |
Description of Measure | Candidates for the Ag. Specialist credential must fulfill the Subject Matter Competency Requirement for program entry, which requires either the state’s CSET exams or the completion of our Subject Matter Waiver option. The Waiver program consists of fulfilling our Ag. Education bachelor’s degree requirements, which include a Multicultural/International course and General Education Integration Area IB. Area IB is typically fulfilled by students completing the only agricultural course in that area, Plant 105. |
Evidence (or plans) regarding validity | Faculty hired to teach the Plant 105 and M/I courses are considered experts in their specific field and have relevant educational training and experience for the content of the courses which they are teaching. Their expertise in the field demonstrates their ability to develop valid assessments to calculate the students’ overall grades. Course assignments and expectations are aligned with the essential course content. |
Evidence (or plans) regarding reliability | Requirements in courses stay relatively consistent over time since they are aligned with our university’s general education and graduation requirements. |
Evidence (or plans) regarding fairness | Course grades are required at the end of the course. With few exceptions, all program completers must complete and receive a grade for courses taken. For each course, specific details are provided within course syllabi about requirements for course assignments and to earn a passing grade. |
Educational Specialist Credential Program
Quantitative Data Measure: Student Teaching Placement Demographics, 2019-2021 | |
Description of Measure | The Placement Demographics table consists of information on the placement of each Education Specialist student teacher and intern enrolled in clinical practice for a designated semester. A table is created and archived each semester by the Office of Clinical Practice. The table includes the candidate’s district of placement, the school site, the name and email addresses of the mentor/master teacher and the site administrator, the University Coach assigned to the candidate, the placement grade levels, and the type of special education placement (e.g., RSP, SDC, adult transition, center-based, inclusion, etc.). |
Evidence (or plans) regarding validity | The table is prepared and updated through the semester based on information provided by the candidate on the clinical application, the receiving district, the school site, the Placements Coordinator, the University Coach, and the Program Coordinator. Because the table information is received from various sources, the Office of Clinical Practice and the Placements Coordinator cross-check and verify the information before entering it onto the table. This cross-checking and verification throughout the semester indicates that the information derived from the table is valid. |
Evidence (or plans) regarding reliability | The candidate, the district receiving the candidate, Office of Clinical Practice, the Placements Coordinator, the University Coach and the Credential Coordinator verify the information on the table. Further, the Credential Coordinator verifies that the placement of each Education Specialist candidate matches both the credential specialization that each candidate has chosen and the credentialing of the mentor teacher, i.e., that Mild/Moderate candidates are placed in Mild/Moderate settings with credentialed Mild/Moderate (or equivalent) mentor teachers and that Moderate/Severe candidates are placed in Moderate/Severe settings with credentialed Moderate/Severe (or equivalent) mentor teachers. Note: California changes the title of special education credentials with each new set of standards and credential programs, thus a candidate may be placed in a classroom with a mentor teacher with a like credential of a different title. |
Evidence (or plans) regarding fairness | All candidates enrolled in clinical courses each semester are required to submit a clinical application within the application window in the Tk20 system the semester prior to the current placement. All candidates are required to enter accurate personal information. District placement personnel also provide the Office of Clinical Practice with accurate information regarding the receiving school site, mentor teacher, and administrator. The data derived are accessible, interoperable, and reusable indicating a high level of fairness. |
Evidence regarding trustworthiness | The information provided to the Office of Clinical Practice is based on the same criteria and selection process. One point person is designated from each district or county office of education to provide and monitor the placements for the district. Likewise, our Placements Coordinator, Dr. Mercado, is the designated point person from the university who works with the districts to secure the best placement for each candidate, change the candidate’s placement if needed, and collaborate with each district. One voice from each entity ensures that the information shared is accurate, to the best of each one’s ability. |
Quantitative Data Measure: Program Alumni Survey (pilot) | |
Description of Measure | The pilot survey consisted of survey items addressing educational specialist pathways, phase status, and topics/areas/skills needed in the field on which students would have liked to have had more time. The purpose of the survey was to support program changes and improvement of course offerings to support student development. |
Evidence (or plans) regarding validity | The survey items addressed student perceptions of need in multiple areas of special education in order to inform the special education credential program about how to better balance student need and certification requirements. The survey measured what it intended to measure, suggesting the data acquired are valid. |
Evidence (or plans) regarding reliability | Since the survey is being piloted (one semester of data so far), there is not enough data to support reliability. |
Evidence (or plans) regarding fairness | Survey items consist of basic demographic information regarding credential type and phase in the program, four-point Likert-type items prompting candidates to choose levels of need based on topic areas within the field, and an open-ended prompt for candidates to offer suggestions. The data are findable, accessible, interoperable, and reusable, which suggests a high level of fairness. |
Evidence regarding trustworthiness | The validity of the data suggests trustworthiness; however, since reliability of the instrument has yet to be established, the trustworthiness of the data remains in question. |
Quantitative Data Measure: Functional Behavioral Assessment (FBA) and Behavior Intervention Plan (BIP) | |
Description of Measure | The purpose of the FBA/BIP plan is to develop candidate skill in conducting an FBA and creating a BIP for one student identified as having challenging behaviors. The assignment consists of three phases: 1) conducting an FBA, 2) writing a BIP, and 3) implementing the BIP/intervention, and is scored using the FBA-BIP rubric. The total point value of the assignment is 110. |
Evidence (or plans) regarding validity | The FBA phase is based on well-established practices within the field of Special Education, and students are scored based on the quality of the FBA. Likewise, the BIP phase is based on well-established practices within the field, and students are scored on the quality of the BIP as well as implementation of the plan, data collection and analysis, and reflection on the plan itself. Since the rubric is based on standards of common practice within the field, this would indicate scores derived from the rubric are valid. |
Evidence (or plans) regarding reliability | Average scores across four sections of the FBA/BIP assignment in Fall 2020 ranged from 94 to 104. Though a class average of 94 is at the lower end of an ‘A’, the overall class averages for the FBA/BIP assignment remained consistent across all sections: every class average corresponded to a grade of ‘A’ on the assignment, though individual scores did fall below the class average. Because all course sections showed class averages that were consistent with one another with low variability, the data derived from the scoring rubric were reliable. |
Evidence (or plans) regarding fairness | Everyone in the course (SPED 125) must complete the FBA/BIP assignment, and all assignments are scored in the same way. All the necessary steps in conducting an FBA and writing out a BIP are covered in the course before students engage with focus subjects. The rubric allows room for error, both for those who score with the rubric and for the assignments scored by it. The data derived are accessible, interoperable, and reusable, indicating a high level of fairness. |
Evidence regarding trustworthiness | The scoring rubric is based on standard practices within the field. The rubric measures what it says it measures, and as a result, the scores are considered valid. Additionally, consistent averages across course sections for the same assignments indicate good reliability of scores. These suggest a high rate of trustworthiness in the data. |
Quantitative Data Measure: Midterm and Final Fieldwork Evaluations | |
Description of Measure | Although at present we use an observation form available in TK20, we are in the process of transitioning to the New Teacher Project (TNTP) Core rubric, which Chico State adapted to align with the CTC Standards. |
Evidence (or plans) regarding validity | We are currently in the process of adopting the New Teacher Project (TNTP) Core Rubric, as adapted by Chico State to align with the CTC Standards. We selected this version of the rubric because it was specifically adapted to measure the standards required by the CTC for teacher preparation, making it a valid tool. |
Evidence (or plans) regarding reliability | The TNTP Core has been field-tested and adopted by universities throughout the United States. We also will be able to compare our results with Chico State, one of our sister CSU campuses. Within Fresno State, all university coaches who supervise student teaching will attend one 2-hour training on using the formative rubric per semester followed up by norming activities during monthly coach learning community sessions. Note: The initial implementation plan was slowed down due to COVID-19. |
Evidence (or plans) regarding fairness | This observation rubric was developed with four areas to help observers and teacher candidates focus on essential pillars of good teaching. We believe that this simple but comprehensive tool will better serve the needs of our teacher candidates, creating opportunities for specific feedback that will be more easily digested and internalized. TNTP Core also was developed with the foundational belief that all students can learn “rigorous material, regardless of socioeconomic status.” Kremen shares this belief. |
Evidence regarding trustworthiness | The four focal areas of the TNTP Core Rubric—culture of learning, essential content, academic ownership, and demonstration of learning—are all areas essential to high-quality instruction. The language used within the rubric is clear and direct and provides effective feedback for teacher candidates. Additionally, as a nationally-used tool, the rubric has been used across contexts and grade levels, demonstrating its versatility. As we are currently in the process of adopting this tool, we are also in the process of providing professional development for all coaches to ensure that it is used in a consistent way across programs. |
Quantitative Data Measure: Earned Grades of Course or Assignment | |
Description of Measure | Earned grades of a course or assignment |
Evidence (or plans) regarding validity | Aligned to course objectives and CCTC guidelines of mandated course content for expected Education Specialist outcomes. Aligned to course and assignment rubrics. |
Evidence (or plans) regarding reliability | Criteria for grade determination aligned to course rubrics used by all course instructors. Course expectations and rubrics published in the syllabus. |
Evidence (or plans) regarding fairness | All students have access to the grading rubric ahead of time, and grades are based on criteria stated in the syllabus. Any exceptions that differ from the syllabus are offered to all students. Students can access grades over the semester and can contact the instructor if questions occur. |
Evidence regarding trustworthiness | Grade measures should align with scores from other evaluation tools. |
Quantitative Data Measure: Fresno Assessment of Student Teachers II (FAST II) | |
Description of Measure | FAST II consists of two projects: the Site Visitation Project (SVP) is completed during initial student teaching (EHD 178) and the Teaching Sample Project (TSP) is completed during final student teaching (EHD 170). The SVP assesses teacher candidates’ ability to plan, implement, and evaluate instruction. The three parts of the project include (1) Planning: planning documentation for a single lesson incorporating state-adopted content standards and English language development, (2) Implementation: an in-person observation and videotaping of the teaching of the lesson, (3) Reflection: a review of the entire video, selection of a 3- to 5-minute video segment, and a written evaluation of the lesson. (TPE 1.1, 1.3, 1.5, 1.8, 2.2, 2.6, 3.1, 3.2, 3.3, 3.5, 4.1, 4.2, 4.7, 6.1). The Teaching Sample Project assesses teacher candidates’ ability to (a) identify the context of the classroom, (b) plan and teach a series of at least five cohesive lessons with a focus on content knowledge and literacy, (c) assess students’ learning related to the unit, (d) document their teaching and their students’ learning, and (e) reflect on the effectiveness of their teaching. Teacher candidates document how they are addressing the needs of all their students in the planning, teaching, and assessing of the content. (TPE 1.5, 1.6, 1.8, 2.1, 2.3, 2.6, 3.1, 3.2, 3.3, 4.1, 4.3, 4.4, 4.7, 5.1, 5.2, 5.5, 5.8, 6.1, 6.3, 6.5). |
Evidence (or plans) regarding validity | The SVP assesses the candidate’s ability to plan, implement, and reflect upon instruction. Each of these abilities is assessed with a performance task: the lesson plan (planning), teaching the lesson (implementation), and self-evaluation of the lesson (reflection). To assess the Teaching Performance Expectations (TPEs), each task has a rubric, and the rubrics share the same categories: subject-specific pedagogy, applying knowledge of students, and student engagement. The categories are rated on a 4-point scale (1-does not meet expectations, 2-meets expectations, 3-meets expectations at a high level, 4-exceeds expectations). The wording in the rubrics is adapted to each of the three specific tasks. Data from the FAST indicate that students are developing the competencies that are essential to effective classroom teaching practice. |
Evidence (or plans) regarding reliability | Every 2 years, a psychometric analysis of the Site Visitation Project (SVP) is performed. In our most recent analysis, 15% of SVPs were double scored; the two scorers gave the same score on 70% of those projects, were within +/-1 point on 100%, and agreed on the pass/fail determination for 94.7%. |
Evidence (or plans) regarding fairness | To monitor equity, the three subtests and the final score were examined as part of our psychometric analysis with regard to differences based on students’ ethnicity, gender, whether the student’s first language was English, the students’ self-rated degree of English language fluency on a 5-point Likert scale, and self-reported disability. To examine scoring equity, a series of non-parametric statistical tests was calculated to determine whether significant differences in scoring corresponded to students’ demographic characteristics. When examining the three subtests, only one comparison showed a statistically significant difference: the self-rated degree of English language fluency on the observation task. The statistical analyses for disability were not conducted because of a very small sample size (2 students self-reporting a disability); instead, those scores were tabulated and inspected, and all were passing. |
Evidence regarding trustworthiness | Developed over a number of years with the support of the Renaissance Group and a Title II grant, the FAST addresses each of California’s TPEs. Each assessment is scored by at least two faculty members, including the university coach assigned to mentor the teacher candidate. Mandatory calibration sessions are held annually, and all scorers must participate in the norming process each year. The inter-rater reliability is higher than the norm for such assessments. Moreover, students who fail the assessment have the opportunity to revise and resubmit. |
Quantitative Data Measure: CSU Year-One Completer Survey | |
Description of Measure | The California State University’s Education Quality Center (EdQ) oversees the administration of a survey of all individuals who completed a CSU teacher-preparation program, administered after their first year on the job. The survey is administered annually, April through July. In April, the EdQ Center emails an initial survey invitation to all completers of MS-SS-ES Credential Programs serving as first-year teachers in public schools, charter schools, or private schools in all locations. Follow-up reminders are sent every two weeks throughout the duration of the survey window. In addition to asking questions about the completer’s demographics and educational background, the survey also contains items to capture data about the school where the completer is employed. Additionally, the survey includes items asking about candidates’ perceptions of various aspects of the preparation program and the field placement experience. Campuses have access to annual results from the survey by utilizing the EdQ Dashboard. Results can be disaggregated by various measures including campus, year of completion, respondent race/ethnicity, and type of credential. Note: the CTC also distributes a Credential Program Completer Survey which gives an overall view of CA Educator Preparation Programs. |
Evidence (or plans) regarding validity | Used systemwide, the survey serves as a valid measure of graduates' perceptions of how well the teacher preparation program prepared them for their first year of teaching because it asks questions directly aligned with the California Teacher Performance Expectations and California Standards for the Teaching Profession. Additionally, the survey’s content is tailored to the type of program each respondent completed, making the content valid for each individual. For example, the survey for a Single Subject English teacher contains an item about how well the program prepared them to develop students' understanding and use of academic language and vocabulary, whereas the survey for a Single Subject Social Science teacher contains an item about how well the program prepared them to develop students' Historical Interpretation skills. Similarly, teachers with Multiple Subject credentials or Education Specialist credentials respond to items directly aligned to the standards associated with their credentials. All graduates respond to items asking about their preparation in general pedagogical skills, such as their perception of how well the program prepared them to differentiate instruction in the classroom. In this way, the survey is a valid measure of completers’ perceptions of the program. |
Evidence (or plans) regarding reliability | Uncertainty about evaluation findings comes from two principal sources: the number of evaluation participants and the extent of their concurrence with each other. The evaluation findings become increasingly certain to the extent that the questions are answered by increasing numbers of program completers and their employment supervisors. Each year the data set yields the percent of respondents who gave specified answers to each item and includes reliability estimates in the form of confidence intervals based on the number of respondents and the concurrence or homogeneity of responses. The CSU Deans of Education grouped questions into "composites" (e.g., Preparing for Equity and Diversity in Education) for a more reliable interpretation. The reliability for the composite scores for the system and the individual campuses generally ranges from 0 to 2 percentage points at the 90% confidence level. |
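The "0 to 2 percentage points at the 90% confidence level" figure can be read as the half-width of a confidence interval around the percentage of respondents endorsing a composite. As a rough illustration with hypothetical counts (the actual respondent numbers and interval method are not stated in the report), a normal-approximation interval for a proportion behaves as follows:

```python
import math

def ci_half_width(p_hat, n, z=1.645):
    """Half-width, in percentage points, of a normal-approximation
    confidence interval for a proportion; z=1.645 is the two-sided
    90% critical value. A sketch, not the EdQ Center's actual method."""
    return 100 * z * math.sqrt(p_hat * (1 - p_hat) / n)

# Hypothetical: 800 respondents, 90% agreeing on a composite item.
print(round(ci_half_width(0.90, 800), 1))  # prints 1.7
```

The example shows why campus-level composites with a few hundred respondents and fairly homogeneous answers land in the reported 0-2 percentage-point range, and why the interval tightens as more completers respond.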
Evidence (or plans) regarding fairness/trustworthiness | Data were not constructed with bias, and data show positive predictive value (statistical parity) among groups and support equalized odds. The existence of this CSU-wide service allows each campus to track the effects of program changes designed to improve performance. Because the instrument was designed and is implemented systemwide with completers throughout the state, we believe it is a fair and trustworthy measure. Fresno State has initiated a college-wide data summit to consider the findings of this statewide survey and triangulate them with campus data, including the percentage of First Generation students, access to resources like scholarships, and the culture and context of the cohorts in which prospective teachers are placed. Through this triangulation process, we are able to determine the alignment of the survey’s findings with our other measures, further assuring us of the survey’s trustworthiness as an instrument. In the process, we are also able to assess the impact of program changes on our own students with respect to the unique diversity of culture and needs in the Central Valley. |
Quantitative Data Measure: CSU Teacher Credential Program Completer Survey | |
Description of Measure | The California State University’s Education Quality Center (EdQ) oversees the administration of a completer-survey to exiting candidates of all CSU teacher-preparation programs. The survey is available year-round and campuses are encouraged to make completion of the survey a component of graduates’ final paperwork. The survey contains items asking about candidates’ perceptions of various aspects of the preparation program and the field placement experience. Campuses have access to annual results from the survey by utilizing the EdQ Dashboard. Results can be disaggregated by various measures including campus, year of completion, respondent race/ethnicity, and type of credential. Note: the CTC also distributes a Credential Program Completer Survey which gives an overall view of CA Educator Preparation Programs. |
Evidence (or plans) regarding validity | Used systemwide, the survey serves as a valid measure of program completers’ perceptions of the teacher preparation program because it asks questions directly aligned with the California Teacher Performance Expectations and California Standards for the Teaching Profession. Additionally, the survey’s content is tailored to the type of program each respondent completed, making the content valid for each individual. For example, the survey for a Single Subject English program completer contains an item about how well the program prepared them to develop students' understanding and use of academic language and vocabulary whereas the survey for a Single Subject Social Science program completer contains an item about how well the program prepared them to develop students' Historical Interpretation skills. All program completers respond to items asking about their preparation of general pedagogical skills, such as their perception of how well the program prepared them to differentiate instruction in the classroom. In this way, the survey is a valid measure of completers’ perceptions of the program. |
Evidence (or plans) regarding reliability | Uncertainty about evaluation findings comes from two principal sources: the number of evaluation participants and the extent of their concurrence with each other. The evaluation findings become increasingly certain to the extent that the questions are answered by increasing numbers of program completers and their employment supervisors. Each year the data set yields the percent of respondents who gave specified answers to each item and includes reliability estimates in the form of confidence intervals based on the number of respondents and the concurrence or homogeneity of responses. The CSU Deans of Education grouped questions into "composites" (e.g., Preparing for Equity and Diversity in Education) for a more reliable interpretation. The reliability for the composite scores for the system and the individual campuses generally ranges from 0 to 2 percentage points at the 90% confidence level. |
Evidence (or plans) regarding fairness/trustworthiness | The existence of this CSU-wide service allows each campus to track the effects of program changes designed to improve performance. Because the instrument was designed and is implemented systemwide with graduates throughout the state, we believe it is a fair and trustworthy measure. |
Quantitative Data Measure: SPED 246 Intervention Project | |
Description of Measure |
The Intervention Project is an assignment in SPED 246: Specialized Academic Instruction for Students with Mild/Moderate Disabilities. The majority of our students (84%) are in the Mild/Moderate pathway, and this course is in the final phase of both the credential and master’s-level coursework.
The Intervention Project is a culminating experience that requires candidates to focus on and provide specialized academic instruction to one or more students with disabilities with whom they work and who is/are struggling to learn, remember, and apply information that is taught in the general education and/or special education setting. |
Evidence (or plans) regarding validity | Assignment and rubric align with student learning outcomes and CCTC standards (see accreditation documents). Moving forward, program faculty will create new rubrics during Fall 2021/Spring 2022 to ensure the assignment and rubric are in line with the new CCTC Education Specialist requirements and standards. |
Evidence (or plans) regarding reliability | |
Evidence (or plans) regarding fairness/trustworthiness | We believe that the measure is fair and trustworthy because both the assignments and the rubric were created by program faculty who are familiar with the goals of the course. Additionally, the assignment and rubric are used consistently for all students enrolled in the SPED 246 course, regardless of instructor. |
Quantitative Data Measure: SPED 145 Instructional Plan Assignment | |
Description of Measure | The Instructional Plan assignment has students create a lesson plan using principles of differentiated instruction and universal design. The plan must also contain considerations for individualized accommodations/modifications for students, as well as a reflection on the planning process. |
Evidence (or plans) regarding validity | Assignment(s) align with student learning outcomes and CCTC standards (See accreditation documents) |
Evidence (or plans) regarding reliability | The assignment and rubric were created by program faculty, and all program faculty who teach this course use them. Moving forward, program faculty will create new rubrics during Fall 2021/Spring 2022 to meet the new CCTC Education Specialist requirements and standards. |
Evidence (or plans) regarding fairness/trustworthiness | All students have access to the assignment requirements and rubrics ahead of time, and analysis is reviewed during peer reviews for all courses. |
Quantitative Data Measure: SPED 145, Individualized Education Program Assignment | |
Description of Measure | The purpose of the Present Levels and Annual Goals assignment for the Individualized Education Program (IEP) is to prepare students for the day-to-day responsibilities of a special education teacher. In this assignment, students are given raw data regarding a student with disabilities. They write the present levels of performance, recommend potential accommodations/modifications, and write five (5) annual goals. |
Evidence (or plans) regarding validity | Assignment(s) align with student learning outcomes and CCTC standards (See accreditation documents) |
Evidence (or plans) regarding reliability | The assignment and rubric were created by program faculty, and all program faculty who teach this course use them. Moving forward, program faculty will create new rubrics during Fall 2021/Spring 2022 to meet the new CCTC Education Specialist requirements and standards. |
Evidence (or plans) regarding fairness/trustworthiness | All students have access to the assignment requirements and rubrics ahead of time, and analysis is reviewed during peer reviews for all courses. |
Quantitative Data Measure: SPED 125 Classroom Management Plan Assignment | |
Description of Measure | Teacher candidates demonstrate their ability to create and develop positive learning and work environments through the creation of a Classroom Management Plan. The goal of the Classroom Management Plan is to create a meaningful, active instructional environment where rules, routines, and expectations are clear, where more attention is given to desired behavior than to inappropriate behavior, and where inappropriate behavior is managed systematically, consistently, and equitably. Students complete the Classroom Management Plan according to the following steps: 1) develop a statement of purpose, 2) develop classroom rules, 3) develop classroom routines and teaching methods, and 4) develop an action plan. |
Evidence (or plans) regarding validity | Assignment(s) align with student learning outcomes and CCTC standards (See accreditation documents) |
Evidence (or plans) regarding reliability | The assignment and rubric were created by program faculty, and all program faculty who teach this course use them. Moving forward, program faculty will create new rubrics during Fall 2021/Spring 2022 to meet the new CCTC Education Specialist requirements and standards. |
Evidence (or plans) regarding fairness/trustworthiness | All students have access to the assignment requirements and rubrics ahead of time, and analysis is reviewed during peer reviews for all courses. |
Quantitative Data Measure: SPED 219, Collaboration Assignment | |
Description of Measure | Candidates complete the Collaboration Assignment in SPED 219 (Effective Communication and Collaborative Partnerships), a course required for all candidates in Special Education. The focus of this course is on the development of materials, strategies, and skills that enable individuals on the educational team to work effectively and positively with students with a range of disabilities. |
Evidence (or plans) regarding validity | Assignment(s) align with student learning outcomes and CCTC standards (See accreditation documents) |
Evidence (or plans) regarding reliability | The assignment and rubric were created by program faculty, and all program faculty who teach this course use them. Moving forward, program faculty will create new rubrics during Fall 2021/Spring 2022 to meet the new CCTC Education Specialist requirements and standards. |
Evidence (or plans) regarding fairness/trustworthiness | All students have access to the assignment requirements and rubrics ahead of time, and analysis is reviewed during peer reviews for all courses. |
Quantitative Data Measure: Post-Dispositions Survey | |
Description of Measure | Candidates evaluate their own progress on six broad professional dispositions in each of their three clinical experiences through the Pre- and Post-Dispositions Surveys. The professional dispositions are Reflection, Critical Thinking, Professional Ethics, Valuing Diversity, Collaboration, and Life-long Learning. Each of the six dispositions is subdivided into descriptors on which candidates self-assess their progress. The Post-Dispositions Survey data are collected in candidates’ culminating clinical experience. These data provide our program with the candidates’ perception of their progress on some of the behaviors required for successful professional practice. The data collected from our data system, Tk20, were available only for spring 2019, fall 2019, spring 2020, and spring 2021. The reason that data are available for only those semesters is unknown; however, it may have to do with a change in the Tk20 binder format and forms. |
Evidence (or plans) regarding validity | Assignment(s) align with student learning outcomes and CCTC standards (See accreditation documents) |
Evidence (or plans) regarding reliability | The survey and associated rubric were created by program faculty and are used consistently across the program. Moving forward, program faculty will create new rubrics during Fall 2021/Spring 2022 to meet the new CCTC Education Specialist requirements and standards. |
Evidence (or plans) regarding fairness/trustworthiness | All students have access to the assignment requirements and rubrics ahead of time, and analysis is reviewed during peer reviews for all courses. |