
AAQEP Accreditation 2022

Appendix E: Evidence of Data Quality

Educational Specialist Credential Program

Quantitative Data Measure: Student Teaching Placement Demographics, 2019-2021
Description of Measure The Placement Demographics table consists of information on the placement of each Education Specialist student teacher and intern enrolled in clinical practice for a designated semester. A table is created and archived each semester by the Office of Clinical Practice. The table includes the candidate’s district of placement, the school site, the names and email addresses of the mentor/master teacher and the site administrator, the University Coach assigned to the candidate, the placement grade levels, and the type of special education placement (e.g., RSP, SDC, adult transition, center-based, inclusion).
Evidence (or plans) regarding validity The table is prepared and updated throughout the semester based on information provided by the candidate on the clinical application, the receiving district, the school site, the Placements Coordinator, the University Coach, and the Program Coordinator. Because the information comes from multiple sources, the Office of Clinical Practice and the Placements Coordinator cross-check and verify it before entering it into the table. This ongoing cross-checking and verification throughout the semester indicates that the information derived from the table is valid.
Evidence (or plans) regarding reliability The candidate, the district receiving the candidate, the Office of Clinical Practice, the Placements Coordinator, the University Coach, and the Credential Coordinator verify the information on the table. Further, the Credential Coordinator verifies that the placement of each Education Specialist candidate matches both the credential specialization the candidate has chosen and the credentialing of the mentor teacher, i.e., that Mild/Moderate candidates are placed in Mild/Moderate settings with credentialed Mild/Moderate (or equivalent) mentor teachers and that Moderate/Severe candidates are placed in Moderate/Severe settings with credentialed Moderate/Severe (or equivalent) mentor teachers. Note: California changes the titles of special education credentials with each new set of standards and credential programs, so a candidate may be placed in a classroom with a mentor teacher who holds a like credential under a different title.
Evidence (or plans) regarding fairness All candidates enrolled in clinical courses each semester are required to submit a clinical application within the application window in the Tk20 system during the semester prior to the placement. All candidates are required to enter accurate personal information. District placement personnel also provide the Office of Clinical Practice with accurate information regarding the receiving school site, mentor teacher, and administrator. The data derived are accessible, interoperable, and reusable, indicating a high level of fairness.
Evidence regarding trustworthiness The information provided to the Office of Clinical Practice is based on the same criteria and selection process. One point person is designated from each district or county office of education to provide and monitor the placements for the district. Likewise, our Placements Coordinator, Dr. Mercado, is the designated point person from the university who works with the districts to secure the best placement for each candidate, change a candidate’s placement if needed, and collaborate with each district. One voice from each entity ensures that the information shared is accurate, to the best of each one’s ability.

Quantitative Data Measure: Program Alumni Survey (pilot)
Description of Measure The pilot survey consisted of items addressing Education Specialist pathways, phase status, and the topics, areas, and skills needed in the field on which students would have liked more time. The purpose of the survey was to support program changes and improve course offerings that support student development.
Evidence (or plans) regarding validity The survey items addressed student perceptions of need in multiple areas of special education in order to inform the special education credential program about how to better balance student need and certification requirements. The survey measured what it intended to measure, suggesting the data acquired are valid.
Evidence (or plans) regarding reliability Since the survey is being piloted (one semester of data so far), there is not enough data to support reliability.
Evidence (or plans) regarding fairness Survey items consist of basic demographic information regarding credential type and phase in the program, four-point Likert-type items prompting candidates to choose levels of need based on topic areas within the field, and an open-ended prompt for candidates to offer suggestions. The data are findable, accessible, interoperable, and reusable, which suggests a high level of fairness.
Evidence regarding trustworthiness The validity of the data suggests trustworthiness; however, since reliability of the instrument has yet to be established, the trustworthiness of the data remains in question. 

Quantitative Data Measure: Functional Behavioral Assessment (FBA) and Behavior Intervention Plan (BIP) 
Description of Measure The purpose of the FBA/BIP assignment is to develop candidate skill in conducting an FBA and creating a BIP for one student identified as having challenging behaviors. The assignment consists of three phases: 1) conducting an FBA, 2) writing a BIP, and 3) implementing the BIP/intervention, and is scored using the FBA-BIP rubric. The total point value of the assignment is 110.
Evidence (or plans) regarding validity The FBA phase is based on well-established practices within the field of Special Education, and students are scored based on the quality of the FBA. Likewise, the BIP phase is based on well-established practices within the field, and students are scored on the quality of the BIP as well as implementation of the plan, data collection and analysis, and reflection on the plan itself. Since the rubric is based on standards of common practice within the field, this would indicate scores derived from the rubric are valid.
Evidence (or plans) regarding reliability Average scores across four sections of the FBA/BIP assignment in Fall 2020 ranged from 94 to 104. Although a class average of 94 sits at the lower end of the ‘A’ range, the class averages for the assignment were consistent across all sections: every section’s average corresponded to a grade of ‘A’, even though individual scores did fall below the class average. Because the section averages were consistent with one another and showed low variability, the data derived from the scoring rubric were reliable.
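For illustration, the consistency of these section averages can be checked with a short computation; the section averages below are hypothetical placeholders within the reported 94-104 range, not the actual Fall 2020 figures.

    # Sketch of the cross-section consistency check described above. The section
    # averages are hypothetical placeholders within the reported 94-104 range,
    # not the actual Fall 2020 data.
    section_averages = [94.0, 99.5, 101.0, 104.0]  # mean FBA/BIP score per section
    TOTAL_POINTS = 110

    grand_mean = sum(section_averages) / len(section_averages)
    spread = max(section_averages) - min(section_averages)

    print(f"Grand mean across sections: {grand_mean:.1f} / {TOTAL_POINTS}")
    print(f"Range of section averages: {spread:.1f} points "
          f"({spread / TOTAL_POINTS:.1%} of the total)")
    # Section means clustered in a narrow band near the top of the scale is the
    # consistency the program reads as evidence that rubric scoring is reliable.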
Evidence (or plans) regarding fairness Everyone in the course (SPED 125) must complete the FBA/BIP assignment, and all assignments are scored in the same way. All the necessary steps in conducting an FBA and writing a BIP are covered in the course before students engage with their focus subjects. The rubric allows room for error, both for those who score with the rubric and for the assignments scored by it. The data derived are accessible, interoperable, and reusable, indicating a high level of fairness.
Evidence regarding trustworthiness The scoring rubric is based on standard practices within the field. The rubric measures what it says it measures, and as a result, the scores are considered valid. Additionally, consistent averages across course sections for the same assignments indicate good reliability of scores. These suggest a high rate of trustworthiness in the data.

Quantitative Data Measure: Midterm and Final Fieldwork Evaluations
Description of Measure Although at present we use an observation form available in Tk20, we are in the process of transitioning to the New Teacher Project (TNTP) Core rubric, which Chico State adapted to align with the CTC Standards.
Evidence (or plans) regarding validity We are currently in the process of adopting the New Teacher Project (TNTP) Core Rubric, as adapted by Chico State to align with the CTC Standards. We selected this version of the rubric because it was specifically adapted to measure the standards required by the CTC for teacher preparation, making it a valid tool.
Evidence (or plans) regarding reliability The TNTP Core has been field-tested and adopted by universities throughout the United States. We will also be able to compare our results with those of Chico State, one of our sister CSU campuses. Within Fresno State, all university coaches who supervise student teaching will attend one 2-hour training per semester on using the formative rubric, followed by norming activities during monthly coach learning community sessions. Note: The initial implementation plan was slowed by COVID-19.
Evidence (or plans) regarding fairness This observation rubric was developed with four areas to help observers and teacher candidates focus on essential pillars of good teaching. We believe that this simple but comprehensive tool will better serve the needs of our teacher candidates, creating opportunities for specific feedback that will be more easily digested and internalized. TNTP Core also was developed with the foundational belief that all students can learn “rigorous material, regardless of socioeconomic status.” Kremen shares this belief. 
Evidence regarding trustworthiness The four focal areas of the TNTP Core Rubric—culture of learning, essential content, academic ownership, and demonstration of learning—are all essential to high-quality instruction. The language used within the rubric is clear and direct and provides effective feedback for teacher candidates. Additionally, as a nationally used tool, the rubric has been applied across contexts and grade levels, demonstrating its versatility. As we adopt this tool, we are also providing professional development for all coaches to ensure that it is used in a consistent way across programs.

Quantitative Data Measure: Earned Grades of Course or Assignment
Description of Measure Earned grades of a course or assignment
Evidence (or plans) regarding validity Aligned to course objectives and CCTC guidelines of mandated course content for expected Education  Specialist outcomes.
Aligned to course and assignment rubrics.
Evidence (or plans) regarding reliability Criteria for grade determination aligned to course rubrics used by all course instructors.
Course expectations and rubrics published in the syllabus.
Evidence (or plans) regarding fairness All students have access to the grading rubric ahead of time, and grades are based on criteria stated in the syllabus. Any exceptions that differ from the syllabus are offered to all students. Students can access their grades throughout the semester and can contact the instructor if questions occur.
Evidence regarding trustworthiness Grade measures should align with scores from other evaluation tools.

Quantitative Data Measure: Fresno Assessment of Student Teachers II (FAST II)
Description of Measure FAST II consists of two projects: the Site Visitation Project (SVP) is completed during initial student teaching (EHD 178) and the Teaching Sample Project (TSP) is completed during final student teaching (EHD 170). The SVP assesses teacher candidates’ ability to plan, implement, and evaluate instruction. The three parts of the project include (1) Planning: planning documentation for a single lesson incorporating state-adopted content standards and English language development, (2) Implementation: an in-person observation and videotaping of the teaching of the lesson, (3) Reflection: a review of the entire video, selection of a 3- to 5-minute video segment, and a written evaluation of the lesson. (TPE 1.1, 1.3, 1.5, 1.8, 2.2, 2.6, 3.1, 3.2, 3.3, 3.5, 4.1, 4.2, 4.7, 6.1). The Teaching Sample Project assesses teacher candidates’ ability to (a) identify the context of the classroom, (b) plan and teach a series of at least five cohesive lessons with a focus on content knowledge and literacy, (c) assess students’ learning related to the unit, (d) document their teaching and their students’ learning, and (e) reflect on the effectiveness of their teaching. Teacher candidates document how they are addressing the needs of all their students in the planning, teaching, and assessing of the content. (TPE 1.5, 1.6, 1.8, 2.1, 2.3, 2.6, 3.1, 3.2, 3.3, 4.1, 4.3, 4.4, 4.7, 5.1, 5.2, 5.5, 5.8, 6.1, 6.3, 6.5). 
Evidence (or plans) regarding validity The SVP assesses the candidate’s ability to plan, implement, and reflect upon instruction. Each of these abilities is assessed with a performance task: the lesson plan (planning), teaching the lesson (implementation), and self-evaluation of the lesson (reflection). To assess the Teaching Performance Expectations (TPEs), each task has a rubric, and the rubrics share the same categories: subject-specific pedagogy, applying knowledge of students, and student engagement. The categories are rated on a 4-point scale (1 = does not meet expectations, 2 = meets expectations, 3 = meets expectations at a high level, 4 = exceeds expectations). The wording in the rubrics is adapted to each of the three specific tasks. Data from the FAST indicate that students are developing the competencies that are essential to effective classroom teaching practice.
Evidence (or plans) regarding reliability Every two years, a psychometric analysis of the Site Visitation Project (SVP) is performed. In our most recent analysis, 15% of SVPs were double scored; the two scorers gave the same score 70% of the time, 100% of paired scores were within +/- 1 point, and scorers agreed on whether the SVP should pass 94.7% of the time.
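As an illustration of how these double-scoring agreement figures can be derived (a minimal sketch; the score pairs and passing cutoff below are hypothetical placeholders, not actual SVP data):

    # Minimal sketch of the double-scoring agreement analysis described above.
    # The score pairs and passing cutoff are hypothetical placeholders,
    # not actual SVP data.

    # Each tuple is (score from rater 1, score from rater 2) on the same SVP.
    double_scored = [(3, 3), (2, 2), (3, 4), (2, 3), (4, 4), (1, 2), (3, 3)]
    PASSING_CUTOFF = 2  # hypothetical: scores >= 2 count as passing

    n = len(double_scored)
    exact = sum(1 for a, b in double_scored if a == b)
    adjacent = sum(1 for a, b in double_scored if abs(a - b) <= 1)
    pass_agree = sum(1 for a, b in double_scored
                     if (a >= PASSING_CUTOFF) == (b >= PASSING_CUTOFF))

    print(f"Exact agreement:     {exact / n:.1%}")
    print(f"Within +/- 1 point:  {adjacent / n:.1%}")
    print(f"Pass/fail agreement: {pass_agree / n:.1%}")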
Evidence (or plans) regarding fairness To monitor equity, the three subtests and the final score were examined as part of our psychometric analysis for differences based on students’ ethnicity, gender, whether the student’s first language was English, the student’s self-rated degree of English language fluency on a 5-point Likert scale, and self-reported disability. To examine scoring equity, a series of non-parametric statistical tests was calculated to determine whether significant differences in scoring corresponded to students’ demographic characteristics. Across the three subtests, only one comparison showed a statistically significant difference: the self-rated degree of English language fluency on the observation task. The statistical analyses for disability were not conducted because of a very small sample size (two students self-reporting a disability); their scores were tabulated and inspected, and all were passing.
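A sketch of how such a non-parametric equity check might be run is shown below; the group labels, scores, and the choice of a Kruskal-Wallis test are illustrative assumptions, since the specific tests used in the program’s analysis are not detailed here.

    # Hedged sketch of a non-parametric scoring-equity check like the one
    # described above. The group labels, scores, and the choice of a
    # Kruskal-Wallis test are illustrative assumptions, not the program's
    # actual data or procedure.
    from scipy.stats import kruskal

    # Hypothetical observation-task scores grouped by self-reported ethnicity.
    scores_by_group = {
        "Group A": [3, 4, 3, 2, 4, 3],
        "Group B": [3, 2, 3, 3, 4, 2],
        "Group C": [4, 3, 2, 3, 3, 4],
    }

    h_stat, p_value = kruskal(*scores_by_group.values())
    print(f"Kruskal-Wallis H = {h_stat:.2f}, p = {p_value:.3f}")
    # A p-value below the chosen alpha (e.g., .05) would flag a statistically
    # significant scoring difference across groups for follow-up review.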
Evidence regarding trustworthiness Developed over a number of years with the support of the Renaissance Group and a Title II grant, the FAST addresses each of California’s TPEs. Each assessment is scored by at least two faculty members, including the university coach assigned to mentor the teacher candidate. Mandatory calibration sessions are held annually, and all scorers must participate in the norming process each year. The inter-rater reliability is higher than the norm for such assessments. Moreover, students who fail the assessment have the opportunity to revise and resubmit.

Quantitative Data Measure: CSU Year-One Completer Survey
Description of Measure The California State University’s Education Quality Center (EdQ) oversees the administration of a survey of all individuals who completed a CSU teacher-preparation program, after their first year on the job. The survey is administered annually from April through July. In April, the EdQ Center emails an initial survey invitation to all completers of MS-SS-ES Credential Programs serving as first-year teachers in public schools, charter schools, or private schools in all locations. Follow-up reminders are sent every two weeks throughout the survey window.
In addition to asking questions about the completer’s demographics and educational  background, the survey also contains items to capture data about the school where the completer is employed. Additionally, the survey includes items asking about candidates’ perceptions of various aspects of the preparation program and the field placement experience.  Campuses have access to annual results from the survey by utilizing the EdQ Dashboard. Results can be disaggregated by various measures including campus, year of completion,  respondent race/ethnicity, and type of credential. Note: the CTC also distributes a Credential Program Completer Survey which gives an overall view of CA Educator Preparation Programs. 
Evidence (or plans) regarding validity Used systemwide, the survey serves as a valid measure of graduates’ perceptions of how well the teacher preparation program prepared them for their first year of teaching because it asks questions directly aligned with the California Teacher Performance Expectations and California Standards for the Teaching Profession.
Additionally, the survey’s content is tailored to the type of program each respondent completed, making the content valid for each individual. For example, the survey for a Single Subject English teacher contains an item about how well the program prepared them to develop students’ understanding and use of academic language and vocabulary, whereas the survey for a Single Subject Social Science teacher contains an item about how well the program prepared them to develop students’ historical interpretation skills. Similarly, teachers with Multiple Subject or Education Specialist credentials respond to items directly aligned with the standards associated with their credentials.
All graduates respond to items asking about their preparation in general pedagogical skills, such as their perception of how well the program prepared them to differentiate instruction in the classroom. In this way, the survey is a valid measure of completers’ perceptions of the program.
Evidence (or plans) regarding reliability Uncertainty about evaluation findings comes from two principal sources: the number of evaluation participants and the extent of their concurrence with each other. The evaluation findings become increasingly certain to the extent that the questions are answered by increasing numbers of program completers and their employment supervisors.
Each year the data set yields the percent of respondents who gave specified answers to each item and includes reliability estimates in the form of confidence intervals based on the number of respondents and the concurrence or homogeneity of responses. The CSU Deans of Education grouped questions into "composites" (e.g., Preparing for Equity and Diversity in Education) for a more reliable interpretation. The reliability estimates for the composite scores for the system and the individual campuses generally range from 0 to 2 percentage points at the 90% confidence level.
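As a sketch of how a confidence interval of this kind can be computed for a single item (the counts are hypothetical, and the normal-approximation formula is an illustrative choice; the EdQ Center’s exact procedure is not specified here):

    # Sketch of a 90% confidence interval for the percent of respondents giving
    # a specified answer to one item. The counts are hypothetical, and the
    # normal-approximation formula is an illustrative choice; the EdQ Center's
    # actual procedure is not documented here.
    import math

    def proportion_ci_90(successes: int, n: int) -> tuple[float, float]:
        """Return (estimate, margin of error) in percentage points at ~90% confidence."""
        z = 1.645  # two-sided 90% critical value for the normal approximation
        p = successes / n
        margin = z * math.sqrt(p * (1 - p) / n)
        return p * 100, margin * 100

    estimate, margin = proportion_ci_90(successes=180, n=200)
    print(f"{estimate:.1f}% +/- {margin:.1f} percentage points (90% CI)")
    # Larger samples and more homogeneous responses shrink the margin, which is
    # the sense in which findings "become increasingly certain."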
Evidence (or plans) regarding fairness/trustworthiness The data were not constructed with bias; they show parity in positive predictive value among groups and support equalized odds.
The existence of this CSU-wide service allows each campus to track  the effects of program changes designed to improve performance. Because the instrument was designed and is implemented systemwide  with completers throughout the state, we believe it is a fair and  trustworthy measure. 
Fresno State has initiated a college-wide data summit to consider the findings of this statewide survey and triangulate them with campus data, including the percentage of First Generation students, access to resources such as scholarships, and the culture and context of the cohorts in which prospective teachers are placed. Through this triangulation process, we are able to determine the alignment of the survey findings with our other measures, further assuring us of the survey’s trustworthiness as an instrument. In the process, we are also able to assess the impact of program changes on our own students with respect to the unique diversity of culture and needs in the Central Valley.

Quantitative Data Measure: CSU Teacher Credential Program Completer Survey
Description of Measure The California State University’s Education Quality Center (EdQ) oversees the administration of a completer survey to exiting candidates of all CSU teacher-preparation programs. The survey is available year-round, and campuses are encouraged to make completion of the survey a component of graduates’ final paperwork. The survey contains items asking about candidates’ perceptions of various aspects of the preparation program and the field placement experience. Campuses have access to annual results from the survey by utilizing the EdQ Dashboard. Results can be disaggregated by various measures, including campus, year of completion, respondent race/ethnicity, and type of credential. Note: the CTC also distributes a Credential Program Completer Survey, which gives an overall view of CA Educator Preparation Programs.
Evidence (or plans) regarding validity Used systemwide, the survey serves as a valid measure of program completers’ perceptions of the teacher preparation program because it asks questions directly aligned with the California Teacher Performance Expectations and California Standards for the Teaching Profession. Additionally, the survey’s content is tailored to the type of program each respondent completed, making the content valid for each individual. For example, the survey for a Single Subject English program completer contains an item about how well the program prepared them to develop students’ understanding and use of academic language and vocabulary, whereas the survey for a Single Subject Social Science program completer contains an item about how well the program prepared them to develop students’ historical interpretation skills. All program completers respond to items asking about their preparation in general pedagogical skills, such as their perception of how well the program prepared them to differentiate instruction in the classroom. In this way, the survey is a valid measure of completers’ perceptions of the program.
Evidence (or plans) regarding reliability Uncertainty about evaluation findings comes from two principal sources: the number of evaluation participants and the extent of their concurrence with each other. The evaluation findings become increasingly certain to the extent that the questions are answered by increasing numbers of program completers and their employment supervisors. Each year the data set yields the percent of respondents who gave specified answers to each item and includes reliability estimates in the form of confidence intervals based on the number of respondents and the concurrence or homogeneity of responses. The CSU Deans of Education grouped questions into "composites" (e.g., Preparing for Equity and Diversity in Education) for a more reliable interpretation. The reliability estimates for the composite scores for the system and the individual campuses generally range from 0 to 2 percentage points at the 90% confidence level.
Evidence (or plans) regarding fairness/trustworthiness The existence of this CSU-wide service allows each campus to track the effects of program changes designed to improve performance. Because the instrument was designed and is implemented systemwide with graduates throughout the state, we believe it is a fair and trustworthy measure.
Fresno State has initiated a college-wide data summit to consider the findings of this statewide survey and triangulate them with campus data, including the percentage of First Generation students, access to resources such as scholarships, and the culture and context of the cohorts in which prospective teachers are placed. Through this triangulation process, we are able to determine the alignment of the survey findings with our other measures, further assuring us of the survey’s trustworthiness as an instrument. In the process, we are also able to assess the impact of program changes on our own students with respect to the unique diversity of culture and needs in the Central Valley.

Quantitative Data Measure: SPED 246 Intervention Project
Description of Measure The Intervention Project is an assignment in SPED 246: Specialized Academic Instruction for Students with Mild/Moderate Disabilities. The majority of our students (84%) are in the Mild/Moderate pathway, and this course is in the final phase of both the credential and master’s level coursework. In this course, candidates learn:

  • Appropriate methodology for the development, monitoring, and coordination of the Individualized Education Program (IEP)
  • Methods for transition planning for students in grades TK-adult
  • An array of research-based strategies that address specialized academic instruction for students with diverse learning needs, including emergent bilingual learners. 

The Intervention Project is a culminating experience that requires candidates to focus on and provide specialized academic instruction to one or more students with disabilities with whom they work and who is/are struggling to learn, remember, and apply information that is taught in the general education and/or special education setting.

Evidence (or plans) regarding validity The assignment and rubric align with student learning outcomes and CCTC standards (see accreditation documents).
Moving forward, program faculty will create new rubrics during Fall 2021/Spring 2022 to ensure the assignment and rubric are in line with the new CCTC Education Specialist requirements and standards.
Evidence (or plans) regarding reliability  
Evidence (or plans) regarding fairness/trustworthiness We believe that the measure is fair and trustworthy because both the assignment and the rubric were created by program faculty who are familiar with the goals of the course. Additionally, the assignment and rubric are used consistently for all students enrolled in the SPED 246 course, regardless of instructor.

Quantitative Data Measure: SPED 145 Instructional Plan Assignment
Description of Measure The Instructional Plan assignment requires students to create a lesson plan using principles of differentiated instruction and universal design. The plan must also contain considerations for individualized accommodations/modifications for students, as well as a reflection on the planning process.
Evidence (or plans) regarding validity Assignment(s) align with student learning outcomes and CCTC standards (See accreditation documents)
Evidence (or plans) regarding reliability Assignment and rubric were created by program faculty. All program faculty who teach this course use this assignment and rubric. 
Moving forward, program faculty will create new rubrics during Fall 2021/Spring 2022 to meet the new CCTC Education Specialist requirements and standards.
Evidence (or plans) regarding fairness/trustworthiness All students have access to the assignment requirements and rubrics ahead of time. Analysis is reviewed during peer reviews for all courses.

Quantitative Data Measure: SPED 145, Individualized Education Program Assignment
Description of Measure The purpose of the Present Levels and Annual Goals for the Individualized Education Program assignment is to prepare students for the nuts and bolts of the job of the special education teacher. In this assignment, students are given raw data regarding a student with disabilities. They write the present levels of performance, recommend potential accommodations/modifications, and write five (5) annual goals.
Evidence (or plans) regarding validity Assignment(s) align with student learning outcomes and CCTC standards (See accreditation documents)
Evidence (or plans) regarding reliability Assignment and rubric were created by program faculty. All program faculty who teach this course use this assignment and rubric. 
Moving forward, program faculty will create new rubrics during Fall 2021/Spring 2022 to meet the new CCTC Education Specialist requirements and standards.
Evidence (or plans) regarding fairness/trustworthiness All students have access to the assignment requirements and rubrics ahead of time. Analysis is reviewed during peer reviews for all courses.

Quantitative Data Measure: SPED 125 Classroom Management Plan Assignment 
Description of Measure Student teacher candidates demonstrate the creation and development of positive learning and work environments by creating a Classroom Management Plan. The goal of the Classroom Management Plan is to create a meaningful, active instructional environment where rules, routines, and expectations are clear, where more attention is given to desired behavior than to inappropriate behavior, and where inappropriate behavior is managed systematically, consistently, and equitably. Students complete the Classroom Management Plan according to the following steps: 1) develop a statement of purpose, 2) develop classroom rules, 3) develop classroom routines and teaching methods, and 4) develop an action plan.
Evidence (or plans) regarding validity Assignment(s) align with student learning outcomes and CCTC standards (See accreditation documents)
Evidence (or plans) regarding reliability Assignment and rubric were created by program faculty. All program faculty who teach this course use this assignment and rubric. 
Moving forward, program faculty will create new rubrics during Fall 2021/Spring 2022 to meet the new CCTC Education Specialist requirements and standards.
Evidence (or plans) regarding fairness/trustworthiness All students have access to the assignment requirements and rubrics ahead of time. Analysis is reviewed during peer reviews for all courses.

Quantitative Data Measure: SPED 219, Collaboration Assignment 
Description of Measure Completion of the Collaboration Assignment in SPED 219 (Effective Communication and Collaborative Partnerships), a course required for all candidates in Special Education. The focus of this course is on the development of materials, strategies, and skills that enable individuals on the educational team to work effectively and positively with students with a range of disabilities.
Evidence (or plans) regarding validity Assignment(s) align with student learning outcomes and CCTC standards (See accreditation documents)
Evidence (or plans) regarding reliability Assignment and rubric were created by program faculty. All program faculty who teach this course use this assignment and rubric. 
Moving forward, program faculty will create new rubrics during Fall 2021/Spring 2022 to meet the new CCTC Education Specialist requirements and standards.
Evidence (or plans) regarding fairness/trustworthiness All students have access to the assignment requirements and rubrics ahead of time. Analysis is reviewed during peer reviews for all courses.

Quantitative Data Measure: Post-Dispositions Survey
Description of Measure Candidates evaluate their own progress on six broad professional dispositions in each of their three clinical experiences through the Pre- and Post-Dispositions Surveys. The professional dispositions are Reflection, Critical Thinking, Professional Ethics, Valuing Diversity, Collaboration, and Life-long Learning. Each of the six dispositions is subdivided into descriptors on which candidates self-assess their progress. The Post-Dispositions Survey data are collected in candidates’ culminating clinical experience. These data provide our program with the candidates’ perceptions of their progress on some of the behaviors required for successful professional practice. Data from our data system, Tk20, were available only for spring 2019, fall 2019, spring 2020, and spring 2021. The reason data are available for only those semesters is unknown; it may be related to a change in the Tk20 binder format and forms.
Evidence (or plans) regarding validity Assignment(s) align with student learning outcomes and CCTC standards (See accreditation documents)
Evidence (or plans) regarding reliability Assignment and rubric were created by program faculty. All program faculty who teach this course use this assignment and rubric. 
Moving forward, program faculty will create new rubrics during Fall 2021/Spring 2022 to meet the new CCTC Education Specialist requirements and standards.
Evidence (or plans) regarding fairness/trustworthiness All students have access to the assignment requirements and rubrics ahead of time. Analysis is reviewed during peer reviews for all courses.
