
AAQEP Accreditation 2022

Standard 3: Aspect E

Preparation programs ensure that candidates, upon completion, are ready to engage in professional practice, to adapt to a variety of professional settings, and to grow throughout their careers. Effective program practices include: consistent offering of coherent curricula; high-quality, diverse clinical experiences; dynamic, mutually beneficial partnerships with stakeholders; and comprehensive and transparent quality assurance processes informed by trustworthy evidence. Each aspect of the program is appropriate to its context and to the credential or degree sought.


Internal Audit Process:
Various stakeholders of the Multiple Subject Program (e.g., Kremen Advisors, Kremen Admissions Analyst, Fresno Assessment of Student Teachers (FAST) Coordinator, Office of Clinical Practice Program Assistant, Credential Analyst, Professors-in-Residence) are contacted via email by the program coordinator to learn about their role in supporting a sample of credential students who completed the program, selected by pathway and by variation in the length of time it took them to complete the program (Appendix D). The review is specific to understanding students' experience from the start to the finish of the program, and draws on artifacts and data sources to produce a process map of a program completer's time in the program:

  • Recruitment Process (Kremen Advisors and Professors-in-Residence): when, where, why, and how the candidate was recruited, including the dates of information sessions attended and any relevant information about their application to the credential program.
  • Admissions (Admissions Analyst and Program Coordinator): What process and criteria were used? Review of credential program application materials (were there any anomalies in their admission, such as delays or missing items?)
  • Advising (Kremen Advisors, Program Coordinator, Professors-in-Residence, Admissions Analyst): Expectations for advancing through the program (were advising sheets submitted, for example?)
  • Monitoring (Kremen Advisors, Program Coordinator, OCP Program Assistant, and FAST Coordinator): What criteria were used to monitor progress along the way? (Benchmarks such as observation/evaluation dates and scores, FAST dates and scores)
  • Final Clinical Placement (OCP Program Assistant): Were criteria met? How? (Including placements, partnerships, and coaches)
  • Completion Check (Credential Analyst): What process was followed? Were criteria met? (IDP submissions, credential applications)
  • Completer Follow-Up (Program Coordinator, Program Faculty who teach in the final phase of the program, Credential Analyst): Who is contacted post-completion? Did that contact happen? Did follow-up occur during the credential application process, for example?

In addition, program completer feedback about their experience in the program is collected annually through three processes: 1) When the credential analyst provides preliminary credential application information to all students completing the program, the program requires completion of the CSU Program Completer Survey as part of the preliminary credential application. 2) One year after the CSU Program Completer Survey, the MS Program Coordinator emails all credential program completers from the prior year a request to complete the CSU 1-Year-Out Survey, explaining why their feedback is essential to continuous improvement efforts. 3) During the final two weeks of the program, program faculty who teach LEE 169S, Inquiry and Puzzles of Practice C, build in a reflective activity in which students are asked to provide feedback on their experience in the program by completing a journey map. The journey map gathers information about the program completer's perceptions of major milestones in the program and the extent to which they felt positively or negatively about those experiences.

Outcomes from the Audit Process:
Although the admissions criteria and process are straightforward and organized, on the website and during informational sessions, to signal to applicants that certain requirements take longer than others to complete, candidates find the application daunting: there are 12 requirements/sections to complete, some of which carry associated expenses. The audit further reveals that choosing a pathway to become a teacher requires additional advising (Student 1 and Student 2) and that, depending on the chosen pathway, the admissions process and ongoing advising support vary (Student 2). The audit also confirms that students who start the program and then need to pause may re-enter the program eight years after they initially started, requiring additional, differentiated support as well. The program website, recruitment emails specifically tailored to Liberal Studies undergraduate students, and the admissions analyst, Renee Flores, are vital to the recruitment and admissions aspects of the program.

The audit demonstrates that the program is effective in providing credential students with placement school sites that offer opportunities to learn and grow within a diverse school setting relative to: 1) the race and ethnicity of the students; 2) the number of students on free and reduced-price lunch plans; 3) the languages spoken by the students, including English learners; and 4) the inclusiveness of the school for students with disabilities and the process for students to receive additional services. Additionally, the program provides its students with experienced mentor teachers and clinical coaches who mentor them through the program: the mentors and coaches in the sample met the program's definition of quality and completed all of their role responsibilities (e.g., the clinical practice agreement, monitoring clinical hours, the mid- and final-semester formative performance assessments, and completing six formal observations per semester of student teaching), as well as providing actionable feedback to the credential students.

The development and maintenance of the program website and the Office of Clinical Practice website are important to student advising from the start to the finish of the program. Students receive emails from the Office of Clinical Practice about important program deadlines, with follow-up support from their clinical coaches and the program faculty who teach the inquiry series (LEE 160, LEE 167, LEE 169S). When a student starts to show signs of struggle, whether through verbal feedback or formative performance rubric feedback, it is the university coach and the inquiry series (iPOP) program faculty who are quickest to request support from the MS Program Coordinator. Depending on the area of need, the MS Program Coordinator may recommend that an Individual Plan of Assistance be established in collaboration with the coach and the credential student. When program redesign changes are being made, students can feel as if the program is unstable or as if they are guinea pigs being experimented on. This reminded the program to carefully consider plans for implementing change and the kind of communication needed to keep all stakeholders on the same page.

Additionally, the program coordinator holds program orientations that walk students through the requirements they must meet in order to earn their preliminary credential. These orientations help students know what to expect and what follow-up questions to ask their clinical coaches. The program coordinator also holds weekly drop-in sessions for credential students to receive additional support in navigating the program.

Process for Ensuring Data Quality through Validity, Reliability, Trustworthiness, & Fairness:

Mid- and Final-Semester Formative Performance Review (CREATe Rubric)
A collaborative process between Fresno State and three school districts led to the development of a common rubric intended to provide action-oriented formative feedback for teachers; it is currently used by coaches and credential students during the mid- and final-semester formative performance reviews. Aligned to both district and state standards, the rubric focuses on specific, action-oriented competencies. The rubric also gives multiple stakeholders a common language that helps prevent mixed messages or misinterpretations of jargon. Various constituents can use the rubric to provide specific feedback and next steps to strengthen practice for any teacher along the novice-to-expert continuum. Each competency is represented in the rubric as a separate continuum, so the rubric can help identify each teacher's individual zone of proximal development (Vygotsky, 1978) and help teacher educators provide tailored, targeted scaffolding. Thus, the rubric provides a way to create practice profiles across multiple competencies that can help document teacher development over time.

Through synthesis of existing district observation tools, and by aligning this synthesis to the new Teacher Performance Expectations (2016 CCTC TPEs), the rubric went from design into implementation. Only observable standards were included, selected by consensus among representatives of the partner districts. Based on partner district input, the rubric includes 14 items in four sections: 1) Positive Environment, 2) Instructional Design and Implementation, 3) Rigorous and Appropriate Content, and 4) Reflection-In-Action.

Coaches who help supervise the teacher candidates' clinical experience were also offered in-depth training on the rubric. Coaches were asked to participate in three days of rubric training. The three two-hour training workshops use video clips, small-group discussion, whole-group discussion, and individual reflection to engage coaches in actively thinking about and trying out each rubric item. All of our MS coaches have participated in CREATe Rubric training: 60% of the MS coaches (n = 15) are fully reliable, and 7 more are partially reliable, in that they are halfway through the process and on their way to being fully reliable. Three Faculty-in-Residence and three Teachers-in-Residence, along with additional leadership team members from our district partners and Fresno State's Continuous Improvement Lead, participated in CREATe training. Each residency hosted the trainings, which also included two live 20-minute observations in which participants placed the teacher on the continuum, followed by immediate discussion and reflection using the rubric after visiting the classroom. As coaches and district partners were intentionally introduced to the rubric, teacher candidates were also introduced to it over a period of four to six weeks during one of the core courses in Phase 1 of the program. Once the teacher candidates have been introduced to all fourteen items, they evaluate themselves and their peers with the tool using their own teaching videos. Additionally, when the teacher candidates begin writing lesson plans, they are instructed to choose a few items and find evidence of them in their plans. These opportunities for coaches and teacher candidates to develop knowledge, skills, and dispositions around rubric use are critical to implementing the rubric with fidelity.

Although the CREATe Rubric has been well received by district leadership and candidates, coaches and faculty have been split in their reception of it. Some of their concerns center on the length and cognitive load of the tool, as well as disagreements about who is responsible for introducing and "teaching" the rubric to the credential students. It is plausible that although the lead developers of the rubric engaged collaboratively and intentionally with our district partners to develop the tool, they ended up excluding the coaches from the development process. This misstep may not have set the rubric up for sustainability, as those who would use the rubric most directly with students on a regular basis (the coaches) are arguably the most valuable voice to have at the table when developing tools that directly affect their practice. When the faculty who were the lead developers of the CREATe moved on from the program and coaches continued to question the utility of the CREATe Rubric, an assessment of all of the available reliable rubrics was conducted against a set of criteria. Additionally, presentations and surveys were administered to gather coach, faculty, and district feedback. Based on this input, a decision was made in December 2020 to transition to The New Teacher Project (TNTP) Core rubric, which Chico State had adapted to align with the CTC standards. A Rubric Advisory Board was formed, consisting of representatives from all three basic credential programs, and an implementation timeline and additional adaptations were developed. However, by March 2020 California had moved into its first COVID-19 lockdown, putting the new formative rubric's implementation timeline on hold.

One thing is evident from our investigations: all candidates responded well to a formative rubric that provided a common language to make the skills of teaching more visible so that they could receive actionable feedback from their coaches. This makes moving toward initial implementation of the new rubric all the more pressing and promising.

Fresno Assessment of Student Teachers (FAST)
The Fresno Assessment of Student Teachers (FAST 2) is a state-approved Teacher Performance Assessment (TPA) system designed for use by the Multiple Subject Credential Program. FAST 2 assesses the pedagogical competence of teacher candidates, including interns, with respect to the 13 Teaching Performance Expectations (TPEs). Faculty and coaches receive formal FAST 2 training and take a test in order to become reliable scorers. Because it is run in-house, the FAST 2 allows faculty and coaches to participate in scoring, which gives us insight into how our candidates are performing and helps us reflect on what is working and what needs more attention.

All MS coaches participate in FAST training and calibration sessions each semester, with a 100% calibration rate, becoming reliable scorers of the SVP and TSP. On average, seven tenure-track Multiple Subject Program faculty attend the FAST TSP training and calibration sessions each semester; 100% pass the calibration test and become reliable scorers for the TSP.

Journey Maps
Journey maps help us understand candidates' experiences, and they provide unique insights by documenting individual candidates' experiences over time, anchored to memorable emotional highs and lows during their time in the program (Rains, 2017). Although the traditional method of journey mapping consists of collecting empathy interviews and then having the research team use that information to map out the emotional highs and lows (Grunow, Park, & Bennett, n.d.), we modified this method by developing a process that empowers our candidates to map their own memorable emotional highs and lows in the program. We wanted our candidates to decide what counts as a milestone so that we could see whether we, as teacher educators, share a similar understanding of which experiences require the most attention.

Not only do journey maps allow us to track individual candidates' experiences through the program, but they also enable us to identify patterns of candidate experiences that can inform improvement ideas. It is in this regard that we designed the journey map data collection process to serve multiple purposes. Rather than administering it as an item for candidates to complete on their own time, as they do for our program surveys, we built it as a reflective, in-class activity facilitated by the instructor of record or a member of the program research team. As a result, journey maps offer opportunities for reflection at multiple levels: at the individual candidate level; at the instructor level, where an instructor can reflect on their class of candidates as a whole; and at the program level, where trends can be identified across time.

Journey maps are coded to identify emergent themes pointing to potential areas for program improvement. After reading through a sample of the journey maps to become familiar with the data, we developed a focused coding scheme that helps us look for similar information across all of the candidates' maps while also noting new themes as they emerge. The focused coding scheme also allows us to engage in interrater reliability practices. Two coders first meet to discuss what they see in the same journey map; they then analyze the map using the scheme and discuss where their analyses agree and disagree in order to determine how to interpret the scheme moving forward. The coders then analyze a set of five journey maps from the same cohort and meet again to compare and discuss their analyses. Once the coders see sufficient alignment in their coding practices, they each code a full cohort. Then a third coder, also trained to use the coding scheme, codes 10% of the other coders' maps and compares the analyses as one way to strengthen the trustworthiness of the journey map data. The focused coding scheme allows us to calculate counts of candidates' experiences while still documenting those experiences from a nuanced, qualitative perspective. Events are coded as either negative or positive, and program highlights are also identified.
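To make the third-coder check concrete, the sketch below shows one way agreement between two coders' positive/negative event codes could be quantified, alongside the experience counts used for program-level trends. This is a minimal illustration in Python with hypothetical data and labels, not the program's actual coding scheme or records, and a chance-corrected statistic such as Cohen's kappa is offered as one common option rather than as the program's documented method.

    # Minimal sketch (hypothetical data): comparing two coders' labels for
    # the same journey-map events and tallying coded experiences.
    from collections import Counter

    # Hypothetical labels applied by each coder to the same mapped events.
    coder_a = ["positive", "negative", "positive", "positive", "negative"]
    coder_b = ["positive", "negative", "negative", "positive", "negative"]

    # Percent agreement: the share of events both coders labeled identically.
    n = len(coder_a)
    agreement = sum(a == b for a, b in zip(coder_a, coder_b)) / n

    # Cohen's kappa corrects raw agreement for agreement expected by chance.
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    labels = set(coder_a) | set(coder_b)
    p_chance = sum((freq_a[l] / n) * (freq_b[l] / n) for l in labels)
    kappa = (agreement - p_chance) / (1 - p_chance)

    # Counts of coded experiences, e.g., for program-level trend reporting.
    counts = Counter(coder_a)
    print(f"agreement={agreement:.2f}, kappa={kappa:.2f}, counts={dict(counts)}")

In practice, a threshold on an agreement statistic like this could serve as the "sufficient alignment" check before coders move from the shared set of five maps to coding a full cohort.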

In summary, the routines and systems we have in place to carry out internal audit processes and to act on their outcomes ensure the data quality of local measures. Nonetheless, there is room for growth and improvement in our capacity to sustain such rigorous audits, including the need for a program data analyst and graduate research assistants, especially considering that from 2017 to 2020 the program had between two and four graduate research assistants to support data organization and analysis for program data dialogues and to support continuous improvement efforts.

Key Data Measures can be found in Appendix E
