AAQEP Accreditation 2022

Standard 3 Aspect E

Preparation programs ensure that candidates, upon completion, are ready to engage in professional practice, to adapt to a variety of professional settings, and to grow throughout their careers. Effective program practices include: consistent offering of coherent curricula; high-quality, diverse clinical experiences; dynamic, mutually beneficial partnerships with stakeholders; and comprehensive and transparent quality assurance processes informed by trustworthy evidence. Each aspect of the program is appropriate to its context and to the credential or degree sought.

Internal Audit Process:
As detailed in Appendix D, four Single Subject Credential candidates from the target population were purposely selected for the internal 2020-2021 audit process: two from the traditional program and two from our alternative pathways, a residency candidate and an intern. Each candidate selected had experienced some form of admission problem. Once the four candidates were identified, their academic journeys were traced from recruitment to completion through a point-of-contact search that included recruitment, admission, advising, program monitoring, clinical practice, and completion. More specifically, information was collected as follows:

  1. Recruitment: Where, when, and how were candidates recruited?
  2. Admissions: What process and criteria were used?
  3. Advising: What expectations were set for advancing through the program?
  4. Monitoring: What criteria were used to monitor progress along the way? (Sample courses and/or benchmarks)
  5. Final Clinical Placement: Were criteria met? How? (Include sample of placements, partnerships, supervisors)
  6. Completion Check: What process was followed? Were criteria met?
  7. Completer Follow-Up: Who is contacted post-completion? Did that happen?

Outcome from Audit Process:
The outcomes from the 2020-21 audit process revealed that, even though candidates in the Single Subject Program are supported by internal program stakeholders such as the Admissions Advisors and Analyst, the Academic Subject Matter Advisor, the Program Coordinators, the Program Faculty, and the Program Administration, admission appeal procedures are not readily accessible to applicants. Instituting an applicant admission appeal procedure on the Single Subject website will allow the program to collect better data with which to make changes within the admissions time frame.

The 2020-21 audit process also revealed that little to no contact occurs once our candidates complete the program, and that the response rate for the program's completer surveys was low. Therefore, instituting a permanent email reminder to complete the CCTC and CSU Educator Quality Center Completer Survey will increase the amount of post-completion data available to the Single Subject Program. In addition, no measures are currently in place for evaluating the impact of our candidates in their classrooms. Sending a post-completion survey asking candidates to share their employment information will therefore give the Single Subject Program a way to identify gaps between the profile of our teacher credential candidates and their subsequent classroom performance.

The 2019-20 audit process revealed the need for a data management system for tracking candidates who require field placement assistance and/or coursework support. While systems are in place to support our candidates when an issue arises, there is no central collection system. As a result, the Single Subject Program is considering creating an in-house tracking system that would not only track students who need fieldwork and/or coursework assistance but also document the trajectories needed for program decision-making.

Process for Ensuring Data Quality through Validity, Reliability, Trustworthiness, & Fairness: (see Appendix E)
The integrity of data provided by the Single Subject Program is critical to the program's decision-making process. To ensure that the data submitted for this process are reliable, accurate, and complete, and thereby enhance the program's ability to produce effective graduates, quality assurance begins with program inputs.

Once candidates are admitted into the program, the processes for ensuring that they are effectively prepared are evaluated through candidates' course grades, signature assignments, and coursework product samples. In clinical practice, candidates are evaluated through their student teaching, fieldwork assignments and evaluations, and the FAST. Upon completion of the program, candidates are asked to complete the CSU/CTC Survey of Completers and the exit surveys with the school dispositions.

The key assessment measures used to evaluate our outcome measures consist of the Disposition Survey results, Signature Assessment scores, the Fresno Assessment of Student Teachers (FAST) Site Visitation Plan (SVP) and Teaching Sample Project (TSP), fieldwork observations, and mid- and final-semester evaluations. Our post-completion measures include the Commission on Teacher Credentialing Completer Survey and the CSU Educator Quality Center Completer, Employer, and One-Year Teacher surveys.

The unit data collected from these measures is then used by the admission advisors, the program credential coordinators, the program faculty, the Office of Clinical Practice, and the university coaches to clarify admission requirements, rewrite admission policies, develop signature assignments, and implement clinical practice evaluations. Monthly meetings are held to ensure that changes are made in a timely fashion and that every aspect of the program continues to improve.

Even though our Single Subject Credential admission benchmarks, program courses, and the FSR instrument are designed to ensure that candidates develop and demonstrate all the skills and knowledge needed to teach effectively, they are but one slice of the continuous improvement cycle, as the Single Subject Program stakeholders work as a team to improve the program in order to prepare the most effective teachers.

Data quality is also maintained by the program faculty, the Office of Clinical Practice, and the FAST Coordinator, who meet once each academic year to review the data. During these meetings, it has been noted that the TPEs are assessed continuously and with increasing complexity throughout the program. What is not yet done is interviewing district-employed supervisors to see whether the Single Subject Program is having a positive impact on the candidates themselves.

Currently, the review of data quality also occurs among the stakeholders. Some key findings related to data quality include:

  • A more definitive effort is needed to increase the pool of trained observers who review the data collected from student teaching videos.
  • A more concerted effort is also needed to assemble a random sample of candidate data and have multiple evaluators score the collected candidate work samples.
  • Very few alternative measures take place between the mid- and final-semester student teaching evaluation surveys. It would be beneficial for the program to see how the two scores compare during the eight-week application time frame.
  • The data quality review also revealed that more qualitative coding is done of coursework reflections than of student teaching reflections.
