image

By Elmira Watts, Salisbury College, United Kingdom, student of John Martin

9
Evaluating Education

“If you don't know where you are going, you might end up some place else.”

Yogi Berra

 

Just as it is important to test and evaluate students to validate that they are learning, we need to do the same for the processes that we use to help them learn. Evaluation tools will help us be more effective in our teaching and in our creation of photographic education. Just as learning is a progressive process for our students, becoming a truly effective teacher and developing photographic programs will also be progressive. Evaluative tools establish both strengths and weaknesses, identifying areas that will allow improvement and growth of the program.

The goals of evaluation or assessment in education are either summative or formative. A summative evaluation assesses the quality of the educational product; it is often conducted at the end of a class or program. A formative evaluation aims to advance and/or improve the quality of education. In some cases a formative evaluation is seen as a tool for a program that is still developing and forming. However, most assessments can be constructed both to define the level of quality and to give direction in improving it.

“Let your words teach and your actions speak.”

St. Anthony

For some teachers, the idea of being graded is not welcomed, and this is understandable. They may believe that they have done their time as students and have been employed because they know the materials to be taught. They often feel that this knowledge alone will make them good teachers, and that their teaching skills will develop and improve naturally, without introspection or examination of any kind. Those coming from advanced education, whether as graduate teaching assistants or teachers, see their roles, based on previous education, as those who will “give” grades rather than receive them. And evaluations are too often seen as grades, and as hoops that teachers have already passed through. Thus we must frame evaluation as a way for teachers to raise the quality of their teaching, rather than as a rating to be endured.

Measuring Variations

Assessment of quality teaching requires measurement against standards. With our main focus on creating quality education and high levels of learning photography, we must go through a series of steps to assure that the students learn at their best level. At any point in our program we will have already envisioned the flow of our ideas about teaching photography, and will have planned our educational process through the creation of lesson plans and/or syllabi. We will have implemented, through teaching these plans, and now it becomes important to check how well we did.

The primary method used in many planning and implementation models involves planning, development, implementation, and evaluation. Evaluation is then followed by new planning that incorporates what the evaluation revealed, improving quality. This cycle, while often enforced by administration, is one of the best ways that we can progress as teachers and have our students become ever more competent as photographers.

“In the ‘good old days,’ teachers just made up tests and students received a grade, but today we must validate the methods and outcomes of our educational processes.”

Janet Bonsall

Central Missouri State University, MO

We need to find those things within our educational environment that enable high levels of education. This means that we are trying to identify and measure the variations that occur and that either promote or deter quality education. The bases of these quality standards are the objectives that we set out in determining our learning strategy. As discussed earlier, objectives should be measurable. This means that the outcomes of our learning process will provide evidence of success and to what level that success has happened.

This idea is not peculiar only to education. W. Edwards Deming in the 1950s proposed that business processes should be analyzed and measured to identify sources of variations that cause products to deviate from customer requirements. In our case, “products” can be related to our students and graduates and the “customers” can be viewed as parents and employers.

Using this model, we can look at our environment and identify distinct types of variation that may impact learning. These are common variations, which are systemic, and special variations, which occur only sporadically or in particular situations. Common variations are created by the environment, institutional practices, course materials, and regularly occurring activities. Special variations are those that are particular to a teacher or course. We can assume that variations of both types will impact every course. We must identify and analyze these variations because they are key to identifying problems, describing issues, defining strategies for correcting problems, and improving our educational process. It is the improvement of the educational process that we are driving toward, not a simple process to rate or compare faculty teaching styles, course materials, or classroom environmental elements.

“However beautiful the strategy, you should occasionally look at the results.”

Winston Churchill

Because these types of variation are not the same, the problems they identify will not be solved by the same processes. Treating common variations as though they were special variations does not solve problems and will likely create more of them.

The key is identifying problems by analyzing the variations between expectations and our teaching. As in most situations, many of these variations will not be clear with the application of only one evaluation tool. This tells us that continuous/regular processes must be used to identify emerging and continuing variations from the standards that have been developed for our photographic programs. We need to measure regularly and consistently to develop both a baseline of variability and a method to identify changes in the environment, contents of courses, teaching methods, and/or student capabilities for our courses and programs.

Types of Evaluations

“When students have a good feeling about their own work and a feeling that they have really progressed, they are positive about themselves, and they are positive about the college and the teachers.”

Ian R. Smith

Salisbury College, United Kingdom

Two basic types of evaluations are commonly used. These are student evaluations and teacher evaluations. These may be done in the classroom or online by students or by peers, and as classroom visitations and consultations by administrators.

Student Evaluations

Within much of higher education, student evaluations are a regular part of the educational landscape. The stated goal of these tools is to improve teaching and learning. When used properly and with understanding, evaluations can be great tools to help you become better at helping your students to learn.

Critical to making evaluations beneficial to learning is realizing that they are based on perceptions, and that perceptions are not hard facts. Perceptions, however, are real. The perception of teaching affects learning: if students perceive the teaching in the classroom to be good, they will be more receptive to learning.

“We all react more to the negative evaluations than to the positive ones. The positive one comes in and it makes you feel ‘kinda’ good, but when one comes in negative it makes you feel really bad. When it comes in bad you have to ask ‘can this be true?’ and if it is an issue, then do something to change it.”

Buck Mills

Colorado Mountain College, CO

Research shows that student evaluations correlate more with how the student perceives the grade they will receive than with the amount of learning in the class. This means that if the students as a cohort perceive that they are doing well in a class, then the students will tend to evaluate the class more positively.

Two types of evaluations are commonly used. These are “numerical (scored)” and “brief response.” While both are used independently, they are also used together. In a common evaluation, a numerical rating will be used with an optional brief-response questionnaire. For numerical measurement, often a Likert scale will be applied to statements about the course or instructor. The student will be asked to rate each statement on a scale (1 to 5), from “strongly disagree” to “strongly agree.”
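Tallying such a scale is simple arithmetic. The sketch below (in Python, with invented statements and invented scores purely for illustration) shows how mean ratings per statement might be summarized:

```python
# Hypothetical Likert-scale tally; statements and scores are invented,
# not taken from any real evaluation instrument.
from statistics import mean

# 1 = strongly disagree ... 5 = strongly agree
responses = {
    "The instructor returns exams in a timely manner": [5, 4, 4, 5, 3],
    "The instructor starts class promptly": [4, 4, 5, 5, 5],
}

# Report the mean rating and response count for each statement.
for statement, scores in responses.items():
    print(f"{statement}: mean {mean(scores):.2f} (n={len(scores)})")
```

A mean near 5 indicates broad agreement with the statement; the count of responses matters as well, since a handful of ratings says little about a large class.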

“It is the people who count, not the numbers.”

Peter Day

BBC, Global Business

Use of Likert scale evaluations means we are trying to “objectify” the learning process. Objectified questions do not give the truest picture, but they make it easier to compare teachers and to see improvement. Therefore, those who develop classroom evaluation tools look for measurable aspects of the process, trying to find items in the course and classroom that can be separated into distinct functions to be measured. Thus the aspects of the process are defined in terms of a simple scale of functions: “Does the teacher return exams in a timely manner?” or “Does the teacher start class promptly?” or “Is he/she neat and professional in appearance?” These questions, while part of the process, are not measuring learning. As we know, grading and returning tests and assignments promptly provides timely feedback to the student and can assist in the learning process.

Normally in these tools, one or a few questions may compare the course or teacher to other courses or teachers. Once again these types of evaluation questions do not measure learning but rate the differences between instructors or courses. A rated statement, such as “This instructor/course is among the best,” functions to establish this differential measure. Research in which these types of questions or ratings are asked many times shows that they tend to give the same results.

“I never read student evaluations until the semester after so that I will not be as influenced by my memory of the personalities, so that I can read them more objectively.”

Gary Wahl

In student evaluations it is interesting that few, if any, ask the students to rate learning using a statement such as “Because of this teacher or course I learned the objectives of the course.” Such questions would be beneficial for perfecting and evaluating the learning process.

While often optional, brief-response evaluations can be very helpful in improving the teaching/learning process. Eliciting from students comments on both the positives and the negatives they see in a course can give direct information about areas that can increase or maintain quality. Simple open-ended questions such as “What topics do you feel were the most (or least) valuable?” or “What has your instructor done especially well (or improved) in teaching this course?” can provide insight into the students' perceptions of their learning process.

“The wisest mind has something yet to learn.”

George Santayana

Because of two tendencies, most student evaluation results will fall into the upper part of the rating scale or contain generally positive statements about the course or the teacher: the normal relationship between students and teachers is respectful, and students are often coping with a large amount of course material. Therefore, an evaluation with low ratings or negative remarks carries more weight as a sign of ineffectiveness than an evaluation with high ratings and positive comments carries as evidence of superior performance.

Regardless of the design of the evaluation tool, evaluations can be defined as comparative devices. While the design of the tool suggests that it functions to assist in the creation and improvement of an individual course and/or serves as a teacher evaluation, the actual function is to compare educational situations. Even students participating in their first evaluation are comparing the present situation with the flow of their education to that point. This can lead to popularity contests and, in a few cases, even to instructors begging for good evaluations.

“Three cardinal rules for evaluation or assessment: ‘Nobody wants to be evaluated, nobody wants to be evaluated and finally, nobody wants to be evaluated.’”

Frank Newman

Administrative Teacher Evaluations

Involvement in evaluations is part of an administrator's job. Within education, administrative evaluations are designed to improve the quality of education. Unfortunately, administrators at times use evaluation tools in punitive ways. That said, we must look at the positive side of administrative evaluation, because it provides the rationale and guidance for improvement.

For the most part administrative evaluations take two forms. The first form involves a classroom visitation and observation of teaching methods, skills, control of environment, etc. The other common form of administrative evaluations is through consultations with instructors that are based on evaluations of functions within the institution that are important to classroom performance.

Since the administrator has access to the student evaluations, student perceptions of the teacher's function within the classroom can already be ascertained; classroom visitations and observations therefore normally deal with specific functions within the educational environment. Administrators tend to look more at the methodological portions of the teaching. Primarily, classroom observations look at how well the teacher uses methods of instruction.

At Brooks Institute of Photography, the administrative classroom observation includes two primary areas. First, within the area of “Presentation Proficiency,” the administrator observes and comments on the following six concerns:

The instructor's ability to gain rapport with the students.

The instructor's overall tone of communication with the students.

The instructor's ability to gain the cooperation of the students.

The instructor's ability to direct student attention.

The instructor's ability to deal with and correct off-task behavior.

The instructor's ability to facilitate student engagement with the subject matter.

For the area of “Methodology,” there are seven defined measures for the administrator to rate:

Clearly designed handouts detailing course assignments.

Well-organized demonstrations, exercises, and simulations meeting course objectives.

Text supports course objectives.

Assignments facilitate acquisition of skills that meet course objectives.

Use of equipment enhances lecture and demonstrations.

Pace and sequence of information delivered maximize student acquisition of course material.

Method of evaluating and grading criteria for assignments clearly articulated.

In some institutions, peer evaluations are used. This tends to be less threatening to the teacher than having an administrator visiting classrooms for part of the evaluation.

These evaluations, when added to the student evaluation, give a view of how the instructor is performing in his or her teaching. At this point the common approach is for the administrator of the program to meet with the instructor to go over the evaluation tools that were used, and to discuss the outcomes and future directions based on the evaluations.

“Internally, a system of mentoring underperforming faculty by those faculty considered outstanding can help. It is also important to communicate specific goals and measurements when working with faculty who fall short of expectations. Ultimately, it becomes increasingly important that clear documentation exists if it becomes necessary to proceed toward issuing a terminal contract.”

Nancy M. Stuart

The Cleveland Institute of Art, OH

Professional Development Plans

“He who is too old to learn is too old to teach.”

Proverb

The outcome of the evaluations, whether formalized or personalized, should be a professional development plan. The plan needs to address how the quality items identified in the evaluations can be improved. Even when the evaluations show excellence, plans for improvement or continuation should be made to assure future quality.

The form of a professional development plan can vary from institution to institution or within individual institutions, but certain items likely will appear in the plan. These are how you will improve classroom performance, service to students beyond classroom performance, service to the institution, professional development within photography and as an educator, and personal academic growth. The following criteria are the parts of the professional development plan from Brooks Institute of Photography. The criteria represent potential areas that need to be addressed in any plan and that promote moving forward within the teaching profession.

Faculty Performance Review Criteria

Reviewed / Documentation Check-Off / % Weight of Job Performance
Criterion #1: Teaching/Instruction
Course Preparation, Delivery, and Assessment
• Multiple class observations—chair, peer, Director of Education (DOE).
• Submission and review of course syllabi and teaching plans.
• Student evaluations of faculty and courses for every class taught.
• Must ensure that content of the course matches the expected course competencies and that these outcomes are measurable.
• Other:
Administrative Instructional Responsibilities
• Posted office hours or contact time (home/cell phone numbers, e-mail addresses, hours before/after class on course syllabus).
• Submission of attendance sheets for every class on the days they meet.
• Submission of course grades within 72 hours of the last class session.
• Other:
Criterion #2: Service to Students
• Tutoring/mentoring.
• Academic advising.
• Career-related counseling.
• Identification and assistance of at-risk students.
• Faculty advisor to clubs or organizations.
• Other:
Criterion #3: Service to the Program and Institution
• Serving on institutional or program committees.
• Participation in all required events, including new student orientation, registration, graduation, and other like activities.
• Sharing “product knowledge” information with other departments, such as Admissions and Career Services.
• Attendance at department meetings (minimum of two per term for adjuncts).
• Working with other members of the department to continually enhance program curriculum to stay current with the market demand.
• Other:
Criterion #4: Professional Development
• All faculty must attend in-service training/orientation provided at the beginning of each term.
• Faculty are involved in continual professional development, including in-service training, outside training, and educational opportunities to enhance their content and/or teaching skills.
• Each full-time faculty member should develop, in a brief paragraph, a teaching and/or an education philosophy that will serve as a guide in his/her profession.
• Each faculty member should complete an annual faculty development plan that is reviewed and approved by the DOE.
• Evidence of scholarship or publication.
• Other:
Personal/Professional Development:
• Accomplishments or new abilities demonstrated since the last review (for example, the following activities accomplished since the last review):
• Courses taught and improved.
• Academic activities (writing, research, artistic production, etc.).
• Attendance at conferences, trade shows, and producing and exhibiting.
• Service to the profession.
• Service to the Institution.

At each year's administrative evaluation the teacher and administrator agree on the next year's areas of accomplishment and professional development. At this point the weighting of each area (criterion) is agreed upon. This allows both the teacher and the administration to have a view of the desired growth and development for the next evaluation. In the rare but important situation where performance in an area is not progressing to the standards of the institution, a corrective action plan will be implemented. The teacher must realize that this is a “shot across the bow,” advising the teacher that his or her performance is endangering his or her continuance at the institution.

“It is really important that faculty are involved in professional development activities; it is the way they stay in touch with the ‘latest and greatest’ and what's hot and what's not. It is a way to stay ahead of the curve.”

Scott DeBoer

Career Education Corporation

Designing Instruments for Evaluation

For many reasons there is a push within any single institution or system to have a single evaluation tool that can identify problems and, occasionally, show successes. Faculty members themselves are often asked, either through a committee or individually, to design evaluation tools. In effect, developing an evaluation tool is similar to creating an examination: the instrument must measure the courses against the objectives the faculty members have set for them. Two major functions must be considered as a tool is designed: when the tool will be administered, and how much involvement or openness of response the structure of the tool allows.

When determining implementation time for a student evaluation of faculty or course, the flow of the course must be taken into consideration. There are advantages to having a set evaluation time—for example, the last fifteen minutes of the class—and to having evaluation fall within a time window that is not restrictive for the faculty and/or administration, but allows the evaluation tool to function properly, without interrupting the proper flow of the course. As stated previously, the evaluation has a high correlation to the anticipated grade of the student filling out the evaluation. If the examination, the last activity in many courses, is to be considered as part of the evaluation of learning, then the evaluation will be placed after that in-class examination has been completed. This would also be true for a final portfolio critique. Evaluation earlier within the flow of a course may reduce the effect of final grade anticipation as a measure within the tool.

image

Jake; by Alexandra Hoffman, Brooks Institute of Photography, CA, student of Tim Meyers

Whether the evaluation is built on a quantitative/objective model using a Likert scale or on qualitative/open-ended questions with short-answer responses, the questions need to be designed to identify the objectives within the course and the teaching methods that will show quality and/or lead to improvement. Just as essay examinations are easy to write and more difficult to grade, so are evaluations. But open-ended subjective answers can give more specific information about the quality of the educational program. For this reason, many institutions use open-ended questions to augment objective evaluation tools.

Critical in design of an evaluation tool is alignment with the learning objectives and the environment of the course. As classroom evaluation tools become more general, they must deal with the environment of the class.

Regardless of all other considerations, it is in your best interest to evaluate every course you teach on an ongoing basis. Provided that the form of the evaluation stays consistent, that the objective questions remain constant, or that the subjective questions approach the same areas of concern, then growth in teaching and quality can be seen by viewing the evaluations over time. If similar problems appear on successive evaluations, then it becomes obvious which areas should be addressed to improve the quality of education. It is the continuing nature of an evaluation that allows for growth.
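This kind of longitudinal reading can be sketched programmatically. In the illustrative Python fragment below (all statement names, term names, scores, and the threshold are invented), a statement is flagged only when its mean rating stays below a threshold across successive terms:

```python
# Flag evaluation items whose mean rating is persistently low across
# terms -- i.e., "similar problems on successive evaluations."
# All data here are hypothetical.
from statistics import mean

THRESHOLD = 3.0  # below this mean (on a 1-5 scale) an item needs attention

terms = {
    "Fall":   {"Timely feedback": [2, 3, 2], "Clear objectives": [4, 5, 4]},
    "Spring": {"Timely feedback": [3, 2, 3], "Clear objectives": [5, 4, 4]},
}

# Keep only the items rated below threshold in EVERY term.
recurring = [
    item for item in terms["Fall"]
    if all(mean(term_scores[item]) < THRESHOLD for term_scores in terms.values())
]
print(recurring)  # → ['Timely feedback']
```

An item that dips only once may reflect a one-off circumstance; an item flagged in every term is the kind of persistent weakness the evaluations are meant to surface.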

“It has always seemed strange to me that in our endless discussions about education so little stress is laid on the pleasure of becoming an educated person, the enormous interest it adds to life. To be able to be caught up into the world of thought—that is to be educated.”

Edith Hamilton

Evaluation of the Program

Just as it is important to understand the strengths and weaknesses of the faculty's performance in the classroom, it is important to evaluate the health and quality of the program. All tools that can be used to evaluate faculty can be redesigned to apply to the program. There are also other tools and methods that can be used to define the strengths or areas needing improvement within the programs.

Program Reviews

For the photographic program, regardless of whether it is a new program or one that has existed for years, there is a need to review how the program is functioning. Accreditation, in one form or another, will require a review of every program at the institution. Even if accreditation is not an issue, state, local, institutional, or other bodies may require a review to see how well the objectives of the program are being met. But satisfying an external requirement is not the most important reason for program review; rather, the review is a method to improve the quality of the photographic curricula and the ancillary activities.

The review is an assessment of the effectiveness of the program. How the assessment is handled will vary from institution to institution. Many institutions have ongoing assessment and program review while other institutions review periodically or as required by outside agencies. Some photographic programs see the concept of a program review as a periodic nuisance project. Whether attached to accreditation or other mandate or self-evaluative tool, the program review needs to be a continuous and ongoing process.

The assessment/review process should have seven steps: definition, alignment, assessment planning, data collection, analysis, comparison, and return. The first step is to define the mission, goals, and outcomes that the photographic program strives for through its curriculum. Though the mission, goals, and outcomes/objectives may have been envisioned during the program's planning, it is important that they be written out and codified to facilitate planning an effective assessment tool. If these have been written out previously, it is still wise to review and update all three, since the photographic discipline and technologies are in constant flux.

To understand the importance of mission, goals, and outcomes/objectives, let us briefly state how they relate to each other and therefore how they will become important to the evaluation and program review. The mission is a vision of the program, department, or institution based on its values and philosophy. This is the broadest underpinning for the educational process. Goals delineate the attitudes, knowledge, skills, and values expected of those finishing the program. Finally, the outcomes are specific, demonstrated measures of the goals.

With the centrality of the mission, such as providing photographic education, the mission may seem obvious, but restating the mission is helpful when entering into the review process. It provides an overview of how the program should look through the review process. Just as defining the mission is helpful in looking at the overall program, defining and restating the goals and outcome objectives provide the focus for the individual parts of the program.

With the program's mission, goals, and outcomes defined, the next step is to delineate how these three program definers align with the curriculum and activities. Assuming the program definers are correct, the program review then assures the coordination of the outcomes with the curriculum and activities.

The third process step is to plan how the assessment will happen. For many photographic programs there will be an institutionally prescribed process flow. Even with a template for how the program review is accomplished institutionally, individualizing the process to ascertain the photography program's unique attributes will be needed. This includes deciding who should be involved and how to approach and acquire information about the program. The criteria for assessing success are defined to set standards against which the program review will be judged.

The planning will necessarily involve how data will be collected about the program: who, what, where, when, and how data will be gathered for the program review. Planning needs to include both the process for gathering and analyzing the data and how the product of the review will be reported and used.

The actual collection of data is what most people see as program review—asking questions and receiving answers about the success of the photographic program. The collection of data naturally flows into the analysis step of the program review.

The sixth step in the program review is to compare the analysis of the collected data to the defined mission, goals, and outcome criteria. As this comparison occurs, descriptions emerge of how improvement within the program can happen.

The final step in the program review process is to return to the first step. As the assessment takes on its final form, it provides guidance in reworking the mission, goals, and outcomes for the program.

An Assessment Rubric

Today one of the most common assessment tools in education is the “rubric.” A rubric is a method to assess desired successes against defined criteria. Commonly the rubric is a matrix with criteria standards at each matrix location. The advantage of using a rubric is its ability to deal simultaneously with multiple aspects of the program review.

image

Jan; by David Page, Rochester Institute of Technology, New York, student of Richard Zakia

A rubric for a photographic program might include the following criteria areas:

  • Curricular criteria area: Examinations, portfolio reviews, certifications, completion rates, grade point averages (GPAs).
  • Faculty criteria area: Degrees held, professional development activities, professional or artistic work, faculty evaluations.
  • Program criteria area: Breadth of curriculum, academic support, educational and student-accessible equipment, classroom and laboratory accessibility.
  • External criteria area: National reputation, employment or further education placements, grants, awards.

“There are two types of education… One should teach us how to make a living, and the other how to live.”

John Adams

The preceding list does not attempt to show the totality of criteria, but gives a brief view of some ways that the program can be reviewed.

When the criteria areas are chosen, a matrix can be constructed with ascending measures for each individual criterion. For example, if the criterion is completion rates, the measure might be excellent if 90–100% of freshman majors complete all photographic course work within a set number of years, with a good assessment representing 75–90% completion, satisfactory representing 50–75% completion, and unsatisfactory representing less than 50% completion. This criterion, when compared to other criteria, can give a picture of the effectiveness and strength of the curriculum. To make the most out of the rubric, the criteria and standards of measure need to be in place before the review starts.
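The completion-rate criterion just described amounts to a simple threshold mapping. A minimal sketch in Python follows; the labels and cut-offs come from the example above, while the function name and the decision to assign each boundary value to the higher rating are our own assumptions:

```python
# Map a completion percentage to a rubric rating, following the
# example thresholds in the text. Boundary values (90, 75, 50) are
# assigned to the higher rating here -- an assumption, since the
# text's ranges (90-100, 75-90, 50-75) overlap at the boundaries.
def completion_rating(pct: float) -> str:
    if pct >= 90:
        return "excellent"
    if pct >= 75:
        return "good"
    if pct >= 50:
        return "satisfactory"
    return "unsatisfactory"

print(completion_rating(82))  # → good
```

Stating the cut-offs this explicitly, before the review starts, is exactly what makes the rubric comparable from one review cycle to the next.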

Accreditation

From an outside view, either personal or governmental, accreditation is a badge of acceptance. Whether for funding, grants, matriculation, or employment, accreditation becomes a level of certification. The body that accredits is stating that an institution or program meets or exceeds agreed standards for educational quality. By doing this, the accrediting body allows those outside the institution to assess if the institution or program will be valuable in the pursuit of educational goals, will be valuable in providing an educational level to facilitate entering employment, and/or will be valuable in conferring merit to the degrees the students will have earned when they graduate. By insisting on meeting their standards, the accreditation bodies try to assure consistency and accountability for the quality of education within their purview.

These and other reasons for accreditation can be seen in the functions described by the Accrediting Council for Independent Colleges and Schools, which are as follows:

  • Evaluate whether an institution meets or exceeds minimum standards of quality.
  • Assist institutions in determining acceptability of transfer enrollment.
  • Assist institutions in determining acceptability of transfer credits.
  • Assist employers in determining eligibility of programs of study and the acceptability of graduate qualifications.
  • Assist employers in determining eligibility for employee tuition reimbursement programs.
  • Enable graduates to sit for certification examinations.
  • Involve staff, faculty, students, graduates, and advisory boards in institutional evaluation and planning.
  • Create goals for institutional self-improvement.
  • Provide a self-regulatory alternative for state oversight functions.
  • Provide a basis for determining eligibility for federal student assistance.

There are different types of accrediting bodies based on the type of institution, the program specificity, and/or jurisdiction. Each body has its own standards and methods of giving its accreditation within its scope. Primarily there are four types of accrediting bodies. They are presented here based on scope and not on their status or reputation. All accredit against their own standards. The smallest scope is the specialty accreditation such as the National Association of Schools of Art and Design (NASAD), which will accredit only academic institutions or programs that center their educational efforts on art and/or design. Statewide commissions or boards—for example, the Ohio Board of Regents—hold the educational institution to the mandates of the state or locale. Next are national and international groups that accredit a certain type of educational institution, not based on program but more on the concept of institutional organization, such as the Accrediting Council for Independent Colleges and Schools (ACICS). Last in this listing are the regional accrediting bodies that accredit all levels and program areas within a large geographic area; an example is the North Central Association of Colleges and Schools (North Central).

Except where state or national law requires approval, accreditation is a voluntary process, though many funding or certification programs require it. Within photographic education, it is access to student financial support that may require regional, national, or international accreditation. However, the real advantage of the accreditation process, whether initial or continuing, is the self-examination it demands, as well as the external comment it provides on the quality of the education.

The feedback the school receives from the accrediting agency provides a useful overview of how the entire school is functioning: administration, finances, management, planning, faculty, curriculum, library resources, staff, and communications. Here is an example of a 2005 accreditation review for a college made by the Accrediting Commission for Community and Junior Colleges, Western Association of Schools and Colleges:

Dialogue: The College's dialogue has not yet reached the stage of defining, explicitly stating, and assessing student-learning outcomes at the course, program, and degree/certification levels.

Governance: The team also found a college with deep-seated problems related to governance, communication, and trust.

Library: The library collection needs attention. A collection development policy needs to be formulated.

Management: The fiscal management system does not produce timely or even accurate information for sound decision making.

Financial: The team encourages the college in the strongest terms possible to pursue strategies that will result in a financial system that will produce clear, reliable, timely, and transparent reports in which all constituents can have full faith and confidence.

Evaluation: There is currently no formal process in place to evaluate the integrity and effectiveness of the college's overall governance and decision-making processes and structures.

Planning: The college does not have a process to evaluate the effectiveness of the ongoing planning and resource allocation process.

A careful review of an accreditation report is a reminder that educating our students involves the entire school, across the board. It provides the faculty with an assessment not just of how they are doing but of how well the administration is fulfilling its support of the educational process. Loss of accreditation can have severe consequences for a school's ability to recruit students, retain alumni support, and secure private and government funding.

“Open your arms to change, but don't let go of your values.”

Dalai Lama
