Assessment FAQs

What is "Academic Assessment"?

Tom Angelo once summarized it this way: "Assessment is an ongoing process aimed at understanding and improving student learning. It involves making our expectations explicit and public; setting appropriate criteria and high standards for learning quality; systematically gathering, analyzing, and interpreting evidence to determine how well performance matches those expectations and standards; and using the resulting information to document, explain, and improve performance. When it is embedded effectively within larger institutional systems, assessment can help us focus our collective attention, examine our assumptions, and create a shared academic culture dedicated to assuring and improving the quality of higher education."

What do we really want to know about our students? The questions you ask will vary from program to program, whether they deal with students learning specific content, skills or attitudes or perhaps issues of student motivation and ability to monitor their own learning. Our assumption is that the key assessment questions are best known by the program faculty themselves, for they are the ones who encounter students on a daily basis, whether in their classes or outside. But finding ways to answer these questions is key to our success.

Academic assessment seeks to answer the broad question, "What and how well do our students learn what we are attempting to teach them?" As such, academic assessment is not designed to evaluate individual faculty or even individual courses. It is designed to evaluate individual programs as a whole, such as academic majors or interdisciplinary programs, and to determine where the programs might be strengthened in order to improve our students' abilities to learn. The primary audience for academic assessments is not administrators or accrediting agencies, but, rather, the program faculty themselves.

An assessment program is essentially a way of formalizing the informal discussions, concerns, and questions that faculty have always had about their classes and their students, whether in the hallways, their offices, department meetings, or social gatherings.

Academic assessments work best when they are designed and carried out by the academic faculty themselves, supported by appropriate units in the College, such as Institutional Research and the Director of Assessment. Therefore, it is essential that all faculty in our programs ask themselves such key questions as, "What should a graduate of our program know, be able to do, and/or value?" and "How do our courses provide students with opportunities to develop their knowledge, skills, and values?" The answers to such questions provide the basis for assessing the program.

In addition to assessments that become part of the fabric of each academic department, the institution assesses student learning in institution-wide contexts. For example, is the core curriculum accomplishing all that we want it to accomplish? Are residential life programs supportive of academic learning? What are the roles of extracurricular activities such as athletics, clubs, and guest speakers or performers? Clearly, the responsibility for assessing academic learning extends beyond the program faculty, for we all know that what students learn while in college is an accumulation of learning experiences, both formal and informal.

An assessment plan involves more than determining what students should learn and assessing their learning. It requires time to share the results of the assessment with the faculty members and time to reflect upon what those results may imply for individual courses, course sequences, pedagogical practices, and/or student support. Faculty discussions of assessment results may even lead to recommendations for changes to student support structures, such as the library, technology, career placement, or counseling and can provide substantial documentation supporting requests for needed resources.

Why should we want to be involved in assessment?

Think of some of the real questions you have about your curriculum and how well your students are doing, questions such as: How strong are our students' research skills? Can our students apply what they are learning outside of class? How motivated are our students to learn on their own? If our students can choose from a wide variety of electives, are they leaving our program with the same skills and knowledge, or does what they learn vary greatly from student to student? By the time our students are seniors, are they ready for their final courses, or do some seem to have gaps in what they've learned? Do our introductory courses attempt to cover too much? Should we revise the sequence of our courses to enable students to learn more effectively? Our courses are now four credits instead of three: do our students learn more or in greater depth as a result? These questions, and others like them, reflect the real concerns that faculty have about the effectiveness of their curricula. Finding answers to such questions is one of the most important roles for assessment.

The most important reason for assessment is finding ways to help our students learn more effectively, but assessment results can also provide data for fundraising, grant writing, recruiting students, and demonstrating the quality of our programs.

What is a typical assessment process?

The faculty of each academic program will want to develop a process that best fits its context, but assessment is most clearly viewed as a cycle:

  1. Determine the goals or student learning objectives for the program, and/or determine what the key questions are that you have about your students and how well they are learning what you are trying to teach them.
  2. Decide upon the methods to assess those goals, objectives, or questions.
  3. Assess the goals or learning objectives.
  4. Discuss the results, determining whether any of the curriculum content, courses, or pedagogy needs to be changed.
  5. Implement the changes. At this point, you might also change some of the goals or the methods used to assess the program.
  6. Assess the revised curriculum.
  7. Discuss the results of the assessment.

... and so forth.

Do we have to do the same thing every year?

That depends upon how satisfied you are with the results that you have gotten so far. Suppose that you assess your program and those results lead to new questions about your program. You may then want to assess those new questions rather than repeat the same assessment. On the other hand, there may be some aspects of the assessment process that you do want to repeat each year. Which approach you take depends upon how satisfied you are that the assessment has really answered your questions.

For example, imagine that in the first year of your assessment program, you assess or evaluate the quality of the research papers in your capstone seminar. You determine that, while they are generally competent, your students tend not to select the best research sources for their work. Your discussion about why this is so might lead to a number of conclusions: Perhaps at no point in the program does any course introduce students to methods of finding the best sources. You might decide to create a new course, insert a special module on research methods into an existing course, or review research methods in all courses of the major. Having done that, it might be appropriate to wait a year or two before assessing the seniors on this issue again so that all of them will have had a chance to learn better research methods.

But, in the meantime, you may want to assess some other aspect of the program, such as whether students can apply what they learn in class to real world problems, if that is one of the goals of your program. So, in year two, you assess that, rather than the quality of their research papers. Or you decide to focus upon their knowledge of specific content, if you have content goals for the students.

In short, you don't have to assess everything every year. Your key questions about your students will help you determine what you do want to assess each year.

What techniques can we use to assess our students' learning?

Assessments may be carried out in many different ways, depending upon the depth of information and nature of what is being assessed. The assessment methods may be categorized into both direct and indirect assessments.

Direct assessment methods

Direct assessment methods are "direct" because they look at actual student work to determine whether the students have learned what the faculty want them to learn. Among the direct methods most commonly used are the following:

Portfolios: Student portfolios may be collected from the time that students enter a program until they graduate or may be collected for narrower time frames. Students are responsible for gathering the information that the faculty want them to gather. Among the types of materials contained in a portfolio may be: research papers, essays, drafts of written material leading to a final product, laboratory research, videotapes of performances, exhibits of creative work, and examinations. A particularly valuable component of student portfolios is the reflective essay, in which the student reflects back upon her or his growth in scholarship or creative efforts and draws conclusions about his or her strengths and weaknesses at the time the portfolio is compiled. To save valuable space, many portfolios are now gathered electronically. The primary drawback of the portfolio is that it takes time for faculty to review. The primary advantage is that it can be designed to represent a broad view of student academic development, one that also contains some depth.

Embedded assessments: Embedded assessments make use of student work produced in specific classes. As a result, the students do not even need to know that their work is being used for assessment purposes. In addition, the material used for assessment is produced within the normal workload of both faculty and students. As such, embedded assessments provide a realistic source of information about student work. In departments that use examinations to evaluate students, sometimes only a few of the examination items are actually designed for assessment purposes. The data provided by embedded assessments should be reviewed by faculty beyond the course instructor, perhaps using a rubric of key characteristics to guide the assessments. The instructor uses the student work to provide grades. The faculty examine the student work to understand what and how students are learning in the program.

Capstone experiences or senior projects: Capstone experiences most often occur in courses taken by students toward the end of their academic program, typically in the senior year. Capstone courses can be designed to require students to demonstrate their accumulated knowledge, skills, and/or values through major creative or research projects, as well as written and oral presentations. The major advantage to the capstone course or experience is that it provides a focused event upon which the assessment can be based. As with embedded assessments, capstone courses make use of data that students produce within the normal course of their work. One caution is that, while the faculty member teaching the course is responsible for giving grades to students, other program faculty should be involved in evaluating the work of the students from an assessment perspective. A drawback to the capstone course is that it cannot hope to encapsulate everything that a student has learned, but assignments can be designed to elicit student work that does include much of what they have learned.

Examinations or standardized tests external to the courses: Culminating examinations may be constructed by the faculty or purchased from national testing organizations (such as the ACT CAAP, ETS field exams, or the Missouri BASE). Constructing such examinations is time-consuming, and standardized national measures may not correlate with your academic program. They are costly to either the institution or the student. And, unless they are required for graduation, student motivation to do well in them may be low.

Internships and other field experiences: Internships and field experiences provide opportunities for students to apply their learning outside the classroom. Evaluations of student work in such experiences may provide valuable information on whether the students are able to use what they have learned in class when they are confronted with "real world" situations. They may, in fact, be the capstone experience for the students' program.

Indirect assessment methods

Indirect assessment methods require that faculty infer actual student abilities, knowledge, and values rather than observe them through direct methods. Among indirect methods are:

Surveys: Student surveys or surveys of employers and others provide impressions from survey respondents. These impressions may change over time (for example, will a senior value the same thing as an alumnus who has been working for several years?). Respondents may respond with what they think those conducting the survey want to hear, rather than what they truly believe. Surveys are easy to administer, but often do not result in responses from everyone surveyed. They may, however, provide clues to what should be assessed directly. And they may be the only way to gather information from alumni, employers, or graduate school faculty.

Exit interviews and focus groups: Exit interviews and focus groups allow faculty to ask specific questions face-to-face with students. Their limitations are that the students may not respond honestly or fully, while their answers may be, as with surveys, impressions that may change over time. Often, for more objectivity, it may be best to have someone outside the actual program faculty conduct the interviews. Interviews and focus groups may provide clues to what should be assessed directly.

Inventories of syllabi and assignments: Inventories of syllabi and assignments may turn up information about the curriculum that is not evident until the actual inventory is conducted. As an indirect technique, the inventory does not indicate what students have learned, but it does provide a quick way of knowing whether some courses are redundant in what they teach or whether some gap in the curriculum exists. It is a valuable tool within the total assessment assemblage of tools.

How is assessment different from regular evaluation of students?

Assessing students in class is often called "classroom assessment," as opposed to "program assessment" or "learning outcomes assessment."

You assess students in your classes to determine how much they have learned in your classes and to assign grades. "Assessment of academic programs" is intended to assess how well programs are working by looking at the assessment results of groups of students in those programs. Therefore, an effective assessment program requires that the faculty in those programs have agreed upon the learning outcomes or learning goals for all students in the program, regardless of the courses that they take. Then, the faculty need to agree upon how they are going to determine what the students have learned. When faculty assess students as a group rather than as individual students, look at the assessment results from a program perspective, analyze those results, and determine whether they need to revise anything in the program, then they are conducting assessment of the academic programs.

What is "authentic assessment"?

"Authentic assessment" involves evaluating students' ability to perform real-world tasks. It is an attempt to measure more directly whether students can perform well on intellectual tasks that are valued outside of the classroom. This is in contrast to indirect measures of student ability, such as multiple choice tests. Techniques range from portfolios to class projects to examinations that require students to respond to real world situations or tasks. For more information on authentic assessment, see ERIC resources or an article by Grant Wiggins.

What is "embedded assessment"?

"Embedded assessment" is an assessment process that involves using the regular work that students produce in their classes as the material that is assessed or evaluated. The student work may be a final research paper, a set of questions "embedded" in a final exam, a lab project, or anything that the professor would regularly use to evaluate the students in the class. One of the advantages of this type of assessment is that the students do not know that their efforts are being used for assessment and therefore do not have any additional pressure or effort required of them. The work they produce is more indicative of their normal work rather than being something produced just for assessment purposes. So, for example, one might assess the general education competencies of students when they reach the junior or senior year and are in the major by selecting specific assignments in specific courses and sending them to a team of faculty to evaluate.

Please explain the term "value-added".

In assessment, "value-added" usually refers to the difference between some statistically determined base measurement of a student or a group of students and a final assessment measure or measures. Thus, it is used to determine whether a particular curriculum has added any "value" to the students as a result of their education within that curriculum. It can be useful when trying to compare the education of groups of students who are very different in their characteristics. One commentator notes that growth in and of itself is not necessarily "competence." An MBA program used employers' evaluations of its graduates to determine value-added. A journal issue of articles on value-added in history curricula recounts instructors' experiences. A consortium of seven universities working on a value-added project concluded that a good and thorough database of information on students is absolutely essential in applying value-added measures. A large study in the United Kingdom sought to determine the most effective value-added indicators for the nation. Value-added vocational education has also been reported.
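In its simplest form, value-added is the difference between a baseline measure and a final measure for the same students. A minimal sketch of that arithmetic, using invented student identifiers and scores rather than real data:

```python
# Hypothetical pre- and post-program scores for the same cohort.
# The student labels and numbers are invented for illustration.
pre_scores = {"A": 62, "B": 70, "C": 55, "D": 68}
post_scores = {"A": 78, "B": 81, "C": 70, "D": 74}

def value_added(pre, post):
    """Mean gain from the baseline measure to the final measure."""
    gains = [post[student] - pre[student] for student in pre]
    return sum(gains) / len(gains)

print(value_added(pre_scores, post_scores))  # mean gain for the cohort
```

A real value-added model would also adjust for incoming student characteristics, which is why the consortium cited above stresses a thorough database of student information.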

What is "primary trait analysis"?

Primary trait analysis is a way for faculty to specify the exact criteria against which they will judge student work. Using it, faculty create a scale for grading or scoring student work. To create this scale, they must (1) identify the exact characteristics that they will be looking for; (2) construct a scale; and (3) evaluate the student's work against the scale. The scale can be changed for each type of assignment or task that the student is asked to complete. Most important for the students' benefit, when they know the traits that their work will be judged against, they can more knowledgeably address the assignment. For purposes of program assessment, faculty can construct primary trait scales for each of the types of student work that they will be evaluating, whether the evidence for the assessment is provided by the student portfolios, essays, science projects, mathematical solutions, case study analyses, or whatever. A major benefit of primary trait analysis to the assessment process is that it is a tool for faculty to use when working to reach consensus on what is worth evaluating in student work.
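The three steps above can be sketched as a tiny scoring routine. The traits and the 1-to-4 scale here are invented for illustration, not a recommended rubric; a program's faculty would substitute the characteristics they agree are worth evaluating:

```python
# A hypothetical primary trait scale for a research paper:
# each trait is scored from 1 (weak) to 4 (strong).
scale = {
    "thesis clarity": (1, 4),
    "use of sources": (1, 4),
    "organization":   (1, 4),
}

def score_work(ratings, scale):
    """Validate each rating against the scale, then total the traits."""
    for trait, value in ratings.items():
        low, high = scale[trait]
        if not low <= value <= high:
            raise ValueError(f"{trait}: {value} is outside {low}-{high}")
    return sum(ratings.values())

ratings = {"thesis clarity": 3, "use of sources": 2, "organization": 4}
print(score_work(ratings, scale))  # total score for one paper
```

Totals from many students, scored against the same scale by several faculty, are what make the results usable at the program level rather than only for individual grading.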

How can the assessment of the major also serve as an assessment of general education?  Aren't we assessing very different competencies?

To a great extent, yes: what we expect students to learn from the major program is different from what we expect them to learn from general education. However, we can find ways to assess general education learning outcomes within the major by choosing to assess those learning outcomes that are of most value to the major. For example, do we want students to be able to write well within their disciplines? If so, we can use the writing projects that they do for the major to assess their writing ability.

But what has that to do with general education? Well, it can help inform our general education writing program. If students still have major weaknesses in their writing, even after they are in the major, perhaps the writing program can be modified to address those weaknesses. Or the faculty may decide upon a different solution or solutions. Some institutions, for example, have implemented a junior- or senior-level writing course for each major. Some have devised "writing intensive" courses within the majors. Some have writing-across-the-curriculum or writing-across-the-disciplines workshops for faculty who want to improve students' abilities to write. Although such solutions are not part of the general education program, the assessment of a general education competency (in this case, writing) has led to the program changes.

NOTE: It may not be worthwhile to try to assess all general education competencies in all the majors, for some will be more pertinent to a given major than others.

*Special thanks to the Assessment Office at Skidmore College.
