Glossary of Assessment Terms

Assessment is the systematic, reported evaluation of student outcomes, undertaken to demonstrate a program's effectiveness and to improve its offerings.

Capstone Course is an upper-division class designed to help students integrate their knowledge. For assessment purposes, student work must be evaluated by faculty members responsible for the program, not just by the instructor of the course. Capstone experiences and standardized exams are sometimes part of a capstone course.

Capstone Experience/Student Teaching is an activity for graduating seniors that is designed to demonstrate comprehensive learning in the major through some type of product or performance.

Case Studies are detailed analyses of projects or problems that result in exemplary models. For example, Counseling and School Psychology asks students to generate a course of action for dealing with a client based upon a comprehensive profile of that client.

Competitions/Meets are experiences during which students demonstrate their expertise and are judged or rated by experts in the field while competing with other students. Forensic meets are an example of this assessment method.

Demonstrations are performances that show skills students have mastered. Theatre students might be assessed on their acting skills demonstrated in theatrical productions.

Direct Measures are activities by students that directly demonstrate learning. They include objective exams as well as performance measures, such as demonstrations, internships, and portfolios, that are evaluated by individuals other than the instructor.

Exams are standardized tests that may include multiple-choice, true-false, and other response-selection items, essays, and/or problems.

  • National exams are usually produced by a professional organization, such as the American Chemical Society, or by a commercial company, such as ETS (Educational Testing Service). National exams are usually nationally normed, so there is feedback about the relative accomplishment of participants.
  • Local exams are produced by the department or institution using them. Careful attention is usually given to reliability and validity, and data are kept to compare current participants with past groups of participants.

Field Experience is the application of knowledge and analysis in professional settings. For example, a field experience in marketing offers on-site activities designed to allow the student to analyze an organization in a major market area and develop a marketing plan for it.

Focus groups are carefully planned group discussions conducted by trained moderators. A small number of questions, developed in advance, is used to generate in-depth consideration of a narrowly defined topic. Focus groups examine perceptions, feelings, attitudes and ideas. A focus group could be used to assess student satisfaction with the General Studies program.

Formal Observation involves experts watching and evaluating student performance. Students working in a group to solve a problem could be the object of formal observation.

Goals express intended results in general terms. A goal might describe such broad learning concepts as clear communication, problem solving, or ethical awareness. Goals are further defined by being broken down into measurable objectives.

Indirect Measures are means of assessment that ask students to reflect on their learning rather than to demonstrate it. Focus groups, interviews and surveys are examples of indirect measures.

Internship is an experience that allows the student to apply what has been learned to a professional situation within a close mentoring relationship. The mentor's evaluation of the student's performance serves as the assessment tool.

Interviews are one-on-one conversations beginning with a predetermined set of questions to ascertain students' reflections on their learning experiences. Interviews can be structured or open-ended. A carefully constructed data collection instrument is necessary.

Juried Activities are performances that are evaluated by experts in the profession. Recitals, art shows and conference presentations are examples of juried activities.

Objectives express intended learning outcomes in precise terms, describing specific behaviors and knowledge students should exhibit. Objectives are measurable, and collectively their measurement indicates whether a goal has been achieved.

Oral presentations are reports given to a group that includes experts who evaluate the work presented. Thesis and senior-project defenses are examples of oral presentations.

Performance Measures are assessment methods based on student activities or products, as opposed to tests or surveys, to evaluate students' knowledge, skill and development.

Portfolios are systematic collections of students' work that are evaluated against set standards for evidence of learning and development. An art student's portfolio is an example.

Projects are products of real-life work based on clients' needs. They allow students to practice oral and written communication skills while defining a problem, developing a plan, collecting data, selecting tools and analyzing results. Usually the client evaluates the student's performance. Software development for clients is an example of a project for senior computer science students.

Reliability means that an assessment instrument, such as a comprehensive examination or survey, will produce consistent results over time. With a reliable instrument, differences in results among subjects reflect differences in knowledge or opinions rather than measurement error. In performance-based assessment, a participant should receive similar scores from different evaluators.
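To make the last point concrete, a quick way to check agreement between evaluators is to compare their scores on the same set of performances. A minimal sketch, using hypothetical rubric scores (the numbers are illustrative, not from any actual assessment):

```python
# Two evaluators score the same ten student performances on a 1-5
# rubric (hypothetical scores); we compute how often they agree exactly.
rater_a = [4, 3, 5, 2, 4, 3, 5, 4, 2, 3]
rater_b = [4, 3, 4, 2, 4, 3, 5, 4, 3, 3]

# Exact-agreement rate: the fraction of performances scored identically.
agreement = sum(a == b for a, b in zip(rater_a, rater_b)) / len(rater_a)
print(f"Exact agreement: {agreement:.0%}")  # 80%
```

In practice, programs often go beyond simple agreement to statistics such as Cohen's kappa, which corrects for agreement expected by chance.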

Survey is a method of collecting information from people about their characteristics, behaviors, attitudes, or perceptions. Most often surveys are questionnaires or structured interviews, administered by mail or telephone or completed on site; they require a specific set of questions, appropriate instructions and a carefully constructed data collection instrument. Many university departments administer alumni surveys. Care needs to be taken that surveys used as assessment instruments collect information about student outcomes.

Survey research is one tool of cross-sectional analysis, and it usually relies upon a statistically valid sampling technique. Valid sampling techniques generally yield reliable data that can be generalized to the population one wishes to examine. When surveying alumni and employers, several practices are required (a code sketch of the two sampling techniques follows the list):

  1. A listing of the population (all the people you want to examine) is needed.
  2. A sample of the population must be drawn using one of two generally accepted techniques:
    1. Random sample: use a random number generator to produce a list of numbers, number all of the names in the population, and select the names whose numbers appear on the random list.
    2. Systematic sample: pick every nth name on the list. To use this approach, divide the total number in the population by the desired sample size to find the interval n. For instance, if you have 6,500 students and you want to sample 650, the interval is 6,500/650 = 10, so you start somewhere in the list and mark every tenth name; the names you marked are your sample. This technique approximates a random sample with less work, because position in the list, rather than any characteristic of the individuals, determines who is chosen.
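Here is a minimal sketch of both techniques in Python, assuming a hypothetical list of 6,500 alumni names (the names and sizes are illustrative):

```python
import random

population = [f"Alumnus {i}" for i in range(1, 6501)]  # hypothetical 6,500 names
sample_size = 650

# 1. Random sample: draw names at randomly generated positions.
random_sample = random.sample(population, sample_size)

# 2. Systematic sample: pick every nth name after a random starting point.
interval = len(population) // sample_size   # 6,500 / 650 = 10
start = random.randrange(interval)          # random start within the first interval
systematic_sample = population[start::interval]

print(len(random_sample), len(systematic_sample))  # 650 650
```

Starting at a random position within the first interval keeps the systematic sample free of any bias that would come from always beginning at the top of the list.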

Survey research is based upon the idea that a sample of individuals can represent the entire population. The problem presented by sampling is deciding how many people to sample. The easiest way to solve this problem is to use a sample size calculator on the web: either search for a "sample size calculator" or use this URL: http://www.surveysystem.com/sscalc.htm. Here are a few guidelines to keep in mind when you use a sample size calculator, followed by a sketch of the calculation such tools perform:

  1. Confidence level specifies how confident you can be that a given sample size is adequate. Normally, a 95% confidence level is appropriate; it means that 95 times out of 100 your sample will produce results within a limited interval (range).
  2. Confidence interval is usually expressed as a range of percentages. For evaluation research, a +/-5% confidence interval is appropriate. This means that responses from a sample of the size you specified will fall within 5 percentage points of the values you would get from the entire population.
  3. A sample size with a +/-5% confidence interval set at the 95% confidence level will therefore produce answers that are, 95 times out of 100, within plus or minus five percentage points of the answer that would be produced if you asked the entire population.
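The calculation behind such calculators is standard. A minimal sketch, assuming the usual formula for estimating a proportion with a finite-population correction (the population of 6,500 is carried over from the example above):

```python
import math

def sample_size(population: int, z: float = 1.96, margin: float = 0.05,
                p: float = 0.5) -> int:
    """Required sample size for estimating a proportion.

    z      -- z-score for the confidence level (1.96 for 95%)
    margin -- confidence interval as a decimal (0.05 for +/-5%)
    p      -- assumed population proportion; 0.5 is the most
              conservative choice (it maximizes the sample size)
    """
    n0 = (z ** 2) * p * (1 - p) / margin ** 2   # infinite-population size
    n = n0 / (1 + (n0 - 1) / population)        # finite-population correction
    return math.ceil(n)

# 95% confidence level, +/-5% confidence interval, population of 6,500
print(sample_size(6500))  # 363
```

Note how weakly the result depends on population size: for a population of 65,000, the required sample only rises to about 382.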

Validity is the instrument's accuracy in measuring what a program/department is trying to measure. In assessment, instruments should measure actual student outcomes. There should be an obvious and demonstrable link between the assessment instrument and the program/department's curriculum. There should be a high correlation between the predicted success of graduates on an assessment instrument and the graduates' actual success.

Writing samples are collected from students' work at fixed intervals in their program. A team of faculty evaluators assesses them anonymously, with no indication of course level, using a standard rubric. The purpose of the samples is to assess the writers' degree of improvement over their course of study. Writing samples can be assessed both as a group, anonymously by class rank, and individually, using a type of student portfolio created by the faculty rather than by the students themselves.

Much of the information used in this glossary is from Assessment Essentials by Catherine A. Palomba and Trudy W. Banta, published in 1999 by Jossey-Bass.