Today’s post was written by Dr. Cynthia Tweedell, EMS’s assessment and accreditation lead.

I first got involved in assessment for non-traditional education in 1998, when I joined the team at Indiana Wesleyan University. I had been teaching at a school that dabbled in adult education in the 1980s, but Indiana Wesleyan was moving like a freight train into accelerated and online degree programs. As a traditional academic who had been teaching sociology for over twenty years, I was extremely suspicious of this move. How could we maintain the same quality when a class meets for only five weeks and is taught by adjunct faculty? And online learning? You’ve got to be kidding! If it weren’t for the fact that sociology faculty positions had dried up and higher education research was something I knew I could do, I would never have relocated from the Chicago suburbs to the cornfields of north central Indiana. But I was the perfect one for the job, because I was curious to know whether we could find quality outcomes in this innovation in higher education. Within the first six months of collecting and analyzing assessment data, I became a believer.

Gradually we convinced the rest of the world that traditional education is not necessarily superior to accessible, adult-focused models. More and more students and faculty are participating in non-traditional education, and it is now recognized by accreditors and regulators as a viable model for learning. How did we convince them? Not through hyperbole or charismatic showmanship. We convinced them with assessment data. We had to produce data showing that accelerated and online models achieve outcomes comparable to those of traditional education. At the same time, we had to use that data to improve these innovative models of education. That’s the importance of assessment.

Today, I regularly represent a regional accreditor on reviews of colleges with non-traditional programs. I’ve reviewed the assessment processes of over forty institutions. There is no one way to do assessment, and I’ve seen a lot of variations over the years. I frequently see assessment processes so complex that few faculty and administrators understand them. Some schools spend a lot of money producing data that most faculty never see. There also seems to be very high turnover among assessment directors, and each new director brings a whole new approach, which confuses faculty. I can usually find someone on campus who can “talk assessment,” but when I ask faculty what data they use to make curricular changes, I frequently get blank stares. And there are still faculty who hate assessment processes, seeing them as the administration’s way of checking up on them. It’s no wonder that assessment directors don’t last long.

Assessment doesn’t have to be confusing and expensive. I often consult with schools that have spent a lot of money on standardized tests, outsourced surveys, and assessment management systems. They hope their investment will pay off with approval from accreditors. But as a reviewer for a regional accreditor, my focus is not on the amount of data but on what the college is doing with it. How has the college used the data to make informed decisions that improve learning outcomes? I’ve walked into colleges with files and files of data that no one has examined for years. As we say in assessment, “You can’t fatten a pig just by weighing it.” Having lots of data is no guarantee of improved learning outcomes.

Simple home-grown tests, essays, and surveys administered at the beginning and end of a program can be an inexpensive, effective alternative to outsourcing assessment. Faculty are probably already assessing learning through course assignments. Why not collect key assignments from a representative sample of students and have a team of faculty review them? The resulting data can then inform program changes.

Another simple but useful tool is a curriculum map. How does each course in a program contribute to the program learning outcomes? Make a simple chart showing each program outcome, how it aligns with course objectives, and how it is measured. This simple exercise can help curriculum builders see where there are gaps that need to be filled.
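A spreadsheet is all the map really requires, but for the technically inclined, here is a minimal sketch of the idea (the outcome names, course numbers, and measures are hypothetical placeholders): the map links each program outcome to the courses that measure it, and a gap is simply an outcome no course covers.

```python
# A curriculum map as a simple table: each program learning outcome,
# the courses whose objectives address it, and how each is measured.
# All names below are hypothetical examples, not a real curriculum.
curriculum_map = {
    "Outcome 1: Written communication": {
        "ENG 101": "graded essay (shared rubric)",
        "SOC 410": "capstone paper (shared rubric)",
    },
    "Outcome 2: Quantitative reasoning": {},  # not yet measured anywhere
}

# A gap is any program outcome with no course that measures it.
gaps = [outcome for outcome, courses in curriculum_map.items() if not courses]
print(gaps)
```

Run against this toy map, the gap check flags “Outcome 2: Quantitative reasoning” as the hole curriculum builders need to fill.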

Institutions can also make a lot of progress on assessment simply by taking stock of the data that already exist, such as:

  1. External review of senior projects
  2. Juried review of music or art performance
  3. National or state licensing exams
  4. Home-grown entrance exams (which could also be used as exit exams)
  5. Career/graduate school placement data
  6. Retention/graduation statistics
  7. Student evaluations
  8. Alumni surveys

Instead of putting such data in a drawer and thinking that “we fattened the pig by weighing it,” critically examine the results to discover areas that need improvement.  Then make and implement an improvement plan.  This is called “closing the loop” on assessment.

When it comes to assessment, the KISS principle really comes into play: Keep It Simple and Sustainable. Assessment plans that are too complex will never be fully implemented, and they will not survive being handed off to the next assessment director. When institutions go through multiple assessment directors, and multiple plans, faculty become frustrated, confused, and exhausted with the process. To get faculty fully on board, show them a simple, direct process that will enhance learning outcomes.

Regulators are still watching carefully to make sure innovations in higher education produce learning outcomes. Assessment data are critical for checking the validity of new modes of course delivery. But assessment does not have to be so complex that it breaks the budget. Faculty are already committed to learning and already use techniques to measure it. Why not use these faculty-created measures to assess learning in non-traditional modes of delivery?

Need help with your assessment systems and accreditation?  Let the EMS team help!
