Assessment in higher education

Assessment in higher education was a reform movement that emerged in the United States in the early 2000s to spur improved learning in higher education through regular and systematic measurement. The campaign was a higher education corollary to the standardized testing required in K-12 schools by the No Child Left Behind Act. By the late 2010s, the bureaucratic demands of the assessment movement were being reconsidered in higher education, even by some of those who had played a major part in promoting them.[1]

Assessment as promoted by advocates

Advocates of systematic assessment in higher education promoted it as a process that would use empirical data to improve student learning.[2] They envisioned that colleges would identify clear, measurable descriptions of intended learning, gather evidence to determine whether students' actual learning matched those expectations, and use the collected information to improve teaching and student support.[3] Institutions of higher education implemented systems for creating, collecting, and reporting assessment data in response to increased demands from accrediting agencies, which had promoted the concept as necessary to satisfy political demands for accountability, including from the Spellings Commission, launched in 2005.[1][4]

Advocates of assessment insisted that colleges should be able to distill their intended student learning outcomes into statements and related data at the level of individual courses, programs or majors, and the institution as a whole.[2] They argued that the internal process of analyzing and discussing evidence about what students know and can do would transform teaching and learning for the better.[5]

The growth of demands for campus assessment data fostered an industry of software products marketed to colleges. In 2019, one professional association catalogued more than 60 assessment-related technology products offered to institutions by vendors.[6]

Criticism of the assessment movement

In a 2018 New York Times opinion piece titled "The Misguided Drive to Measure 'Learning Outcomes'", Molly Worthen criticized the assessment profession for creating an elaborate, expensive "bureaucratic behemoth" lacking an empirical foundation.[7] Robert Shireman, an advocate for student access and success, has written that the system evolved in a way that "prevents rather than leads to the type of quality assurance that has student work at the center."[8] Erik Gilbert, a professor of history, argued that assessment in higher education has little effect on educational quality and that accrediting agencies require institutions to invest time and resources in collecting data that is not useful for improving student learning.[9]

Some leading assessment practitioners have been critical of common practices in the field.[10] David Eubanks, an assessment director, has observed that sample sizes in most course- and program-level assessments are too low to provide meaningful information.[11] In 2019, Natasha Jankowski, director of the National Institute for Learning Outcomes Assessment, described the current state of assessment as a "hot mess" and acknowledged that "[t]here are good reasons why faculty hate it. It's real and it's earned."[12] In January 2020, the professional association for campus assessment practitioners adopted a "foundational statement" intended to clarify the profession's purpose.[13]

In July 2020, the National Advisory Committee on Institutional Quality and Integrity established a subcommittee, chaired by David Eubanks, to examine how accrediting agencies approach the assessment of student success.[14] The subcommittee found that the federal and accreditor standards it examined did not require expensive and bureaucratic monitoring approaches. Instead, it pointed to peer reviewers with inflexible expectations as creating the impression that such approaches were required, which has sometimes steered colleges in unproductive directions.[15]

References

  1. ^ a b Roscoe, Douglas (2017-03-03). "Toward an Improvement Paradigm for Academic Quality". Liberal Education, Association of American Colleges & Universities. Archived from the original on April 25, 2021. Retrieved 2021-05-09.
  2. ^ a b Allen, M.J. (2004). Assessing Academic Programs in Higher Education. San Francisco: Jossey-Bass. ISBN 978-1882982677.
  3. ^ Suskie, Linda (2004). Assessing Student Learning. Bolton, MA: Anker.
  4. ^ Angelo, Thomas; Cross, K. Patricia (1993). Classroom Assessment Techniques: A Handbook for College Teachers. San Francisco, CA: Jossey-Bass. ISBN 1555425003.
  5. ^ Stevens, D.D.; Levi, A.J. (2013). Introduction to Rubrics. Sterling, VA: Stylus.
  6. ^ NILOA (August 12, 2019). "Assessment-Related Technologies" (PDF). Archived (PDF) from the original on May 16, 2020.
  7. ^ Worthen, Molly (2018-02-23). "The Misguided Drive to Measure 'Learning Outcomes'". The New York Times. Retrieved 16 April 2018.
  8. ^ Shireman, Robert. "SLO Madness". Inside Higher Ed. Retrieved 10 April 2017.
  9. ^ Gilbert, Erik (2015-08-14). "Does Assessment Make Colleges Better? Who Knows?". The Chronicle of Higher Education. Retrieved 10 April 2017.
  10. ^ Blaich, Charles; Wise, Kathleen (2018-07-04). "Scope, Cost, or Speed: Choose Two—The Iron Triangle of Assessment". Change: The Magazine of Higher Learning. 50 (3–4): 73–77. doi:10.1080/00091383.2018.1509606. ISSN 0009-1383.
  11. ^ Eubanks, David (April 17, 2019). "Guest Post: Reclaiming Assessment's Promise". Inside Higher Ed. Retrieved 2021-05-17.
  12. ^ Lederman, Doug (April 17, 2019). "Advocates for student learning assessment say it's time for a different approach". Inside Higher Ed. Retrieved 2021-05-17.
  13. ^ "Foundational Statement #1". www.aalhe.org. Retrieved 2021-05-17.
  14. ^ U.S. Department of Education. "Archive of NACIQI meetings".
  15. ^ "NACIQI Subcommittee Report" (PDF). U.S. Department of Education. July 2021.