Investigación en Educación Médica

ISSN 2007-5057 (Print)

2021, Number 39


Inv Ed Med 2021; 10 (39)

The challenge of evaluating oral presentations: the use of a rubric in a graduate program in medical education

Olvera LA, Pompa MM, Maya LMPJ, Hernández FMD, García-Minjares M, Sánchez-Mendiola M, Fortoul TI

Language: Spanish
References: 13
Page: 35-42


Key words:

Assessment instruments, reliability, validity, graduate students, medical education.

ABSTRACT

Introduction: We present the results of applying a rubric to assess oral presentations of research projects in a graduate course in medical education.
Method: The instrument assesses verbal and nonverbal skills, visual aids, content, and organization. Validity evidence for the results was obtained through principal component factor analysis, reliability was estimated with Cronbach's alpha, and psychometric properties were examined with the Rasch partial credit model.
Results: The most relevant findings were the identification of three groups of presentation skills (design, internal control, and interaction with the audience), high reliability for the whole instrument, correct functioning of the response options, and independence between the item and ability parameter estimates.
Conclusions: Given its reliability and validity, the instrument is useful for assessing oral presentations by graduate students in the medical education program.
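The reliability analysis reported in the Method can be illustrated with a short sketch of Cronbach's alpha, the coefficient the authors used for the whole instrument. The rubric ratings below are entirely hypothetical (the paper's actual data and rubric criteria are not reproduced here); the formula itself is the standard one, k/(k-1) · (1 - Σ item variances / total-score variance).

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for an examinees-by-items matrix of rubric scores."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                           # number of rubric criteria (items)
    item_vars = scores.var(axis=0, ddof=1)        # sample variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)    # sample variance of total scores
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical data: 6 presentations, each scored on 4 rubric criteria (1-4 scale)
ratings = np.array([
    [4, 4, 3, 4],
    [3, 3, 3, 3],
    [2, 3, 2, 2],
    [4, 3, 4, 4],
    [1, 2, 1, 2],
    [3, 4, 3, 3],
])
print(round(cronbach_alpha(ratings), 3))  # → 0.933
```

Values near or above 0.9, as in this toy example, are typically read as high internal consistency, consistent with the "high reliability for the whole instrument" reported in the Results.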





