Experiments using three types of learning methods
- conventional classroom learning;
- mastery learning;
Apparently first defined by Bloom. Classes of 20 to 30 students must, as a group, score at least 90% on a knowledge test before the class moves on. Formative tests are given both to inform corrective procedures and to test understanding.
- 1-to-1 tutoring;
Individual teaching using formative tests and feedback similar to those used in mastery learning.
- Two-sigma - 1-to-1
The average student receiving 1-to-1 tutoring with mastery learning performed higher than 98% of students taught with the classroom approach, i.e. two standard deviations above the classroom mean.
- 1-sigma - mastery
The average student under mastery learning performed higher than 84% of students in the conventional classroom (one standard deviation above the mean).
- "90% of tutored students and 70% of the mastery learning students attained the level of summative achievement reached by only the highest 20% of the students under conventional instructional conditions" (Bloom, 1984, p. 4)
- How do the "smart" or the less motivated students deal with mastery learning's requirement to keep revisiting a topic until all achieve 90%?
- How would a teacher handle this in a traditional setting?
- "solving" mentions Khan Academy
Reliant on student motivation to engage, though this could be supported.
- "solving" mentions Udemy having "massive office hours"
Lots of tutors available to handle questions, leading to ideas of call-centres to ensure tutors are available when required. This raises the "AIC" problem, where the tutor may not understand the content/teaching approach being used in the course.
- "solving" talks about various forms of "feedback-corrective" tools - leading toward the personalisation approach and to differentiated instruction and adaptive learning systems (which is what Khan has become somewhat)
- "solving" does touch on the idea of Fitbit/IoT type devices providing specific feedback.
- "back" talks about CBE and analytics as providing data that can provide teachers with the insight required to support mastery approaches.
Bloom's (1984) suggestions
- improving instructional materials
- enhancing peer interactions
- considering student differences
- engaging higher mental processes
- Identify all that is required to be learned and the order in which it is to be learned.
Not always possible or desirable in all courses, touches on arguments against learning objectives etc.
Comment on "back" makes this point and suggests "look to the future and use dynamic learning more aligned with today’s technologies and the capacity to measure learning post hoc".
- Develop appropriate instructional methods and formative/summative tests to identify mastery.
- Develop/implement corrective measures based on results from tests.
In a class setting or by single tutor.
Problems with 2-sigma
- VanLehn (2011), in a review article, found that human tutoring had an effect size of 0.79 (rather than Bloom's 2.0), while intelligent tutoring systems had an effect size of 0.76
- "the problem" quotes VanLehn (2011) deeper explorations of the experiments Bloom reported upon,
- Topic was probability
- tutors were education majors (not necessarily experts in probability)
- Only one of the 6 studies of human tutoring that Bloom reported involved 1-to-1 tutoring (in the others, tutors worked with groups of 3 students)
- the "mastery mark" for the class-based mastery was 80%, but for tutoring 90%. That tutors were holding students to a higher standard may explain 2.0 effect size achieved while class-based mastery only achieved 1.0
- Solving the 2 Sigma Problem
- Back to Bloom
- Closing the 2-Sigma Gap: 8 strategies to replicate one-to-one tutoring in blended learning
- The problem with Bloom's two-sigma problem
Bloom, B. S. (1984). The 2 Sigma Problem: The Search for Methods of Group Instruction as Effective as One-to-One Tutoring. Educational Researcher, 13(6), 4–16.
VanLehn, K. (2011). The relative effectiveness of human tutoring, intelligent tutoring systems, and other tutoring systems. Educational Psychologist, 46(4), 197–221. http://doi.org/10.1080/00461520.2011.611369