So on second listen, I was struck by two things. One, there was so much transparency in her introduction about the process and production of the podcast. Interesting tactic.
Two, the “all or none” approach seems pretty unrealistic. I think the likelihood is that the more straightforward learning will be done with robots/AI/machine learning. Things like specific lessons, or placement (which is already kind of robotic, in the sense that it’s a standardized test), and that could open up more tutor-style relationships with educators in smaller classrooms for in-person instruction.
In terms of defining a speculative future, I like the selection of a specific date. It has an emotional effect and puts you in a mindspace. But even so, I think the episode’s premise, that robots simply replace teachers in 2099, is pretty narrow.
There’s a great (often untapped or poorly executed) power in design and technology to bring together, rather than separate. I wonder whether that was explored. The individual assignment or lesson is isolating to be sure (although perhaps not much more than the heads-down writing assignment or math quiz), but what about an interactive group game or augmented environment that required learning facilitated through a machine?
Another thought was around the risks with algorithms. I would be curious to know whether (as algorithms get more tested and smarter) those issues would happen less or more often than with people. Humans are notoriously susceptible to prejudice and perception. Would a child with ADHD fare better with a robot, which doesn’t care about his behavioral issues, than with a frustrated teacher who feels this child is stealing energy and focus from the classroom? The same goes for any number of other adaptive learning methods and learning disabilities. Is it a matter of an algorithm at all, or just the level of its intelligence and accuracy? As a near-term issue, the human is obviously superior, but if we’re talking 80 years out… something to think about?
One question about the assertion that teaching isn’t respected because it’s a female-dominated field. I imagine that’s not true at the university level, or at least didn’t used to be? Would be curious to know those demographics. On that note, I noticed the choice of robot teacher for the episode was the stereotypical robotic female voice, rather than a silent robot, a male robot, or a less overtly gendered robot. In this sense, it opens up a number of design questions about the user’s knowledge of the bot, as well as how the bot’s characteristics change how you interact with it. For example, does having an ungendered teacher do anything for students? How much of the teacher’s role is to model behaviors from an adult who isn’t a parent? Do students without a male or female role model benefit from having an instructor of that gender?
All food for thought.