Germination without rumination
AI software is shifting the sands that state education is built upon
I wrote this essay for the Soaring Twenties substack and am now sharing it with you.
When I was at Manchester University there was a guy who lived on my corridor, two doors down, who was the life and soul of the party. He once came home from a night out at 4 a.m. and, seeing a bus outside our halls filling up with students from the college next door, got chatting with them in his usual amiable way. And that is how he ended up 120 miles from home with a busload of sixteen-year-old Mancunians on a field trip to Whitby fish market. It takes about five hours, there and back, to cross the north of England: around the same amount of time I spent at the end of the year writing his final sociology essay, once he realised that you needed to submit something passable to be allowed to return for another year of drinking. My fee was nominal: a fiver and a case of beer.
Nowadays though he would have just hit up Chat-GPT.
AI software can write your college admissions essay and pretty much any undergraduate paper, submitting work in the 90th percentile when it does so; it can even pass the bar exam. Professors decry that students might no longer put the work in, and that this would be self-defeating, for the college essay is an opportunity to discover yourself through the writing process. It is the writing that brings out the thoughts, helps you determine your argument and refine it in the process, so you can better understand your own opinion and perspective, so you can understand better who you are and who you want to become.
They are right, but this is a truism, because the purpose of anything is to discover yourself through the process of it. That is the point of self-direction. Those of us who work in alternative education settings, where children have the autonomy to direct all of their learning, already know this, and so for us this is a grand opportunity: AI is a mirror that reflects back to the system all its pedagogical flaws.
All the complex rituals and tests we have established, the numerous hoops to jump through, the meticulous grading systems and claims of meritocracy that hold as much water as a teaspoon, merely obscure the fact that if people don't want to do whatever it is they are meant to be engaging in to discover themselves, then that is a reflection of us, the adults, and the system we have designed, not of them, the children. Self-determination theory teaches us that the desires for competence and autonomy are our birthrights, and as long as others do not interfere in the process they will remain with us, driving us towards whatever ends we self-select to pursue.
If we have created an education system where, at the pinnacle, after all the chaff has been blown away back to its post-industrial towns, the wheat that remains can't muster the energy to sprout and would rather germinate without having to ruminate, then that says a lot about our education system. It says a lot either about what we are asking the young people to do, or about the journey we have taken them on to get to the point at which they now look at us and refuse, bored.
Let's be clear: when children are subject to the flattening and deadening national curriculum, the result is an epistemology where knowledge is done unto, rather than something you undertake with freedom and creativity. So when they encounter tools such as AI that could be used creatively, and they instead choose to avoid the hard work of discovering who they are for the ease of passing the exam, they are providing us with clear feedback about how we have enculturated them. In education, as in parenting and fiction, the rule is: show, don't tell.
Education shows us very clearly that knowledge sits in the realm of doing unto. So when a tool such as a large language model becomes available to them, young people will do what they have always been shown and do unto, just as they were taught.
This is how a lifetime of schooling ends up with a majority of young people unable to see the point of putting in the work. But their relationship to AI does not have to be predicated upon extractivism, unless they reside in a culture whose very foundations are extractivist.
When education is a means of perpetuating socioeconomic status through time, a way to signal and filter, developments in AI should make educators sit up and take note, because here is a potentially revolutionary technology that will level the playing field whilst simultaneously illuminating that it seemingly makes no sense to play at all.
This is why AI is not revolutionary to us. Those of us in self-directed education have known that the science on self-direction, autonomy, consent, non-coercive behaviour management, and self-determination has been solid for a few decades at least. We have been working in our tiny corner of the field of education safe in the knowledge that a paradigm shift is coming, working with young people in ways that align with the best of what we know from evolutionary anthropology, neuroscience, trauma-informed practice and the science of neurodivergence; we are engaging in prefigurative politics: building the new system in the shadows of the old, planting the seeds of the new system in the soil of today’s.1
So to us the coming of the moment when the game of mainstream education finally no longer makes any sense to play, when the rest of society starts to see that the way we enculturate our children is based not on science but on the desires of particular political classes, is inevitable.2 That AI is a midwife to this moment is welcome, and from this weedy patch in the corner of the field we can already see the excessively large farmhouse that is state education shifting in multiple ways as its foundations of sand start to tremor.3
So obviously in our learning community large language models have a welcome place. They have a place in our Big Ideas Club, where we discuss topics together that interest us. For some of our tweens, that means that instead of doing the same Year 7 mathematics I had to do over twenty years ago, their education right now is debating the ethics and existential threats of AI.4 But they also have a practical place in our goal-setting programme.
I work with a twelve year old who has an interest in anime and wants to learn more about folklore. They are leaning towards Japanese myths because of their current interest in drawing manga characters, but they don’t really know anything right now, and nor do I. We are both novices in this field but we can turn to Chat-GPT together to help us chart a path forward in trying to learn about Japanese folklore.
The purpose of our goal-setting programme is to work iteratively towards goals using a combination of planning and reflecting. Though we are aware that lifetime passions can blossom out of the interests that come up now, it is not the particular topics that are important but the meta-cognitive processes and meta-learning skills, and those require a path to be heading along. We need to break the goal down into manageable steps and stages, a concrete path to work towards, so that we have something to reflect on when we review our progress.
In some ways we could have used Google and got the same end result, but by conversing with the model in natural language we can take a starting point and burrow our way recursively down into more detail until, in only three questions and even fewer minutes, we have three titles of books to search for.5
A few minutes later we see that the last book is massively reduced on Kindle, and for less than a single pound he can instantly own it and start reading Japanese folklore. Come next week we will have a chance to reflect on the book and he can take another iterative step towards his goal.6
I recently asked a twelve year old girl what skills she had, what things she knew and what things she had done.
I then asked her what skills she wanted to have, what things she wanted to know, and what things she wanted to do.
One of her interests is cats. I took her through an exercise where she could create a mini-project plan, and she came up with the idea of a cat club she could run for other people in the community. I asked her to write down all the possibilities she could think of for a cat club project: every activity she could put on, everything she could do or learn and every way she could do or learn them.
I then filled out a section called suggestions, where I made a list of possibilities I thought she hadn’t considered. We then tried to draw some of these shared possibilities together into a story: how to create a cat club for my friends and peers? From that prompt we built out a task list of three or four weeks’ worth of activities she would need to undertake to make it a reality.
Afterwards we played around with Chat-GPT and looked at the ways in which large language models can help us create possibilities: how, with skilful prompts, we can give ourselves additional inputs to pull together into a project proposal. We also looked at how we can ask the software to parse all the possibilities we have created and notice the themes, and how that can help us unlock project proposals when we are struggling to see which of our thoughts connect to each other. And we saw how we can use Chat-GPT to create a simple Gantt chart for our task list.7
Henrik Karlsson, who writes on how we can scale highly agentic cultures, has written extensively about the apprenticeship model of learning; about the amplification of learning under tutoring arrangements, which are an apprenticeship of sorts; and about how the creation of an exceptional milieu around children provides them with significant advantages. With the internet’s high connectivity and information capacity we can start to provide that for almost everyone, and as part of that he believes that AI tutors are within reach for every child. But the bottleneck is not the software; it is culture. I agree: the bottleneck is that ours is not a highly agentic culture, and the problem is, to put it simply, school. In AI tutors will be held back by culture Karlsson says this directly:
Making GPT-4 tutor me helps me accelerate my learning, but that is because I’m obsessed with the questions I study. If introduced in schools, I doubt most children would leverage these systems to grow significantly faster. That is not what they are motivated to do; it is not what their culture motivates them to do.
Karlsson discusses how Khan Academy’s system built on top of GPT-4 is a way of “fluidly interacting with information, shaping it through dialogue, [that] is immensely powerful”. But Khan Academy itself is a powerful tool for shaping and improving learning that, when introduced into schools, doesn’t improve grades as expected. This is the same problem again: it is not what their culture motivates them to do.
But at learning communities where young people are self-directing their education they are surrounded by a culture that motivates them to pursue the questions that obsess them. You may ask, but surely you do not want them to just outsource all their thinking to a large language model? Well, no.
Iain McGilchrist believes that over the past four centuries Western civilisation has allowed the left hemisphere, predominantly rational and logical thought, to push the right hemisphere, predominantly intuitive, aside. He sees the development of large language models as the culmination of this left hemisphere takeover.
As well as cat clubs, this twelve year old girl had a skill she wanted to develop: learning to be brave. This, we decided, was not a skill to be put to the software to chew over. It was too tacit, too individual, too reliant on seizing opportune moments for progression for Chat-GPT ever to do the question justice. In essence we came to the conclusion that planning how to run a cat club is a left hemisphere task, appropriate to ask of a left hemisphere inspired piece of software; learning how to be brave is a human task, best left to exploration through a relationship with another human. When the moments for you to risk yourself and be brave appear, you will know through your right hemisphere: your intuition will guide you.
We came to that conclusion together; here I am not a teacher but a mentor. Teachers will soon be displaced, not because AI tutoring software will usher in a new way of learning mediated through the screen, but because teachers are poor substitutes for AI software. Relative to AI software, teachers are information-poor examples of a left hemisphere resource; but mentors, those who focus as much on right hemisphere thinking as on left, will be in high demand.8
Those who can help integrate young people’s hemispheres to work in tandem, those who can help young people to conceive of living holistically integral lives of mind, body and soul, will be the future adults working in education.
In her insistence that the model probably won’t help her be brave, this young girl knows already that knowledge is not done unto, but sees her relationship with knowledge as a creative and experimental part of her self-expression, and is exploring the idea that knowledge is sometimes best done with technology, but sometimes best left in community with other humans.
Maybe the real bottleneck is our left hemisphere obsessed culture, and our children not having a space to explore what a holistic notion of the good life might look like, and the freedom to play with that by expressing who they think they really are. Self-directed learning communities allow children the freedom not only to debate contemporary technology and the cutting-edge tools of their society whenever they want to, but to play with them as active agents in charge of their own learning, whilst also having the space to explore goals that pertain to the full breadth of what it means to be human. It is through that iterative self-directed process that they come to understand better who they are and who they want to become.
If you enjoyed this writing on self-directed education and want to share this piece as much as my friend wanted to share a bus ride to Whitby fish market and you know someone who might like it please do share it with them. Thanks, Tim.
Here I am not just talking about capitalism’s infamous 1%, but about the middle classes more broadly. For a good demonstration of this, listen to the podcast Nice White Parents.
In the UK the Covid response allowed plenty of parents to see that school was detrimental to their children’s mental health, and at significant cost, as what was being asked of their children was fully on display when they had to help teachers ensure that they turned up via Zoom. It lifted the veil. In other ways the increasingly authoritarian nature of mainstream education means that neurodivergent young people are being held in tighter and tighter ways, and that is leading to a massive increase in children suffering poor mental health due to school trauma.
This essay is about large language models being used as a tool in goal setting, but the most revolutionary part of it is tucked away in this sentence. The stale notion of learning maths by rote is still with us, as it was with the Victorians who designed our system of education. And here we are, still going through geometry as the Ancient Greeks did, instead of freeing children up to become active questioners of the world they inhabit.
I would also add that though this essay focuses on the benefits that AI can have for self-directed young people I am not claiming that AI is a benevolent force in the world. There are also obvious concerns with the development of tools with such potential power and these are starting points for discussions that young people can have. The tool needs to be explored by playing with it up close and the concerns need to be explored by playing with them in conversation up even closer.
In another post on large language models Henrik Karlsson sees large language models as “tools to increase the speed at which you can execute ideas.” And that “when navigating the internet by conversing with an AI, you can more rapidly navigate into weird places by asking repeated follow-up questions than you could on the base layer internet.”
Though there is someone who is interested in Japanese folklore, I haven’t sought their consent, so these are not the actual prompts we used together. But after a few hours working with young people using Chat-GPT to help create mini-projects, I can see that once the skill of crafting a useful prompt is learnt, this kind of narrowing in on potential actions to take towards a stated goal is relatively easy to do and extremely quick to complete.
Young people don’t know what they don’t know. They might know what they want to know, but not know what it is about the unknown that they do not know. They are wandering around with all this desire, like Donald Rumsfeld. Our job as mentors, in a sane culture the job of society, is to unlock that desire by making easy paths to explore unknowns. With only three of us working in the space, the chance that we will be able to advise people just from what we already know is slim. We are always doing research to help support people. Chat-GPT appears to be a powerful tool as much for us as for them.
One of the things I am noticing in my work with young people is that some, due to neurodivergence plus trauma, work much better on their own, and furthermore prefer doing the work at home, using the space more to deschool and socialise. The idea of using Chat-GPT with them was to provide a potential tool so they could do learning sprints on their goals and build out task lists at home without a mentor. They could then bring their work in and we could review it together afterwards. As Chat-GPT saves each chat, we could look back over their inputs with them and ask: what made you make that decision there? What could have happened differently if you had changed that particular input halfway down, and where might that have taken you?
The difference in compatibility between things I want to know, skills I want to have and things I want to do is stark: the tool is generally best applied to things that people want to know. Those are the goals that reside mainly in the sphere of analytical knowledge from which the model is birthed and with which it works best.