Interesting article, Wyatt. Your basic distinction between a human and a machine is crucial to grasp. I heard on the news the other day about parents suing OpenAI over ChatGPT giving their child advice about ways to commit suicide. When people look to AI for answers about life, they are definitely in trouble.
Very perceptive critique. AI has no soul, but with the right questions it can appear to reflect a particular perspective. Sometimes it acts like an echo chamber. Is it AI, or just me being reflected back? It can give very precise theological definitions and do comparisons on a grand scale. Who has time to do that? Does Luther teach x or y, and what was his opinion on z? Is it true, or is it just AI? I don’t know. AI is useful but dangerous. We need a course on its proper use. Any suggestions?
I am not sure we are at a point yet where I know of a single course.
That said, I would say that diving into what some call the Great Books can give us the human virtues needed to use technology well.
I always enjoy your writing, Wyatt, especially the retrieval work.
As someone who works in the data space and uses LLMs and ML at work, I've thought about these things a small amount, so I thought I might share some points:
## Books were once the scary new technology
Socrates (via Plato’s Phaedrus) thought writing would atrophy memory and genuine understanding. He feared that once words were outside a person, the "soul-to-soul" work of conversation would fade. And yet we only know of Socrates' objection because it was written down in a book! The anima did not have to live inside the scroll; it lived in the reader who wrestled with it himself.
## Why do we treat maths and writing differently?
We let machines do arithmetic without moral panic, but we instinctively bristle when they string sentences together. That asymmetry is very odd when we actually pause to think about it. Why is mathematics any less human than poetry? Galileo called it the "language in which God wrote the universe." If we are comfortable outsourcing even something as basic as long division to a calculator, why not a first-draft summary to a language model? The difference to me seems to be purely "vibes".
## The real risk is forgetting to think
In practice, the danger is not that ChatGPT will replace deep contemplation (how much of that was really happening in 2020 anyway!?), but that people will treat its output as final. We had the same issue when spreadsheets replaced ledgers, and I see it in my role in data science: some people don't ask why the figures look the way they do; they just accept them at face value. But that is not a new problem. The issue isn't the technology, but the people.
## What will always need a human
- Asking the question
- Evaluating the answer
- Interpreting it in light of lived experience, etc
- Taking responsibility for the decision
Those steps still happen between people, even if a machine sits in the middle. My day job is a mixture of speaking with people and then converting that into something for a machine to do. Much of it feels inhuman, but that's true of plenty of work. The world wouldn't work if we only had people "cultivating their souls". I'm sure the ploughboy dragging a cow around a field 8 hours a day in the rain and muck would feel much more human in a nice warm office, sat behind a machine.
Work has been cursed since the fall, whichever way you cut it.
Feel free to disagree with any of my points. Love your account.
BTW, this is my third comment. Sorry! Ha. I really enjoy this pushback because it's kind, informative, and makes me think. Thanks, friend!
Regarding maths and writing, I think I agree here too. When I think of writing, I am not talking about pen to paper as such. I am including the long reflection on ideas and connections that happens over many months and includes pen to paper/typing. So if you mean just helping with drafting, grammar, or the like, then I am much more agreeable.
No, I don't disagree. I was mainly writing, or trying, to think of a way to use LLMs that preserves what is good about being human. I probably wrote in an overly negative fashion, but in fact I am mostly optimistic about our capacity to use LLMs for good ends. I do think that, as with social media, most will not use it well. That's the bigger worry.
But if you do not know what it means to be human, AI may dehumanize us further, since we will place machine ends into human activity. This is the beginning, middle, and end of your argument. Sadly, too many have not considered this argument. I pray you and others inside and outside Davenant will continue making this argument. Grace & Peace, Wyatt.