Religion within the Bounds of Amusement

Satire / Humor Warning:

As the author, I have been told I have a very subtle sense of humor.

This page is a work of satire, and it is not real.

On the screen appear numerous geometrical forms—prisms, cylinders, cubes — dancing, spinning, changing shape, in a very stunning computer animation. In the background sounds the pulsing beat of techno music. The forms waver, and then coalesce into letters: "Religion Within the Bounds of Amusement."

The music and image fade, to reveal a man, perfect in form and appearance, every hair in place, wearing a jet black suit and a dark, sparkling tie. He leans forward slightly, as the camera focuses in on him.

"Good morning, and I would like to extend a warm and personal welcome to each and every one of you from those of us at the Church of the Holy Television. Please sit back, relax, and turn off your brain."

Music begins to play, and the screen shows a woman holding a microphone. She is wearing a long dress of the whitest white, the color traditionally symbolic of goodness and purity, which somehow manages not to conceal her unnaturally large breasts. The camera slowly focuses in as she begins to sing.

"You got problems? That's OK. You got problems? That's OK. Not enough luxury? That's OK. Only three cars? That's OK. Not enough power? That's OK. Can't get your way? That's OK. Not enough for you? That's OK. Can't do it on your own? That's OK. You got problems? That's OK. You got problems? That's OK. Just call out to Jesus, and he'll make them go away. Just call out to Jesus, and he'll make them go away."

As the music fades, the camera returns to the man.

"Have you ever thought about how much God loves us? Think about the apex of progress that we are at, and how much more he has blessed us than any one else.

"The Early Christians were in a dreadful situation. They were always under persecution. Because of this, they didn't have the physical assurance of security that is the basis for spiritual growth, nor the money to buy the great libraries of books that are necessary to cultivate wisdom. It is a miracle that Christianity survived at all.

"The persecution ended, but darkness persisted for a thousand years. The medievals were satisfied with blind faith, making it the context of thought and leisure. Their concept of identity was so weak that it was entangled with obedience. The time was quite rightly called the Dark Ages.

"But then, ah, the Renaissance and the Enlightenment. Man and his mind enthroned. Religion within the bounds of reason. Then science and technology, the heart of all true progress, grew.

"And now, we sit at the apex, blessed with more and better technology than anyone else. What more could you possibly ask for? What greater blessing could there possibly be? We have the technology, and know how to enjoy it. Isn't God gracious?"

There is a dramatic pause, and then the man closes his eyes. "Father, I thank you that we have not fallen into sin; that we do not worship idols, that we do not believe lies, and that we are not like the Pharisees. I thank you that we are good, moral people; that we are Americans. I thank you, and I praise you for your wondrous power. Amen."

He opens his eyes, and turns to the camera. It focuses in on his face, and his piercing gaze flashes out like lightning. With a thunderous voice, he boldly proclaims, "To God alone be the glory, for ever and ever!"

The image fades.

In the background can be heard the soft tones of Beethoven. A couple fades in; they are elegantly dressed, sitting at a black marble table, set with roast pheasant. The room is of Baroque fashion; marble pillars and mirrors with gilt frames adorn the walls. French windows overlook a formal garden.

The scene changes, and a sleek black sports car glides through forest, pasture, village, mountain. The music continues to play softly.

It passes into a field, and in the corner of the field a small hovel stands. The camera comes closer, and two half-naked children come into view, playing with some sticks and a broken Coca-Cola bottle. Their heads turn and follow the passing car.

A voice gently intones, "These few seconds may be the only opportunity some people ever have to know about you. What do you want them to see?"

The picture changes. Two men are walking through a field. As the camera comes closer, it is seen that they are deep in conversation.

One of them looks out at the camera with a probing gaze, and then turns to the other. "What do you mean?"

"I don't know, Jim." He draws a deep breath, and closes his eyes. "I just feel so... so empty. A life filled with nothing but shallowness. Like there's nothing inside, no purpose, no meaning. Just an everlasting nothing."

"Well, you know, John, for every real and serious problem, there is a solution which is trivial, cheap, and instantaneous." He unslings a small backpack, opening it to pull out two cans of beer, and hands one to his friend. "Shall we?"

The cans are opened.

Suddenly, the peaceful silence is destroyed by the blare of loud rock music. The camera turns upwards to the sky, against which may be seen parachutists; it spins, and there is suddenly a large swimming pool, and a vast table replete with great pitchers and kegs of beer. The parachutists land; they are all young women, all blonde, all laughing and smiling, all wearing string bikinis, and all anorexic.

For the remaining half of the commercial, the roving camera takes a lascivious tour of the bodies of the models. Finally, the image fades, and a deep voice intones, "Can you think of a better way to spend your weekends?"

The picture changes. A luxury sedan, passing through a ghetto, stops beside a black man, clad in rags. The driver, who is white, steps out in a pristine business suit, opens his wallet, and pulls out five crisp twenty dollar bills.

"I know that you can't be happy, stealing, lying, and getting drunk all of the time. Here is a little gift to let you know that Jesus loves you." He steps back into the car without waiting to hear the man's response, and speeds off.

Soon, he is at a house. He steps out of the car, bible in hand, and rings the doorbell.

The door opens, and a man says, "Nick, how are you? Come in, do come in. Have a seat. I was just thinking of you, and it is so nice of you to visit. May I interest you in a little Martini?"

Nick sits down and says, "No, Scott. I am a Christian, and we who are Christian do not do such things."

"Aah; I see." There is a sparkle in the friend's eye as he continues, "And tell me, what did Jesus do at his first miracle?"

The thick, black, leatherbound 1611 King James bible arcs through the air, coming to rest on the back of Scott's head. There is a resounding thud.

"You must learn that the life and story of Jesus are serious matters, and not to be taken as the subject of jokes."

The screen turns white as the voice glosses, "This message has been brought to you by the Association of Concerned Christians, who would like to remind you that you, too, can be different from the world, and can present a positive witness to Christ."

In the studio again, the man is sitting in a chair.

"Now comes a very special time in our program. You, our viewers, matter most to us. It is your support that keeps us on the air. And I hope that you do remember to send us money; when you do, God will bless you. So keep your checks rolling, and we will be able to continue this ministry, and provide answers to your questions. I am delighted to be able to hear your phone calls. Caller number one, are you there?"

"Yes, I am, and I would like to say how great you are. I sent you fifty dollars, and someone gave me an anonymous check for five hundred! I only wish I had given you more."

"That is good to hear. God is so generous. And what is your question?"

"I was wondering what God's will is for America? And what I can do to help?"

"Thank you; that's a good question.

"America is at a time of great threat now; it is crumbling because good people are not elected to office.

"The problem would be solved if Christians would all listen to Rush Limbaugh, and then go out and vote. Remember, bad people are sent to Washington by good people who don't vote. With the right men in office, the government would stop wasting its time on things like the environment, and America would become a great and shining light, to show all the world what Christ can do.

"Caller number two?"

"I have been looking for a church to go to, and having trouble. I just moved, and used to go to a church which had nonstop stories and anecdotes; the congregation was glued to the edges of their seats. Here, most of the services are either boring or have something which lasts way too long. I have found a few churches whose services I generally enjoy—the people really sing the songs—but there are just too many things that aren't amusing. For starters, the sermons make me uncomfortable, and for another, they have a very boring time of silent meditation, and this weird mysticism about 'kiss of peace' and something to do with bread and wine. Do you have any advice for me?"

"Yes, I do. First of all, what really matters is that you have Jesus in your heart. Then you and God can conquer the world. Church is a peripheral; it doesn't really have anything to do with Jesus being in your heart. If you find a church that you like, go for it, but if there aren't any that you like, it's not your fault that they aren't doing their job.

"And the next caller?"

"Hello. I was wondering what the Song of Songs is about."

"The Song of Songs is an allegory of Christ's love for the Church. Various other interpretations have been suggested, but they are all far beyond the bounds of good taste, and read things into the text which would be entirely inappropriate in holy Scriptures. Next caller?"

"My people has a story. I know tales of years past, of soldiers come, of pillaging, of women ravaged, of villages razed to the ground and every living soul murdered by men who did not hesitate to wade through blood. Can you tell me what kind of religion could possibly decide that the Crusades were holy?"

The host, whose face has suddenly turned a deep shade of red, shifts slightly and pulls at the side of his collar. After a few seconds, a somewhat less polished voice hastily states, "That would be a very good question to answer, and I really would like to, but I have lost track of time. It is now time for an important message from some of our sponsors."

The screen is suddenly filled by six dancing rabbits, singing about toilet paper.

A few minutes of commercials pass: a computer animated flash of color, speaking of the latest kind of candy; a family brought together and made happy by buying the right brand of vacuum cleaner; a specific kind of hamburger helping black and white, young and old to live together in harmony. Somewhere in there, the Energizer bunny appears; one of the people in the scene tells the rabbit that he should have appeared at some time other than the commercial breaks. Finally, the host, who has regained his composure, is on the screen again.

"Well, that's all for this week. I hope you can join us next week, as we begin a four part series on people whose lives have been changed by the Church of the Holy Television. May God bless you, and may all of your life be ever filled with endless amusement!"

How Shall I Tell an Alchemist?

The cold matter of science—
Exists not, O God, O Life,
For Thou who art Life,
How could Thy humblest creature,
Be without life,
Fail to be in some wise,
The image of Life?
Minerals themselves,
Lead and silver and gold,
The vast emptiness of space and vacuum,
Teems more with Thy Life,
Than science will see in man,
Than hard and soft science,
Will to see in man.

How shall I praise Thee,
For making man a microcosm,
A human being the summary,
Of creation, spiritual and material,
Created to be,
A waterfall of divine grace,
Flowing to all things spiritual and material,
A waterfall of divine life,
Deity flowing out to man,
And out through man,
To all that exists,
And even nothingness itself?

And if I speak,
To an alchemist who seeks true gold,
May his eyes be opened,
To body made a spirit,
And spirit made a body,
The gold on the face of an icon,
Pure beyond twenty-four carats,
Even if the icon be cheap,
A cheap icon of paper faded?

How shall I speak to an alchemist,
Whose eyes overlook a transformation,
Next to which the transmutation,
Of lead to gold,
Is dust and ashes?
How shall I speak to an alchemist,
Of the holy consecration,
Whereby humble bread and wine,
Illumine as divine body and blood,
Brighter than gold, the metal of light,
The holy mystery the fulcrum,
Not stopping in chalice gilt,
But transforming men,
To be the mystical body,
The holy mystery the fulcrum of lives transmuted,
Of a waterfall spilling out,
The consecration of holy gifts,
That men may be radiant,
That men may be illumined,
That men be made the mystical body,
Course with divine Life,
Tasting the Fountain of Immortality,
The transformed elements the fulcrum,
Of God taking a lever and a place to stand,
To move the earth,
To move the cosmos whole,
Everything created,
Spiritual and material,
Returned to God,
Deified.

And how shall I tell an alchemist,
That alchemy suffices not,
For true transmutation of souls,
To put away searches for gold in crevices and in secret,
And see piles out in the open,
In common faith that seems mundane,
And out of the red earth that is humility,
To know the Philosopher's Stone Who is Christ,
And the true alchemy,
Is found in the Holy Orthodox Church?

Ajax without JavaScript or Client-Side Scripting

The Ajax application included in this page implements a legitimate, if not particularly useful or even usable, "proof of concept" with partial page updates based on server communication. It accepts a string, and then lets you click on one of a few buttons to see that string styled the way the button is styled, appending a link from the server. But it demonstrates one interesting feature:

It works just the same if you turn off JavaScript and any other client-side scripting completely.

How does it work?

Ajax partial page updates don't need to manipulate a monolithic page's DOM; the reason browser back buttons work in Gmail is an invisible, seamless use of iframes that create browser history. And not only can you do partial page updates via iframes without DOM manipulation, you can do it without client side scripting.

The source code to the server is available here, but it is simple, stateless, and doesn't really hold any secrets; it could be fairly well reconstructed simply by observing what is going on in the demo app above. The basic insight is that a webpage that talks to a server and makes partial updates can be made by the usual Ajax tools, but at least a basic proof of concept can be made with old HTML features like frames and iframes, links and targets, forms, and meta refresh.
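
To make that insight concrete, here is a minimal sketch, assuming only Python's standard library, of the kind of stateless server and markup described above; the page, form fields, and port are illustrative assumptions, not the demo's actual source. The form's target attribute names an iframe, so submitting the form reloads only that frame, with no client-side scripting involved:

# A minimal sketch, not the author's demo: partial page updates with no
# client-side scripting, using only a named iframe and a form whose
# "target" attribute points at it.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs
import html

OUTER_PAGE = """<!DOCTYPE html>
<html>
  <body>
    <h1>Partial updates without JavaScript</h1>
    <!-- The form's target is the iframe's name, so only the iframe
         reloads when the form is submitted; the rest of the page is
         left alone. -->
    <form action="/styled" method="get" target="result">
      <input type="text" name="text" value="Hello">
      <button name="style" value="bold">Bold</button>
      <button name="style" value="italic">Italic</button>
    </form>
    <iframe name="result" src="/styled?text=Hello&style=bold"></iframe>
  </body>
</html>"""

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        url = urlparse(self.path)
        if url.path == "/styled":
            # The "partial update": return a tiny page for the iframe only.
            params = parse_qs(url.query)
            text = html.escape(params.get("text", [""])[0])
            style = params.get("style", ["bold"])[0]
            tag = "i" if style == "italic" else "b"
            body = f"<html><body><{tag}>{text}</{tag}></body></html>"
        else:
            body = OUTER_PAGE
        data = body.encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "text/html; charset=utf-8")
        self.send_header("Content-Length", str(len(data)))
        self.end_headers()
        self.wfile.write(data)

if __name__ == "__main__":
    # Visit http://localhost:8000/ with scripting disabled; the buttons
    # still update only the iframe, because the update is a plain HTML
    # form submission targeted at a named frame.
    HTTPServer(("localhost", 8000), Handler).serve_forever()

A plain link with target="result" would update the frame in the same way, and a meta refresh inside the frame could poll the server on a timer, again without any scripting.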

This Ajaxian use of old web technologies may or may not produce graceful alternatives to standard Ajax techniques, either alone or in a "progressive enhancement"/"graceful degradation" strategy, but it may allow graceful degradation to be just a little more graceful, and JAWS might at least know when something on the screen has changed. In any case, it is a proof of concept that it is possible to implement a webapp with partial page updates and server communication that works in a browser with JavaScript and any other client-side scripting turned off.

AI as an Arena for Magical Thinking Among Skeptics

Surgeon General's Warning

This piece represents my first serious study as an Orthodox Christian. The gist of it, by which I mean a critique of the artificial intelligence and cognitive science movement whose members are convinced of its progress for reasons unrelated to any real achievement of its core goal, is one I would still maintain. Artificial intelligence, over a decade after the thesis was written, remains "just around the corner since 1950". The core assertion of pioneer John von Neumann's The Computer and the Brain, that the basis of human thought is "add, subtract, multiply, and divide", remains astonishingly naïve to the point of being crass.

With that much stated, there are things that don't belong. The "I-Thou" existentialism is not of Orthodox origin, and its study of occult aspects is simply inappropriate. I do not say inaccurate, only wrong. I believe there is probably some truth to the suggestion that the artificial intelligence endeavor represents a recurrence of age-old occult dreams dressed in the clothing of computer science and secular rationality. Such things should still not have been studied, or at the very least not by me.

For those still interested, my dissertation is below.

Cog, portrayed as 'Robo Sapiens'

AI as an Arena for Magical Thinking Among Skeptics
Artificial intelligence, cognitive science, and Eastern Orthodox views on personhood

M.Phil. Dissertation

CJS Hayward
christos.jonathan.hayward@gmail.com
CJSHayward.com

15 June 2004

Table of Contents

Abstract

Introduction

Artificial Intelligence

The Optimality Assumption

Just Around the Corner Since 1950

The Ghost in the Machine

Occult Foundations of Modern Science

Renaissance and Early Modern Magic

Science, Psychology, and Behaviourism

I-Thou and Humanness

Orthodox Anthropology in Maximus Confessor's Mystagogia

Intellect and Reason

Intellect, Principles, and Cosmology

The Intelligible and the Sensible

Knowledge of the Immanent

Intentionality and Teleology

Conclusion

Epilogue

Bibliography

Abstract

I explore artificial intelligence as failing in a way that is characteristic of a faulty anthropology. Artificial intelligence has had excellent funding, brilliant minds, and exponentially faster computers, which suggests that any failures present may not be due to lack of resources, but arise from an error that is manifest in anthropology and may even be cosmological. Maximus Confessor provides a genuinely different background to criticise artificial intelligence, a background which shares far fewer assumptions with the artificial intelligence movement than figures like John Searle. Throughout this dissertation, I will be looking at topics which seem to offer something interesting, even if cultural factors today often obscure their relevance. I discuss Maximus's use of the patristic distinction between 'reason' and spiritual 'intellect' as providing an interesting alternative to 'cognitive faculties.' My approach is meant to be distinctive both by reference to Greek Fathers and by studying artificial intelligence in light of the occult foundations of modern science, an important datum omitted in the broader scientific movement's self-presentation. The occult serves as a bridge easing the transition between Maximus Confessor's worldview and that of artificial intelligence. The broader goal is to make three suggestions: first, that artificial intelligence provides an experimental test of scientific materialism's picture of the human mind; second, that the outcome of the experiment suggests we might reconsider scientific materialism's I-It relationship to the world; and third, that figures like Maximus Confessor, working within an I-Thou relationship, offer more wisdom to us today than is sometimes assumed. I do not attempt to compare Maximus Confessor's Orthodoxy with other religious traditions; however, I do suggest that Orthodoxy has relevant insights into personhood which the artificial intelligence community still lacks.

Introduction

Some decades ago, one could imagine a science fiction writer asking, 'What would happen if billions of dollars, dedicated laboratories with some of the world's most advanced equipment, indeed an important academic discipline with decades of work from some of the world's most brilliant minds—what if all of these were poured into an attempt to make an artificial mind based on an understanding of personhood that came out of a framework of false assumptions?' We could wince at the waste, or wonder that after all the failures the researchers still had faith in their project. And yet exactly this philosophical experiment has been carried out, in full, and has been expanded. This philosophical experiment is the artificial intelligence movement.

What relevance does AI have to theology? Artificial intelligence assumes a particular anthropology, and failures by artificial intelligence may reflect something of interest to theological anthropology. It appears that the artificial intelligence project has failed in a substantial and characteristic way, and furthermore that it has failed as if its assumptions were false—in a way that makes sense given some form of Christian theological anthropology. I will therefore be using the failure of artificial intelligence as a point of departure for the study of theological anthropology. Beyond a negative critique, I will be exploring a positive alternative. The structure of this dissertation will open with critiques, then trace historical development from an interesting alternative to the present problematic state, and then explore that older alternative. I will thus move in the opposite of the usual direction.

For the purposes of this dissertation, artificial intelligence (AI) denotes the endeavour to create computer software that will be humanly intelligent, and cognitive science the interdisciplinary field which seeks to understand the mind on computational terms so it can be re-implemented on a computer. Artificial intelligence is more focused on programming, whilst cognitive science includes other disciplines such as philosophy of mind, cognitive psychology, and linguistics. Strong AI is the classical approach which has generated chess players and theorem provers, and tries to create a disembodied mind. Other areas of artificial intelligence include the connectionist school, which works with neural nets,[1] and embodied AI, which tries to take our mind's embodiment seriously. The picture on the cover[2] is from an embodied AI website and is interesting for reasons which I will discuss below under the heading of 'Artificial Intelligence.'

Fraser Watts (2002) and John Puddefoot (1996) offer similar and straightforward pictures of AI. I will depart from them in being less optimistic about the present state of AI, and more willing to find something lurking beneath appearances. I owe my brief remarks about AI and its eschatology, under the heading of 'Artificial Intelligence' below, to a line of Watts' argument.[3]

Other critics[4] argue that artificial intelligence neglects the body as mere packaging for the mind, pointing out ways in which our intelligence is embodied. They share many of the basic assumptions of artificial intelligence but understand our minds as biologically emergent and therefore tied to the body.

There are two basic points I accept in their critiques:

First, they argue that our intelligence is an embodied intelligence, often with specific arguments that are worth attention.

Second, they often capture a quality, or flavour, of thought that beautifully illustrates what sort of thing human thought might be besides digital symbol manipulation on biological hardware.

There are two basic points where I will be departing from their line of argument:

First, they think outside the box, but may not go far enough. They are playing on the opposite team to cognitive science researchers, but they are playing the same game, by the same rules. The disagreement between proponents and critics is not whether mind may be explained in purely materialist terms, but only whether that assumption entails that minds can be re-implemented on computers.

Second, they see the mind's ties to the body, but not to the spirit, which means that they miss out on half of a spectrum of interesting critiques. I will seek to explore what, in particular, some of the other half of the spectrum might look like. As their critiques explore what it might mean to say that the mind is embodied, the discussion of reason and intellect under the heading 'Intellect and Reason' below may give some sense of what it might mean to say that the mind is spiritual. In particular, the conception of the spiritual intellect offers an interesting base characterisation of human thought that competes with cognitive faculties. Rather than saying that the critics offer false critiques, I suggest that they are too narrow and miss important arguments that are worth exploring.

I will explore failures of artificial intelligence in connection with the Greek Fathers. More specifically, I will look at the seventh century Maximus Confessor's Mystagogia. I will investigate the occult as a conduit between the (quasi-Patristic) medieval West and the West today. The use of Orthodox sources could shed a particularly helpful light, and one that is not explored elsewhere. Artificial intelligence seems to fail along lines predictable from the patristic understanding of a spirit-soul-body unity, essentially connected with God and other creatures. The discussion becomes more interesting when one looks at the implications of the patristic distinction between 'reason' and the spiritual 'intellect.' I suggest that connections with the Orthodox doctrine of divinisation may make an interesting direction for future enquiry. I will only make a two-way comparison between Orthodox theological anthropology and one particular quasi-theological anthropology. This dissertation is in particular not an attempt to compare Orthodoxy with other religious traditions.

One wag said that the best book on computer programming for the layperson was Alice's Adventures in Wonderland, but that's just because the best book on anything for the layperson was Alice's Adventures in Wonderland. One lesson learned by a beginning scholar is that many things that 'everybody knows' are mistaken or half-truths, as 'everybody knows' the truth about Galileo, the Crusades, the Spanish Inquisition, and other select historical topics which we learn about by rumour. There are some things we will have trouble understanding unless we can question what 'everybody knows.' This dissertation will be challenging certain things that 'everybody knows,' such as that we're making progress towards achieving artificial intelligence, that seventh century theology belongs in a separate mental compartment from AI, or that science is a different kind of thing from magic. The result is bound to resemble a tour of Wonderland, not because I am pursuing strangeness for its own sake, but because my attempt to understand artificial intelligence has taken me to strange places. Renaissance and early modern magic is a place artificial intelligence has been, and patristic theology represents what we had to leave to get to artificial intelligence.

The artificial intelligence project as we know it has existed for perhaps half a century, but its roots reach much further back. This picture attests to something that has been a human desire for much longer than we've had digital computers. In exploring the roots of artificial intelligence, there may be reason to look at a topic that may seem strange to mention in connection with science: the Renaissance and early modern occult enterprise.

Why bring the occult into a discussion of artificial intelligence? It doesn't make sense if you accept science's own self-portrayal and look at the past through its eyes. Yet this shows bias and insensitivity to another culture's inner logic, almost a cultural imperialism—not between two cultures today but between the present and the past. A part of what I will be trying to do in this thesis is look at things that have genuine relevance to this question, but whose relevance is obscured by cultural factors today. Our sense of a deep divide between science and magic is more cultural prejudice than considered historical judgment. We judge by the concept of scientific progress, treating prior cultures' endeavours as more or less successful attempts to establish a scientific enterprise properly measured by our terms.

We miss how the occult turn taken by some of Western culture in the Renaissance and early modern period established lines of development that remain foundational to science today. Many chasms exist between the mediaeval perspective and our own, and there is good reason to place the decisive break between the mediaeval way of life and the Renaissance/early modern occult development, rather than lumping mediaeval times and magic together and granting an exceptional status to our science. I suggest that our main differences with the occult project are disagreements as to means, not ends—and that distinguishes the post-mediaeval West from the mediaevals. If so, there is a kinship between the occult project and our own time: we provide a variant answer to the same question as the Renaissance magus, whilst patristic and mediaeval Christians were exploring another question altogether. The occult vision has fragmented, with its dominion over the natural world becoming scientific technology, its vision for a better world becoming political ideology, and its spiritual practices becoming a private fantasy.

One way to look at historical data with the kind of sensitivity I'm interested in is explored by Mary Midgley in Science as Salvation (1992); she doesn't dwell on the occult as such, but she perceptively argues that science is far more continuous with religion than its self-understanding would suggest. Her approach pays a certain kind of attention to things which science leads us to ignore. She looks at ways science is doing far more than falsifying hypotheses, and in so doing observes some things which are important. I hope to develop a similar argument in a different direction, arguing that science is far more continuous with the occult than its self-understanding would suggest. This thesis is intended to be neither a correction nor a refinement of her position, but a development of a parallel line of enquiry.

It is as if a great island, called Magic, began to drift away from the cultural mainland. It had plans for what the mainland should be converted into, but had no wish to be associated with the mainland. As time passed, the island fragmented into smaller islands, and on all of these new islands the features hardened and became more sharply defined. One of the islands is named Ideology. The one we are interested in is Science, which is not interchangeable with the original Magic, but is even less independent: in some ways Science differs from Magic by being more like Magic than Magic itself. Science is further from the mainland than Magic was, even if its influence on the mainland is if anything greater than what Magic once held. I am interested in a scientific endeavour, and in particular a basic relationship behind scientific enquiry, which are to a substantial degree continuous with a magical endeavour and a basic relationship behind magic. These are foundationally important, and even if it is not yet clear what they may mean, I will try to substantiate these as the thesis develops. I propose the idea of Magic breaking off from a societal mainland, and sharpening and hardening into Science, as more helpful than the idea of science and magic as opposites.

There is in fact historical precedent for such a phenomenon. I suggest that a parallel with Eucharistic doctrine might illuminate the interrelationship between Orthodoxy, Renaissance and early modern magic, and science (including artificial intelligence). When Aquinas made the Christian-Aristotelian synthesis, he changed the doctrine of the Eucharist. The Eucharist had previously been understood on Orthodox terms that used a Platonic conception of bread and wine participating in the body and blood of Christ, so that bread remained bread whilst becoming the body of Christ. One substance had two natures. Aristotelian philosophy had little room for one substance which had two natures, so one thing cannot simultaneously be bread and the body of Christ. When Aquinas subsumed real presence doctrine under an Aristotelian framework, he managed a delicate balancing act, in which bread ceased to be bread when it became the body of Christ, and it was a miracle that the accidents of bread held together after the substance had changed. I suggest that when Zwingli expunged real presence doctrine completely, he was not abolishing the Aristotelian impulse, but carrying it to its proper end. In like fashion, the scientific movement is not a repudiation of the magical impulse, but a development of it according to its own inner logic. It expunges the supernatural as Zwingli expunged the real presence, because that is where one gravitates once the journey has begun. What Aquinas and the Renaissance magus had was composed of things that did not fit together. As I will explore below under the heading 'Renaissance and Early Modern Magic,' the Renaissance magus ceased relating to society as to one's mother and began treating it as raw material; this foundational change to a depersonalised relationship would later secularise the occult and transform it into science. The parallel between medieval Christianity/magic/science and Orthodoxy/Aquinas/Zwingli seems to be fertile: real presence doctrine can be placed under an Aristotelian framework, and a sense of the supernatural can be held by someone who is stepping out of a personal kind of relationship, but in both cases it doesn't sit well, and after two or so centuries people finished the job by subtracting the supernatural.

Without discussing the principles in Thomas Dixon's 1999 delineation of theology, anti-theology, and atheology that can be un-theological or quasi-theological, regarding when one is justified in claiming that theology is present, I adopt the following rule:

A claim is considered quasi-theological if it can conflict with theological claims.

Given this rule, patristic theology, Renaissance and early modern magic (hereafter 'magic' or 'the occult'), and artificial intelligence claims are all considered to be theological or quasi-theological.

I will not properly trace an historical development so much as show the distinctions between archetypal scientific, occult, and Orthodox worldviews as seen at different times, and briefly discuss their relationships with some historical remarks. Not only are there surprisingly persistent tendencies, but Lee repeats Weber's suggestion that there is real value in understanding ideal types.[5]

I will be attempting to bring together pieces of a puzzle—pieces scattered across disciplines and across centuries, often hidden by today's cultural assumptions about what is and is not connected—to show their interconnections and the picture that emerges from their fit. I will be looking at features including intentionality,[6] teleology,[7] cognitive faculties,[8] the spiritual intellect,[9] cosmology, and a strange figure who wields a magic sword with which to slice through society's Gordian knots. Why? In a word, all of this is connected. Cosmology is relevant if there is a cosmological error behind artificial intelligence. There are both an organic connection and a distinction between teleology and intentionality, and the shift from teleology to intentionality is an important shift; when one shifts from teleology to intentionality one becomes partly blind to what the artificial intelligence picture is missing. Someone brought up on cognitive faculties may have trouble answering, 'How else could it be?'; the patristic understanding of the spiritual intellect gives a very interesting answer, and offers a completely different way to understand thought. And the figure with the magic sword? I'll let this figure remain mysterious for the moment, but I'll hint that without that metaphorical magic sword we would never have a literal artificial intelligence project. I do not believe I am forging new connections among these things, so much as uncovering something that was already there, overlooked but worth investigating.

This is an attempt to connect some very diverse sources, even if the different sections are meant primarily as philosophy of religion. This brings problems of coherence and disciplinary consistency, but the greater risk is tied to the possibility of greater reward. It will take more work to show connections than in a more externally focused enquiry, but if I can give a believable case for those interconnections, this will ipso facto be a more interesting enquiry.

All translations from French, German, Latin, and Greek are my own.

Artificial Intelligence

Artificial intelligence is not just one scientific project among others. It is a cultural manifestation of a timeless dream. It does not represent the repudiation of the occult impulse, but the working out of that impulse according to its own inner logic. Artificial intelligence is connected with a transhumanist vision for the future[10] which tries to create a science-fiction-like future of an engineered society of superior beings.[11] This artificial intelligence vision for the future is similar to the occult visions for the future we will see below. Very few members of the artificial intelligence movement embrace the full vision—but I suggest that its spectre is rarely absent, and that that spectre shows itself in a perennial sense of, 'We're making real breakthroughs today, and full AI is just around the corner.' Both those who embrace the fuller enthusiasm and those who are more modestly excited by current projects share a hope that we are making progress towards creating something fundamentally new under the sun, of bequeathing to humanity something that has never before been available: machines that genuinely think. Indeed, this kind of hope is one of magic's most salient features. The exact content and features vary, but the sometimes heady excitement and the hope to bestow something powerful and new mark a significant point of contact between artificial intelligence and the magic that enshrouded science's birth.

There is something timeless and archetypal about the desire to create humans through artifice instead of procreation. Jewish legend tells of a rabbi who used the Kabbalah to create a clay golem to defend a city against anti-semites in 1581.[12] Frankenstein has so marked the popular imagination that genetically modified foods are referred to as 'Frankenfoods,' and there are many (fictional) stories of scientists creating androids who rebel against and possibly destroy their creators. Robots who have artificial bodies but think and act enough like humans never to cause culture shock are a staple of science fiction.[13] Indeed, this desire has more than a little occult resonance.

We should draw a distinction between what may be called 'pretentious AI' and 'un-pretentious AI.' The artificial intelligence project has managed technical feats that are sometimes staggering, and from a computer scientist's perspective, the state of computer science is richer and more mature than if there had been no artificial intelligence project. Without making any general claim that artificial intelligence achieves nothing or achieves nothing significant, I will explore a more specific and weaker claim that artificial intelligence does not and cannot duplicate human intelligence.

A paradigm example of un-pretentious AI is the United States Postal Service handwriting recognition system. It succeeds in reading the addresses on 85% of postal items, and the USPS annual report is justifiably proud of this achievement.[14] However, there is nothing mythic claimed for it: the USPS does not claim a major breakthrough in emulating human thought, nor does it give people the impression that artificial mail carriers are just around the corner. The handwriting recognition system is a tool—admittedly, quite an impressive tool—but it is nothing more than a tool, and no one pretends it is anything more than a tool.

For a paradigm example of pretentious AI, I will look at something different. The robot Cog represents equally impressive feats in artificial hand-eye coordination and motor control, but its creators claim something deeper, something archetypal and mythic:

Fig. 2: Cog, portrayed as Robo sapiens[15]

The scholar places his hand on the robot's shoulder as if they had a longstanding friendship. At almost every semiotic level, this picture constitutes an implicit claim that the researcher has a deep friendship with what must be a deep being. The unfortunately blurred caption reads, '©2000 Peter Menzel / Robo sapiens.' On the Cog main website area, every picture with Cog and a person theatrically shows the person treating the robot as quite lifelike—giving the impression that the robot must be essentially human.

But how close is Cog to being human? Watts writes,

The weakness of Cog at present seems to be that it cannot actually do very much. Even its insect-like computer forebears do not seem to have had the intelligence of insects, and Cog is clearly nowhere near having human intelligence.[16]

The somewhat light-hearted frequently-asked-questions list acknowledges that the robot 'has no idea what it did two minutes ago,' answers 'Can Cog pass the Turing test?' by saying, 'No... but neither could an infant,' and interestingly answers 'Is Cog conscious?' by saying, 'We try to avoid using the c-word in our lab. For the record, no. Off the record, we have no idea what that question even means. And still, no.' The response to a very basic question is ambiguous, but it seems to joke that 'consciousness' is obscene language, and gives the impression that this is not an appropriate question to ask: a mature adult, when evaluating our AI, does not childishly frame the question in terms of consciousness. Apparently, we should accept the optimistic impression of Cog, whilst recognising that it's not fair to the robot to ask about features of human personhood that the robot can't exhibit. This smells of begging the question.

Un-pretentious AI makes an impressive technical achievement, but its practitioners recognise and acknowledge that they have created a tool and not something virtually human. Pretentious AI can make equally impressive technical achievements, and its practitioners recognise that what they have created is not equivalent to a human, but they do not acknowledge this. The answer to 'Is Cog conscious?' is a refusal to acknowledge something the researchers have to recognise: that Cog has no analogue to human consciousness. Is it a light-hearted way of making a serious claim of strong agnosticism about Cog's consciousness? It doesn't read much like a mature statement that 'We could never know if Cog were conscious.' The researcher in Figure 2 wrote an abstract on how to give robots a theory of other minds,[17] which reads more like psychology than computer science.

There's something going on here that also goes on in the occult. In neo-paganism, practitioners find their magic to work, not exactly as an outsider would expect, by making incantations and hoping that something will happen that a skeptic would recognise as supernatural, but by doing what they can and then interpreting reality as if the magic had worked. They create an illusion and subconsciously embrace it. This mechanism works well enough, in fact, that large segments of today's neo-paganism started as jokes and then became real, something their practitioners took quite seriously.[18] There's power in trying to place a magical incantation or a computer program (or, in programmer slang, 'incantation') to fill a transcendent hope: one finds ways that it appears to work, regardless of what an outsider's interpretation may be. This basic technique appears to be at work in magic as early as the Renaissance, and it appears to be exactly what's going on in pretentious AI. The basic factor of stepping into an illusion after you do what you can makes sense of the rhetoric quoted above and why Cog is portrayed not merely as a successful experiment in coordination but as Robo sapiens, the successful creation of a living golem. Of course we don't interpret it as magic because we assume that artificial intelligence and magic are very different things, but the researchers' self-deception falls into a quite venerable magical tradition.

Computers seem quite logical. Are they really that far from human rationality? Computers are logical without being rational. Programming a computer is like explaining a task to someone who follows directions very well but has no judgment and no ability to recognise broader intentions in a request. It follows a list of instructions without any recognition or a sense of what is being attempted. The ability to understand a conversation, or recognise another person's intent—even with mistakes—or any of a number of things humans take for granted, belongs to rationality. A computer's behaviour is built up from logical rules that do certain precise manipulations of symbols without any sense of meaning whatsoever: it is logical without being rational. The discipline of usability is about how to write well-designed computer programs; these programs usually let the user forget that computers aren't rational. For instance, a user can undo something when the computer logically and literally follows an instruction, and the user rationally realises that that isn't really what was intended. But even the best of this design doesn't let the computer understand what one meant to say. One frustration people have with computers stems from the fact that there is a gist to what humans say, and other people pick up that gist. Computers do not have even the most rudimentary sense of gist, only the ability to logically follow instructions. This means that the experience of bugs and debugging in programming is extremely frustrating to those learning how to program; the computer's response to what seems a correct program goes beyond nitpicking. This logicality without rationality is deceptive, for it presents something that looks very much like rationality at first glance, but produces unpleasant surprises when you treat it as rational. There's something interesting going on here. When we read rationality into a computer's logicality, we are in part creating the illusion of artificial intelligence. 'Don't anthropomorphise computers,' one tells novice programmers. 'They hate that.' A computer is logical enough that we tend to treat it as rational, and in fact if you want to believe that you've achieved artificial intelligence, you have an excellent basis to use in forming a magician's self-deception.

Artificial intelligence is a mythic attempt to create an artificial person, and it does so in a revealing way. Thought is assumed to be a private manipulation of mental representations, not something that works in terms of spirit. Embodied AI excluded, the body is assumed to be packaging, and the attempt is not just to duplicate the 'mind' in a complete sense, but our more computer-like rationality: this assumes a highly significant division of what is essential, what is packaging, and what comes along for free if you duplicate the essential bits. None of this is simply how humans have always thought, nor is it neutral. Maximus Confessor's assumptions are different enough from AI's that a comparison makes it easier to see some of AI's assumptions, and furthermore what sort of coherent picture could deny them. I will explore how exactly he does so below under the heading 'Orthodox Anthropology in Maximus Confessor's Mystagogia.' More immediately, I wish to discuss a basic type of assumption shared by artificial intelligence and the occult.

The Optimality Assumption

One commonality that much of magic and science share is that broad visions often include the assumption that what they don't understand must be simple, and be easy to modify or improve. Midgley discusses Bernal's exceedingly optimistic hope for society to transform itself into a simplistically conceived scientific Utopia (if perhaps lacking most of what we value in human society);[19] I will discuss later, under various headings, how society simply works better in Thomas More's and B.F. Skinner's Utopias if only it is re-engineered according to their simple models.[20] Aren't Utopian visions satires, not prescriptions? I would argue that the satire itself has a strong prescriptive element, even if it's not literal. The connection between Utopia and AI is that the same sort of thinking feeds into what, exactly, is needed to duplicate a human mind. For instance, let us examine a sample of dialogue which Turing imagined going on in a Turing test:

Q: Please write me a sonnet on the subject of the Forth Bridge.

A: Count me out on this one. I never could write poetry.

Q: Add 34957 to 70764.

A: (Pause about 30 seconds and then give as answer) 105621.

Q: Do you play chess?

A: Yes.

Q: I have K at my K1, and no other pieces. You have only K at K6 and R at R1. It is your move. What do you play?

A: (After a pause of 15 seconds) R-R8 mate.[21]

Turing seems to assume that if you duplicate his favoured tasks of arithmetic and chess, the task of understanding natural language comes along, more or less for free. The subsequent history of artificial intelligence has not been kind to this assumption. Setting aside the fact that most people do not strike up a conversation by strangely requesting the other person to solve a chess problem and add five-digit numbers, Turing is showing an occult way of thinking by assuming there's nothing really obscure, or deep, about the human person, and that the range of cognitive tasks needed to do AI is the range of tasks that immediately present themselves to him. This optimism may be damped by subsequent setbacks which the artificial intelligence movement has experienced, but it's still present. It's hard to see an artificial intelligence researcher saying, 'The obvious problem looks hard to solve, but there are probably hidden problems which are much harder,' let alone considering whether human thought might be non-computational.

Given the difficulties they acknowledge, artificial intelligence researchers seem to assume that the problem is as easy as possible to solve. As I will discuss later, this kind of assumption has profound occult resonance. I will call this assumption the optimality assumption: with allowances and caveats, the optimality assumption states that artificial intelligence is an optimally easy problem to solve. This doesn't mean an optimally easy problem to solve given the easiest possible world, but rather that, taking into account the difficulties and nuances recognised by the practitioner, the problem is assumed to be optimally easy; it could be said that we live in the (believable) possible world where artificial intelligence would be easiest to implement. Anything that doesn't work like a computer is assumed to be easy, or a matter of unnecessary packaging. There are variations on the theme of begging the question. One basic strategy of ensuring that computers can reach the bar of human intelligence is to lower the bar until it is already met. Another strategy is to try to duplicate human intelligence on computer-like tasks. Remember the Turing test which Turing imagined, which seemed to recognise only the cognitive tasks of writing a poem, doing arithmetic, and solving a chess problem: Turing apparently assumed that natural language understanding would come along for free by the time computers could do both arithmetic and chess. Now we have computer calculators and chess players that can beat humans, whilst natural language understanding tasks which are simple to humans represent an unscaled Everest to artificial intelligence.

We have a situation very much like the attempt to make a robot that can imitate human locomotion—if the attempt is tested by having a robot race a human athlete on a racetrack ergonomically designed for robots. Chess is about as computer-like a human skill as one could find.

Turing's script for an imagined Turing test is one manifestation of a tendency to assume that the problem is optimally easy: the optimality assumption. Furthermore, Turing sees only the three tasks of composing a sonnet, adding two numbers, and making a move in chess. But in fact this leaves out a task of almost insurmountable difficulty for AI: understanding and appropriately acting on natural language requests. This is part of human rationality that cannot simply be assumed to come with a computer's logicality.

Four decades after Turing imagined the above dialogue, Kurt VanLehn describes a study of problem solving that used a standard story problem.[22] The ensuing discussion is telling. Two subjects' interpretations are treated as problems to be resolved, apparently chosen for their departure from how a human 'should' think about these things. One is a nine-year-old girl, Cathy: '...It is apparent from [her] protocol that Cathy solves this problem by imagining the physical situation and the actions taken in it, as opposed to, say, converting the puzzle to a directed graph then finding a traversal of the graph.' The purpose of the experiment was to understand how humans solve problems, but it was approached with a tunnel vision that gave a classic kind of computer science 'graph theory' problem, wrapped up in words, and treated any other interpretation of those words as an interesting abnormality. It seems that it is not the theory's duty to approach the subject matter, but the subject matter's duty to approach the theory—a signature trait of occult projects. Is this merely VanLehn's tunnel vision? He goes on to describe the state of cognitive science itself:

For instance, one can ask a subject to draw a pretty picture... [such] Problems whose understanding is not readily represented as a problem space are called ill-defined. Sketching pretty pictures is an example of an ill-defined problem... There have only been a few studies of ill-defined problem solving.[23]

Foerst summarises a tradition of feminist critique:[24] AI was started by men who chose a particular kind of abstract task as the hallmark of intelligence; women might value disembodied abstraction less and might choose something like social skills. The critique may be pushed one step further than that: beyond any claim that AI researchers, when looking for a basis for computer intelligence, tacitly crystallised intelligence out of men's activities rather than women's, it seems that their minds were so steeped in mathematics and computers that they crystallised intelligence out of human performance more in computer-like activities than anything essentially human, even in a masculine way. Turing didn't talk about making artificial car mechanics or deer hunters any more than he had plans for artificial hostesses or childminders.

Harman's 1989 account of functionalism, for instance, provides a more polished-looking version of an optimality assumption: 'According to functionalism, it does not matter what mental states and processes are made of any more than it matters what a carburetor or heart or a chess king is made of.' (832). Another suggestion may be made, not as an axiom but as an answer to the question, 'How else could it be?' This other suggestion might be called the tip of the iceberg conception.

A 'tip of the iceberg' conception might reply, 'Suppose for the sake of argument that it doesn't matter what an iceberg is made of, so long as it sticks up above the surface and is hard enough to sink a ship. The task is then to make an artificial iceberg. One can hire engineers to construct a hard shell to function as a surrogate iceberg. What has been left out is that these properties of something observable from the surface rest on something that lies much, much deeper than the surface. (A mere scrape with an iceberg sank the Titanic, not only because the iceberg was hard, but because it had an iceberg's monumental inertia behind that hardness.) One can't make a functional tip of the iceberg that way, because a functional tip of an iceberg requires a functional iceberg, and we have very little idea of how to duplicate those parts of an iceberg that aren't visible from a ship. You are merely assuming that one can try hard enough to duplicate what you can see from a ship, and if you duplicate those observables, everything else will follow.' This is not a fatal objection, but it is intended to suggest what the truth could be besides the repeated assumption that intelligence is as easy as possible to duplicate in a computer. Here again is the optimality assumption, and it is a specific example of a broader optimality assumption which will appear in occult sources discussed under the 'Renaissance and Early Modern Magic' heading below. The 'tip of the iceberg' conception is notoriously absent in occult and artificial intelligence sources alike. In occult sources, the endeavour is to create a magically sharp sword that will slice all of the Gordian knots of society's problems; in artificial intelligence the Gordian knots are not societal problems but obstacles to creating a thinking machine, and researchers may only be attempting to use razor blades to cut tangled shoelaces, but they are still trying to get as close to that magic sword as they believe possible.

Just Around the Corner Since 1950

The artificial intelligence movement has a number of reasonably stable features, including an abiding sense of 'Today's discoveries are a real breakthrough; artificial minds are just around the corner.' This mood may even be older than digital computers; Dreyfus writes,

In the period between the invention of the telephone relay and its apotheosis in the digital computer, the brain, always understood in terms of the latest technological inventions, was understood as a large telephone switchboard, or more recently, as an electronic computer.[25]

The discoveries and the details of the claim may change, and experience has battered some of strong AI's optimism, but in pioneers and today's embodied AI advocates alike there is a similar mood: 'What we've developed now is effacing the boundary between machine and human.' This mood is quite stable. There is a striking similarity between the statements,

These emotions [discomfort and shock at something so human-like] might arise because in our interactions with Cog, little distinguishes us from the robot, and the differences between a machine and its human counterparts fade.[26]

and:

The reader must accept it as a fact that digital computers can be constructed, and indeed have been constructed, according to the principles we have described, and that they can in fact mimic the actions of a human computer very closely.[27]

What is interesting here is that the second was made by Turing in 1950, and the first by Foerst in 1998. As regards Turing, no one now believes 1950 computers could perform any but the most menial of mathematicians' tasks, and some of Cog's weaknesses have been discussed above ("Cog... cannot actually do very much. Even its insect-like forebears do not seem to have had the intelligence of insects..."). The more artificial intelligence changes, the more it seems to stay the same. The overall impression one receives is that for all the surface progress of artificial intelligence, the underlying philosophy and spirit remain the same—and part of this underlying spirit is the conviction, 'We're making real breakthroughs now, and full artificial intelligence is just around the corner.' This self-deception is sustained in classically magical fashion. Artificial intelligence's self-presentation exudes novelty, a sense that today's breakthroughs are decisive—whilst its actual rate of change is much slower. The 'It's just around the corner' rhetoric is a longstanding feature. For all the changes in processor power and the greater consistency of a materialist doctrine of mind, there are salient features which seem to repeat in the cognitive science of 1950 and of today. In both, the strategy to ensure that computers could jump the bar of human intelligence was to lower the bar until it had already been jumped.

The Ghost in the Machine

It has been suggested in connection with Polanyi's understanding of tacit knowledge that behaviourists did not teach, 'There is no soul.' Rather, they drew students into a mode of enquiry where the possibility of a soul is never considered.

Modern psychology takes completely for granted that behavior and neural function are perfectly correlated, that one is completely caused by the other. There is no separate soul or lifeforce to stick a finger into the brain now and then and make neural cells do what they would not otherwise. Actually, of course, this is a working assumption only....It is quite conceivable that someday the assumption will have to be rejected. But it is important also to see that we have not reached that day yet: the working assumption is a necessary one and there is no real evidence opposed to it. Our failure to solve a problem so far does not make it insoluble. One cannot logically be a determinist in physics and biology, and a mystic in psychology.[28]

This is a balder and more provocative way of stating what writers like Turing lead the reader to never think of questioning. The assumption is that the soul, if there is one, is by nature external and separate from the body, so that any interaction between the two is a violation of the body's usual way of functioning. Thus what is denied is a 'separate soul or lifeforce to stick a finger into the brain now and then and make neural cells do what they would not do otherwise.' The Orthodox and others' doctrine of unified personhood is very different from an affirmation of a ghost in the machine. To affirm a ghost in the machine is to assume the soul's basic externality to the body: the basic inability of a soul to interact with a body creates the problem of the ghost in the machine. By the time one attempts to solve the problem of the ghost in the machine, one is already outside of an Orthodox doctrine of personhood in which spirit, soul, and body are united and the whole unit is not an atom.

The objective here is not mainly to criticise AI, but to see what can be learned: AI seems to fail in a way that is characteristic. It does not fail because of insufficient funding or lack of technical progress, but on another plane: it is built on an erroneous quasi-theological anthropology, and its failures may suggest something about being human. The main goal is to answer the question, 'How else could it be?' in a way that is missed by critics working in materialist confines.

What can we say in summary?

First, artificial intelligence work may be divided into un-pretentious and pretentious AI. Un-pretentious AI makes tools that no one presents as anything more than tools. Pretentious AI is presented as more human than is properly warranted.

Second, there are stable features to the artificial intelligence movement, including a claim of, 'We have something essentially human. With today's discoveries, full artificial intelligence is just around the corner.' The exact form of this assertion may change, but the basic claim does not.

Third, artificial intelligence research posits a multifarious 'optimality assumption,' namely that, given the caveats recognised by the researcher, artificial intelligence is an optimally easy problem to solve. The human mind is assumed to be the sort of thing that is optimally easy to re-create on a computer.

Fourth, artificial intelligence comes from the same kind of thinking as the ghost in the machine problem.

There is more going on in the artificial intelligence project than an attempt to produce scientific results. The persistent rhetoric of 'It's just around the corner' does not persist because artificial intelligence scientists have held that sober judgment since the project began, but because there's something else going on. For reasons that I hope will become clearer in the next section, this is beginning to look like an occult project—a secularised occult project, perhaps, but 'secularised occult' is not an empty term: you do not take all of the occult away by taking away the spellbooks. There is much more to the occult than crystal balls, and a good deal of this 'much more' is at play even if artificial intelligence doesn't do things the Skeptical Inquirer would frown on.

Occult Foundations of Modern Science

With acknowledgment of the relevance of the Reformation, the wake of Aristotelianism, and the via moderna of nominalism,[29] I will be looking at a surprising candidate for discussion on this topic: magic. Magic was a large part of what shaped modernity, a much larger factor than one would expect from modernity's own self-portrayal, and it has been neglected for reasons other than the disinterested pursuit of truth. It is more attractive to our culture to say that our science exists in the wake of Renaissance learning or brave Reformers than to say that science has roots in what it decries as superstition. For reasons that I will discuss below under the next heading, I suggest that what we now classify as the artificial intelligence movement is a further development of some of magic's major features.

There is a major qualitative shift between Newton's development of physics being considered by some to be a diversion from his alchemical and other occult endeavours, and 'spooky' topics today being taboo for scientific research. Yet it is still incomplete to enter a serious philosophical discussion of science without understanding the occult, as it is incomplete to enter a serious discussion of Christianity without understanding Judaism. Lewis points out that the popular understanding of modern science displacing the magic of the middle ages is at least misleading; there was very little magic in the middle ages, and then science and magic flourished at the same time, for the same reason, often in the same people; the reason science became stronger than magic is purely Darwinian: it worked better.[30] One may say that medieval religion is the matrix from which Renaissance magic departed, and early modern magic is the matrix from which science departed.

What is the relationship between the mediaeval West and patristic Christianity? In this context, the practical difference is not yet a great one. The essential difference is that certain seeds have been sown—such as nominalism and the rediscovered Aristotelianism—which in the mediaeval West would grow into something significant, but had not in much of any practical sense affected the fabric of society. People still believed that the heavens told the glory of God; people lived a life oriented towards contemplation rather than consumption; monasteries and saints were assumed so strongly that they were present even—especially—as they retreated from society. Certain seeds had been sown in the mediaeval West, but they had not grown to any significant stature. For this discussion, I will treat mediaeval and patristic Christianity as more alike than different.

Renaissance and Early Modern Magic

Magic in this context is much more than a means of casting spells or otherwise manipulating supernatural powers to obtain results. That practice is the token of an entire worldview and enterprise, something that defines life's meaning and what one ought to seek. To illustrate this, I will look at some details of work by a characteristic figure, Leibniz. Then I will look at the distinctive way the Renaissance magus related to the world and the legacy this relationship has today. Alongside this I will look at a shift from understanding this life as a contemplative apprenticeship to Heaven, to understanding this life as something for us to make more pleasurable.

Leibniz, a 17th century mathematician and scientist who co-discovered calculus, appears to have been more than conversant with the occult memory tradition,[31] and his understanding of calculus was not that of today, a tool used by engineers to calculate volumes. Rather, it was part of an entire Utopian vision, which could encompass all knowledge and all thoughts, an apparently transcendent tool that would obviate the need for philosophical disagreements:

If we had this [calculus], there would be no more reason for disputes between philosophers than between accountants. It would be enough for them to take their quills and say, 'Let us calculate!'

Leibniz's 1690 Ars Combinatoria contains some material that is immediately accessible to a modern mathematician. It also contains material that is less accessible. Much of the second chapter (9-48) discusses combinations of the letters U, P, J, S, A, and N; these letters are tied to concepts ranging from philosophy to theology, jurisprudence and mathematics: another table links philosophical concepts with numbers (42-3). The apparent goal was to validly manipulate concepts through mechanical manipulations of words, but I was unable to readily tell what (mathematico-logical?) principle was supposed to make this work; whatever the principle was, it is unfamiliar to me. This may reflect the influence of Ramon Lull, thirteenth century magician and doctor of the Catholic Church who adapted a baptised Kabbalah which involved manipulating combinations of (Latin) letters. Leibniz makes repeated reference to Lull (28, 31, 34, 46), and specifically mentions his occult ars magna (28). Like Lull, Leibniz is interested in the occult, and seeks to pioneer some new tool that will obviate the need for this world's troubles. He was an important figure in the creation of science, and his notation is still used for calculus today. Leibniz is not trying to be just another member of society, or to contribute to society's good the way members have always contributed to society's good: he stands above it, and his intended contribution is to reorder the fabric of society according to his endowed vision. Leibniz provides a characteristic glimpse of how early modern magic has left a lasting imprint.
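To give a modern reader a concrete, if anachronistic, sense of the purely mechanical side of such an ars combinatoria, the sketch below (my own illustration, assuming nothing beyond Python's standard itertools) merely enumerates combinations of Leibniz's six letters; it claims nothing about whatever principle was supposed to make the manipulation of concepts valid.

    from itertools import combinations

    # A sketch of the mechanical part only: enumerate the two- and
    # three-letter combinations of the six letters Leibniz discusses.
    letters = ["U", "P", "J", "S", "A", "N"]
    pairs = list(combinations(letters, 2))    # 15 pairs
    triples = list(combinations(letters, 3))  # 20 triples
    print(len(pairs), len(triples))           # prints: 15 20

The enumeration is trivial; what is missing, then as now, is any account of how shuffling tokens is supposed to settle a philosophical dispute.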

If the person one should be in Orthodoxy is the member of Church and society, the figure in magic is the magus, a singular character who stands outside of the fabric of society and seeks to transform it. What is the difference? The member of the faithful is an integrated part of society, and lives in submission and organic connection to it. The magus, by contrast, stands above society, superior to it, having a relation to society as one whose right and perhaps duty is to tear apart and reconstruct society along better lines. We have a difference between humility and pride, between relating to society as to one's mother and treating society as raw material for one to transform. The magus is cut off from the common herd by two closely related endowments: a magic sword to cut through society's Gordian knots, and a messianic fantasy.[32] In Leibniz's case the magic sword is an artificial language which will make philosophical disagreements simply obsolete. For the artificial intelligence movement, the magic sword is artificial intelligence itself. The exact character of the sword, knot, and fantasy may differ, but their presence does not.

The character of the Renaissance magus may be seen as hinging on despair with the natural world. This mood seems to be woven into Hermetic texts that were held in such esteem in the Renaissance and were invoked at the opening of pre-eminent Renaissance neo-Platonist Pico della Mirandola's Oration on the Dignity of Man.[33] If there is good to be had, it is not met in the mundane world of the hoi polloi. It must be very different from their reality, something hidden that is only accessible to an elite. The sense in which this spells out an interest in the occult means far more than carrying around a rabbit's foot. Specific supernatural contact was valued because the occult was far hidden from appearances and the unwashed masses. (The Christian claim that one can simply pray to God and be heard is thus profoundly uninteresting. Supernatural as it may be, it is ordinary, humble, and accessible in a way that the magus is trying to push past.) This desire for what is hidden or very different from the ordinary means that the ideal future must be very different from the present. Therefore Thomas More, Renaissance author, canonised saint, and strong devotee of Mirandola's writing, himself writes Utopia. In this work, the philosophic sailor Raphael establishes his own reason as judge over the appropriateness of executing thieves,[34] and describes a Utopia where society simply works better: there seem to be no unpleasant surprises or unintended consequences.[35] There is little sense of a complex inner logic to society that needs to be respected, or any kind of authority to submit to. Indeed, Raphael abhors authority and responds to the suggestion that he attach himself to a king's court by saying, 'Happier! Is that to follow a path that my soul abhors?' This Utopian vision, even if it is from a canonised Roman saint, captures something deep of the occult currents that would later feed into the development of political ideology. The content of an occult vision for constructing a better tomorrow may vary, but it is a vision that seeks to tear up the world as we now know it and reconstruct it along different lines.

Magic and science alike relate to what they are interested in via an I-It rather than an I-Thou relationship. Relating to society as to one's mother is an I-Thou relationship; treating society as raw material is an I-It relationship. An I-Thou relationship is receptive to quality. It can gain wisdom and insight. It can connect out of the whole person. The particular kind of I-It relationship that undergirds science has a powerful and narrow tool that deals in what can be mathematically represented. The difference between those two is misunderstood if one stops after saying, 'I-It can make technology available much better than I-Thou.' That is how things look through I-It eyes. But I-Thou allows a quality of relationship that does not exist with I-It. 'The fundamental word I-Thou can only be spoken with one's whole being. The fundamental word I-It can never be spoken with one's whole being.' I-Thou allows a quality-rich relationship that always has another layer of meaning. In the Romance languages there are two different words for knowledge: in French, connaissance and savoir. They both mean 'knowledge,' but in different ways: savoir is knowledge of fact (or know-how); one can sait que ('know that') something is true. Connaissance is the kind of knowledge of a person, a 'knowledge of' rather than a 'knowledge that' or 'knowledge how.' It can never be a complete knowledge, and one cannot connait que ('know-of that') something is true. It is personal in character. An I-It relationship is not just true of magic; as I will discuss below under the heading of 'Science, Psychology, and Behaviourism,' psychology seeks a baseline savoir of people where it might seek a connaissance, and its theories are meant to be abstracted from relationships with specific people. Like magic, the powers that science confers are epiphenomenal to the relationship science is based on. Relating in an I-Thou rather than I-It fashion is not simply less like magic and science; it is richer, fuller, and more human.

In the patristic and medieval eras, the goal of living had been contemplation and the goal of moral instruction was to conform people to reality. Now there was a shift from conforming people to reality, towards conforming reality to people.[36] This set the stage, centuries later, for a major and resource-intensive effort to create an artificial mind, a goal that would not have fit well with a society oriented to contemplation. This is not to say that there is no faith today, nor that there was no technology in the middle ages, nor that there has been no shift between the early modern period and today. Rather, it is to say that a basic trajectory was established in magic that significantly shapes science today.

The difference between the Renaissance magus and the mediaeval member of the Church casts a significant shadow today. The scientist seems to live more in the shadow of the Renaissance magus than of the member of mediaeval society. This is not to say that scientists cannot be humble and moral, nor that they cannot hold wonder at what they study. But it is to say that there are a number of points of contact between the Renaissance magus's way of relating to the world and that of a scientist and those who live in science's shadow. Governments today consult social scientists before making policy decisions: the relationship seems to be how to best deal with material rather than a relationship as to one's mother. We have more than a hint of secularised magic in which substantial fragments of Renaissance and early modern magic have long outlived some magical practices.

Under the patristic and medieval conception, this life was an apprenticeship to the life in Heaven, the beginning of an eternal glory contemplating God. Magic retained a sense of supernatural reality and a larger world, but its goal was to improve this life, understood as largely self-contained and not as the beginning of the next. That was the new chief end of humanity. That shift is a shift towards the secular, magical as its beginning may be. Magic contains the seeds of its own secularisation, in other words of its becoming scientific. The shift from contemplation of the next world to power in this world is why the occult was associated with all sorts of Utopian visions to transform the world, a legacy reflected in our political ideologies. One of the tools developed in that magical milieu was science: a tool that, for Darwinian reasons, was to eclipse all the rest. The real magic that has emerged is science.

Science, Psychology, and Behaviourism

What is the niche science has carved out for itself? I'd like to look at an academic discipline that is working hard to be a science, psychology. I will more specifically look at behaviourism, as symptomatic within the history of psychology. Is it fair to look at behaviourism, which psychology itself rejected? It seems that behaviourism offers a valuable case study by demonstrating what is more subtly present elsewhere in psychology. Behaviourism makes some basic observations about reward and punishment and people repeating behaviours, and portrays this as a comprehensive psychological theory: behaviourism does not acknowledge beliefs, for instance. Nonetheless, I suggest that behaviourism is a conceivable development in modern psychology which would have been impossible in other settings. Behaviourism may be unusual in the extreme simplicity of its vision and its refusal to recognise internal states, but not in desiring a Newton who will make psychology a full-fledged science and let psychology know its material with the same kind of knowing as physics has for its material.

Newton and his kin provided a completely de-anthropomorphised account of natural phenomena, and behaviourism provided a de-anthropomorphised account of humans. In leading behaviourist B.F. Skinner's Walden Two (1948), we have a Utopian vision where every part of society seems to work better: artists raised under Skinner's conditioning produce work which is 'extraordinarily good,' the women are more beautiful,[37] and Skinner's alter ego expresses the hope of controlling the weather[38] and compares himself with God.[39] Skinner seems to resemble a Renaissance magus more than a mediaeval member: society is raw material for him to transform. Skinner is, in a real sense, a Renaissance magus whose magic has become secularised. Quite a lot of the magus survives the secularisation of Skinner's magic.

Even without these more grandiose aspirations, psychology is symptomatic of something that is difficult to discern by looking at the hard sciences. Psychological experiments try to find ways in which the human person responds in terms comparable to a physics experiment—and by nature do not relate to their subjects as human agents. These experiments study one aspect of human personhood, good literature another, and literature offers a different kind of knowing from a psychological experiment. If we assume that psychology is the best way to understand people—and that the mind is a mechanism-driven thing—then the assumed burden of proof falls on anyone saying, 'But a human mind isn't the sort of thing you can duplicate on a computer.' The cultural place of science constitutes a powerful influence on how people conceive the question of artificial intelligence.

Behaviourism offers a very simple and very sharp magic sword to cut the Gordian knot of unscientific teleology, a knot that will be discussed under the heading of 'Intentionality and Teleology' below. It removes suspicion of the reason being attached to a spiritual intellect by refusing to acknowledge reason. It removes the suspicion of emotions having a spiritual dimension by refusing to acknowledge emotions. Skinner denies enough of the human person that even psychologists who share his goals would want to distance themselves from him. And yet Skinner does more than entertain messianic fantasies: Walden Two is a Utopia, and when Skinner's alter ego compares himself with God, God ends up second best.[40] I suggest that this is no contradiction at all, or more properly it is a blatant contradiction as far as common sense is concerned, but as far as human phenomena go, we have two sides of the same coin. The magic sword and the messianic fantasy belong to one and the same magus.

There is in fact an intermediate step between the full-fledged magus and the mortal herd. One can be a magician's assistant, clearing away debris and performing menial tasks to support the real magi.[41] The proportion of the Western population who are scientists is enormous compared to science's founding, and the vast majority of the increase is in magician's assistants. If one meets a scientist at a social gathering, the scientist is in all probability not a full-fledged magus, but a magician's assistant, set midway between the magus and the commoner. The common scientist is below the magus in knowledge of science but well above most commoners. In place of a personal messianic fantasy is a more communal tendency to assume that the scientific enterprise is our best hope for the betterment of society. (Commoners may share this belief.) There is a significant difference between the magus and most assistants today. Nonetheless, the figure of the magus is alive today—secularised, in most cases, but alive and well. Paul Johnson's Augustinian account of Intellectuals includes such eminent twentieth century scientific figures as Bertrand Russell, Noam Chomsky, and Albert Einstein;[42] the figures one encounters in his pages are steeped in the relationship to society as to raw material instead of as to one's mother, the magic sword, and the messianic fantasy.

I-Thou and Humanness

I suggest that the most interesting critiques of artificial intelligence are not obtained by looking through I-It eyes in another direction, but in using other eyes to begin with, looking through I-Thou eyes. Let us consider Turing's 'Arguments from Various Disabilities'.[43] Perhaps the people who furnished Turing with these objections were speaking out of something deeper than they could explain:

Be kind, resourceful, beautiful, friendly, have initiative, have a sense of humour, tell right from wrong, make mistakes, fall in love, enjoy strawberries and cream, make some one fall in love with it, learn from experience, use words properly, be the subject of its own thought, have as much diversity of behaviour as a man, do something really new.

Be kind:
Kindness is listed by Paul among the fruit of the Spirit (Gal. 5:22), in other words, an outflow of a person living in the Spirit. Disregarding the question of whether all kindness is the fruit of the Spirit, in humans kindness is not merely following rules, but the outflow of a concern for the other person. Even counterfeit kindness is a counterfeit from someone who knows the genuine article. It thus uses some faculty of humanity other than the reasoning ability, which classical AI tries to duplicate and which is assumed to be the one thing necessary to duplicate human cognition.

Be resourceful:
The artificial intelligence assumption is that if something is non-deterministic, it is random, because deterministic and pseudo-random are the only options one can use in programming a computer. This leaves out a third possibility, that by non-computational faculties someone may think, not merely 'outside the box,' in a random direction, but above it. The creative spark comes neither from continuing a systematic approach, nor simply picking something random ('because I can't get my computer to turn on, I'll pour coffee on it and see if that helps'), but something that we don't know how to give a computer.
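To make the point concrete, here is a minimal sketch of my own, assuming nothing beyond Python's standard random module: once a seed is fixed, the 'random' choices are entirely reproducible, so a program's repertoire really does reduce to the deterministic and the pseudo-random.

    import random

    # With a fixed seed, a pseudo-random generator is deterministic:
    # two runs make exactly the same 'choices'.
    def pseudo_random_choices(seed, options, n=5):
        rng = random.Random(seed)  # any fixed seed behaves the same way
        return [rng.choice(options) for _ in range(n)]

    options = ["continue the systematic search", "pick something at random"]
    run_a = pseudo_random_choices(42, options)
    run_b = pseudo_random_choices(42, options)
    assert run_a == run_b  # same seed, same sequence: nothing 'above the box'

Neither branch of that repertoire is the creative spark the objection points to.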

Be beautiful:
Beauty is a spiritual quality that is not perceived by scientific enquiry and, given our time's interpretation of scientific enquiry, is in principle not recognised. Why not? If we push materialist assumptions to the extreme, it is almost a category error to look at a woman and say, 'She is beautiful.' What is really being said—if one is not making a category error—is, 'I have certain emotions when I look at her.' Even if there is not a connection between physical beauty and intelligence, there seems to be some peasant shrewdness involved. It is a genuine, if misapplied, appeal to look at something that has been overlooked.

Be friendly:
True as opposed to counterfeit friendliness is a manifestation of love, which has its home in the will, especially if the will is not understood as a quasi-muscular power of domination, but part of the spirit which lets us turn towards another in love.

Remarks could easily be multiplied. What is meant to come through all this is that science is not magic, but science works in magic's wake. Among relevant features may be mentioned relating as a magus would (in many ways distilling an I-It relationship further), and seeking power over the world in this life rather than living an apprenticeship to the next.

Orthodox Anthropology in Maximus Confessor's Mystagogia

I will begin detailed enquiry in the Greek Fathers by considering an author who is foundational to Eastern Orthodoxy, the seventh century Greek Father Maximus Confessor. Out of the existing body of literature, I will focus on one work, his Mystagogia,[44] with some reference to the Capita Gnosticae. Maximus Confessor is a synthetic thinker, and the Mystagogia is an anthropological work; its discussion of Church mystagogy is dense in theological anthropology as the training for a medical doctor is dense in human biology.

Orthodox Christians have a different cosmology from the Protestant division of nature, sin, and grace. Nature is never un-graced, and the grace that restores from sin is the same grace that provides continued existence and that created nature in the first place. That is to say, grace flows from God's generosity, and is never alien to nature. The one God inhabits the whole creation: granted, in a more special and concentrated way in a person than in a rock, but the same God is really present in both.

Already, without having seriously engaged theological anthropology, we have differences with how AI looks at things. Not only are the answers different, but the questions themselves are posed in a different way. 'Cold matter,' such as is assumed by scientific materialism, doesn't exist, not because matter is denied in Berkeleyan fashion but because it is part of a spiritual cosmology and affirmed to be something more. It is mistaken to think of cold matter, just as it is mistaken to think of tepid fire. Even matter has spiritual attributes and is graced. Everything that exists, from God and the spiritual creation to the material creation, from seraphim to stone, is the sort of thing one connects to in an I-Thou relationship. An I-It relationship is out of place, and from this perspective magic and science look almost the same, different signposts in the process of establishing a progressively purer I-It relationship.

Intellect and Reason

Maximus' anthropology is threefold: the person is divided into soul and body, and the soul itself is divided into a higher part, the intellect, and a lower part, the reason:[45]

[Pseudo-Dionysius] used to teach that the whole person is a synthesis of soul and body joined together, and furthermore the soul itself can be examined by reason. (The person is an image which reflects teaching about the Holy Church.) Thus he said that the soul had an intellectual and living faculty that were essentially united, and described the moving, intellectual, authoritative power—with the living part described according to its will-less nature. And again, the whole mind deals with intelligible things, with the intelligible power being called intellect, whilst the sensible power is called reason.

This passage shows a one-word translation difficulty which is symptomatic of a difference between his theology and the quasi-theological assumptions of the artificial intelligence project. The word in question, which I have rendered as 'authoritative power,' is 'exousiastikws,' with root word 'exousia.' The root and its associated forms could be misconstrued today as having a double meaning of 'power' and 'authority,' with 'authority' as the basic sense. In both classical and patristic usage, it seems debatable whether 'exousia' is tied to any concept of power divorced from authority. In particular this passage's 'exousiastikws' is most immediately translated as power rather than any kind of authority that is separate from power. Yet Maximus Confessor's whole sense of power here is one that arises from a divine authorisation to know the truth. This sense of power is teleologically oriented and has intrinsic meaning. This is not to say that Maximus could only conceive of power in terms of authority. He repeatedly uses 'dunamis' (proem. 15-6, 26, 28, etc.), a word for power without significant connotations of authority. However, he could conceive of power in terms of authority, and that is exactly what he does when describing the intellect's power.

What is the relationship between 'intellect'/'reason' and cognitive faculties? Which, if either, has cognitive faculties a computer can't duplicate? Here we run into another difficulty. It is hard to say that Maximus Confessor traded in cognitive faculties. For Maximus Confessor the core sense of 'cognitive faculties' is inadequate, as it is inadequate to define an eye as something that provides nerve impulses which the brain uses to generate other nerve impulses. What is missing from this picture? This definition does not provide any sense that the eye interacts with the external world, so that under normal circumstances its nerve impulses are sent because photons strike photoreceptors in an organ resembling a camera. Even this description hides most teleology and evaluative judgment. It does not say that an eye is an organ for perceiving the external world through an image reconstructed in the brain, and may be called 'good' if it sees clearly and 'bad' if it doesn't. This may be used as a point of departure to comment on Maximus Confessor and the conception of cognitive faculties.

Maximus Confessor does not, in an amoral or self-contained fashion, see faculties that operate on mental representations. He sees an intellect that is where one meets God, and where one encounters a Truth that is no more private than the world one sees with the eye is private.

Intellect and reason compete with today's cognitive faculties, but Maximus Confessor understands the intellect in particular as something fundamentally moral, spiritual, and connected to spiritual realities. His conception of morality is itself different from today's private choice of ethical code; morality had more public and more encompassing boundaries, and included such things as Jesus' admonition not to take the place of highest honour so as not to receive public humiliation (Luke 14:7-10): it embraced practical advice for social conduct, because the moral and spiritual were not separated from the practical. It is difficult to imagine Maximus Confessor conceiving of practicality as hampered by morality. In Maximus Confessor's day what we separate into cognitive, moral, spiritual, and practical domains were woven into a seamless tapestry.

Intellect, Principles, and Cosmology

Chapter twenty-three opens by emphasising that contemplation is more than looking at appearances (23.1-10), and discusses the Principles of things. The concept of a Principle is important to his cosmology. There is a foundational difference between the assumed cosmologies of artificial intelligence and Maximus Confessor. Maximus Confessor's cosmology is not the artificial intelligence cosmology with a spiritual dimension added, as a living organism is not a machine modified to use foodstuffs as fuel.

Why do I speak of the 'artificial intelligence cosmology'? Surely one can have a long debate about artificial intelligence without adding cosmology to the discussion. This is true, but it is true because cosmology has become invisible, part of the assumed backdrop of discussion. In America, one cultural assumption is that 'culture' and 'customs' are for far-off and exotic people, not for 'us'—'we' are just being human. It doesn't occur to most Americans to think of eating turkey on Thanksgiving Day or removing one's hat inside a building as customs, because 'custom' is a concept that only applies to exotic people. I suggest that Maximus Confessor has an interesting cosmology, not because he's exotic, but because he's human.

Artificial intelligence proponents and (most) critics do not differ on cosmology, but that is because cosmology is an important assumption which is not questioned even by most people who deny the possibility of artificial intelligence. Searle may disagree with Fodor about what is implied by a materialist cosmology, but not about whether one should accept materialism. I suggest that some artificial intelligence critics miss the most interesting critiques of artificial intelligence because they share that project's cosmology. If AI is based on a cosmological error, then no amount of fine-tuning within the system will rectify the error. We need to consider cosmology if we are to have any hope of correcting an error that basic. (Bad metaphysics does not create good physics.) I will describe Maximus Confessor's cosmology in this section, not because he has cosmology and AI doesn't, but because his cosmology seems to suggest a correction to the artificial intelligence cosmology.

At the base of Maximus's cosmology is God. God holds the Principles in his heart, and they share something of his reality. Concrete beings (including us) are created through the Principles, and we share something of their reality and of God. The Principles are a more concrete realisation of God, and we are a more concrete realisation of the Principles. Thought (nohsis) means beholding God and the Principles (logoi) through the eye of the intellect. Thinking of a tree means connecting with something that is more tree-like than the tree itself.

It may be easier to see how important the Principles are in Maximus Confessor's cosmology if we see how they are being dismantled today. Without saying that Church Fathers simply grafted in Platonism, I believe it safe to say that Plato's thought resembles some Church doctrine, and at any rate Plato's one finger pointing up to God offers a closer approximation to Christianity than Aristotle's fingers pointing down. I would suggest further that looking at Plato can suggest how Christianity differs from Aristotelianism's materialistic tendencies, tendencies that are still unfolding today. Edelman describes the assumptions accompanying Darwin's evolution as the 'death blow' to essentialism, the doctrine that there are fixed kinds of things, as taught by Plato and other idealists.[46] Edelman seems not to appreciate why so many biologists assent to punctuated equilibrium.[47] However, if we assume that there is solid evidence establishing that all life gradually evolved from a common ancestor, then this remark is both apropos and perceptive.

When we look around, we see organisms that fit neatly into different classes: human, housefly, oak. Beginning philosophy students may find it quaint to hear of Plato's Ideas, and the Ideal horse that is copied in all physical horses, but we tend to assume Platonism at least in that horses are similar 'as if' there were an Ideal horse: we don't believe in the Ideal horse any more, but we still treat its shadow as if it were the Ideal horse's shadowy copy.

Darwin's theory of evolution suggests that all organisms are connected via slow, continuous change to a common ancestor and therefore to each other. If this is true, there are dire implications for Platonism. It is as if we had pictures of wet clay pottery, and posited a sharp divide between discrete classes of plates, cups, and bowls. Then someone showed a movie of a potter deforming one and the same clay from one shape to another, so that the divisions are now shown to be arbitrary. There are no discrete classes of vessels, just one lump of clay being shaped into different things. Here we are pushing a picture to the other end of a spectrum, further away from Platonism. It is a push from tacitly assuming there is a shadow, to expunging the remnant of belief in the horse and its shadow.

But none of this means that we are, or ever were, full Platonists, or that we can effortlessly appreciate the Platonic mindset. There are things we have to understand before we can travel in the other direction. If anything, there is more work involved. We act as if the Ideas' shadows are real things, but we don't genuinely believe in the shadows qua shadows, let alone the Ideas. We've simply inherited the habit of treating shadows as a convenient fiction. But Maximus Confessor believed the Principles (Ideas) represented something fuller and deeper than concrete things.

This is foundational to why Maximus Confessor would not have understood thought as manipulating mental representations in the inescapable privacy of one's mind. Contemplation is not a matter of closing one's eyes and fantasising, but of opening one's eyes and beholding something deeper and more real than reality itself. The sensible reason can perceive the external physical world through the senses, but this appears in a very different light from Kant's view.

Maximus Confessor offers a genuinely interesting suggestion that we know things not only because of our power-to-know, but because of their power-to-be-known, an approach that I will explore later under the heading 'Knowledge of the Immanent.' The world is not purely transcendent, but immanent. For Kant the mind is a box that is hermetically sealed on top but has a few frustratingly small holes on the bottom: the senses. Maximus Confessor doesn't view the senses very differently, but the top of the box is open.

This means that the intellect is most basically where one meets God. Its powerful ability to know truth is connected to this, and it connects with the Principles of things, as the senses connect with mere things. Is it fair to the senses to compare the intellect's connection with Principles with the senses' experience of physical things? The real question is not that, but whether it is fair to the intellect, and the answer is 'no.' The Principles are deeper, richer, and fuller than the mere visible things, as a horse is richer than its shadow. The knowledge we have through the intellect's connection with the Principles is of a deeper and richer sort than what is merely inferred from the senses.

The Intelligible and the Sensible

Maximus Confessor lists, and connects, several linked pairs, which I have incorporated into a schema below. The first column of this schema relates to the second column along lines just illustrated: the first member of each pair is transcendent and eminent to the second, but also immanent to it.

Head | Body
Heaven | earth (3.1-6)
holy of holies | sanctuary (2.8-9)
intelligible | sensible (7.5-10)
contemplative | active (5.8-9)
intellect | reason (5.9-10)
spiritual wisdom | practical wisdom (5.13-15)
knowledge | virtue (5.58)
unforgettable knowledge | faith (5.58-60)
truth | goodness (5.58-9)
archetype | image (5.79-80)
New Testament | Old Testament (6.4-6)
spiritual meaning of a text | literal meaning of a text (6.14-5)
bishop's seating on throne | bishop's entrance into Church (8.5-6, 20-21)
Christ's return in glory | Christ's first coming, glory veiled (8.6-7, 18)

Maximus Confessor's cosmology sees neither a disparate collection of unconnected things, nor an undistinguished monism that denies differences. Instead, he sees a unity that sees natures (1.16-17) in which God not only limits differences, as a circle limits its radii (1.62-67), but transcends all differences. Things may be distinguished, but they are not divided. This is key to understanding both doctrine and method. He identifies the world with a person, and connects the Church with the image of God. Doctrine and method are alike synthetic, which suggests that passages about his cosmology and ecclesiology illuminate anthropology.

One recurring theme shows in his treatment of heaven and earth, the soul and the body, the intelligible (spiritual) and the sensible (material). The intelligible both transcends the sensible, and is immanent to it, present in it. The intelligible is what can be apprehended by the part of us that meets God; the sensible is what presents itself to the world of senses. (The senses are not our only connection with the world.) This is a different way of thinking about matter and spirit from the Cartesian model, which gives rise to the ghost in the machine problem. Maximus Confessor's understanding of spirit and matter does not make much room for this dilemma. Matter and spirit interpenetrate. This is true not just in us but in the cosmos, which is itself 'human': he considers '...the three people: the cosmos (let us say), the Holy Scriptures, and this is true with us' (7.40-1). The attempt to connect spirit and matter might have struck him like an attempt to forge a link between fire and heat, two things already linked.

Knowledge of the Immanent

The word which I here render 'thought' is 'nohsis', cognate to 'intellect' ('nous') which has been discussed as that which is inseparably the home of thought and of meeting God. We already have a hint of a conceptual cast in which thought will be understood in terms of connection and contemplation.

In contrast to understanding thought as a process within a mind, Maximus describes thought in terms of a relationship: a thought can exist because there is a power to think in the one thinking, and a power to be thought of in what is thought of.[48] We could no more know an absolutely transcendent creature than we could know an absolutely transcendent Creator. Even imperfect thought exists because we are dealing with something that 'holds power to be apprehended by the intellect' (I.82). We say something is purple because its manifest purpleness meets our ability to perceive purple. What about the claim that purple is a mental experience arising from a certain wavelength of light striking our retinas? One answer that might be given is that those are the mechanisms by which purple is delivered, not the nature of what purple is.[49] The distinction is important.

We may ask, what about capacity for fantasy and errors? The first response I would suggest is cultural. The birth of modernity was a major shift, and its abstraction introduced new things into the Western mind, including much of what supports our concept of fantasy (in literature, etc.). The category of fantasy is a basic category to our mindset but not to the patristic or medieval mind. Therefore, instead of speculating how Maximus Confessor would have replied to these objections, we can point out that they aren't the sort of thing that he would ever think of, or perhaps even understand.

But in fact a more positive reply can be given. It can be said of good and evil that good is the only real substance. Evil is not its own substance, but a blemish in good substance. This parallels error. Error is not something fundamentally new, but a blurred or distorted form of truth. Fantasy does not represent another fundamentally independent, if hypothetical, reality; it is a funhouse mirror refracting this world. We do not have a representation that exists in one's mind alone, but a dual relationship that arises both from the apprehending intellect and an immanent thing. The possibility of errors and speculation makes for a longer explanation but need not make us discard this basic picture.

Intentionality and Teleology

One of the basic differences in cosmology between Maximus Confessor and our own day relates to intentionality. As it is described in cognitive science's philosophy of mind, 'intentionality' refers to an 'about-ness' of human mental states, such as beliefs and emotions. The word 'tree' is about an object outside the mind, and even the word 'pegasus' evokes something that one could imagine existing outside of the mind, even if it does not. Intentionality does not exist in computer programs: a computer chess program manipulates symbols in an entirely self-enclosed system, so 'queen' cannot refer to any external person or carry the web of associations we assume. Intentionality presents a philosophical problem for artificial intelligence. Human mental states and symbol manipulation are about something; they reach out to the external world, whilst computer symbol manipulation is purely internal. A computer may manipulate symbols that are meaningful to the humans using it, but the computer has no more sense of what a webpage means than a physical book has a sense that its pages contain good or bad writing. Intentionality is a special feature of living minds, and does not exist outside of them. Something significant will have been achieved if a computer program ever becomes the first thing to embody intentionality outside of a living mind.
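A minimal sketch of my own (assuming only standard Python) may make the self-enclosed character of such symbol manipulation vivid: inside the program, 'queen' is nothing but an uninterpreted token, defined solely by the rules that shuffle it about.

    # 'QUEEN' is an uninterpreted token; its 'meaning' is exhausted by the
    # rule below, which relates positions on an empty 8x8 grid to other positions.
    QUEEN = "Q"  # the token refers to nothing outside the program

    def queen_moves(position, board_size=8):
        row, col = position
        moves = []
        for r in range(board_size):
            for c in range(board_size):
                same_line = (r == row) or (c == col)
                same_diagonal = abs(r - row) == abs(c - col)
                if (r, c) != (row, col) and (same_line or same_diagonal):
                    moves.append((r, c))
        return moves

    # The program 'handles' the queen correctly enough, yet any about-ness is ours.
    print(len(queen_moves((0, 0))))  # 21 squares reachable from a corner

The program does what a chess program does, but nothing in it is about a queen, a game, or anything else outside its own tokens.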

Maximus Confessor would likely have had difficulty understanding this perspective as he would have had difficulty understanding the problem of the ghost in the machine: this perspective makes intentionality a special exception as the ghost in the machine made our minds' interaction with our bodies a special exception, and to him both 'exceptions' are in fact the crowning jewel of something which permeates the cosmos.

The theory of evolution is symptomatic of a difference between the post-Enlightenment West and the patristic era. This theory is on analytic grounds not a true answer to the question, 'Why is there life as we know it?' because it does not address the question, 'Why is there life as we know it?' At best it is a true answer to the question, 'How is there life as we know it?' which people often fail to distinguish from the very different question, 'Why is there life as we know it?' The Enlightenment contributed to an effort to expunge all trace of teleology from causality, all trace of 'Why?' from 'How?' Of Aristotle's four causes, only the efficient cause[50] is familiar; a beginning philosophy student is liable to misconstrue Aristotle's final cause[51] as being an efficient cause whose effect curiously precedes the cause. The heavy teleological scent to final causation is liable to be missed at first by a student in the wake of reducing 'why' to 'how'; in Maximus Confessor, causation is not simply mechanical, but tells what purpose something serves, what it embodies, what meaning and relationships define it, and why it exists.

Strictly speaking, one should speak of 'scientific mechanisms' rather than 'scientific explanations.' Why? 'Scientific proof' is an oxymoron: science does not deal in positive proof any more than mathematics deals in experiment, so talk of 'scientific proof' ordinarily signals a speaker who has more faith in science than understanding of what science really does. 'Scientific explanation' is a less blatant contradiction in terms, but it reflects a misunderstanding, perhaps one that is more widespread, as it is often present among people who would never speak of 'scientific proof.' Talk of 'scientific explanation' is not simply careless speech; there needs to be a widespread category error before there is any reason to write a book like Mary Midgley's Science as Salvation (1992). Science is an enterprise which provides mechanisms and has been given the cultural place of providing explanations. This discrepancy has the effect that people searching for explanations turn to scientific mechanisms, and may not be receptive when a genuine explanation is provided, because 'explanation' to them means 'something like what science gives.' This may not be the only factor, but it casts a long shadow. The burden of proof is borne by anyone who would present a non-scientific explanation as being as real as a scientific explanation. An even heavier burden of proof falls on the person who would claim that a non-scientific explanation—not just as social construction, but a real claim about the external world—offers something that science does not.

The distinction between mechanism and explanation is also relevant because the ways in which artificial intelligence has failed may reflect mechanisms made to do the work of explanations. In other words, the question of 'What is the nature of a human?' is answered by, 'We are able to discern these mental mechanisms in a human.' If this is true, the failure to duplicate a human mind in computers may be connected to researchers answering the wrong question in the first place. These are different, as the question, 'What literary devices can you find in The Merchant of Venice[52]?' is different from 'Why is The Merchant of Venice powerful drama?' The devices aren't irrelevant, but neither are they the whole picture.

Of the once great and beautiful land of teleology, a land once brimming with explanations, all has been conquered, all has been levelled, all has been razed and transformed by the power of I-It. All except two stubborn, embattled holdouts. The first holdout is intentionality: if it is a category error to project things in the human mind onto the outer world, nonetheless we recognise that intentionality exists in the mind—but the about-ness of intentionality is far less than the about-ness once believed to fill the cosmos. The second and last holdout is evolution: if there is to be no mythic story of origins that gives shape and meaning to human existence, if there cannot be an answer to 'Why is there life as we know it?' because there is no reason at all for life, because housefly, horse, and human are alike the by-product of mindless forces that did not have us in mind, nonetheless there is still an emaciated spectre, an evolutionary mechanism that does just enough work to keep away a teleological approach to origins questions. The land of teleology has been razed, but there is a similarity between these two remnants, placeholders which are granted special permission to do what even the I-It approach recognises it cannot completely remove of teleology. That is the official picture, at least. Midgley is liable to pester us with counterexamples of a teleology that is far more persistent than the official picture gives credit for: she looks at evolution doing the work of a myth instead of a placeholder that keeps myths away, for instance.[53] Let's ignore her for the moment and stick with the official version. Then looking at both intentionality and evolution can be instructive in seeing what has happened to teleology, and appreciating what teleology was and could be. Now Midgley offers us reasons why it may not be productive to pretend we can excise teleology: the examples of teleology she discusses do not seem to be improved by being driven underground and presented as non-teleological.

Maximus's picture, as well as being teleological, is moral and spiritual. As well as having intentions, we are living manifestations of a teleological, moral and spiritual Intention in God's heart. Maximus Confessor held a cosmology, and therefore an anthropology, that did not see the world in terms of disconnected and meaningless things. He exhibited a number of traits that the Enlightenment stripped out: in particular, a pervasive teleology in both cosmology and anthropology. He believed in a threefold anthropology of intellect/spirit, reason/soul, and body, all intimately tied together. What cognitive science accounts for through cognitive faculties manipulating mental representations was accounted for quite differently by an intellect that sees God and the Principles of beings, and a reason that works with the truths apprehended by intellect. The differences between the respective cosmologies and anthropologies are not the differences between two alternate answers to the same question, but answers to two different questions, differently conceived. They are alike in that they can collide because they are wrestling with the same thing: where they disagree, at least one of them must be wrong. They are different in that they are looking at the same aspect of personhood from two different cultures, and Maximus Confessor seems to have enough distance to provide a genuinely interesting critique.

Conclusion

Maximus Confessor was a synthetic thinker, and I suggest that his writings, which are synthetic both in method and in doctrine, are valuable not only because he was brilliant but because synthetic enquiry can be itself valuable. I have pursued a synthetic enquiry, not out of an attempt to be like Maximus Confessor, but because I think an approach that is sensitive to connections could be productive here. I'm not the only critic who has the resources to interpret AI as floundering in a way that may be symptomatic of a cosmological error. It's not hard to see that many religious cosmologies offer inhospitable climates to machines that think: Foerst's reinterpretation of the image of God[54] seems part of an effort to avoid seeing exactly this point. The interesting task is understanding and conveying an interconnected web. So I have connected science with magic, for instance, because although the official version is that they're completely unrelated, there is a strong historic link between them, and cultural factors today obscure the connection, and for that matter obscure several other things that interest us.

This dissertation falls under the heading of boundary issues between religion and science, and some readers may perceive me to approach boundary issues in a slightly different fashion. That perception is correct. One of the main ways that boundary issues are framed seems to be for Christian theologians to show the compatibility of their timeless doctrines with that minority of scientific theories which have already been accepted by the scientific community and which have not yet been rejected by that same community. With the question of origins, there has been a lot of work done to show that Christianity is far more compatible with evolutionary theory than a literal reading of Genesis 1 would suggest. It seems to have only been recently that gadflies within the intelligent design movement have suggested both that the scientific case for evolution is weaker than it has been made out to be, and that there seems to be good reason to believe that Christianity and evolution are incompatible at a deep enough level that the literal details of Genesis 1 are almost superfluous. Nobody conceives the boundary issues to mean that theologians should demonstrate the compatibility of Christianity with that silent majority of scientific theories which have either been both accepted and discredited (like spontaneous generation) or not yet accepted (like the cognitive-theoretic model of the universe). The minority is different, but not as different as people often assume.

One of the questions which is debated is whether it is best to understand subject-matter from within or without. I am an M.Phil. student in theology with a master's and an adjunct professorship in the sciences. I have worked to understand the sciences from within, and from that base I look at and understand science from without as well as within. Someone who only sees science from without may lack appreciation of certain things that come with experience of science, whilst someone who only sees science from within may not be able to question enough of science's self-portrayal. This composite view may not be available to all, nor is it needed, but I believe it has helped me in a basic rôle other than showing religion's compatibility with current science: namely, serving as a critical observer and raising important questions that science is itself unlikely to raise, sometimes turning a scientific assumption on its head. Theology may have other things to offer in its discussion with science than simply offering assent: instead of solely being the recipient of claims from science, it should be an agent which adds to the conversation.

Are there reasons why the position I propose is to be preferred? Science's interpretation of the matter is deeply entrenched, enough so that it seems strange to connect science with the occult. One response is that this perspective should at least be listened to, because it is challenging a now entrenched cultural force, and it may be a cue to how we could avoid some of our own blind spots. Even if it is wrong, it could be wrong in an interesting way. A more positive response would be to say that this is by my own admission far from a complete picture, but it makes sense of part of the historical record that is meaningless if one says that modern science just happened to be born whilst a magical movement waxed strong, and some of science's founders just happened to be magicians. A more robust picture would see the early modern era as an interlocking whole that encompassed a continuing Reformation, Descartes, magic, nascent science, and the wake of the Renaissance polymath. They all interconnect, even if none is fully determined. Lack of time and space precludes me from more than mentioning what that broader picture might be. There is also another reason to question the validity of science's basic picture:

Artificial intelligence doesn't work, at least not for a working copy of human intelligence.

Billions of dollars have been expended in the pursuit of artificial intelligence, so it is difficult to say the artificial intelligence project has failed through lack of funding. The project has attracted many of the world's most brilliant minds, so it is difficult to say that the project has failed through lack of talent. Technology has improved a thousandfold or a millionfold since a giant like Turing thought computer technology was powerful enough for artificial intelligence, so it is difficult to say that today's computers are too underpowered for artificial intelligence. Computer science has matured considerably, so it's hard to say that artificial intelligence hasn't had a chance to mature. In 1950, one could have posited a number of reasons for the lack of success then, but subsequent experience has made many of these possibilities difficult to maintain. This leaves open the possibility that artificial intelligence has failed because the whole enterprise is based on a false assumption, perhaps an error so deep as to be cosmological.

The power of science-based technology is a side effect of learning something significant about the natural world, and both scientific knowledge and technology are impressive cultural achievements. Yet science is not a complete picture—and I do not mean simply that we can have our own private fantasies—and science does not capture the spiritual qualities of matter, let alone a human being. The question of whether science understands mechanical properties of physical things has been put to the test, and the outcome is a resounding yes. The question of whether science understands enough about humans to duplicate human thought is also being put to the test, and when the rubber meets the road, the answer to that question looks a lot like, 'No.' It's not definitive (it couldn't be), but the picture so far is that science is trying something that can't work. It can't work because of spiritual principles, as a perpetual motion machine can't work because of physical principles. It's not a matter of insufficient resources available so far, or still needing to find the right approach. It doesn't seem to be the sort of thing which could work.

We miss something about the artificial intelligence project if we frame it as something that began after computer scientists saw that computers can manipulate symbols. People have been trying to make intelligent computers for half a century, but artificial intelligence is a phenomenon that has been centuries in the making. The fact that people saw the brain as a telephone switchboard, when that was the new technology, is more a symptom than a beginning. There is more here than artificial intelligence's surface resemblance to the alchemists' artificial person (the 'homunculus'). A repeated feature of the occult enterprise is that you do not have people giving to society in the ways that people have always given to society; you have exceptional figures trying to delve into unexplored recesses and forge some new creation, some new power—some new technology or method—to achieve something mythic that has simply not been achieved before. The magus is endowed with a magic sword to powerfully slice through his day's Gordian knots, and with a messianic fantasy. This is true of Leibniz's Ars Combinatoria and it is true of more than a little of artificial intelligence. To the reader who suggests, 'But magic doesn't really work!' I would point out that artificial intelligence also doesn't really work—although its researchers find it to work, like Renaissance magi and modern neo-pagans. The vast gap between magic and science that exists in our imagination is a cultural prejudice rather than a historical conclusion. Some puzzles which emerge from a non-historical picture of science—in particular, why a discipline with modest claims about falsifying hypotheses is held in such awe—seem to make a lot more sense if science is investigated as a historical phenomenon partly stemming from magic.

If there is one unexpected theme running through this enquiry, it is what has emerged about relationships. The question of whether one relates to society (or the natural world) as to one's mother or as to raw material, in I-Thou or I-It fashion, first crept in as a minor clarification. The more I have thought about it, the more significant it seems. The Renaissance magus distinguished himself from his medieval predecessors by converting I-Thou relationships into I-It. How is modern science different? To start with, it is much more consistent in pursuing I-It relationships. The fact that science gives mechanisms instead of explanations is connected; an explanation is an I-Thou thing, whilst a bare mechanism is I-It: if you are going to relate to the world in I-It fashion, there is every reason to replace explanations with mechanisms. An I-Thou relationship understands in a holistic, teleological fashion: if you are going to push an I-It relationship far enough, the obvious approach is to try to expunge teleology as the Enlightenment tried. A great many things about magus and scientist alike hinge on the rejection of Orthodoxy's I-Thou relationship.

In Arthurian legend, Merlin is a figure who holds magical powers, not by spells and incantations, but by something deeper and more fundamental. Merlin does not need spells and incantations because he relates to the natural world in a way that almost goes beyond I-Thou; he relates to nature as if it were human. I suggest that science provides a figure of an anti-Merlin who holds anti-magical powers, not by spells and incantations, but by something deeper and more fundamental. Science does not need spells and incantations because it relates to the natural world and humans in a way that almost goes beyond I-It; it relates to even the human as if it were inanimate. In both cases, the power hinges on a relationship, and the power is epiphenomenal to that relationship.

If this is a problem, what all is to be done? Let me say what is not to be done. What is not to be done is to engineer a programme to enlist people in an I-Thou ideology. Why not? 'I-Thou ideology' is a contradiction in terms. The standard response of starting a political programme treats society as raw material to be transformed according to one's vision—and I am not just disputing the specific content of some visions, but saying that's the wrong way to start. Many of the obvious ways of 'making a difference' that present themselves to the modern mind work through an I-It relationship, calculating how to obtain a response from people, and are therefore tainted from the start. Does that mean that nothing is to be done? No; there are many things, from a walk of faith as transforming communion with God, to learning to relate to God, people, and the entire cosmos in I-Thou fashion, to using forms of persuasion that appeal to a whole person acting in freedom. But that is another thesis to explore.

Epilogue, 2010

I look back at this piece six years later, and see both real strengths and things I wince at. This was one of my first major works after being chrismated Orthodox, and while I am enthusiastic for Orthodoxy there are misunderstandings. My focus on cosmology is just one step away from Western, and in particular scientific, roots, and such pressure to get cosmology right is not found in any good Orthodox theologian I know. That was one of several areas where I had a pretty Western way of trying to be Orthodox, and I do not blame people who raise eyebrows at my heavy use of the existentialist distinction between I-Thou and I-It relationships. And the amount of time and energy spent discussing magic almost deterred me from posting it on my website; for that reason alone, I spent time debating whether the piece was fit for human consumption. And it is possibly theology in the academic sense, but not so much the Orthodox sense: lots of ideas, cleverly put together, with little invitation to worship.

But for all this, I am still posting it. The basic points it raises, and much of the terrain, are interesting. There may be fewer true believers among scientists who still chase an artificial intelligence pot o' gold, but it remains an element of the popular imagination and belief even as people's interests turn more and more to finding a magic sword that will slice through society's Gordian knots—which is to say that there may be something relevant in this thesis besides the artificial intelligence critique.

I am posting it because I believe it is interesting and adds something to the conversation. I am also posting it in the hope that it might serve as a sort of gateway drug to some of my more recent works, and provide a contrast: this is how I approached theology just after being received into Holy Orthodoxy, and other works show what I would present as theology having had more time to steep in Orthodoxy, such as The Arena.

I pray that God will bless you.

Bibliography

Augustine, In Euangelium Ioannis Tractatus, in Nicene and Post-Nicene Fathers, Series I, Volume VII, Edinburgh: T & T Clarke, 1888.

Bianchi, Massimo Luigi, Signatum Rerum: Segni, Magia e Conoscenza da Paracelso a Leibniz, Edizioni dell'Ateneo, 1987.

Buber, Martin, Ich und Du, in Werke, Erster Band: Schriften zur Philosophie, Heidelberg: Kösel-Verlag, 1962, 79-170.

Carroll, Lewis, Alice's Adventures in Wonderland, Cambridge: Candlewick Press, 2003.

Dixon, Thomas, 'Theology, Anti-Theology and Atheology: From Christian Passions to Secular Emotions,' in Modern Theology, Vol 15, No 3, Oxford: Blackwell 1999, 297-330.

Dreyfus, Hubert L., What Computers Still Can't Do: A Critique of Artificial Reason, London: MIT Press, 1992.

Edelman, Gerald, Bright Air, Brilliant Fire, New York: BasicBooks, 1992.

Fodor, Jerry, In Critical Condition: Polemical Essays on Cognitive Science and the Philosophy of Mind, London: MIT Press, 1998.

Foerst, Anne, 'Cog, a Humanoid Robot, and the Question of the Image of God,' in Zygon 33, no. 1, 1998, 91-111.

Gibson, William, Neuromancer, New York: Ace, 2003.

Harman, Gilbert, 'Some Philosophical Issues in Cognitive Science: Qualia, Intentionality, and the Mind-Body Problem,' in Posner 1989, pp. 831-848.

Hebb, D.O. Organization of Behavior: A Neuropsychological Theory, New York: Wiley, 1949.

Johnson, Paul, Intellectuals, New York: Perennial, 1990.

Layton, Bentley, The Gnostic Scriptures: Ancient Wisdom for the New Age, London: Doubleday, 1987.

Lee, Philip J., Against the Protestant Gnostics, New York: Oxford University Press, 1987.

VanLehn, Kurt, 'Problem Solving and Cognitive Skill Acquisition,' in Posner 1989, pp. 527-580.

Leibniz, Gottfried Wilhelm, Freiherr von, Ars Combinatoria, Francofurti: Henri Christopher Cröckerum, 1690.

Lewis, C.S., The Abolition of Man, Oxford: Oxford University Press 1950-6.

Lewis, C.S., That Hideous Strength, London: MacMillan, 1965.

Lewis, C.S., The Chronicles of Narnia, London: Harper Collins, 2001.

Adler, Margot, Drawing Down the Moon: Witches, Druids, Goddess Worshippers and Other Pagans in America Today (Revised and Expanded Edition), Boston: Beacon Press, 1986.

Maximus Confessor, Capita Gnosticae (Capita Theologiae et OEconomiae), in Patrologiae Graeca 90: Maximus Confessor, Tome I, Paris: Migne, 1860, 1083-1462.

Maximus Confessor; Berthold, George (tr.), Maximus Confessor: Selected Writings, New York: Paulist Press, 1985.

Maximus Confessor, Mystagogia, as published at Thesaurus Linguae Graecae, http://stephanus.tlg.uci.edu/inst/browser?uid=&lang=eng&work=2892049&context=21&rawescs=N&printable=N&betalink=Y&filepos=0&outline=N&GreekFont=Unicode. Citations from the Mystagogia will be referenced by chapter and line number as referenced by Thesaurus Linguae Graecae.

Midgley, Mary, Science as Salvation: A Modern Myth and Its Meaning, London: Routledge, 1992.

More, Thomas, Thomas More: Utopia, Digitale Rekonstruktion (online scan of 1516 Latin version), http://www.ub.uni-bielefeld.de/diglib/more/utopia/, as seen on 2 June 2004.

Norman, Donald, The Invisible Computer, London: MIT Press, 1998.

Norman, Donald, Things That Make Us Smart, Cambridge: Perseus 1994.

Von Neumann, John, The Computer and the Brain, London: Yale University Press, 1958.

Polanyi, Michael, Personal Knowledge, Chicago: University of Chicago Press, 1974.

Posner, Michael I. (ed.), Foundations of Cognitive Science, London: MIT, 1989.

Pseudo-Dionysius; Luibheid, Colm (tr.), Pseudo-Dionysius: The Complete Works, New York: Paulist Press, 1987.

Puddefoot, John, God and the Mind Machine: Computers, Artificial Intelligence and the Human Soul, London: SPCK, 1996.

Read, John, 'Alchimia e magia e la ''separazione delle due vie'',' in Cesare Vasoli (ed.), Magia e scienza nella civiltà umanistica, Bologna: Società editrice il Mulino, 1976, 83-108.

Sacks, Oliver, The Man who Mistook his Wife for a Hat, Basingstoke: Picador, 1985.

Searle, John, Minds, Brains, and Science, London: British Broadcasting Corporation, 1984.

Searle, John, The Mystery of Consciousness, London: Granta Books, 1997.

Shakespeare, William, The Merchant of Venice, as seen on the Project Gutenberg archive at http://www.gutenberg.net/etext97/1ws1810.txt on 15 June 2004.

Skinner, B. F., Walden Two, New York: Macmillan, 1948.

Thomas, Keith, Religion and the Decline of Magic: Studies in Popular Beliefs in Sixteenth and Seventeenth Century England, Letchworth: Weidenfeld and Nicolson, 1971.

Turing, Alan M., 'Computing Machinery and Intelligence,' in Mind 49, 1950, pp. 433-60, as seen at http://cogprints.ecs.soton.ac.uk/archive/00000499/00/turing.html on 25 Feb 04.

Watts, Fraser, 'Artificial Intelligence' in Psychology and Theology, Aldercroft: Ashgate, 2002.

Webster, Charles, From Paracelsus to Newton: Magic and the Making of Modern Science, Cambridge: Cambridge University Press, 1982.

Yates, Frances A., The Occult Philosophy in the Elizabethan Age, London: Routledge, 1979.

Yates, Frances A., Selected Works, Volume III: The Art of Memory, London: Routledge, 1966, as reprinted 1999.

Footnotes

[1] These neural nets are modelled after biological neural nets but are organised differently and seem to take the concept of a neuron on something of a tangent from its organisation and function in a natural brain, be it insect or human.

[2] Cog, http://www.ai.mit.edu/projects/humanoid-robotics-group/cog/images/cog-rod-slinky.gif, as seen on 11 June 2004 (enlarged).

[3] 2002, 50-1.

[4] Searle 1998, Edelman 1992, etc., including some of Dreyfus 1992. Edelman lists Jerome Brunner, Alan Gauld, Claes von Hofsten, George Lakoff, Ronald Langaker, Ruth Garrett Millikan, Hilary Putnam, John Searle, and Benny Shannon as convergent members of a realist camp (1992, 220).

[5] Lee 1987, 6.

[6] 'Intentionality' is a philosophy of mind term for the 'about-ness' of mental states.

[7] By 'teleology' I understand in a somewhat inclusive sense that branch of theology and philosophy that deals with goals, ends, and ultimate meanings.

[8] 'Cognitive faculty' is a philosophy of mind conception of a feature of the human mind that operates on mental representations to perform a specific function.

[9] The spiritual 'intellect' is a patristic concept that embraces thought, conceived on different terms from 'cognitive science,' and is inseparably the place where a person meets God. Augustine locates the image of God in the intellect (In Euangelium Ioannis Tractatus, III.4), and compares the intellect to Christ as illuminating both itself and everything else (In Euangelium Ioannis Tractatus, XLVII, 3).

[10] Watts 2002, 57-8. See the World Transhumanist Association website at http://www.transhumanist.org for further information on transhumanism.

[11] C.S. Lewis critiques this project in The Abolition of Man (1943) and That Hideous Strength (1965). He does not address the question of whether this is a possible goal, but argues that it is not a desirable goal: the glorious future it heralds is in fact a horror compared to the present it so disparages.

[12] Encyclopedia Mythica, 'Rabbi Loeb,' http://www.pantheon.org/articles/r/rabbi_loeb.html, as seen on 26 Mar 04.

[13] Foerst 1998, 109 also brings up this archetypal tendency in her conclusion.

[14] United States Postal Service 2003 annual report, http://www.usps.com/history/anrpt03/html/realkind.htm, as seen on 6 May 2004.

[15] Cog, as seen on http://www.ai.mit.edu/projects/humanoid-robotics-group/cog/images/scaz-cog.gif, on 6 May 2004 (enlarged).

[16] 2002, 57.

[17] Cog, 'Theory of Mind for a Humanoid Robots,' http://www.ai.mit.edu/projects/humanoid-robotics/group/cog/Abstracts2000/scaz.pdf, as seen on 6 May 2004.

[18] Adler 1986, 319-321.

[19] 1992, 161-4.

[20] Utopias are often a satire more than a prescription literally conceived, but they are also far more prescriptive than one would gather from a simple statement that they are satire.

[21] Turing 1950.

[22] VanLehn 1989, in Posner 1989, 532.

[23] Ibid. in Posner 1989, 534.

[24] 1998, 101.

[25] 1992, 159.

[26] Foerst 1998, 103.

[27] Turing 1950.

[28] Hebb 1949, as quoted in the Linux 'fortune' program.

[29] Nominalism said that general categories are something in the mind drawn from real things, and not something things themselves arise from. This has profoundly shaped the course of Western culture.

[30] Lewis 1943, 46.

[31] Yates 1966, 380-382.

[32] Without submitting to the Church in the usual way, the magus is equal to its highest members (Webster 1982, 57).

[33] George Mason University's Modern & Classical Languages, 'Pico della Mirandola: Oratio de hominis dignitate,' http://www.gmu.edu/departments/fld/CLASSICS/mirandola.oratio.html, as seen on 18 May 2004. See Poim 27-9, CH7 1-2 in Layton 1987 for texts reflecting an understanding of the world as evil and associated contempt for the hoi polloi.

[36] Lewis 1943, 46.

[37] Ibid., 33-35.

[38] Ibid., 23-24.

[39] Ibid., 295-299.

[40] Ibid.

[41] See Midgley, 1992, 80.

[42] 1990, 195, 197-224, 337-41.

[43] 1950.

[44] References will be to the online Greek version at Thesaurus Linguae Graecae, http://stephanus.tlg.uci.edu/inst/wsearch?wtitle=2892+049&uid=&GreekFont=Unicode&mode=c_search, according to chapter and line. Unless otherwise specified, references in this section will be to the Mystagogia.

[45] 5.1-10. 'Intellect' in particular is used as a scholarly rendering of the Greek 'nous,' and is not equivalent to the layman's use of 'intellect,' particularly not as cognate to 'intelligence.' The 'reason' ('logos') is closer to today's use of the term, but not as close as you might think. This basic conceptualisation is common to other patristic and medieval authors, such as Augustine.

[46] 1992, 239.

[47] 'Punctuated equilibrium' is a variant on Darwin's theory of (gradual) evolution. It tries to retain an essentially Darwinian mechanism whilst acknowledging a fossil record and other evidence which indicate long periods of stability interrupted by the abrupt appearance and disappearance of life forms. It is called 'punk eek' by the irreverent.

[48] I.82. Material from the Capita Gnosticae, not available in Thesaurus Linguae Graecae, will be referenced by century and chapter number, i.e. I.82 abbreviates Century I, Chapter 82.

[49] See Lewis 2001, 522.

[50] What we usually mean by 'cause' today: something which mechanically brings about its effect, as time and favourable conditions cause an acorn to grow into an oak.

[51] The 'final cause' is the goal something is progressing towards: thus a mature oak is the final cause of the acorn that would one day grow into it.

[52] As seen on the Project Gutenberg archive at http://www.gutenberg.net/etext97/1ws1810.txt on 15 June 2004.

[53] 1992, 147-165.

[54] 1998, 104-7.

An Abstract Art of Memory

Surgeon General's Warning

I'm leaving this work up as a spectacular example of my barking up the wrong tree.

Some centuries past, it was fashionable, as a sort of rite of passage, to create your "own art of memory," just as there were other times when it was fashionable to produce a new proof of a particular well-known theorem (the Pythagorean theorem). And this marks a spectacular effort to resurrect that dead fashion, even if I'm not sure it's learnable enough to be useful. It also represents, to my knowledge, the first art of memory specifically optimized to work gracefully with abstractions, a point on which I have found little competition.

It also falls entirely into Barlaam's domain: in one defining moment for the Orthodox Church, the champion of Orthodox hesychasm, St. Gregory Palamas, engaged the champion of the Renaissance man's secular learning, Barlaam, and the Orthodox Church decisively recognized that the hesychastic or silent tradition still living in the East was its norm, and that the Western book learning which puts logic behind the wheel has no place in living Orthodoxy.

I am leaving this up as an example of my being wrong, and as a point of hope that someone wrong may still be brought to saving grace.

Buy Profoundly Gifted Survival Guide on Amazon.

Abstract. Author briefly describes classic mnemotechnics, indicates a possible weakness in their ability to deal with abstractions, and suggests a parallel development of related principles designed to work well with abstractions.

Frances Yates opens The Art of Memory with a tale from ancient Greece[1]:

At a banquet given by a nobleman of Thessaly named Scopas, the poet Simonides of Ceos chanted a lyric poem in honor of his host but including a passage in praise of Castor and Pollux. Scopas meanly told the poet that he would only pay him half the sum agreed upon for the panegyric and that he must obtain the balance from the twin gods to whom he had devoted half the poem. A little later, a message was brought in to Simonides that two young men were waiting outside who wished to see him. He rose from the banquet and went out but could find no one. During his absence the roof of the banqueting hall fell in, crushing Scopas and all the guests beneath the ruins; the corpses were so mangled that the relatives who came to take them away for burial were unable to identify them. But Simonides remembered the places at which they had been sitting at the table and was therefore able to indicate to the relatives which were their dead.

On the strength of his spatial memory in this event, Simonides is credited with having created an art of memory: start with a building full of distinct places. If you want to remember something, imagine a striking image with a token of what you wish to remember at one of those places. To recall something naval, you might imagine a giant nail driven into your front door, with an anchor hanging from it; if you visualize this intensely, then when in your mind's eye you go through your house and come to your front door, the anchor will come to mind and you will remember the boats. Imagining a striking image on a remembered place is called pegging: when you do this, you fasten a piece of information on a given peg, and can pick it up later. Yates uses the terms art of memory and artificial memory as essentially interchangeable with mnemotechnics, and I will follow a similar usage.

There is a little more than this to the technique, and it allows people to do things that seem staggering to someone not familiar with the phenomenon[2]. Being able to look at a list of twenty items and recite it forwards and backwards is more than a party trick. The technique is phenomenally well-adapted to language acquisition. It is possible for a person skilled in the technique to learn to read a language in weeks. It is the foundation to some people learning an amount of folklore so that today they would be considered walking encyclopedias. This art of memory was an important part of the ancient Greek rhetorical tradition[3], drawn by medieval Europe into the cardinal virtue of wisdom[4], and then transformed into an occult art by the Renaissance[5]. Medieval and renaissance variations put the technique to vastly different use, and understood it to signify greatly different things, but outside of Lullism[6] and Ramism[7], the essential technique was the same.

In my own efforts to learn the classical form of the art of memory, I have noticed something curious. I'm better at remembering people's names, and I no longer need to write call numbers down when I go to the library. I was able, without difficulty, to deliver an hour-long speech from memory. Learning vocabulary for foreign languages has come much more quickly; it only took me about a month to learn to read the Latin Vulgate. My weaknesses in memory are not nearly so great as they were, and I know other people have been much better at the art than I am. At the same time, I've found one surprise, something different from the all-around better memory I suspected the art would give me. What is it? If there is a problem, it is most likely subtle: the system has obvious benefits. To tease it out, I'd like to recall a famous passage from Plato's Phaedrus[8]:

Socrates: At the Egyptian city of Naucratis, there was a famous old god, whose name was Theuth; the bird which is called the Ibis was sacred to him, and he was the inventor of many arts, such as arithmetic and calculation and geometry and astronomy and draughts and dice, but his great discovery was the use of letters. Now in those days Thamus was the king of the whole of Upper Egypt, which is in the district surrounding that great city which is called by the Hellenes Egyptian Thebes, and they call the god himself Ammon. To him came Theuth and showed his inventions, desiring that the other Egyptians might be allowed to have the benefit of them; he went through them, and Thamus inquired about their several uses, and praised some of them and censured others, as he approved or disapproved of them. There would be no use in repeating all that Thamus said to Theuth in praise or blame of the various arts. But when they came to letters, This, said Theuth, will make the Egyptians wiser and give them better memories; for this is the cure of forgetfulness and folly. Thamus replied: O most ingenious Theuth, he who has the gift of invention is not always the best judge of the utility or inutility of his own inventions to the users of them. And in this instance a paternal love of your own child has led you to say what is not the fact: for this invention of yours will create forgetfulness in the learners' souls, because they will not use their memories; they will trust to the external written characters. You have found a specific, not for memory but for reminiscence, and you give your disciples only the pretence of wisdom; they will be hearers of many things and will have learned nothing; they will appear to be omniscient and will generally know nothing; they will be tiresome, having the reputation of knowledge without the reality.

There is clear concern that writing is not what it appears, and it will endanger or destroy the knowledge people keep in memory; a case can be made that the phenomenon of Renaissance artificial memory as an occult practice occurred because only someone involved in the occult would have occasion to keep such memory after books were so easily available.

What kind of things might one wish to have in memory? Let me quote one classic example: the argument by which Cantor proved that there are more real numbers between 0 and 1 than there are counting numbers (1, 2, 3...). I paraphrase the basic argument here:

  1. Two sets are said to have the same number of elements if you can always pair them up, with nothing left over on either side. If one set always has something left over after the matching up, it has more elements.
  2. Suppose, for the sake of argument, that there are at least as many counting numbers as real numbers between 0 and 1. Then you can make a list of the numbers between 0 and 1:
    1:  .012343289889...
    2:  .328932198323...
    3:  .438724328743...
    4:  .988733287923...
    5:  .324432003442...
    6:  .213443765001...
    7:  .321010320030...
    8:  .323983213298...
    9:  .982133982198...
    10: .321932198904...
    11: .000321321278...
    12: .032103217832...
    
  3. Now, take the first decimal place of the first number, the second of the second number, and so on and so forth, and make them into a number:
    1:  .012343289889...
    2:  .328932198323...
    3:  .438724328743...
    4:  .988733287923...
    5:  .324432003442...
    6:  .213443765001...
    7:  .321010320030...
    8:  .323983213298...
    9:  .982133982198...
    10: .321932198904...
    11: .000321321278...
    12: .032103217832...
    

    Result:

    .028733312972...
    
  4. Now make another number between 0 and 1 that is different at every decimal place from the number just computed:
    .139844423083...
    
  5. Now, remember that we assumed that the list has all the numbers between 0 and 1: every single one, without exception. Therefore, if this assumption is true, then the latter number we constructed must be on the list. But where? The number can't be the first number on the list, because it was constructed to be different at the first decimal place from the first number on the list. It can't be the second number on the list, because it was constructed to be different at the second decimal place from the second number on the list. Nor can it be the third, fourth, fifth... in fact, it can't be anywhere on the list because it was constructed to be different. So we have one number left over. (Can we put that number on the list? Certainly, but the argument shows that the new list will leave out another number.)
  6. The list of numbers between 0 and 1 doesn't have all the numbers between 0 and 1.
  7. We have a contradiction.
  8. We started by assuming that you can make a list that contains all the numbers between 0 and 1, but there's a contradiction: any list leaves numbers left over. Therefore, our assumption must be wrong. Therefore, there must be too many real numbers between 0 and 1 to assign a separate counting number to each of them.
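For readers who hold an algorithm more easily than a prose proof, here is a minimal sketch of the diagonal construction in Python. The function names and the short sample list are my own illustration rather than anything in Cantor; a finite list can only illustrate the construction, while the argument applies it to any supposedly complete infinite list.

    # A sketch of the diagonal construction (names and sample list are mine).
    # Each supposed real number between 0 and 1 is given by its decimal digits.
    listed_numbers = [
        "012343289889",
        "328932198323",
        "438724328743",
        "988733287923",
    ]

    def diagonal_digits(listing):
        # First digit of the first number, second digit of the second, and so on.
        return [int(listing[i][i]) for i in range(len(listing))]

    def missing_number_digits(listing):
        # Change every diagonal digit (here: add 1, wrapping 9 round to 0), so the
        # result differs from the i-th listed number at the i-th decimal place.
        return [(d + 1) % 10 for d in diagonal_digits(listing)]

    print("0." + "".join(str(d) for d in missing_number_digits(listed_numbers)))
    # Prints 0.1398..., a number that cannot appear anywhere on the list.

The same add-one rule is what produced .139844423083... from .028733312972... in the argument above.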

Let's say we want to commit this argument to memory. A mathematician with artificial memory might say, "That's easy! You just imagine a chessboard with distorted mirrors along its diagonal." That is indeed a good image if you are a mathematician who already understands the concept. If you find the argument hard to follow, it is at best a difficult thing to store via the artificial memory. Even if it can be done, storing this argument in artificial memory is probably much more trouble than learning it as a mathematician would.

Let me repeat the quotation from the Phaedrus, while changing a few words:

Jefferson: At the Greek region of Thessaly, there was a famous old poet, whose name was Simonides; totems seen with the inner eye were devoted to him, and he was the inventor of a great art, greater than arithmetic and calculation and geometry and astronomy and draughts. Now in those days Rousseau was a sage revered throughout the West, and they called the god himself Rationis. To him came Simonides and showed his invention, desiring that the rest of the world might be allowed to have the benefit of it; he went through it, and Rousseau inquired about its several uses, and praised some of them and censured others, as he approved or disapproved of them. There would be no use in repeating all that Rousseau said to Simonides in praise or blame of various facets. But when they came to inner writing, This, said Simonides, will make the West wiser and give it better memory; for this is the cure of forgetfulness and of folly. Rousseau replied: O most ingenious Simonides, he who has the gift of invention is not always the best judge of utility or inutility of his own inventions to the users of them. And in this instance a paternal love of your own child has led you to say what is not the fact; for this invention will create forgetfulness in the learner's souls, because they will not remember abstract things; they will trust to mere mnemonic symbols and not remember things of depth. You have found a specific, not for memory but for reminiscence, and you give your disciples only the pretence of wisdom; they will be hearers of many things and will have learned nothing; they will appear to be omniscient and will generally know nothing; they will be tiresome, having the reputation and outer shell of knowledge without the reality of deep thought.

It is clear that if we follow Thomas Aquinas's instructions on memory to visualize a woman for wisdom, we may recall wisdom. What is less clear is that this inner writing particularly helps an abstract recollection of wisdom. It may be able to recall an understanding of wisdom acquired without the help of artificial memory, but this art which allows at times stunning performance in the memorization of concrete data is of more debatable merit in learning abstraction. It has been my own experience that abstractions can be forced through the gate of concreteness in artificial memory, but it is like forcing a sponge through a funnel. While I admittedly don't have a medieval practitioner's inner vocabulary to deal with abstractions, using the artificial memory to deal with abstractions seems awkward in much the same way that storing individual letters through artificial memory[9] is awkward. The standard artificial memory is a tool for being reminded of abstractions, but not for remembering them. It offers the abstract thinker a seductive way to recall a great many concrete facts instead of learning deep thought.

The overall impression I receive of the artificial memory is not so much a failed attempt at a tool to store abstractions as a successful attempt at a concrete tool which was not intended to store abstractions. It is my belief that some of its principles, in modified form, suggest the beginnings of an art of memory well-fitted to dealing with abstractions. The mature form of such an endeavor will not simply be an abstract mirror image of a concrete artificial memory, but it is appropriate enough for the first steps I might hazard.

Consider the following four paragraphs:

  1. Physics is like music. Both owe something of substance to the Pythagoreans. Both are aesthetic endeavors that in some way represent nature in highly abstracted form. Both are interested in mechanical waves. Many good physicists are closet musicians, and all musical instruments operate on physical principle.
  2. Physics is like literature. Both are written in books that vary from moderately easy to very hard. Both deal with a distinction between action and what is acted on, be it plot and character or force and particle, and both allow complex entities to be built of simpler ones. Practitioners of both want to be thought of as insightful people who understand reality.
  3. Physics is like an adventure. Both involve a venture into the unknown, where the protagonist tries to discover what is happening. Both have a mystique that exists despite most people's fear to experience such things themselves. To succeed in either, one is expected to have impressive strengths.
  4. Physics is like magic. Both flourished in the West, at the same time, out of the same desire: a desire to understand nature so as to control it. Both attract abstract thinkers, are practiced in part through the manipulation of arcane symbols, and may be found in the same person, from Newton to Feynman[10]. Magical theory claims matter to be composed of earth, air, fire, and water, while physics finds matter to be composed of solid, liquid, gas, and plasma.

What is the merit of these comparisons? They recall a story in which a literature professor asked Feynman if he thought physics was like literature. Feynman led him on with an elaborate analogy of how physics was like literature, and then said, "But it seems to me you can make such an analogy between any two subjects, so I don't find such analogies helpful." He observed that one can make a reasonably compelling analogy even if there's no philosophically substantial connection.

The laws of logic and philosophy are not the laws of memory. What is a liability to Feynman's implicit philosophical method is a strength to memory. The philosophical merit of the above comparisons is debatable. The benefit to memory is different: it appears to me that this is an abstract analogue to pegging. A connection, real or spurious, aids the memory even if it doesn't aid a rigorous philosophical understanding. In pegging, it is considered an advantage to visualize a ludicrously illogical scene: it is much more memorable than something routine and sensible. Early psychological experiments in memory involved memorization of nonsense syllables. The experimenters intentionally chose meaningless material to memorize. Why? Well, if the subject perceived meaning, that would provide a spurious way for the subject to remember the data, and so proper Ebbinghausian memory study meant investigating how people remember material which was as meaningless as possible. Without pausing to develop an obvious critique, I'd suggest that this spurious route to memory is of great interest to us. Meaningful data is more memorable than meaningless, and this is true whether the meaning perceived is philosophically sound or obviously contrived. I might suggest that interesting meaning provides a direct abstract parallel to the striking, special-effect appearance of effective images in pegging.

I intentionally chose not to compare physics to astronomy, chemistry, computer science, engineering, mathematics, metaphysics, or statistics, because I wanted to show how a different concept can be used to establish connections to a new one. Or, more properly, different concepts. Having a new concept connected to three very different ones will capture different facets than one anchor point, and possibly cancel out some of each other's biases. A multiplicity of perspectives lends balance and depth. This isn't to say similar concepts can't be used, only that searching for a partial or full isomorphism to a known concept is easier than encoding from scratch. If memorable connections can be made between physics and adventure, music, English, and magic, what might be obtained from comparison with mathematics, chemistry, and engineering? A comparison between physics and these last three disciplines is left as an exercise to the reader, and one that may be quite fruitful.

Is this a desirable way to remember things? I would make two different comments on this score. First, when learning a Latin word, I would first peg it to an English word with a vivid image, then later recall the image and reconstruct the English equivalent, then recall the image and remember the English, then the image would drop out so I would directly remember the English, and finally the English word would drop out too, leaving me with a Latin usage often different from the English equivalent used. Artificial memory does not circumvent natural memory; instead it streamlines the process and short-circuits many of the disruptive trips to the dictionary. Pegs vanish with use; they are not an alternate final product but a more efficient route for concepts more frequently used, and a cache of reference material. Therefore, even if remembered comparisons between physics and adventure/music/English/magic fall short of how one would desire to understand the concept, a similar flattening of the learning curve is possible. Second, I would say that even if you fail to peg something, you may succeed. How? In trying to peg a person's name, I hold that name and face in an intense focus—quite the opposite of how I once reacted: "I'll never remember that," a belief which chased other people's names out of my mind in seconds. That focus is relevant to memory, and it has happened more than once that I completely failed to create a peg, but my failure used enough mental energy that I still remembered. If you search through your memory and fail to make even forced connections between a new concept and existing concepts, the mental focus given to the concept will leave you much better off than if you had thrown up your hands and thought the self-fulfilling prophecy: "I will never remember that!"

Certain kinds of emotional intelligence are part of the discipline. Learning to cultivate presence has to do with an emotional side, and I have written elsewhere about activities that can help to cultivate such presence[11]. We learn material better if we are interested in it; therefore consciously cultivating an interest in the material and seeing how it can be fascinating is another edge. Cultivating and guarding your inner emotional state can have substantial impact on memory and learning abstractions. Much of it has to do with keeping a state of presence. Shutting out distractions is one obvious way to do this; another, perhaps less obvious, is to avoid cramming and simply ploughing through material unless it's something you don't really need to learn. Why?

If there is a sprinkler that disperses a fine mist, it will slowly moisten the ground. What if there's a high-volume sprinkler that shoots big, heavy drops of water high up in the air? With all that water pounding on the ground, it looks like the ground is quickly saturated. The appearance is deceptive. What has happened is that the heavy drops have pounded the surface of the ground into a beaten shield, so there really is water rolling off of a very wet surface, but go an inch down and the soil is as parched as ever. This sort of thing happens in studying, when people think that the more force they use, the better the results. Up to a point, definitely, and perseverance counts—but I have found myself to learn much more when I paid attention to my mental and emotional state and backed off if I sensed that I was leaving that optimal zone. I learn something if I say "This is important, so I'll plough through as much as I can as quickly as I can," but it's not as much, and keeping on task needs to be balanced with getting off task when that is helpful.

Consider the following problem:[12]

In the inns of certain Himalayan villages is practiced a most civilized and refined tea ceremony. The ceremony involves a host and exactly two guests, neither more nor less. When his guests have arrived and have seated themselves at his table, the host performs five services for them. These services are listed in order of the nobility which the Himalayans attribute to them: (1) Stoking the Fire, (2) Fanning the Flames, (3) Passing the Rice Cakes, (4) Pouring the Tea, and (5) Reciting Poetry. During the ceremony, any of those present may ask another, "Honored Sir, may I perform this onerous task for you?" However, a person may request of another only the least noble of the tasks which the other is performing. Further, if a person is performing any tasks, then he may not request a task which is nobler than the least noble task he is already performing. Custom requires that by the time the tea ceremony is over, all the tasks will have been transferred from the host to the most senior of the guests. How may this be accomplished?

Incomprehensible appearances notwithstanding, this is a very simple problem, the Towers of Hanoi. Someone who has learned the Towers of Hanoi may still solve the tea ceremony formulation as slowly as someone who's never seen any form of the problem[13]. A failure to recognize isomorphisms provides one of the more interesting passages in Feynman's memoirs[14]:

I often liked to play tricks on people when I was at MIT. One time, in a mechanical drawing class, some joker picked up a French curve (a piece of plastic for drawing smooth curves—a curly, funny-looking thing) and said, "I wonder if the curves on this thing have some special formula?"

I thought for a moment and said, "Sure they do. The curves are very special curves. Lemme show ya," and I picked up my French curve and began to turn it slowly. "The French curve is made so that at the lowest point on each curve, no matter how you turn it, the tangent is horizontal."

All the guys in the class were holding their French curve up at different angles, holding their pencil up to it at the lowest point and laying it along, and discovering that, sure enough, the tangent is horizontal. They were all excited by this "discovery"—even though they had already gone through a certain amount of calculus and had already "learned" that the derivative (tangent) of the minimum (lowest point) of any curve is zero (horizontal). They didn't put two and two together. They didn't even know what they "knew."

What is going on here is that Feynman perceives an isomorphism where the others do not. There may be a natural bent to or away from perceiving isomorphisms, and cognitive science suggests most people have a bent away. The finding, as best I can tell, is not so much that people can't look for isomorphisms, as that they don't. The practice of looking for and finding isomorphisms has something to give, because something can be treated as already known instead of learned from scratch. I might wonder in passing if the rapid learning and interdisciplinary proclivities of the ultra-high-IQ stem in part from the perception and application of isomorphisms, which may reduce the amount of material actually learned in picking up a new skill.
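To make the isomorphism behind the tea ceremony above explicit, here is a minimal sketch of the standard Towers of Hanoi recursion; the mapping in the comments is my own gloss, not something spelled out in the problem. On the usual reading, the least noble task corresponds to the smallest disk, each person to a peg, and 'B asks A for task n' to moving disk n from A to B.

    # Towers of Hanoi recursion, glossed as the tea ceremony (my own mapping):
    #   disks 1..5  <->  the five tasks, task 1 being the least noble
    #   pegs        <->  host, junior guest, senior guest
    #   move disk n from A to B  <->  B asks A for task n
    def hanoi(n, source, spare, target, moves):
        # Move n disks from source to target, using spare as the intermediate peg.
        if n == 0:
            return
        hanoi(n - 1, source, target, spare, moves)
        moves.append((n, source, target))
        hanoi(n - 1, spare, source, target, moves)

    moves = []
    hanoi(5, "host", "junior guest", "senior guest", moves)
    print(len(moves), "requests")          # 31, the minimum for five tasks
    for task, giver, taker in moves[:3]:
        print(f"{taker} asks {giver} for task {task}")

Someone who recognizes the mapping gets the thirty-one-step solution for free; someone who does not must rediscover it from scratch, which is exactly the point of the passage.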

The classical art of memory derives strength from a mind that works visually; a background in abstract thought will help one learn abstractions. It has been thought[15] that people can more effectively encode and remember material in a given domain if it's one they have worked with; I would suggest that this abstract pegging also creates a way to encode material with background from other domains. An elaborate, intense, and distinct encoding is believed to help recall[16]. Heightening of memorable features, in what is striking or humorous[17], should help, and memetics seems likely to contain jewels in its accounts of how a meme makes itself striking.

Someone familiar with artificial memory may ask, "What about places (loci)?" Part of the art of memory, be it ancient, medieval, or renaissance, involved having an inner building of sorts that one could imagine going through in order and recalling items. I have two basic comments here. First, a connection could use traditional artificial memory techniques as an index: imagine a muscular man with a tremendous physique running onto the scene, grabbing an adventurer's sword, shield, and pack, sitting down at a pipe organ which has a large illuminated manuscript on top, and clumsily playing music until a giant gold ring engraved with fiery letters falls on the scene and turns it to dust. You have pegged physics to adventure, music, literature, and magic; if you wanted to reconstruct an understanding of physics, you could see what it was pegged to, and then try to recall the given similarities. Second and more deeply, I believe that a person's entire edifice of previously acquired concepts may serve as an immense memory palace. It is not spatial in the traditional sense, and I am not here concerned with the senses in which it might be considered a topological space, but it is a deeply qualitative place, and accessible if one uses traditional artificial memory for an index: these adaptations are intended to expand the repertoire of what disciplined artificial memory can do, not abolish the traditional discipline.

Symbols are the last unexplored facet. Earlier I suggested that a chessboard with mirrors along its diagonal may be a good token to represent Cantor's diagonal argument, but it does not bring memory of the whole proof. Now I would like to give the other side: an abstraction may not be fully captured by a symbol, but a good symbol helps. A sign/symbol distinction has been made, where a sign represents while a symbol represents and embodies. In this sense I suggest that tokens be as symbolic as possible.

Why use a token? Aren't the deepest thoughts beyond words? Yes, but recall depends on being able to encode. I have found my deepest thoughts not to be worded and often difficult to translate to words, but I have also found that I lose them if I cannot put them in words. As such, thinking and choosing a good, mentally manipulable symbol for an abstraction is both difficult and desirable. My own discipline of formation, mathematics, chooses names for variables like 'x', 'y', and 'z' which software engineers are taught not to use because they impede comprehension: a computer program with variable names like 'x' and 'y' is harder to understand or even write to completion than one with names like 'trucks_remaining' or 'customers_last_name'. The authors of Design Patterns[18] comment that naming a pattern is one of the hardest parts of writing it down. The art of creating a manipulable symbol for an abstraction is hard, but worth the trouble. This, too, may help you to probe an abstraction in a way that will aid recall.
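A small, made-up illustration of that naming point, borrowing the 'trucks_remaining' example just mentioned: both functions below compute the same difference, but only the second gives the reader, and the writer's memory, a symbol to hold onto.

    def f(x, y):
        # Opaque names: nothing here says what x and y stand for.
        return x - y

    def trucks_remaining(total_trucks, trucks_dispatched):
        # The names carry the abstraction, so the code is easier to recall and reread.
        return total_trucks - trucks_dispatched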

To test these principles, I decided to spend a week[19] seeing what I could learn of a physics text[20] and Kant's Critique of Pure Reason[21]. I considered myself to have understood a portion of the physics text after being able to solve the last of the list of questions. I had originally decided to see how quickly I could absorb material. After working through 10% of the physics text in one day, I decided to shift emphasis and pursue depth more than speed. In reading Kant, the tendency to barely grasp a difficult concept forgotten in grasping the next difficult concept gave way, with artificial memory, to understanding the concepts better and grasping them in a way that had a more permanent effect. I read through page 108 of 607 in the physics text and 144 of 669 in Kant's Critique of Pure Reason.

The first day's physics ventures saw two interesting ways of storing concepts, and one comment worth mentioning. There is a classic skit, in which two rescuers are performing two-person CPR on a patient. Then one of the rescuers says, "I'm getting tired. Let's switch," and the patient gets up, the tired rescuer lies down, and the other two perform CPR on him. This was used to store the interchangeability of point of effort, point of resistance, and fulcrum on a lever, based on an isomorphism to the skit's humor element.

The rule given later, that along any axis the sum of forces for a body in equilibrium is always zero, was symbolized by an image of a knife cutting a circle through the center: no matter what angle of cutting there was, the cut leaves two equal halves.

These both involved images, but the images differed from pegging images as a schematic diagram differs from a computer animated advertisement. They seemed a combination of an isomorphism and a symbol, and in both cases the power stemmed not only from the resultant image but the process of creation. The images functioned in a sense related to pegging, but most of the images so far developed have been abstract images unlike anything I've read about in historical or how-to discussion of the art of memory.
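The equilibrium rule mentioned above can also be checked numerically. The sketch below is mine, with made-up force components; it projects a balanced set of forces onto several arbitrary axes and confirms that the sum along each axis is zero, which is what the knife-through-the-center image was encoding.

    import math

    # Three made-up forces (x, y components) that balance one another exactly.
    forces = [(3.0, 4.0), (-3.0, 1.0), (0.0, -5.0)]

    def net_force_along(axis_angle, forces):
        # Project each force onto an axis at the given angle and sum the components.
        ux, uy = math.cos(axis_angle), math.sin(axis_angle)
        return sum(fx * ux + fy * uy for fx, fy in forces)

    # For a body in equilibrium the sum is (numerically) zero, whatever the axis.
    for angle in (0.0, 0.7, 1.3, 2.9):
        assert abs(net_force_along(angle, forces)) < 1e-9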

The following was logged that night. The problem referred to is a somewhat complex lever problem given in three parts:

In reviewing the day's thoughts at night, I recognized that the problems seem to admit a shortcut solution that does not rigorously apply the principles but obtains the correct answer: problem 12 on page 31 gives two weights and other information, and all three subproblems can be answered by assuming that there are two parts in the same ratio [as] the weights, and applying a little horse sense as to which goes where. It's a bit like general relativity, which condenses to "Everything changes by a factor of the square root of (1 - (v^2/c^2))." I am not sure whether this is a property of physics itself or a socially emergent property of problems used in physics texts.

I believe this suggests that I was interacting with the material deeply and quite probably in a fashion not anticipated by the authors.
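Since the original problem's figures are not reproduced here, the following sketch uses made-up weights to show the ratio shortcut in its standard torque-balance form: two weights balance when weight times distance is equal on both sides, so the bar divides in the inverse ratio of the weights, and the 'horse sense' is deciding which side gets which part.

    # Hypothetical lever: two weights on a bar of length 1, fulcrum between them.
    w1, w2 = 30.0, 20.0                   # made-up weights
    length = 1.0

    d1 = length * w2 / (w1 + w2)          # distance from w1 to the fulcrum
    d2 = length * w1 / (w1 + w2)          # distance from w2 to the fulcrum

    assert abs(w1 * d1 - w2 * d2) < 1e-9  # torques balance: 30 * 0.4 == 20 * 0.6
    print(d1, d2)                         # 0.4 and 0.6, the inverse ratio of the weights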

In reading Kant, I can't as easily say "I solved the last exercises in each section" and don't want to simply say, "I read these pages." I would like to demonstrate interaction with the material with excerpts from my log:

...I am now in the introduction to the second edition, and there are two images in reference to Kant's treatment of subjective and objective. One is of a disc which has been cut in half, sliced again along a perpendicular axis and brought together along the first axis so that the direction of the cut has been changed. The other is of a sphere being turned out by [topologically] compactifying R3 [Euclidean three-space] by the addition of a single point, and then shifting so the vast outside has become the cramped inside and the cramped inside has become the vast outside. Both images are inadequate to the text, indicating at best what sort of thing may be thought about in what sort of shift Kant tries to introduce, and I want to reread the last couple of pages. Closer to the mark is a story about three umpires who say, in turn, "I calls them as they are," "I calls them as I see them," and "They may be strikes, they may be balls, but they ain't nothing until I calls them!"


Having reread, I believe that the topological example is truer than I realized. I made it on almost superficial grounds, after reading a footnote which gave as an example the scientific progress that followed when Copernicus proposed that, rather than the observer being fixed and the heavens rotating, the heavens are fixed and the observer rotates. The deeper significance is this: prior accounts had apparently not given sufficient attention to subjective factors, treating subjective differences as practically unimportant—what mattered for investigation was the things in themselves. Thus the subjective was the unexamined inside of the sphere. Then, after the transformation, the objective became the unexaminable inside of the new sphere: we may investigate what is now outside, our subjective states and the appearances conformed to them, but things in themselves are more sealed off than our filters were before: before, we didn't look; after, we can't look. What is stated [in Kant] so far is a gross overextension of a profound observation.
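For readers who want the topological image pinned down, here is a standard point-set topology gloss of mine, not something in Kant or the log: adding a single point at infinity to Euclidean three-space yields the one-point compactification

\mathbb{R}^3 \cup \{\infty\} \cong S^3,

and an inversion such as x \mapsto x/|x|^2, with 0 and \infty exchanged, carries the unbounded region outside the unit sphere onto the bounded region inside it and vice versa; the vast outside and the cramped inside trade places.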

The passages below refer to pp. 68-70:

Kant's arguments that space is an a priori concept can be framed as showing that there exists a chicken-and-egg or bootstrapping gap between such concepts and sense data.

What is a chicken-and-egg/bootstrapping gap? In assisting with English as a Second Language instruction, I was faced with a difficulty in explanation. Given a certain background, it is possible for a person not to know something for which there is nonetheless a straightforward way of explaining it—perhaps a very long way of explaining, but one where it is obvious enough how to explain it in terms of communicable concepts. Then there is the case where there is no direct way to explain something: one example is how to explain to a small child what air is. One can point to water, wood, metal, stone, food, and a great many other things, but the same procedure may not yield understanding of air. It may be possible with a Zen-like cleverness to circumvent the gap—by saying, for example, that air is what presses on your skin on a windy day—but it is not as straightforward as even an involved and difficult explanation where you know how to use the other person's concepts to build the one you want.

In English as a Second Language instruction, this kind of gap is a significant phenomenon in dealing with students who have no beginning English knowledge, and in dealing with concepts that cannot obviously be demonstrated: 'sister' and 'woman', when both terms refer to an adult, differ in a way that is almost certainly understood in the student's native tongue but is nonetheless extremely difficult to explain. When I first made this musing, I envisioned a Zen-like solution. Koans immortalize incidents in which Zen masters bypassed chicken-and-egg gaps in trying to convey an enlightenment that cannot be straightforwardly explained, and they therefore show a powerful kind of communication. That is what I envisioned, but it is not how English is taught to speakers of other languages. What happens in ESL classes, and with younger children, is a gradual emergence that is difficult to account for in the terms of analytic philosophy—a straightforward explanation sounds like hand-waving and sloppy thinking—but with enough repetition, material is picked up. It may have something to do with a mechanism of learning outlined in Polanyi's Personal Knowledge, which talks about how, for example, swimmers learn from coaches to inhale more air and exhale less completely, so that their lungs act more as a flotation device than a non-swimmer's, even though neither swimmer nor coach is likely aware of what is going on at any conscious level. People pick things up through at least one route besides grasping a concept consciously synthesized from sense data.

Kant's proof that a given concept is a priori essentially consists of arguing that the concept cannot be synthesized from sense data through the obvious means of central route processing. He is probably right that the concepts he classifies as a priori, and presumably others as well, cannot simply be synthesized from sense data through central route processing. It does not follow that such a concept must be a priori: there are other possibilities, besides the route Kant investigates, by which one can acquire a belief. I do believe, though, that we come with some kind of innate or a priori knowledge: the difficulties experienced in visualizing four-dimensional objects suggest that our dealing with three-dimensional space is not simply the result of a completely amorphous central nervous system which we happen to condition to deal with three dimensions; there is something of substance, comparable in character to a psychologist's broader understanding of memory, that we are born with. An investigation of that would take me too far afield.


P. 87. "Now a thing in itself cannot be known throu[g]h mere relations; and we may therefore conclude that since outer science gives us nothing but mere relations, this sense can contain in its representation only the relation of an object to the subject, and not the inner properties of the object in itself."

There is a near-compatibility between this and realist philosophy of science. How?

Recall my observation about chicken-and-egg gaps and how they may be surmounted (here I think of Zen-like short-circuiting of the gap rather than the vaguely indicated gradual emergence of concepts which haven't been subject to a detailed and understood explanation). What goes on in a physics experiment? The truly famous ones since 1900—I think of the Millikan oil-drop experiment—include a very clever hack that tricks nature into revealing herself. People cannot, not even experimental physicists, grab a handful of household items and prove that electric charge is quantized.[22] Perhaps that was possible in Galileo's day, but a groundbreaking experiment now involves a brilliant, clever, unexpected trickery of nature that is isomorphic to a Zen short-circuiting of a chicken-and-egg gap, or a clever hack, and so on and so forth. Even a routine classroom experiment uses technology that is the fruit of this kind of resourcefulness. People do something they "shouldn't" be able to do. This is possibly how we might learn intuitions Kant classifies as a priori, and how experimental scientists cleverly circumvent the roadblock Kant describes here. It might be said that understanding this basic problem is prerequisite to a good realist philosophy of science.
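A rough sketch of the trick in the Millikan case, in its standard textbook form rather than anything taken from the text: a charged oil droplet between the plates of a capacitor can be held stationary when the upward electric force balances gravity,

qE = mg,

so that q = mg/E is computable from measurable quantities; over many droplets the measured charges cluster at whole-number multiples of a single value, roughly 1.6 × 10^-19 coulombs, which is what the quantization of charge amounts to experimentally.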

'Hack', in this context, refers to the programming cleverness described in Programming Pearls[23]. I analyzed that fundamental mode of problem solving and compared it with its counterpart in "Of Technology, Magic, and Channels"[24]. There are other observations and interactions with the text, but I believe these should adequately make the point.

I chose Kant because of his reputation as an impenetrable philosopher. With the aid of a good translation and these principles, I was at times surprised at how easy he was to read. By the end of the week, I had another surprise when I decided to reread George MacDonald's Phantastes[25], a work which I have greatly enjoyed. This time, my experience was different. I felt my mind working differently despite a high degree of mental fatigue. The evocative metaphors fell dead, and I found myself reading the text as I would read Kant, thinking in a manner deeply influenced by reading Kant, and in the end setting the book down because my mind had shifted into a mode quite different from the one that allows me to enjoy Phantastes. I was surprised at how deeply using abstract memory to read Kant had affected not only conscious recall of ideas but my very way of thinking.

I do not consider my recorded observations to be in any sense a rigorous experiment, but I believe the experience is suggestive enough to be worth a good experiment.

Here are twelve proposed principles, or rules of thumb, of abstract memory:

  1. Be wholly present. Want to know the material. Make it emotionally relevant and connected to something that concerns you. Don't take notes[26].
  2. Encode material in multiple ways. Some different ways to encode are: draw analogies to different abstractions, list distinctions from similar abstractions, paraphrase, search for isomorphisms, use the concepts, and create visual symbols.[27]
  3. At least in the beginning, mix a little bit of reading material with a lot of processing. Don't plough through anything you want to remember. Work on drawing a lot of mist in, not pounding with heavy drops that will create a beaten shield.
  4. Don't read out of a desire to finish reading a text. Read to draw the material through processed thought.
  5. Process in a way that is striking, stunning, novel, and counter-intuitive: in a word, memorable.
  6. Process material on as deep a level as you can.[28]
  7. Search for subtle distinctions between a concept under study and its near neighbors.
  8. Converse, interact with, and respond to the abstractions. What would you say if an acquaintance said that in a discussion? What questions would you ask? Write it down.
  9. Know how much mental energy you have, and choose battles wisely. Given a limited amount of energy, it is better to fully remember a smaller number of critical abstractions than to have diffuse knowledge of many random ideas.
  10. Guard your emotions. Be aware of what emotional states you learn well in, and put being in those states before passing your eyes over such-and-such many pages of reading material.
  11. Review material after study, seeking to find a different way of putting it.
  12. Metacogitate. Be your own coach.

Committing these principles to memory is left as an exercise to the reader.

What can I say to conclude this monograph? I can think of one or two brief addenda, such as the programmer's virtue of laziness[29], but in a very real sense I can't conclude now. I can, however, sketch a couple of critiques that may be of interest.

Jerry Mander[30] critiques the artificial unusuality of television and especially advertising, in a way that has direct bearing on traditional mnemotechnics. He suggests that giving otherwise uninteresting sensation a strained and artificial unusuality has an undesirable impact on how people perceive life outside of TV, and the angle of his critique is the main reason why I was hesitant to learn artificial memory. There may be room for a similar critique: making ridiculous comparisons to remember ideas may create a bad habit for someone who wishes to think rigorously.

There is also the cognitive critique that the search for isomorphisms will introduce unnoted distortion. One thinks of the person who says, "All the religions in the world say the same thing." There is a common and problematic tendency to be astute in perceiving substantial similarities among world religions and all but blind in perceiving even more substantial differences. That is why I suggest comparing with multiple and different familiar concepts, rather than one. I could give other thoughts about critiques, but I'm trying to explain an art of memory, not especially to defend it.

My intention here is not to settle all questions, but to open the biggest one and to suggest a direction of inquiry by which an emerging investigation may find a more powerful way to learn abstractions.[31]

Notes

    1. Yates, Frances A., The Art of Memory, hereafter AM, Chicago: University of Chicago Press, 1966, pp. 1-2. The text is a treasure trove on the development of mnemotechnics, also referred to here as artificial memory or the art of memory. Back
    2. Trudeau, Kevin, Kevin Trudeau's Mega Memory, hereafter KTMM, New York: William Morrow & Co., 1995 is one of several practical manuals for someone who thinks the classical art of memory interesting and would like to be able to use it. Back
    3. AM, pp. 27ff. Back
    4. Ibid., pp. 50ff. Back
    5. Ibid., pp. 129ff. Back
    6. Ibid., pp. 173ff. Back
    7. Ibid., pp. 231ff. Back
    8. Jowett, B., The Dialogues of Plato, Vol. III, hereafter DP, New York: National Library Company, pp. 442-443. Back
    9. AM, pp. 112ff describes one popularizer whose somewhat debased form advocated memorizing individual letters. This practice is awkward, much as it would be awkward to record the appearance of a room by taking a notepad and writing one letter on each sheet of paper. Back
    10. Feynman, Richard, Surely You're Joking, Mr. Feynman, hereafter SYJMF, New York: W. W. Norton & Company, 1985, pp. 338ff and other places in the text. He began his famous "Cargo Cult Science" address by talking about his occult diversions from scientific endeavors, and it is arguable that Newton's groundbreaking work in physics and optics was a scientific diversion from his main occult endeavors. I find it revealing that, even with Feynman's occult forays left in the book, the index shows curious lacunae for "ESP", "Hallucination", "New Age", "Reflexology", "Sensory deprivation", etc. Back
    11. 100 Ways of Kything, hereafter 1WK, by CJS Hayward, at CJSH.name/kything describes a number of activities which can embody presence and focus. Back
    12. Hayes, J.R., and Simon, H.A., "Understanding Written Problem Instructions", 1974, in Gregg, L.W. ed., Knowledge and Cognition, hereafter KC, Hillsdale: Erlbaum. Quoted in Posner, Michael I. ed., Foundations of Cognitive Science, hereafter FCS, Cambridge: The MIT Press, 1989, pp. 534-535. Back
    13. FCS, pp. 559-560. Back
    14. SYJMF, pp. 36-37. A more scholarly, if more pedestrian, mention of the phenomenon is provided in FCS, pp. 559-560. Back
    15. FCS, p. 690. The authors do not necessarily subscribe to this view, but acknowledge its influence among many in the field. Back
    16. Ibid., p. 691. Back
    17. "A Picture of Evil", hereafter APE, by CJS Hayward, at CJSH.name/evil/ provides an example of communication which is striking in this manner. Back
    18. Gamma, Erich; Helm, Richard; Johnson, Ralph; Vlissides, John, Design Patterns: Elements of Reusable Object-Oriented Software, hereafter DP, Reading: Addison-Wesley, p. 3. The book describes recurring good practices that are known to many expert practitioners, but often only on a tacit level—and tries to explain how this tacit knowledge can be made explicit. The book is commonly called 'GoF' ("Gang of Four") by software developers. Thanks to Ron Miles for locating the page number. Back
    19. February 9-15, 2002. Testing abstract artificial memory and honing this article were juggled with other responsibilities. Back
    20. Black, Newton Henry; Davis, Harvey Nathaniel, New Practical Physics: Fundamental Principles and Applications to Daily Life, hereafter NPP, New York: Macmillan, 1929. Given to me as a whimsical Christmas gift in 2001. At the time of beginning, I was significantly out of practice in both physics and mathematics. Back
    21. Smith, Norman Kemp tr., Immanuel Kant's Critique of Pure Reason, hereafter IKCPR, London: Macmillan, 1929. I had not previously read Kant. Back
    22. I knew that science doesn't deal in proof; experiments may corroborate a theory, but not establish it as something never to doubt again. I was thinking at that point along another dimension, to convey a quality of physics experiments today. Back
    23. Bentley, Jon Louis, Programming Pearls, hereafter PP, Reading: Addison-Wesley, 1986. Back
    24. Hayward, Jonathan, "Of Technology, Magic, and Channels", in Gift of Fire, June 2001, number 126. Back
    25. MacDonald, George, Phantastes, hereafter P, reprinted Grand Rapids: Wm. B. Eerdmans, 1999. Back
    26. Despite widespread endorsement of the practice, taking notes taxes limited mental energy that could better be used to understand the material, and it acts to the mind as a signal that "This can safely be forgotten." KTMM, very early on, makes a point of telling readers not to take notes (p. 5). The purpose of attending a lecture or reading a book is internal comprehension rather than external reference materials. Back
    27. Tulving, Endel; Craik, Fergus I.M., The Oxford Handbook of Memory, hereafter OHM, Oxford: Oxford University Press, 2000, refers on p. 98 to the picture superiority effect: pictures are better remembered because of dual coding, in which they are encoded both as image and as words and therefore have two chances at being stored rather than the one chance when material is presented only as words. Back
    28. OHM mentions on p. 94 the "levels of processing" view, a significant perspective which states that material is retained better the more deeply it is processed. Back
    29. Wall, Larry; Christiansen, Tom; Schwartz, Randal L., Programming Perl, Second Edition, hereafter PP2, Sebastopol: O'Reilly, pp. 217ff and other places throughout the book. Known by the affectionate nickname of "the camel book" among software developers. (This book is distinct from PP.) Back
    30. Mander, Jerry, Four Arguments for the Elimination of Television, hereafter FAET, New York: Morrow Quill, 1978, pp. 299ff. Back
    31. I would like to thank Robin Munn for giving me my first serious introduction to the art of memory, Linda Washington and Martin Harris for looking at my manuscript, William Struthers for valuable comments about source material, and Chris Tessone, Angela Zielinski, Kent and Theo Nebergall, and people from Wheaton College and International Christian Mensa for prayer. I would also like to thank those who read this article, apply it, perhaps extend it, and perhaps tell others about it. Back

Read more of Profoundly Gifted Survival Guide on Amazon!