Articles on Christian Faith and Other Things



From the "Collected Works of CJS Hayward" series

CJS Hayward

CJS Hayward Publications, Wheaton

 

©2000-2012 by CJS Hayward

Distribute freely.

Questions and contact information: CJSHayward.com/contact


The reader is invited to visit CJSHayward.com.

Table of Contents

Preface

An Abstract Art of Memory

The Administrator Who Cried, "Important!"

AI as an Arena for Magical Thinking Among Skeptics: Artificial Intelligence, Cognitive Science, and Orthodox Views on Personhood

Blessed Are the Peacemakers: Real Peace Through Real Strength

Dark Patterns / Anti-Patterns and Cultural Context in Study of Scriptural Texts: A Case Study in Craig Keener's Paul, Women, and Wives: Marriage and Women's Ministry in the Letters of Paul

Does Augustine Return to the Interpersonal Image of Love as Representing the Trinity, or Does He Abandon this in Favour of the Psychological Image?

The Evolution of a Perspective on Creation and Origins

Friendly, Win-Win Negotiations in Business: Interest-Based Negotiation and "Getting to Yes"

The Patriarchy We Object To

The Fulfillment of Feminism

A Glimpse into Eastern Orthodox Christianity

He Created Them Male and Female, Masculine and Feminine

Meat

On Mentorship

An Orthodox Looks at a Calvinist Looking at Orthodoxy

Orthodoxy, Contraception, and Spin Doctoring: A Look at an Influential but Disturbing Article

Un-man's Tales: C.S. Lewis's Perelandra, Fairy Tales, and Feminism

What the West Doesn't Get About Islam

Why Study Mathematics?

Why Young Earthers Aren't Completely Crazy

Preface

This collection includes articles of varying degrees of academic formality. Some of them are not academic at all, while others are enriched by the author's engagement with the university. They cover everything and nothing in faith and life. They include the author's first public speech ("Blessed Are the Peacemakers: Real Peace Through Real Strength"), the author's first dissertation in theology ("Dark Patterns / Anti-Patterns and Cultural Context in Study of Scriptural Texts: A Case Study in Craig Keener's Paul, Women, and Wives: Marriage and Women's Ministry in the Letters of Paul"), and one or two works intended for business ("Friendly, Win-Win Negotiations in Business: Interest-Based Negotiation and 'Getting to Yes'").

The articles here presented are guideposts along the way to works of mystical theology that would come later, perhaps years later, but they are interesting in themselves. They are taken from among the oldest and newest of works at Jonathan's Corner. And they provide a starting point for much else that was there: the author may not have written "Doxology" if he had not first written "AI as an Arena for Magical Thinking Among Skeptics: Artificial Intelligence, Cognitive Science, and Orthodox Views on Personhood."

An Abstract Art of Memory

Abstract. Author briefly describes classic mnemotechnics, indicates a possible weakness in their ability to deal with abstractions, and suggests a parallel development of related principles designed to work well with abstractions.

Frances Yates opens The Art of Memory with a tale from ancient Greece[1]:

At a banquet given by a nobleman of Thessaly named Scopas, the poet Simonides of Ceos chanted a lyric poem in honor of his host but including a passage in praise of Castor and Pollux. Scopas meanly told the poet that he would only pay him half the sum agreed upon for the panegyric and that he must obtain the balance from the twin gods to whom he had devoted half the poem. A little later, a message was brought in to Simonides that two young men were waiting outside who wished to see him. He rose from the banquet and went out but could find no one. During his absence the roof of the banqueting hall fell in, crushing Scopas and all the guests beneath the ruins; the corpses were so mangled that the relatives who came to take them away for burial were unable to identify them. But Simonides remembered the places at which they had been sitting at the table and was therefore able to indicate to the relatives which were their dead.

After his spatial memory in this event, Simonides is credited with having created an art of memory: start with a building full of distinct places. If you want to remember something, imagine a striking image with a token of what you wish to remember at the place. To recall something naval, you might imagine a giant nail driven into your front door, with an anchor hanging from it; if you visualize this intensely, then when in your mind's eye you go through your house and imagine your front door, then the anchor will come to mind and you will remember the boats. Imagining a striking image on a remembered place is called pegging: when you do this, you fasten a piece of information on a given peg, and can pick it up later. Yates uses the terms art of memory and artificial memory as essentially interchangeable with mnemotechnics, and I will follow a similar usage.

There is a little more than this to the technique, and it allows people to do things that seem staggering to someone not familiar with the phenomenon[2]. Being able to look at a list of twenty items and recite it forwards and backwards is more than a party trick. The technique is phenomenally well-adapted to language acquisition. It is possible for a person skilled in the technique to learn to read a language in weeks. It is the foundation of some people learning so much folklore that today they would be considered walking encyclopedias. This art of memory was an important part of the ancient Greek rhetorical tradition[3], drawn by medieval Europe into the cardinal virtue of wisdom[4], and then transformed into an occult art by the Renaissance[5]. Medieval and Renaissance variations put the technique to vastly different uses, and understood it to signify greatly different things, but outside of Lullism[6] and Ramism[7], the essential technique was the same.

In my own efforts to learn the classical form of the art of memory, I have noticed something curious. I'm better at remembering people's names, and I no longer need to write call numbers down when I go to the library. I was able, without difficulty, to deliver an hour-long speech from memory. Learning vocabulary for foreign languages has come much more quickly; it only took me about a month to learn to read the Latin Vulgate. My weaknesses in memory are not nearly so great as they were, and I know other people have been much better at the art than I am. At the same time, I've found one surprise, something different from the all-around better memory I suspected the art would give me. What is it? If there is a problem, it is most likely subtle: the system has obvious benefits. To tease it out, I'd like to recall a famous passage from Plato's Phaedrus[8]:

Socrates: At the Egyptian city of Naucratis, there was a famous old god, whose name was Theuth; the bird which is called the Ibis was sacred to him, and he was the inventor of many arts, such as arithmetic and calculation and geometry and astronomy and draughts and dice, but his great discovery was the use of letters. Now in those days Thamus was the king of the whole of Upper Egypt, which is in the district surrounding that great city which is called by the Hellenes Egyptian Thebes, and they call the god himself Ammon. To him came Theuth and showed his inventions, desiring that the other Egyptians might be allowed to have the benefit of them; he went through them, and Thamus inquired about their several uses, and praised some of them and censured others, as he approved or disapproved of them. There would be no use in repeating all that Thamus said to Theuth in praise or blame of the various arts. But when they came to letters, This, said Theuth, will make the Egyptians wiser and give them better memories; for this is the cure of forgetfulness and folly. Thamus replied: O most ingenious Theuth, he who has the gift of invention is not always the best judge of the utility or inutility of his own inventions to the users of them. And in this instance a paternal love of your own child has led you to say what is not the fact: for this invention of yours will create forgetfulness in the learners' souls, because they will not use their memories; they will trust to the external written characters. You have found a specific, not for memory but for reminiscence, and you give your disciples only the pretence of wisdom; they will be hearers of many things and will have learned nothing; they will appear to be omniscient and will generally know nothing; they will be tiresome, having the reputation of knowledge without the reality.

There is clear concern that writing is not what it appears, and it will endanger or destroy the knowledge people keep in memory; a case can be made that the phenomenon of Renaissance artificial memory as an occult practice occurred because only someone involved in the occult would have occasion to keep such memory after books were so easily available.

What kind of things might one wish to have in memory? Let me give one classic example: the argument by which Cantor proved that there are more real numbers between 0 and 1 than there are counting numbers (1, 2, 3...). I paraphrase the basic argument here, and a short computational sketch of the diagonal step follows the argument:

  1. Two sets are said to have the same number of elements if you can always pair them up, with nothing left over on either side. If one set always has something left over after the matching up, it has more elements.

  2. Suppose, for the sake of argument, that there are at least as many counting numbers as real numbers between 0 and 1. Then you can make a list of the numbers between 0 and 1:

    1:  .012343289889...
    2:  .328932198323...
    3:  .438724328743...
    4:  .988733287923...
    5:  .324432003442...
    6:  .213443765001...
    7:  .321010320030...
    8:  .323983213298...
    9:  .982133982198...
    10: .321932198904...
    11: .000321321278...
    12: .032103217832...
    
  3. Now, take the first decimal place of the first number, the second of the second number, and so on and so forth, and make them into a number:

    1:  .012343289889...
    2:  .328932198323...
    3:  .438724328743...
    4:  .988733287923...
    5:  .324432003442...
    6:  .213443765001...
    7:  .321010320030...
    8:  .323983213298...
    9:  .982133982198...
    10: .321932198904...
    11: .000321321278...
    12: .032103217832...
    

    Result:

    .028733312972...
    
  4. Now make another number between 0 and 1 that is different at every decimal place from the number just computed:

    .139844423083...
    
  5. Now, remember that we assumed that the list has all the numbers between 0 and 1: every single one, without exception. Therefore, if this assumption is true, then the latter number we constructed must be on the list. But where?

    The number can't be the first number on the list, because it was constructed to be different at the first decimal place from the first number on the list. It can't be the second number on the list, because it was constructed to be different at the second decimal place from the second number on the list. Nor can it be the third, fourth, fifth... in fact, it can't be anywhere on the list because it was constructed to be different. So we have one number left over. (Can we put that number on the list? Certainly, but the argument shows that the new list will leave out another number.)

  6. The list of numbers between 0 and 1 doesn't have all the numbers between 0 and 1.

  7. We have a contradiction.

  8. We started by assuming that you can make a list that contains all the numbers between 0 and 1, but there's a contradiction: any list leaves numbers left over. Therefore, our assumption must be wrong. Therefore, there must be too many real numbers between 0 and 1 to assign a separate counting number to each of them.
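For readers who find a concrete token helpful, here is a minimal computational sketch of the diagonal step in Python (an illustration added for the purpose, with names invented for the example; it is not part of Cantor's argument itself). Given any purported complete list of decimal expansions, it builds a number that differs from the n-th entry at the n-th decimal place, and so cannot appear anywhere on the list:

    # A minimal sketch of the diagonal construction (illustrative only).
    def diagonal_counterexample(listed_expansions):
        # listed_expansions: digit strings such as '012343289889'.
        # Returns a decimal expansion that differs from entry n at position n.
        # Only the digits 4 and 5 are used, which sidesteps the
        # 0.1999... = 0.2000... ambiguity.
        new_digits = []
        for n, expansion in enumerate(listed_expansions):
            new_digits.append('5' if expansion[n] != '5' else '4')
        return '0.' + ''.join(new_digits)

    purported_list = [
        "012343289889",
        "328932198323",
        "438724328743",
    ]
    # Prints 0.555, which differs from each listed entry at the diagonal position.
    print(diagonal_counterexample(purported_list))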

Let's say we want to commit this argument to memory. A mathematician with artificial memory might say, "That's easy! You just imagine a chessboard with distorted mirrors along its diagonal." That is indeed a good image if you are a mathematician who already understands the concept. If you find the argument hard to follow, it is at best a difficult thing to store via the artificial memory. Even if it can be done, storing this argument in artificial memory is probably much more trouble than learning it as a mathematician would.

Let me repeat the quotation from the Phaedrus, while changing a few words:

Jefferson: At the Greek region of Thessaly, there was a famous old poet, whose name was Simonides; totems seen with the inner eye were devoted to him, and he was the inventor of a great art, greater than arithmetic and calculation and geometry and astronomy and draughts. Now in those days Rousseau was a sage revered throughout the West, and they called the god himself Rationis. To him came Simonides and showed his invention, desiring that the rest of the world might be allowed to have the benefit of it; he went through it, and Rousseau inquired about its several uses, and praised some of them and censured others, as he approved or disapproved of them. There would be no use in repeating all that Rousseau said to Simonides in praise or blame of various facets. But when they came to inner writing, This, said Simonides, will make the West wiser and give it better memory; for this is the cure of forgetfulness and of folly. Rousseau replied: O most ingenious Simonides, he who has the gift of invention is not always the best judge of utility or inutility of his own inventions to the users of them. And in this instance a paternal love of your own child has led you to say what is not the fact; for this invention will create forgetfulness in the learner's souls, because they will not remember abstract things; they will trust to mere mnemonic symbols and not remember things of depth. You have found a specific, not for memory but for reminiscence, and you give your disciples only the pretence of wisdom; they will be hearers of many things and will have learned nothing; they will appear to be omniscient and will generally know nothing; they will be tiresome, having the reputation and outer shell of knowledge without the reality of deep thought.

It is clear that if we follow Thomas Aquinas's instructions on memory to visualize a woman for wisdom, we may recall wisdom. What is less clear is that this inner writing particularly helps an abstract recollection of wisdom. It may be able to recall an understanding of wisdom acquired without the help of artificial memory, but this art which allows at times stunning performance in the memorization of concrete data is of more debatable merit in learning abstraction. It has been my own experience that abstractions can be forced through the gate of concreteness in artificial memory, but it is like forcing a sponge through a funnel. While I admittedly don't have a medieval practitioner's inner vocabulary to deal with abstractions, using the artificial memory to deal with abstractions seems awkward in much the same way that storing individual letters through artificial memory[9] is awkward. The standard artificial memory is a tool for being reminded of abstractions, but not for remembering them. It offers the abstract thinker a seductive way to recall a great many concrete facts instead of learning deep thought.

The overall impression I receive of the artificial memory is not so much a failed attempt at a tool to store abstractions as a successful attempt at a concrete tool which was not intended to store abstractions. It is my belief that some of its principles, in modified form, suggest the beginnings of an art of memory well-fitted to dealing with abstractions. The mature form of such an endeavor will not simply be an abstract mirror image of a concrete artificial memory, but it is appropriate enough for the first steps I might hazard.

Consider the following four paragraphs:

  1. Physics is like music. Both owe something of substance to the Pythagoreans. Both are aesthetic endeavors that in some way represent nature in highly abstracted form. Both are interested in mechanical waves. Many good physicists are closet musicians, and all musical instruments operate on physical principle.

  2. Physics is like literature. Both are written in books that vary from moderately easy to very hard. Both deal with a distinction between action and what is acted on, be it plot and character or force and particle, and both allow complex entities to be built of simpler ones. Practitioners of both want to be thought of as insightful people who understand reality.

  3. Physics is like an adventure. Both involve a venture into the unknown, where the protagonist tries to discover what is happening. Both have a mystique that exists despite most people's fear to experience such things themselves. To succeed in either, one is expected to have impressive strengths.

  4. Physics is like magic. Both flourished in the West, at the same time, out of the same desire: a desire to understand nature so as to control it. Both attract abstract thinkers, are practiced in part through the manipulation of arcane symbols, and may be found in the same person, from Newton to Feynman[10]. Magical theory claims matter to be composed of earth, air, fire, and water, while physics finds matter to be composed of solid, liquid, gas, and plasma.

What is the merit of these comparisons? They recall a story in which a literature professor asked Feynman if he thought physics was like literature. Feynman led him on with an elaborate analogy of how physics was like literature, and then said, "But it seems to me you can make such an analogy between any two subjects, so I don't find such analogies helpful." He observed that one can make a reasonably compelling analogy even if there's no philosophically substantial connection.

The laws of logic and philosophy are not the laws of memory. What is a liability to Feynman's implicit philosophical method is a strength to memory. The philosophical merit of the above comparisons is debatable. The benefit to memory is different: it appears to me that this is an abstract analogue to pegging. A connection, real or spurious, aids the memory even if it doesn't aid a rigorous philosophical understanding. In pegging, it is considered an advantage to visualize a ludicrously illogical scene: it is much more memorable than something routine and sensible. Early psychological experiments in memory involved memorization of nonsense syllables. The experimenters intentionally chose meaningless material to memorize. Why? Well, if the subject perceived meaning, that would provide a spurious way for the subject to remember the data, and so proper Ebbinghausian memory study meant investigating how people memorize material which was as meaningless as possible. Without pausing to develop an obvious critique, I'd suggest that this spurious route to memory is of great interest to us. Meaningful data is more memorable than meaningless data, and this is true whether the meaning perceived is philosophically sound or obviously contrived. I might suggest that interesting meaning provides a direct abstract parallel to the striking, special-effect appearance of effective images in pegging.

I intentionally chose not to compare physics to astronomy, chemistry, computer science, engineering, mathematics, metaphysics, or statistics, because I wanted to show how a different concept can be used to establish connections to a new one. Or, more properly, different concepts. Having a new concept connected to three very different ones will capture different facets than one anchor point, and possibly cancel out some of each other's biases. A multiplicity of perspectives lends balance and depth. This isn't to say similar concepts can't be used, only that searching for a partial or full isomorphism to a known concept is easier than encoding from scratch. If memorable connections can be made between physics and adventure, music, English, and magic, what might be obtained from comparison with mathematics, chemistry, and engineering? A comparison between physics and these last three disciplines is left as an exercise to the reader, and one that may be quite fruitful.

Is this a desirable way to remember things? I would make two different comments on this score. First, when learning a Latin word, I would first peg it to an English word with a vivid image, then later recall the image and reconstruct the English equivalent, then recall the image and remember the English, then the image would drop out so I would directly remember the English, and finally the English word would drop out too, leaving me with a Latin usage often different from the English equivalent used. Artificial memory does not circumvent natural memory; instead it streamlines the process and short-circuits many of the disruptive trips to the dictionary. Pegs vanish with use; they are not an alternate final product but a more efficient route for concepts more frequently used, and a cache of reference material. Therefore, even if remembered comparisons between physics and adventure/music/English/magic fall short of how one would desire to understand the concept, a similar flattening of the learning curve is possible. Second, I would say that even if you fail to peg something, you may succeed. How? In trying to peg a person's name, I hold that name and face in an intense focus—quite the opposite of how I once reacted: "I'll never remember that," a belief which chased other people's names out of my mind in seconds. That focus is relevant to memory, and it has happened more than once that I completely failed to create a peg, but my failure used enough mental energy that I still remembered. If you search through your memory and fail to make even forced connections between a new concept and existing concepts, the mental focus given to the concept will leave you much better off than if you had thrown up your hands and thought the self-fulfilling prophecy: "I will never remember that!"

Certain kinds of emotional intelligence are part of the discipline. Learning to cultivate presence has to do with an emotional side, and I have written elsewhere about activities that can help to cultivate such presence[11]. We learn material better if we are interested in it; therefore consciously cultivating an interest in the material and seeing how it can be fascinating is another edge. Cultivating and guarding your inner emotional state can have substantial impact on memory and learning abstractions. Much of it has to do with keeping a state of presence. Shutting out distractions is one obvious way to do this; another, perhaps less obvious, is to avoid cramming and simply ploughing through material unless it's something you don't really need to learn. Why?

If there is a sprinkler that disperses a fine mist, it will slowly moisten the ground. What if there's a high-volume sprinkler that shoots big, heavy drops of water high up in the air? With all that water pounding on the ground, it looks like the ground is quickly saturated. The appearance is deceptive. What has happened is that the heavy drops have pounded the surface of the ground into a beaten shield, so there really is water rolling off of a very wet surface, but go an inch down and the soil is as parched as ever. This sort of thing happens in studying, when people think that the more force they use, the better the results. Up to a point, definitely, and perseverance counts—but I have found myself to learn much more when I paid attention to my mental and emotional state and backed off if I sensed that I was leaving that optimal zone. I learn something if I say "This is important, so I'll plough through as much as I can as quickly as I can," but it's not as much, and keeping on task needs to be balanced with getting off task when that is helpful.

Consider the following problem:[12]

In the inns of certain Himalayan villages is practiced a most civilized and refined tea ceremony. The ceremony involves a host and exactly two guests, neither more nor less. When his guests have arrived and have seated themselves at his table, the host performs five services for them. These services are listed in order of the nobility which the Himalayans attribute to them: (1) Stoking the Fire, (2) Fanning the Flames, (3) Passing the Rice Cakes, (4) Pouring the Tea, and (5) Reciting Poetry. During the ceremony, any of those present may ask another, "Honored Sir, may I perform this onerous task for you?" However, a person may request of another only the least noble of the tasks which the other is performing. Further, if a person is performing any tasks, then he may not request a task which is nobler than the least noble task he is already performing. Custom requires that by the time the tea ceremony is over, all the tasks will have been transferred from the host to the most senior of the guests. How may this be accomplished?

Incomprehensible appearances notwithstanding, this is a very simple problem, the Towers of Hanoi. Someone who has learned the Towers of Hanoi may still solve the tea ceremony formulation as slowly as someone who's never seen any form of the problem[13]. (A sketch of the solution under this isomorphism is given after the Feynman passage below.) A failure to recognize isomorphisms provides one of the more interesting passages in Feynman's memoirs[14]:

I often liked to play tricks on people when I was at MIT. One time, in a mechanical drawing class, some joker picked up a French curve (a piece of plastic for drawing smooth curves—a curly, funny-looking thing) and said, "I wonder if the curves on this thing have some special formula?"

I thought for a moment and said, "Sure they do. The curves are very special curves. Lemme show ya," and I picked up my French curve and began to turn it slowly. "The French curve is made so that at the lowest point on each curve, no matter how you turn it, the tangent is horizontal."

All the guys in the class were holding their French curve up at different angles, holding their pencil up to it at the lowest point and laying it along, and discovering that, sure enough, the tangent is horizontal. They were all excited by this "discovery"—even though they had already gone through a certain amount of calculus and had already "learned" that the derivative (tangent) of the minimum (lowest point) of any curve is zero (horizontal). They didn't put two and two together. They didn't even know what they "knew."

What is going on here is that Feynman perceives an isomorphism where the others do not. There may be a natural bent toward or away from perceiving isomorphisms, and cognitive science suggests most people have a bent away. The finding, as best I can tell, is not so much that people can't look for isomorphisms as that they don't. The practice of looking for and finding isomorphisms has something to give, because something can be treated as already known instead of learned from scratch. I might wonder in passing whether the rapid learning and interdisciplinary proclivities of the ultra-high-IQ stem in part from the perception and application of isomorphisms, which may reduce the amount of material actually learned in picking up a new skill.
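Returning to the tea ceremony: since it is isomorphic to the Towers of Hanoi, the standard recursive solution answers it directly once the isomorphism is seen. Here is a minimal sketch in Python (the function and variable names are mine, invented for this illustration) under the obvious mapping: the host and two guests play the three pegs, the five services ordered by nobility play five disks with the least noble on top, a request to perform another's least noble task is moving a peg's top disk, and the rule against requesting a task nobler than one already held is the rule against placing a larger disk on a smaller one.

    def transfer(n, source, target, spare, moves):
        # Move the n least noble tasks from source to target, using spare
        # as the intermediate participant (the standard recursive solution).
        if n == 0:
            return
        transfer(n - 1, source, spare, target, moves)
        moves.append((n, source, target))  # task n passes from source to target
        transfer(n - 1, spare, target, source, moves)

    moves = []
    transfer(5, "host", "senior guest", "junior guest", moves)
    print(len(moves))  # 31 requests suffice: 2**5 - 1
    for task, giver, receiver in moves[:3]:
        print("Task %d: %s -> %s" % (task, giver, receiver))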

The classical art of memory derives strength from a mind that works visually; a background in abstract thought will help one learn abstractions. It has been thought[15] that people can more effectively encode and remember material in a given domain if it's one they have worked with; I would suggest that this abstract pegging also creates a way to encode material with background from other domains. An elaborate, intense, and distinct encoding is believed to help recall[16]. Heightening memorable features, making material striking or humorous[17], should help, and memetics seems likely to contain jewels in its accounts of how a meme makes itself striking.

Someone familiar with artificial memory may ask, "What about places (loci)?" Part of the art of memory, be it ancient, medieval, or renaissance, involved having an inner building of sorts that one could imagine going through in order and recalling items. I have two basic comments here. First, a connection could use traditional artificial memory techniques as an index: imagine a muscular man with a tremendous physique running onto the scene, grabbing an adventurer's sword, shield, and pack, sitting down at a pipe organ which has a large illuminated manuscript on top, and clumsily playing music until a giant gold ring engraved with fiery letters falls on the scene and turns it to dust. You have pegged physics to adventure, music, literature, and magic; if you wanted to reconstruct an understanding of physics, you could see what it was pegged to, and then try to recall the given similarities. Second and more deeply, I believe that a person's entire edifice of previously acquired concepts may serve as an immense memory palace. It is not spatial in the traditional sense, and I am not here concerned with the senses in which it might be considered a topological space, but it is a deeply qualitative place, and accessible if one uses traditional artificial memory for an index: these adaptations are intended to expand the repertoire of what disciplined artificial memory can do, not abolish the traditional discipline.

Symbols are the last unexplored facet. Earlier I suggested that a chessboard with mirrors along its diagonal may be a good token to represent Cantor's diagonal argument, but does not bring memory of the whole proof. Now I would like to give the other side: an abstraction may not be fully captured by a symbol, but a good symbol helps. A sign/symbol distinction has been made, where a sign represents while a symbol represents and embodies. In this sense I suggest that tokens be as symbolic as possible.

Why use a token? Aren't the deepest thoughts beyond words? Yes, but recall depends on being able to encode. I have found my deepest thoughts to not be worded and often difficult to translate to words, but I have also found that I lose them if I cannot put them in words. As such, thinking of and choosing a good, mentally manipulable symbol for an abstraction is both difficult and desirable. My own discipline of formation, mathematics, chooses names for variables like 'x', 'y', and 'z' which software engineers are taught not to use because they impede comprehension: a computer program with variable names like 'x' and 'y' is harder to understand or even write to completion than one with names like 'trucks_remaining' or 'customers_last_name'. The authors of Design Patterns[18] comment that naming a pattern is one of the hardest parts of writing it down. The art of creating a manipulable symbol for an abstraction is hard, but worth the trouble. This, too, may help you to probe an abstraction in a way that will aid recall.
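A tiny, contrived example of the point about names (the functions below are hypothetical, written only for this illustration): both compute the same thing, but only the second carries its meaning in its symbols.

    # Mathematics-style names: correct, but the reader must reconstruct the intent.
    def f(x, y):
        return x - y

    # Descriptive names: the symbol itself does part of the remembering.
    def trucks_remaining(trucks_scheduled, trucks_unloaded):
        return trucks_scheduled - trucks_unloaded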

To test these principles, I decided to spend a week[19] seeing what I could learn of a physics text[20] and Kant's Critique of Pure Reason[21]. I considered myself to have understood a portion of the physics text after being able to solve the last of the list of questions. I had originally decided to see how quickly I could absorb material. After working through 10% of the physics text in one day, I decided to shift emphasis and pursue depth more than speed. In reading Kant, the tendency to barely grasp a difficult concept forgotten in grasping the next difficult concept gave way, with artificial memory, to understanding the concepts better and grasping them in a way that had a more permanent effect. I read through page 108 of 607 in the physics text and 144 of 669 in Kant's Critique of Pure Reason.

The first day's physics ventures saw two interesting ways of storing concepts, and one comment worth mentioning. There is a classic skit, in which two rescuers are performing two-person CPR on a patient. Then one of the rescuers says, "I'm getting tired. Let's switch," and the patient gets up, the tired rescuer lies down, and the other two perform CPR on him. This was used to store the interchangeability of point of effort, point of resistance, and fulcrum on a lever, based on an isomorphism to the skit's humor element.

The rule given later, that along any axis the sum of forces for a body in equilibrium is always zero, was symbolized by an image of a knife cutting a circle through the center: no matter what angle of cutting there was, the cut leaves two equal halves.

These both involved images, but the images differed from pegging images as a schematic diagram differs from a computer animated advertisement. They seemed a combination of an isomorphism and a symbol, and in both cases the power stemmed not only from the resultant image but the process of creation. The images functioned in a sense related to pegging, but most of the images so far developed have been abstract images unlike anything I've read about in historical or how-to discussion of the art of memory.
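To restate the equilibrium rule itself in a form that can be checked, here is a small numerical illustration of my own (not from the physics text): if the vector sum of the forces on a body is zero, then the sum of their components along any axis, at any angle, is also zero, which is what the knife image encodes.

    import math

    # Forces on a body in equilibrium: the x and y components each sum to zero.
    forces = [(3.0, 4.0), (-1.0, -6.0), (-2.0, 2.0)]

    for angle_deg in (0, 30, 45, 90, 137):
        # Unit vector along an axis at the given angle.
        axis = (math.cos(math.radians(angle_deg)), math.sin(math.radians(angle_deg)))
        along_axis = sum(fx * axis[0] + fy * axis[1] for fx, fy in forces)
        print("%3d degrees: sum of force components along the axis = %.10f" % (angle_deg, along_axis))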

The following was logged that night. The problem referred to is a somewhat complex lever problem given in three parts:

In reviewing the day's thoughts at night, I recognized that the problems seem to admit a shortcut solution that does not rigorously apply the principles but obtains the correct answer: problem 12 on page 31 gives two weights and other information, and all three subproblems can be answered by assuming that there are two parts in the same ratio [as] the weights, and applying a little horse sense as to which goes where. It's a bit like special relativity, which condenses to "Everything changes by a factor of the square root of (1 - (v^2/c^2))." I am not sure whether this is a property of physics itself or a socially emergent property of problems used in physics texts.

I believe this suggests that I was interacting with the material deeply and quite probably in a fashion not anticipated by the authors.

In reading Kant, I can't as easily say "I solved the last exercises in each section" and don't want to simply say, "I read these pages." I would like to demonstrate interaction with the material through excerpts from my log:

...I am now in the introduction to the second edition, and there are two images in reference to Kant's treatment of subjective and objective. One is of a disc which has been cut in half, sliced again along a perpendicular axis and brought together along the first axis so that the direction of the cut has been changed. The other is of a sphere being turned out by [topologically] compactifying R3 [Euclidean three-space] by the addition of a single point, and then shifting so the vast outside has become the cramped inside and the cramped inside has become the vast outside. Both images are inadequate to the text, indicating at best what sort of thing may be thought about in what sort of shift Kant tries to introduce, and I want to reread the last couple of pages. Closer to the mark is a story about three umpires who say, in turn, "I calls them as they are," "I calls them as I see them," and "They may be strikes, they may be balls, but they ain't nothing until I calls them!"


Having reread, I believe that the topological example is truer than I realized. I made it on almost superficial grounds, after reading a footnote which gave as an example the scientific progress that followed when Copernicus proposed that, rather than the observer being fixed and the heavens rotating, the heavens are fixed and the observer rotates. The deeper significance is this: prior accounts had apparently not taken sufficient account of subjective factors, treating subjective differences as practically unimportant—what mattered for investigation was the things in themselves. Thus the subjective was the unexamined inside of the sphere. Then, after the transformation, the objective was the unexaminable inside of the new sphere: we may investigate what is now outside, our subjective states and the appearances conformed to them, but things in themselves are more sealed than our filters before: before, we didn't look; after, we can't look. What is stated [in Kant] so far is a gross overextension of a profound observation.

The below passages refer to pp. 68-70:

Kant's arguments that space is an a priori concept can be framed as showing that there exists a chicken-and-egg or bootstrapping gap between such concepts and sense data.

What is a chicken-and-egg/bootstrapping gap? In assisting with English as a Second Language instruction, I was faced with a difficulty in explanation. Assuming certain background, it is possible for a person not to know something while there is a straightforward way of explaining—perhaps a very long way of explaining, but it's obvious enough how to explain it in terms of communicable concepts. Then there is the case where there is no direct way to explain something: one example is how to explain to a small child what air is. One can point to water, wood, metal, stone, food, and a great many other things, but the same procedure may not yield understanding of air. It may be possible with a Zen-like cleverness to circumvent it—in saying, for example, that air is what presses on your skin on a windy day—but it is not as straightforward as even an involved and difficult explanation where you know how to use the other person's concepts to build the one you want.

In English as a Second Language instruction, this kind of gap is a significant phenomenon in dealing with students who have no beginning English knowledge, and in dealing with concepts that cannot obviously be demonstrated: 'sister' and 'woman', when both terms refer to an adult, differ in a way that is almost certainly understood in the student's native tongue but is nonetheless extremely difficult to explain. When I first mused on this, I envisioned a Zen-like solution. Koans immortalize incidents in which Zen masters bypassed chicken-and-egg gaps in trying to convey enlightenment that cannot be straightforwardly explained, and therefore show a powerful kind of communication. That is what I envisioned, but it is not how English is taught to speakers of other languages. What happens in ESL classes, and with younger children, is a gradual emergence that is difficult to account for in the terms of analytic philosophy—a straightforward explanation sounds like hand-waving and sloppy thinking—but with enough repetition, material is picked up. It may have something to do with a mechanism of learning outlined in Polanyi's Personal Knowledge, which talks about how, for example, swimmers learn from coaches to inhale more air and exhale less completely so that their lungs act more as a flotation device than a non-swimmer's, even though neither swimmer nor coach is likely aware of what is going on on any conscious level. People pick things up through at least one route besides grasping a concept consciously synthesized from sense data.

Kant's proof that a given concept is a priori essentially consists of an argument that the concept cannot be synthesized from sense data through the obvious means of central route processing. He is probably right in that the concepts he classifies as a priori, and presumably others as well, cannot just be synthesized from sense data through central route processing. It does not follow that such a concept must be a priori: there are other routes besides the one Kant investigates by which one can acquire a belief. I do believe, though, that we come with some kind of innate or a priori knowledge: the difficulties experienced in visualizing four-dimensional objects suggest that our dealing with three-dimensional space is not simply the result of a completely amorphous central nervous system which we happen to condition to deal with three dimensions; there is something of substance, comparable in character to a psychologist's broader understanding of memory, that we are born to. An investigation of that would take me too far afield.


P. 87. "Now a thing in itself cannot be known through mere relations; and we may therefore conclude that since outer sense gives us nothing but mere relations, this sense can contain in its representation only the relation of an object to the subject, and not the inner properties of the object in itself."

There is a near-compatibility between this and realist philosophy of science. How?

Recall my observation about chicken-and-egg gaps and how they may be surmounted (here I think of Zen-like short-circuiting of the gap rather than the vaguely indicated gradual emergence of concepts which haven't been subject to a detailed and understood explanation). What goes on in a physics experiment? The truly famous ones since 1900—I think of the Millikan oil-drop experiment—include a very clever hack that tricks nature into revealing herself. People, even experimental physicists, cannot simply grab a handful of household items and prove that electric charge is quantized.[22] Perhaps that was possible in Galileo's day, but a groundbreaking experiment involves a brilliant, clever, unexpected trickery of nature that is isomorphic to a Zen short-circuiting in a chicken-and-egg gap, or a clever hack, and so on and so forth. Even a routine classroom experiment uses technology that is the fruit of this kind of resourcefulness. People do something they "shouldn't" be able to do. This is possibly how we might learn intuitions Kant classifies as a priori, and how experimental scientists cleverly circumvent the roadblock Kant describes here. It might be said that understanding this basic problem is prerequisite to a good realist philosophy of science.

'Hack', in this context, refers to the programming cleverness described in Programming Pearls[23]. I analyzed that fundamental mode of problem solving and compared it with its counterpart in "Of Technology, Magic, and Channels"[24]. There are other observations and interactions with the text, but I believe these should adequately make the point.

I chose Kant because of his reputation as an impenetrable analytic philosopher. With the aid of a good translation and these principles, I was at times surprised at how easy it was to read. By the end of the week, I had another surprise when I decided to reread George MacDonald's Phantastes[25], a work which I have greatly enjoyed. This time, my experience was different. I felt my mind working differently despite a high degree of mental fatigue. The evocative metaphor fell dead, and I found myself reading the text as I would read Kant, thinking in a manner deeply influenced by reading Kant, and in the end setting it down because my mind had shifted deeply into a mode quite different from what allows me to enjoy Phantastes. I was surprised at how deeply using abstract memory to read Kant had affected not only conscious recall of ideas but also ways of thought itself.

I do not consider my recorded observations to be in any sense a rigorous experiment, but I believe the experience suggests it's interesting enough to be worth a good experiment.

Here are twelve proposed principles, or rules of thumb, of abstract memory:

  1. Be wholly present. Want to know the material. Make it emotionally relevant and connected to something that concerns you. Don't take notes[26].

  2. Encode material in multiple ways. Some different ways to encode are: make analogies to different abstractions, list distinctions from similar abstractions, paraphrase, search for isomorphisms, use the concepts, and create visual symbols.[27]

  3. At least in the beginning, mix a little bit of reading material with a lot of processing. Don't plough through anything you want to remember. Work on drawing a lot of mist in, not pounding with heavy drops that will create a beaten shield.

  4. Don't read out of a desire to finish reading a text. Read to draw the materials through processed thought.

  5. Process in a way that is striking, stunning, novel, and counter-intuitive: in a word, memorable.

  6. Process material on as deep a level as you can.[28]

  7. Search for subtle distinctions between a concept under study and its near neighbors.

  8. Converse, interact with, and respond to the abstractions. What would you say if an acquaintance said that in a discussion? What questions would you ask? Write it down.

  9. Know how much mental energy you have, and choose battles wisely. Given a limited amount of energy, it is better to fully remember a smaller number of critical abstractions than to have diffuse knowledge of many random ideas.

  10. Guard your emotions. Be aware of what emotional states you learn well in, and put being in those states before passing your eyes over such-and-such many pages of reading material.

  11. Review material after study, seeking to find a different way of putting it.

  12. Metacogitate. Be your own coach.

Committing these principles to memory is left as an exercise to the reader.

What can I say to conclude this monograph? I can think of one or two brief addenda, such as the programmer's virtue of laziness[29], but in a very real sense I can't conclude now. I can sketch out a couple of critiques that may be of interest. Jerry Mander[30] critiques the artificial unusuality of television and especially advertising, in a way that has direct bearing on traditional mnemotechnics. He suggests that giving otherwise uninteresting sensation a strained and artificial unusuality has undesirable impact on how people perceive life as seen outside of TV, and the angle of his critique is the main reason why I was hesitant to learn artificial memory. There may be room for similar critiques about why making ridiculous comparisons to remember ideas creates a bad habit for someone who wishes to think rigorously. There is also the cognitive critique that the search for isomorphisms will introduce unnoted distortion. One thinks of the person who says, "All the religions in the world say the same thing." There is a common and problematic tendency to be astute in perceiving substantial similarities among world religions and all but blind in perceiving even more substantial differences. That is why I suggest comparing with multiple and different familiar concepts, rather than one. I could give other thoughts about critiques, but I'm trying to explain an art of memory, not especially to defend it. My intention here is not to settle all questions, but to open the biggest one and suggest a direction of inquiry by which an emerging investigation may find a more powerful way to learn abstractions.[31]

Notes

  1. Yates, Frances A., The Art of Memory, hereafter AM, Chicago: University of Chicago Press, 1966, pp. 1-2. The text is a treasure trove on the development of mnemotechnics, also referred to here as artificial memory or the art of memory. Back

  2. Trudeau, Kevin, Kevin Trudeau's Mega Memory, hereafter KTMM, New York: William Morrow & Co., 1995 is one of several practical manuals for someone who thinks the classical art of memory interesting and would like to be able to use it. Back

  3. AM, pp. 27ff. Back

  4. Ibid., pp. 50ff. Back

  5. Ibid., pp. 129ff. Back

  6. Ibid., pp. 173ff. Back

  7. Ibid., pp. 231ff. Back

  8. Jowett, B., The Dialogues of Plato, Vol. III, hereafter DP, New York: National Library Company, pp. 442-443. Back

  9. AM, pp. 112ff describes one popularizer whose somewhat debased form advocated memorizing individual letters. This practice is awkward, much as it would be awkward to record the appearance of a room by taking a notepad and writing one letter on each sheet of paper. Back

  10. Feynman, Richard, Surely You're Joking, Mr. Feynman, hereafter SYJMF, New York: W. W. Norton & Company, 1985, pp. 338ff and other places in the text. He began his famous "Cargo Cult Science" address by talking about his occult diversions from scientific endeavors, and it is arguable that Newton's groundbreaking work in physics and optics was a scientific diversion from his main occult endeavors. I find it revealing that, even with Feynman's occult forays left in the book, the index shows curious lacunae for "ESP", "Hallucination", "New Age", "Reflexology", "Sensory deprivation", etc. Back

  11. 100 Ways of Kything, hereafter 1WK, by Jonathan Hayward, at CJSHayward.com/kything describes a number of activities which can embody presence and focus. Back

  12. Hayes, J.R., and Simon, H.A., "Understanding Written Problem Instructions", 1974, in Gregg, L.W. ed., Knowledge and Cognition, hereafter KC, Hillsdale: Erlbaum. Quoted in Posner, Michael I. ed., Foundations of Cognitive Science, hereafter FCS, Cambridge: The MIT Press, 1989, pp. 534-535. Back

  13. FCS, pp. 559-560. Back

  14. SYJMF, pp. 36-37. A more scholarly, if more pedestrian, mention of the phenomenon is provided in FCS, pp. 559-560. Back

  15. FCS, p. 690. The authors do not necessarily subscribe to this view, but acknowledge influence among many in the field. Back

  16. Ibid., p. 691. Back

  17. "A Picture of Evil", hereafter APE, by Jonathan Hayward, at CJSHayward.com/evil provides an example of communication which is striking in this manner. Back

  18. Gamma, Erich; Helm, Richard; Johnson, Ralph; Vlissides, John, Design Patterns: Elements of Reusable Object-Oriented Software, hereafter DP, Reading: Addison-Wesley, p. 3. The book describes recurring good practices that are known to many expert practitioners, but often only on a tacit level—and tries to explain how this tacit knowledge can be made explicit. The book is commonly called 'GoF' ("Gang of Four") by software developers. Thanks to Ron Miles for locating the page number. Back

  19. February 9-15, 2002. Testing abstract artificial memory and honing this article were juggled with other responsibilities. Back

  20. Black, Newton Henry; Davis, Harvey Nathaniel, New Practical Physics: Fundamental Principles and Applications to Daily Life, hereafter NPP, New York: Macmillan, 1929. Given to me as a whimsical Christmas gift in 2001. At the time of beginning, I was significantly out of practice in both physics and mathematics. Back

  21. Smith, Norman Kemp tr., Immanuel Kant's Critique of Pure Reason, hereafter IKCPR, London: Macmillan, 1929. I had not previously read Kant. Back

  22. I knew that science doesn't deal in proof; experiments may corroborate a theory, but not establish it as something to never again doubt. I was thinking at that point along another dimension, to convey a quality of physics experiments today. Back

  23. Bentley, Jon Louis, Programming Pearls, hereafter PP, Reading: Addison-Wesley, 1986. Back

  24. Hayward, Jonathan, "Of Technology, Magic, and Channels", in Gift of Fire, June 2001, number 126. Back

  25. MacDonald, George, Phantastes, hereafter P, reprinted Grand Rapids: Wm. B. Eerdmans, 1999. Back

  26. Despite widespread endorsement of this practice, taking notes taxes limited mental energy that can better be used to understand the material, and acts to the mind as a signal of, "This can safely be forgotten." KTMM, very early on, makes a point of telling readers not to take notes (p. 5). The purpose of attending a lecture or reading a book is to build internal comprehension, not external reference materials. Back

  27. Tulving, Endel; Craik, Fergus I.M., The Oxford Handbook of Memory, hereafter OHM, Oxford: Oxford University Press, 2000, refers on p. 98 to the picture superiority effect, which states that pictures are better remembered because of a dual coding where they are encoded as image and words and therefore have two chances at being stored rather than the one chance when material is presented only as words. Back

  28. OHM mentions on p. 94 the "levels of processing" view, a significant perspective which states that material is retained better the more deeply it is processed. Back

  29. Wall, Larry; Christiansen, Tom; Schwartz, Randal L., Programming Perl, Second Edition, hereafter PP2, Sebastopol: O'Reilly, pp. 217ff and other places throughout the book. Known by the affectionate nickname of "the camel book" among software developers. (This book is distinct from PP). Back

  30. Mander, Jerry, Four Arguments for the Elimination of Television, hereafter FAET, New York: Morrow Quill, 1978, pp. 299ff. Back

  31. I would like to thank Robin Munn for giving me my first serious introduction to the art of memory, Linda Washington and Martin Harris for looking at my manuscript, William Struthers for valuable comments about source material, and Chris Tessone, Angela Zielinski, Kent and Theo Nebergall, and people from Wheaton College and International Christian Mensa for prayer. I would also like to thank those who read this article, apply it, perhaps extend it, and perhaps tell others about it. Back

The Administrator Who Cried, "Important!"

Once upon a time, there was a new employee, hired fresh out of college by a big company. The first day on the job, he attended a pep rally, filled out paperwork concerning taxes and insurance, and received a two page document that said at the top, "Sexual Harassment Policy: Important. Read Very Carefully!"

So our employee read the sexual harassment policy with utmost care, and signed at the bottom indicating that he had read it. The policy was a remedial course in common sense, although parts of it showed a decided lack of common sense. It was an insult to both his intelligence and his social maturity.

Our employee was slightly puzzled as to why he was expected to read such a document that carefully, but soon pushed doubts out of his mind. He trotted over to his new cubicle, sat down, and began to read the two inch thick manual on core essentials that every employee needs to know. He was still reading core essentials two hours later when his boss came by and said, "Could you take a break from that? I want to introduce you to your new co-workers, and show you around."

So our employee talked with his boss — a knowledgeable, competent, and understanding woman — and enjoyed meeting his co-workers, trying to learn their names. He didn't have very much other work yet, so he dutifully read everything that the administrators sent him — even the ones that didn't say "Important — please read" at the top. He read about ISO 9001 certification, continual changes and updates to company policy, new technologies that the company was adopting, employee discounts, customer success stories, and other oddments totalling to at least a quarter inch of paper each day, not counting e-mails.

His boss saw that he worked well, and began to assign more difficult tasks appropriate to his talent. He took on this new workload while continuing to read everything the administration told him to read, and worked longer and longer days.

One day, a veteran came and put a hand on his shoulder, saying, "Kid, just between the two of us, you don't have to read every piece of paper that says 'Important' at the top. None of us read all that."

And so our friend began to glance at the first pages of long memos, to see if they said anything helpful for him to know, and found that most of them did not. Some time after that, he realized that his boss or one of his co-workers would explicitly tell him if there was a memo that said something he needed to know. The employee found his workload reduced to slightly less than fifty hours per week. He was productive and happy.

One day, a memo came. It said at the top, "Important: Please Read." A little more than halfway through, on page twenty-seven, there was a description of a new law that had been passed, and how it required several jobs (including his own) to be done in a slightly different manner. Unfortunately, our friend's boss was in bed with a bad stomach flu, and so she wasn't able to tell him he needed to read the memo. So he continued doing his job as usual.

A year later, the company found itself the defendant in a forty million dollar lawsuit, and traced the negligence to the action of one single employee — our friend. He was fired, and made the central villain in the storm of bad publicity.

But he definitely was in the wrong, and deserved what was coming to him. The administration very clearly explained the liability and his responsibility, in a memo very clearly labelled "Important". And he didn't even read the memo. It's his fault, right?

No.

Every communication that is sent to a person constitutes an implicit claim of, "This concerns you and is worth your attention." If experience tells other people that we lie again and again when we say this, then what right do we have to be believed when we really do have something important to say?

I retold the story of the boy who cried wolf as the story of the administrator who cried important, because administrators are among the worst offenders, along with lawyers, spammers, and perhaps people who pass along e-mail forwards. Among the stack of paper I was expected to sign when I moved in to my apartment was a statement that I had tested my smoke detector. The apartment staff was surprised that I wanted to test my smoke detector before signing my name to that statement. When an authority figure is surprised when a person reads a statement carefully and doesn't want to sign a claim that all involved know to be false, it's a bad sign.

There is communication that concerns the person it's directed to, but says too much — for example, most of the legal contracts I've seen. The tiny print used to print many of those contracts constitutes an implicit acknowledgment that the signer is not expected to read it: they don't even use the additional sheets of paper necessary to print text at a size that a person who only has 20/20 vision can easily read. There is also communication that is broadcast to many people who have no interest in it. To that communication, I would propose the following rule: Do not, without exceptionally good reason, broadcast a communication that concerns only a minority of its recipients. It's OK every now and then to announce that the blue Toyota with license plate ABC 123 has its lights on. It's not OK to have a regular announcement broadcasting anything approved as being of interest to only some of the recipients.

My church, which I am in general very happy with, has succumbed to vice by adding a section to the worship liturgy called "Announcements", where someone reads a list of events and such just before the end of the service, and completely dispels the moment that has been filling the sanctuary up until the announcements start. They don't do this with other things — the offering is announced by music (usually good music) that contributes to the reverent atmosphere of the service. But when the service is drawing to a close, the worshipful atmosphere is disrupted by announcements which I at least almost never find useful. If the same list were printed on a sheet of paper, I could read it after the service, in less time, with greater comprehension, with zero disruption to the moment that every other part of the service tries so carefully to build — and I could skip over any announcements that begin "For Married Couples:" or "Attention Junior High and High Schoolers!" The only advantage I can see to the present practice, from the church leadership's perspective, is that many people will not read the announcements at all if they have a choice about it — and maybe, just maybe, there's a lesson in that.

As well as pointing out examples of a rampant problem in communication, where an administrator cries "Important!" over many things that are not worth reading, and then wonders why people don't believe him when he cries "Important!" about something which is important, I would like to suggest an alternative for communities that have access to the internet. A web server could use a form to let people select areas of concern and interest, and announcements submitted would be categorized, optionally cleared with a moderator, and sent only to those people who are interested in them. Another desirable feature might let recipients select how much announcement information they are willing to receive in a day — providing a discernible incentive to the senders to minimize trivial communication. In a sense, this is what happens already — intercom litanies of announcements ignored by school students in a classroom, employees carrying memos straight from their mailboxes to the recycle bins — but in this case, administrators receive a clear incentive and choice to conserve bandwidth and only send what is genuinely important.
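
To make the proposal concrete, here is a minimal sketch of the routing logic in Python. The names (AnnouncementServer, subscribe, announce) and the details are hypothetical, not a description of any existing system; a real deployment would add the web form, storage, and the optional moderation step.

    from collections import defaultdict

    class AnnouncementServer:
        def __init__(self):
            self.subscriptions = defaultdict(set)   # person -> categories of interest
            self.daily_limits = {}                  # person -> maximum announcements per day
            self.sent_today = defaultdict(int)      # person -> announcements delivered today

        def subscribe(self, person, categories, daily_limit=5):
            """A person selects areas of concern and a daily cap."""
            self.subscriptions[person].update(categories)
            self.daily_limits[person] = daily_limit

        def announce(self, category, text, approved=True):
            """Deliver an approved announcement only to people who chose
            this category and have not yet reached their daily limit."""
            if not approved:                        # optional moderator clearance
                return []
            delivered = []
            for person, interests in self.subscriptions.items():
                if category in interests and self.sent_today[person] < self.daily_limits[person]:
                    self.sent_today[person] += 1
                    delivered.append(person)        # text would be e-mailed to each person listed
            return delivered

    # Only people who chose 'parking' hear that the blue Toyota has its lights on.
    server = AnnouncementServer()
    server.subscribe('Alice', {'parking', 'policy'}, daily_limit=3)
    server.subscribe('Bob', {'discounts'})
    print(server.announce('parking', 'Blue Toyota, lights on'))   # ['Alice']

The point of the daily cap is the incentive it creates: if senders know that each recipient will see only a handful of announcements a day, they must decide which announcements are actually worth that scarce attention.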

While I'm airing my Utopian dreams, I'd like to comment that at least some of this functionality is already supported by the infrastructure developed by UseNet. Probably there are refinements that can be implemented in a web interface — all announcements for one topic shown from a single web page, since they shouldn't be nearly as long as a normal UseNet post arguing some obscure detail in an ongoing discussion. Perhaps other and better things can be done — I am suggesting "Here's something better than the status quo," not "Here's something so perfect that there's no room for improvement."

In one UseNet newsgroup, an exchange occurred that broadcasters of announcements would be well-advised to keep in mind. One person said, "I'm trying to decide whether to give the UseNet Bore of the Year Award to [name] or [name]. The winner will receive, as his prize, a copy of all of their postings, minutely inscribed, and rolled up inside a two foot poster tube."

Someone else posted a reply asking, "Length or diameter?"

To those of you who broadcast to people whom you are able to address because of your position and not because they have chosen to receive your broadcasts, I have the following to say: In each communication you send, you are deciding the basis by which people will decide if future communications are worth paying attention to, or just unwanted noise. If your noise deafens their ears, you have no right to complain that the few truly important things you have to tell them fall on deaf ears. Only you can prevent spam!

AI as an Arena for Magical Thinking Among Skeptics

Artificial Intelligence, Cognitive Science, and Eastern Orthodox Views on Personhood

Cog, portrayed as 'Robo Sapiens'

AI as an Arena for Magical Thinking Among Skeptics
Artificial Intelligence, Cognitive Science, and Eastern Orthodox Views on Personhood

M.Phil. Dissertation

Jonathan Hayward
christos.jonathan.hayward@gmail.com
CJSHayward.com

15 June 2004

Table of Contents

Abstract

Introduction

Artificial Intelligence

The Optimality Assumption

Just Around the Corner Since 1950

The Ghost in the Machine

Occult Foundations of Modern Science

Renaissance and Early Modern Magic

Science, Psychology, and Behaviourism

I-Thou and Humanness

Orthodox Anthropology in Maximus Confessor's Mystagogia

Intellect and Reason

Intellect, Principles, and Cosmology

The Intelligible and the Sensible

Knowledge of the Immanent

Intentionality and Teleology

Conclusion

Epilogue

Bibliography

Abstract

I explore artificial intelligence as failing in a way that is characteristic of a faulty anthropology. Artificial intelligence has had excellent funding, brilliant minds, and exponentially faster computers, which suggests that any failures present may not be due to lack of resources, but arise from an error that is manifest in anthropology and may even be cosmological. Maximus Confessor provides a genuinely different background to criticise artificial intelligence, a background which shares far fewer assumptions with the artificial intelligence movement than figures like John Searle. Throughout this dissertation, I will be looking at topics which seem to offer something interesting, even if cultural factors today often obscure their relevance. I discuss Maximus's use of the patristic distinction between 'reason' and spiritual 'intellect' as providing an interesting alternative to 'cognitive faculties.' My approach is meant to be distinctive both by reference to Greek Fathers and by studying artificial intelligence in light of the occult foundations of modern science, an important datum omitted in the broader scientific movement's self-presentation. The occult serves as a bridge easing the transition between Maximus Confessor's worldview and that of artificial intelligence. The broader goal is to make three suggestions: first, that artificial intelligence provides an experimental test of scientific materialism's picture of the human mind; second, that the outcome of the experiment suggests we might reconsider scientific materialism's I-It relationship to the world; and third, that figures like Maximus Confessor, working within an I-Thou relationship, offer more wisdom to us today than is sometimes assumed. I do not attempt to compare Maximus Confessor's Orthodoxy with other religious traditions; however, I do suggest that Orthodoxy has relevant insights into personhood which the artificial intelligence community still lacks.

Introduction

Some decades ago, one could imagine a science fiction writer asking, 'What would happen if billions of dollars, dedicated laboratories with some of the world's most advanced equipment, indeed an important academic discipline with decades of work from some of the world's most brilliant minds—what if all of these were poured into an attempt to make an artificial mind based on an understanding of personhood that came out of a framework of false assumptions?' We could wince at the waste, or wonder that after all the failures the researchers still had faith in their project. And yet exactly this philosophical experiment has been carried out, in full, and has been expanded. This philosophical experiment is the artificial intelligence movement.

What relevance does AI have to theology? Artificial intelligence assumes a particular anthropology, and failures by artificial intelligence may reflect something of interest to theological anthropology. It appears that the artificial intelligence project has failed in a substantial and characteristic way, and furthermore that it has failed as if its assumptions were false—in a way that makes sense given some form of Christian theological anthropology. I will therefore be using the failure of artificial intelligence as a point of departure for the study of theological anthropology. Beyond a negative critique, I will be exploring a positive alternative. The structure of this dissertation will open with critiques, then trace historical development from an interesting alternative to the present problematic state, and then explore that older alternative. I will thus move in the opposite of the usual direction.

For the purposes of this dissertation, artificial intelligence (AI) denotes the endeavour to create computer software that will be humanly intelligent, and cognitive science the interdisciplinary field which seeks to understand the mind in computational terms so it can be re-implemented on a computer. Artificial intelligence is more focused on programming, whilst cognitive science includes other disciplines such as philosophy of mind, cognitive psychology, and linguistics. Strong AI is the classical approach which has generated chess players and theorem provers, and tries to create a disembodied mind. Other areas of artificial intelligence include the connectionist school, which works with neural nets,[1] and embodied AI, which tries to take our mind's embodiment seriously. The picture on the cover[2] is from an embodied AI website and is interesting for reasons which I will discuss below under the heading of 'Artificial Intelligence.'

Fraser Watts (2002) and John Puddefoot (1996) offer similar and straightforward pictures of AI. I will depart from them in being less optimistic about the present state of AI, and more willing to find something lurking beneath appearances. I owe my brief remarks about AI and its eschatology, under the heading of 'Artificial Intelligence' below, to a line of Watts' argument.[3]

Other critics[4] argue that artificial intelligence dismisses the body as mere packaging for the mind, pointing out ways in which our intelligence is embodied. They share many of the basic assumptions of artificial intelligence but understand our minds as biologically emergent and therefore tied to the body.

There are two basic points I accept in their critiques:

First, they argue that our intelligence is an embodied intelligence, often with specific arguments that are worth attention.

Second, they often capture a quality, or flavour, to thought that beautifully illustrates what sort of thing human thought might be besides digital symbol manipulation on biological hardware.

There are two basic points where I will be departing from their line of argument:

First, they think outside the box, but may not go far enough. They are playing on the opposite team to cognitive science researchers, but they are playing the same game, by the same rules. The disagreement between proponents and critics is not whether mind may be explained in purely materialist terms, but only whether that assumption entails that minds can be re-implemented on computers.

Second, they see the mind's ties to the body, but not to the spirit, which means that they miss out on half of a spectrum of interesting critiques. I will seek to explore what, in particular, some of the other half of the spectrum might look like. As their critiques explore what it might mean to say that the mind is embodied, the discussion of reason and intellect under the heading 'Intellect and Reason' below may give some sense of what it might mean to say that the mind is spiritual. In particular, the conception of the intellect offers an interesting base characterisation of human thought that competes with cognitive faculties. Rather than saying that the critics offer false critiques, I suggest that they are too narrow and miss important arguments that are worth exploring.

I will explore failures of artificial intelligence in connection with the Greek Fathers. More specifically, I will look at the seventh century Maximus Confessor's Mystagogia. I will investigate the occult as a conduit between the (quasi-Patristic) medieval West and the West today. The use of Orthodox sources could shed a particularly helpful light, and one that is not explored elsewhere. Artificial intelligence seems to fail along lines predictable to the patristic understanding of a spirit-soul-body unity, essentially connected with God and other creatures. The discussion becomes more interesting when one looks at the implications of the patristic distinction between 'reason' and the spiritual 'intellect.' I suggest that connections with the Orthodox doctrine of divinisation may make an interesting direction for future enquiry. I will only make a two-way comparison between Orthodox theological anthropology and one particular quasi-theological anthropology. This dissertation is in particular not an attempt to compare Orthodoxy with other religious traditions.

One wag said that the best book on computer programming for the layperson was Alice's Adventures in Wonderland, but that's just because the best book on anything for the layperson was Alice's Adventures in Wonderland. One lesson learned by a beginning scholar is that many things that 'everybody knows' are mistaken or half-truths, as 'everybody knows' the truth about Galileo, the Crusades, the Spanish Inquisition, and other select historical topics which we learn about by rumour. There are some things we will have trouble understanding unless we can question what 'everybody knows.' This dissertation will be challenging certain things that 'everybody knows,' such as that we're making progress towards achieving artificial intelligence, that seventh century theology belongs in a separate mental compartment from AI, or that science is a different kind of thing from magic. The result is bound to resemble a tour of Wonderland, not because I am pursuing strangeness for its own sake, but because my attempt to understand artificial intelligence has taken me to strange places. Renaissance and early modern magic is a place artificial intelligence has been, and patristic theology represents what we had to leave to get to artificial intelligence.

The artificial intelligence project as we know it has existed for perhaps half a century, but its roots reach much further back. The picture on the cover attests to something that has been a human desire for much longer than we've had digital computers. In exploring the roots of artificial intelligence, there may be reason to look at a topic that may seem strange to mention in connection with science: the Renaissance and early modern occult enterprise.

Why bring the occult into a discussion of artificial intelligence? It doesn't make sense if you accept science's own self-portrayal and look at the past through its eyes. Yet this shows bias and insensitivity to another culture's inner logic, almost a cultural imperialism—not between two cultures today but between the present and the past. A part of what I will be trying to do in this thesis is look at things that have genuine relevance to this question, but whose relevance is obscured by cultural factors today. Our sense of a deep divide between science and magic is more cultural prejudice than considered historical judgment. We judge by the concept of scientific progress, treating prior cultures' endeavours as more or less successful attempts to establish a scientific enterprise properly measured on our terms.

We miss how the occult turn taken by some of Western culture in the Renaissance and early modern period established lines of development that remain foundational to science today. Many chasms exist between the mediaeval perspective and our own, and there is good reason to place the decisive break between the mediaeval way of life and the Renaissance/early modern occult development, not placing mediaeval times and magic together with an exceptionalism for our science. I suggest that our main differences with the occult project are disagreements as to means, not ends—and that distinguishes the post-mediaeval West from the mediaevals. If so, there is a kinship between the occult project and our own time: we provide a variant answer to the same question as the Renaissance magus, whilst patristic and mediaeval Christians were exploring another question altogether. The occult vision has fragmented, with its dominion over the natural world becoming scientific technology, its vision for a better world becoming political ideology, and its spiritual practices becoming a private fantasy.

One way of looking at historical data with the kind of sensitivity I'm interested in is explored by Mary Midgley in Science as Salvation (1992); she doesn't dwell on the occult as such, but she perceptively argues that science is far more continuous with religion than its self-understanding would suggest. Her approach pays a certain kind of attention to things which science leads us to ignore. She looks at ways science is doing far more than falsifying hypotheses, and in so doing observes some things which are important. I hope to develop a similar argument in a different direction, arguing that science is far more continuous with the occult than its self-understanding would suggest. This thesis is intended neither to be a correction nor a refinement of her position, but a development of a parallel line of enquiry.

It is as if a great island, called Magic, began to drift away from the cultural mainland. It had plans for what the mainland should be converted into, but had no wish to be associated with the mainland. As time passed, the island fragmented into smaller islands, and on all of these new islands the features hardened and became more sharply defined. One of the islands is named Ideology. The one we are interested in is Science, which is not interchangeable with the original Magic, but is even less independent: in some ways Science differs from Magic by being more like Magic than Magic itself. Science is further from the mainland than Magic was, even if its influence on the mainland is if anything greater than what Magic once held. I am interested in a scientific endeavour, and in particular a basic relationship behind scientific enquiry, which are to a substantial degree continuous with a magical endeavour and a basic relationship behind magic. These are foundationally important, and even if it is not yet clear what they may mean, I will try to substantiate these as the thesis develops. I propose the idea of Magic breaking off from a societal mainland, and sharpening and hardening into Science, as more helpful than the idea of science and magic as opposites.

There is in fact historical precedent for such a phenomenon. I suggest that a parallel with Eucharistic doctrine might illuminate the interrelationship between Orthodoxy, Renaissance and early modern magic, and science (including artificial intelligence). When Aquinas made the Christian-Aristotelian synthesis, he changed the doctrine of the Eucharist. The Eucharist had previously been understood on Orthodox terms that used a Platonic conception of bread and wine participating in the body and blood of Christ, so that bread remained bread whilst becoming the body of Christ. One substance had two natures. Aristotelian philosophy had little room for one substance which had two natures, so one thing cannot simultaneously be bread and the body of Christ. When Aquinas subsumed real presence doctrine under an Aristotelian framework, he managed a delicate balancing act, in which bread ceased to be bread when it became the body of Christ, and it was a miracle that the accidents of bread held together after the substance had changed. I suggest that when Zwingli expunged real presence doctrine completely, he was not abolishing the Aristotelian impulse, but carrying it to its proper end. In like fashion, the scientific movement is not a repudiation of the magical impulse, but a development of it according to its own inner logic. It expunges the supernatural as Zwingli expunged the real presence, because that is where one gravitates once the journey has begun. What Aquinas and the Renaissance magus had was composed of things that did not fit together. As I will explore below under the heading 'Renaissance and Early Modern Magic,' the Renaissance magus ceased relating to society as to one's mother and began treating it as raw material; this foundational change to a depersonalised relationship would later secularise the occult and transform it into science. The parallel between medieval Christianity/magic/science and Orthodoxy/Aquinas/Zwingli seems to be fertile: real presence doctrine can be placed under an Aristotelian framework, and a sense of the supernatural can be held by someone who is stepping out of a personal kind of relationship, but in both cases it doesn't sit well, and after two or so centuries people finished the job by subtracting the supernatural.

Without discussing the principles in Thomas Dixon's 1999 delineation of theology, anti-theology, and atheology (which can be un-theological or quasi-theological), regarding when one is justified in claiming that theology is present, I adopt the following rule:

A claim is considered quasi-theological if it can conflict with theological claims.

Given this rule, patristic theology, Renaissance and early modern magic (hereafter 'magic' or 'the occult'), and artificial intelligence claims are all considered to be theological or quasi-theological.

I will not properly trace an historical development so much as show the distinctions between archetypal scientific, occult, and Orthodox worldviews as seen at different times, and briefly discuss their relationships with some historical remarks. Not only are there surprisingly persistent tendencies, but Lee repeats Weber's suggestion that there is real value in understanding ideal types.[5]

I will be attempting to bring together pieces of a puzzle—pieces scattered across disciplines and across centuries, often hidden by today's cultural assumptions about what is and is not connected—to show their interconnections and the picture that emerges from their fit. I will be looking at features including intentionality,[6] teleology,[7] cognitive faculties,[8] the spiritual intellect,[9] cosmology, and a strange figure who wields a magic sword with which to slice through society's Gordian knots. Why? In a word, all of this is connected. Cosmology is relevant if there is a cosmological error behind artificial intelligence. There are both an organic connection and a distinction between teleology and intentionality, and the shift from teleology to intentionality is an important shift; when one shifts from teleology to intentionality one becomes partly blind to what the artificial intelligence picture is missing. Someone brought up on cognitive faculties may have trouble answering, 'How else could it be?'; the patristic understanding of the spiritual intellect gives a very interesting answer, and offers a completely different way to understand thought. And the figure with the magic sword? I'll let this figure remain mysterious for the moment, but I'll hint that without that metaphorical magic sword we would never have a literal artificial intelligence project. I do not believe I am forging new connections among these things, so much as uncovering something that was already there, overlooked but worth investigating.

This is an attempt to connect some very diverse sources, even if the different sections are meant primarily as philosophy of religion. This brings problems of coherence and disciplinary consistency, but the greater risk is tied to the possibility of greater reward. It will take more work to show connections than in a more externally focused enquiry, but if I can give a believable case for those interconnections, this will ipso facto be a more interesting enquiry.

All translations from French, German, Latin, and Greek are my own.

Artificial Intelligence

Artificial intelligence is not just one scientific project among others. It is a cultural manifestation of a timeless dream. It does not represent the repudiation of the occult impulse, but letting that impulse work out according to its own inner logic. Artificial intelligence is connected with a transhumanist vision for the future[10] which tries to create a science-fiction-like future of an engineered society of superior beings.[11] This artificial intelligence vision for the future is similar to the occult visions for the future we will see below. Very few members of the artificial intelligence movement embrace the full vision—but I may suggest that its spectre is rarely absent, and that that spectre shows itself by a perennial sense of, 'We're making real breakthroughs today, and full AI is just around the corner.' Both those who embrace the fuller enthusiasm and those who are more modestly excited by current projects have a hope that we are making progress towards creating something fundamentally new under the sun, of bequeathing to humanity something that has never before been available, machines that genuinely think. Indeed, this kind of hope is one of magic's most salient features. The exact content and features vary, but the sometimes heady excitement and the hope to bestow something powerful and new mark a significant point of contact between artificial intelligence and the magic that enshrouded science's birth.

There is something timeless and archetypal about the desire to create humans through artifice instead of procreation. Jewish legend tells of a rabbi who used the Kaballah to create a clay golem to defend a city against anti-Semites in 1581.[12] Frankenstein has so marked the popular imagination that genetically modified foods are referred to as 'Frankenfoods,' and there are many (fictional) stories of scientists creating androids who rebel against and possibly destroy their creators. Robots who have artificial bodies but think and act enough like humans never to cause culture shock are a staple of science fiction.[13] This desire to create humans by artifice rather than procreation has more than a little occult resonance.

We should draw a distinction between what may be called 'pretentious AI' and 'un-pretentious AI.' The artificial intelligence project has managed technical feats that are sometimes staggering, and from a computer scientist's perspective, the state of computer science is richer and more mature than if there had been no artificial intelligence project. Without making any general claim that artificial intelligence achieves nothing or achieves nothing significant, I will explore a more specific and weaker claim that artificial intelligence does not and cannot duplicate human intelligence.

A paradigm example of un-pretentious AI is the United States Postal Service handwriting recognition system. It succeeds in reading the addresses on 85% of postal items, and the USPS annual report is justifiably proud of this achievement.[14] However, there is nothing mythic claimed for it: the USPS does not claim a major breakthrough in emulating human thought, nor does it give people the impression that artificial mail carriers are just around the corner. The handwriting recognition system is a tool—admittedly, quite an impressive tool—but it is nothing more than a tool, and no one pretends it is anything more than a tool.

For a paradigm example of pretentious AI, I will look at something different. The robot Cog represents equally impressive feats in artificial hand-eye coordination and motor control, but its creators claim something deeper, something archetypal and mythic:

Fig. 2: Cog, portrayed as Robo sapiens[15]

The scholar places his hand on the robot's shoulder as if they had a longstanding friendship. At almost every semiotic level, this picture constitutes an implicit claim that the researcher has a deep friendship with what must be a deep being. The unfortunately blurred caption reads, '©2000 Peter Menzel / Robo sapiens.' On the Cog main website area, every picture with Cog and a person theatrically shows the person treating the robot as quite lifelike—giving the impression that the robot must be essentially human.

But how close is Cog to being human? Watts writes,

The weakness of Cog at present seems to be that it cannot actually do very much. Even its insect-like computer forebears do not seem to have had the intelligence of insects, and Cog is clearly nowhere near having human intelligence.[16]

The somewhat light-hearted frequently-asked-questions list acknowledges that the robot 'has no idea what it did two minutes ago,' answers 'Can Cog pass the Turing test?' by saying, 'No... but neither could an infant,' and interestingly answers 'Is Cog conscious?' by saying, 'We try to avoid using the c-word in our lab. For the record, no. Off the record, we have no idea what that question even means. And still, no.' The response to a very basic question is ambiguous, but it seems to joke that 'consciousness' is obscene language, and gives the impression that this is not an appropriate question to ask: a mature adult, when evaluating our AI, does not childishly frame the question in terms of consciousness. Apparently, we should accept the optimistic impression of Cog, whilst recognising that it's not fair to the robot to ask about features of human personhood that the robot can't exhibit. This smells of begging the question.

Un-pretentious AI makes an impressive technical achievement, but recognises and acknowledges that it has created a tool and not something virtually human. Pretentious AI can make equally impressive technical achievements, and it recognises that what it's created is not equivalent to a human, but it does not acknowledge this. The answer to 'Is Cog conscious?' is a refusal to acknowledge something the researchers have to recognise: that Cog has no analogue to human consciousness. Is it a light-hearted way of making a serious claim of strong agnosticism about Cog's consciousness? It doesn't read much like a mature statement that 'We could never know if Cog were conscious.' The researcher in Figure 2 wrote an abstract on how to give robots a theory of other minds[17], which reads more like psychology than computer science.

There's something going on here that also goes on in the occult. In neo-paganism, practitioners find their magic to work, not exactly as an outsider would expect, by making incantations and hoping that something will happen that a skeptic would recognise as supernatural, but by doing what they can and then interpreting reality as if the magic had worked. They create an illusion and subconsciously embrace it. This mechanism works well enough, in fact, that large segments of today's neo-paganism started as jokes and then became real, something their practitioners took quite seriously.[18] There's power in trying to use a magical incantation or a computer program (or, in programmer slang, 'incantation') to fill a transcendent hope: one finds ways that it appears to work, regardless of what an outsider's interpretation may be. This basic technique appears to be at work in magic as early as the Renaissance, and it appears to be exactly what's going on in pretentious AI. The basic factor of stepping into an illusion after you do what you can makes sense of the rhetoric quoted above and why Cog is portrayed not merely as a successful experiment in coordination but as Robo sapiens, the successful creation of a living golem. Of course we don't interpret it as magic because we assume that artificial intelligence and magic are very different things, but the researchers' self-deception falls into a quite venerable magical tradition.

Computers seem quite logical. Are they really that far from human rationality? Computers are logical without being rational. Programming a computer is like explaining a task to someone who follows directions very well but has no judgment and no ability to recognise broader intentions in a request. It follows a list of instructions without any recognition or a sense of what is being attempted. The ability to understand a conversation, or recognise another person's intent—even with mistakes—or any of a number of things humans take for granted, belongs to rationality. A computer's behaviour is built up from logical rules that do certain precise manipulations of symbols without any sense of meaning whatsoever: it is logical without being rational. The discipline of usability is about how to write well-designed computer programs; these programs usually let the user forget that computers aren't rational. For instance, a user can undo something when the computer logically and literally follows an instruction, and the user rationally realises that that isn't really what was intended. But even the best of this design doesn't let the computer understand what one meant to say. One frustration people have with computers stems from the fact that there is a gist to what humans say, and other people pick up that gist. Computers do not have even the most rudimentary sense of gist, only the ability to logically follow instructions. This means that the experience of bugs and debugging in programming is extremely frustrating to those learning how to program; the computer's response to what seems a correct program goes beyond nitpicking. This logicality without rationality is deceptive, for it presents something that looks very much like rationality at first glance, but produces unpleasant surprises when you treat it as rational. There's something interesting going on here. When we read rationality into a computer's logicality, we are in part creating the illusion of artificial intelligence. 'Don't anthropomorphise computers,' one tells novice programmers. 'They hate that.' A computer is logical enough that we tend to treat it as rational, and in fact if you want to believe that you've achieved artificial intelligence, you have an excellent basis to use in forming a magician's self-deception.
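
To make this contrast concrete, consider a small illustrative sketch in Python (the phone-book example and its names are hypothetical, chosen only to show the point). The program follows its instructions to the letter; it has no sense that two differently worded requests carry the same gist.

    phone_book = {'Smith, John': '555-0100'}

    def look_up(name):
        # The computer matches the string literally; it has no notion that
        # 'John Smith' and 'Smith, John' are, to a human, the same request.
        return phone_book.get(name, 'No such person')

    print(look_up('Smith, John'))   # 555-0100
    print(look_up('John Smith'))    # 'No such person' -- logical, but not rational

Good interface design can paper over this gap, as with the undo command mentioned above, but it cannot give the computer the missing sense of gist.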

Artificial intelligence is a mythic attempt to create an artificial person, and it does so in a revealing way. Thought is assumed to be a private manipulation of mental representations, not something that works in terms of spirit. Embodied AI excluded, the body is assumed to be packaging, and the attempt is not just to duplicate the 'mind' in a complete sense, but our more computer-like rationality: this assumes a highly significant division of what is essential, what is packaging, and what comes along for free if you duplicate the essential bits. None of this is simply how humans have always thought, nor is it neutral. Maximus Confessor's assumptions are different enough from AI's that a comparison makes it easier to see some of AI's assumptions, and furthermore what sort of coherent picture could deny them. I will explore how exactly he does so below under the heading 'Orthodox Anthropology in Maximus Confessor's Mystagogia.' More immediately, I wish to discuss a basic type of assumption shared by artificial intelligence and the occult.

The Optimality Assumption

One commonality that much of magic and science share is that broad visions often include the assumption that what they don't understand must be simple, and be easy to modify or improve. Midgley discusses Bernal's exceedingly optimistic hope for society to transform itself into a simplistically conceived scientific Utopia (if perhaps lacking most of what we value in human society);[19] I will discuss later, under various headings, how society simply works better in Thomas More's and B.F. Skinner's Utopias if only it is re-engineered according to their simple models.[20] Aren't Utopian visions satires, not prescriptions? I would argue that the satire itself has a strong prescriptive element, even if it's not literal. The connection between Utopia and AI is that the same sort of thinking feeds into what, exactly, is needed to duplicate a human mind. For instance, let us examine a sample of dialogue which Turing imagined going on in a Turing test:

Q: Please write me a sonnet on the subject of the Forth Bridge.

A: Count me out on this one. I never could write poetry.

Q: Add 34957 to 70764.

A: (Pause about 30 seconds and then give as answer) 105621.

Q: Do you play chess?

A: Yes.

Q: I have K at my K1, and no other pieces. You have only K at K6 and R at R1. It is your move. What do you play?

A: (After a pause of 15 seconds) R-R8 mate.[21]

Turing seems to assume that if you duplicate his favoured tasks of arithmetic and chess, the task of understanding natural language comes along, more or less for free. The subsequent history of artificial intelligence has not been kind to this assumption. Setting aside the fact that most people do not strike up a conversation by strangely requesting the other person to solve a chess problem and add five-digit numbers, Turing is showing an occult way of thinking by assuming there's nothing really obscure, or deep, about the human person, and that the range of cognitive tasks needed to do AI is the range of tasks that immediately present themselves to him. This optimism may be damped by subsequent setbacks which the artificial intelligence movement has experienced, but it's still present. It's hard to see an artificial intelligence researcher saying, 'The obvious problem looks hard to solve, but there are probably hidden problems which are much harder,' let alone consider whether human thought might be non-computational.

Given the difficulties they acknowledge, artificial intelligence researchers seem to assume that the problem is as easy as possible to solve. As I will discuss later, this kind of assumption has profound occult resonance. I will call this assumption the optimality assumption: with allowances and caveats, the optimality assumption states that artificial intelligence is an optimally easy problem to solve. This doesn't mean an optimally easy problem to solve given the easiest possible world, but rather, taking into account the difficulties and nuances recognised by the practitioner, the problem is then assumed to be optimally easy, and then it could be said that we live in the (believable) possible world where artificial intelligence would be easiest to implement. Anything that doesn't work like a computer is assumed to be easy, or a matter of unnecessary packaging. There are variations on the theme of begging the question. One basic strategy of ensuring that computers can reach the bar of human intelligence is to lower the bar until it is already met. Another strategy is to try to duplicate human intelligence on computer-like tasks. Remember the Turing test which Turing imagined, which seemed to recognise only the cognitive tasks of writing a poem, doing arithmetic, and solving a chess problem: Turing apparently assumed that natural language understanding would come along for free by the time computers could do both arithmetic and chess. Now we have computer calculators and chess players that can beat humans, whilst natural language understanding tasks which are simple to humans represent an unscaled Everest to artificial intelligence.

We have a situation very much like the attempt to make a robot that can imitate human locomotion—if the attempt is tested by having a robot race a human athlete on a racetrack ergonomically designed for robots. Chess is about as computer-like a human skill as one could find.

Turing's script for an imagined Turing test is one manifestation of a tendency to assume that the problem is optimally easy: the optimality assumption. Furthermore, Turing sees only the three tasks of composing a sonnet, adding two numbers, and making a move in chess. But in fact this leaves out a task of almost insurmountable difficulty for AI: understanding and appropriately acting on natural language requests. This is part of human rationality that cannot simply be assumed to come with a computer's logicality.

Four decades after Turing imagined the above dialogue, Kurt VanLehn describes a study of problem solving that used a standard story problem.[22] The ensuing discussion is telling. Two subjects' interpretations are treated as problems to be resolved, apparently chosen for their departure from how a human 'should' think about these things. One is a nine year old girl, Cathy: '...It is apparent from [her] protocol that Cathy solves this problem by imagining the physical situation and the actions taken in it, as opposed to, say, converting the puzzle to a directed graph then finding a traversal of the graph.' The purpose of the experiment was to understand how humans solve problems, but it was approached with a tunnel vision that gave a classic kind of computer science 'graph theory' problem, wrapped up in words, and treated any other interpretation of those words as an interesting abnormality. It seems that it is not the theory's duty to approach the subject matter, but the subject matter's duty to approach the theory—a signature trait of occult projects. Is this merely VanLehn's tunnel vision? He goes on to describe the state of cognitive science itself:

For instance, one can ask a subject to draw a pretty picture... [such] Problems whose understanding is not readily represented as a problem space are called ill-defined. Sketching pretty pictures is an example of an ill-defined problem... There have only been a few studies of ill-defined problem solving.[23]

Foerst summarises a tradition of feminist critique:[24] AI was started by men who chose a particular kind of abstract task as the hallmark of intelligence; women might value disembodied abstraction less and might choose something like social skills. The critique may be pushed one step further than that: beyond any claim that AI researchers, when looking for a basis for computer intelligence, tacitly crystallised intelligence out of men's activities rather than women's, it seems that their minds were so steeped in mathematics and computers that they crystallised intelligence out of human performance more in computer-like activities than anything essentially human, even in a masculine way. Turing didn't talk about making artificial car mechanics or deer hunters any more than he had plans for artificial hostesses or childminders.

Harman's 1989 account of functionalism, for instance, provides a more polished-looking version of an optimality assumption: 'According to functionalism, it does not matter what mental states and processes are made of any more than it matters what a carburetor or heart or a chess king is made of.' (832). Another suggestion may be made, not as an axiom but as an answer to the question, 'How else could it be?' This other suggestion might be called the tip of the iceberg conception.

A 'tip of the iceberg' conception might reply, 'Suppose for the sake of argument that it doesn't matter what an iceberg is made of, so long as it sticks up above the surface and is hard enough to sink a ship. The task is then to make an artificial iceberg. One can hire engineers to construct a hard shell to function as a surrogate iceberg. What has been left out is that these properties of something observable from the surface rest on something that lies much, much deeper than the surface. (A mere scrape with an iceberg sunk the Titanic, not only because the iceberg was hard, but because it had an iceberg's monumental inertia behind that hardness.) One can't make a functional tip of the iceberg that way, because a functional tip of an iceberg requires a functional iceberg, and we have very little idea of how to duplicate those parts of an iceberg that aren't visible from a ship. You are merely assuming that one can try hard enough to duplicate what you can see from a ship, and if you duplicate those observables, everything else will follow.' This is not a fatal objection, but it is intended to suggest what the truth could be besides the repeated assumption that intelligence is as easy as possible to duplicate in a computer. Here again is the optimality assumption, and it is a specific example of a broader optimality assumption which will appear in occult sources discussed under the 'Renaissance and Early Modern Magic' heading below. The 'tip of the iceberg' conception is notoriously absent in occult and artificial intelligence sources alike. In occult sources, the endeavour is to create a magically sharp sword that will slice all of the Gordian knots of society's problems; in artificial intelligence the Gordian knots are not societal problems but obstacles to creating a thinking machine, and researchers may only be attempting to use razor blades to cut tangled shoelaces, but researchers are still trying to get as close to that magic sword as they believe possible.

Just Around the Corner Since 1950

The artificial intelligence movement has a number of reasonably stable features, including an abiding sense of 'Today's discoveries are a real breakthrough; artificial minds are just around the corner.' This mood may even be older than digital computers; Dreyfus writes,

In the period between the invention of the telephone relay and its apotheosis in the digital computer, the brain, always understood in terms of the latest technological inventions, was understood as a large telephone switchboard, or more recently, as an electronic computer.[25]

The discoveries and the details of the claim may change, and experience has battered some of strong AI's optimism, but in pioneers and today's embodied AI advocates alike there is a similar mood: 'What we've developed now is effacing the boundary between machine and human.' This mood is quite stable. There is a striking similarity between the statements,

These emotions [discomfort and shock at something so human-like] might arise because in our interactions with Cog, little distinguishes us from the robot, and the differences between a machine and its human counterparts fade.[26]

and:

The reader must accept it as a fact that digital computers can be constructed, and indeed have been constructed, according to the principles we have described, and that they can in fact mimic the actions of a human computer very closely.[27]

What is interesting here is that the second was made by Turing in 1950, and the first by Foerst in 1998. As regards Turing, no one now believes 1950 computers could perform any but the most menial of mathematicians' tasks, and some of Cog's weaknesses have been discussed above ("Cog... cannot actually do very much. Even its insect-like computer forebears do not seem to have had the intelligence of insects..."). The more artificial intelligence changes, the more it seems to stay the same. The overall impression one receives is that for all the surface progress of the artificial intelligence movement, the underlying philosophy and spirit remain the same—and part of this underlying spirit is the conviction, 'We're making real breakthroughs now, and full artificial intelligence is just around the corner.' This self-deception is sustained in classically magical fashion. Artificial intelligence's self-presentation exudes novelty, a sense that today's breakthroughs are decisive—whilst its actual rate of change is much slower. The 'it's just around the corner' rhetoric is a longstanding feature. For all the changes in processor power and greater consistency in a materialist doctrine of mind, there are salient features which seem to repeat in 1950's and today's cognitive science. In both, the strategy to ensure that computers could jump the bar of human intelligence was to lower the bar until it had already been jumped.

The Ghost in the Machine

It has been suggested in connection with Polanyi's understanding of tacit knowledge that behaviourists did not teach, 'There is no soul.' Rather, they drew students into a mode of enquiry where the possibility of a soul is never considered.

Modern psychology takes completely for granted that behavior and neural function are perfectly correlated, that one is completely caused by the other. There is no separate soul or lifeforce to stick a finger into the brain now and then and make neural cells do what they would not otherwise. Actually, of course, this is a working assumption only....It is quite conceivable that someday the assumption will have to be rejected. But it is important also to see that we have not reached that day yet: the working assumption is a necessary one and there is no real evidence opposed to it. Our failure to solve a problem so far does not make it insoluble. One cannot logically be a determinist in physics and biology, and a mystic in psychology.[28]

This is a balder and more provocative way of stating what writers like Turing lead the reader to never think of questioning. The assumption is that the soul, if there is one, is by nature external and separate from the body, so that any interaction between the two is a violation of the body's usual way of functioning. Thus what is denied is a 'separate soul or lifeforce to stick a finger into the brain now and then and make neural cells do what they would not do otherwise.' The Orthodox and others' doctrine of unified personhood is very different from an affirmation of a ghost in the machine. To affirm a ghost in the machine is to assume the soul's basic externality to the body: the basic inability of a soul to interact with a body creates the problem of the ghost in the machine. By the time one attempts to solve the problem of the ghost in the machine, one is already outside of an Orthodox doctrine of personhood in which spirit, soul, and body are united and the whole unit is not an atom.

The objective here is not mainly to criticise AI, but to see what can be learned: AI seems to fail in a way that is characteristic. It does not fail because of insufficient funding or lack of technical progress, but on another plane: it is built on an erroneous quasi-theological anthropology, and its failures may suggest something about being human. The main goal is to answer the question, 'How else could it be?' in a way that is missed by critics working in materialist confines.

What can we say in summary?

First, artificial intelligence work may be divided into un-pretentious and pretentious AI. Un-pretentious AI makes tools that no one presents as anything more than tools. Pretentious AI is presented as more human than is properly warranted.

Second, there are stable features to the artificial intelligence movement, including a claim of, 'We have something essentially human. With today's discoveries, full artificial intelligence is just around the corner.' The exact form of this assertion may change, but the basic claim does not.

Third, artificial intelligence research posits a multifarious 'optimality assumption,' namely that, given the caveats recognised by the researcher, artificial intelligence is an optimally easy problem to solve. The human mind is assumed to be the sort of thing that is optimally easy to re-create on a computer.

Fourth, artificial intelligence comes from the same kind of thinking as the ghost in the machine problem.

There is more going on in the artificial intelligence project than an attempt to produce scientific results. The persistent rhetoric of 'it's just around the corner' is not because artificial intelligence scientists have held that sober judgment since the project began, but because there's something else going on. For reasons that I hope will become clearer in the next section, this is beginning to look like an occult project—a secularised occult project, perhaps, but 'secularised occult' is not an empty term: you do not take all of the occult away if you take away spellbooks. There is much more to the occult than crystal balls, and a good deal of this 'much more' is at play even if artificial intelligence doesn't do things the Skeptical Enquirer would frown on.

Occult Foundations of Modern Science

With acknowledgment of the relevance of the Reformation, the wake of Aristotelianism, and the via moderna of nominalism,[29] I will be looking at a surprising candidate for discussion on this topic: magic. Magic was a large part of what shaped modernity, a much larger factor than one would expect from modernity's own self-portrayal, and it has been neglected for reasons other than the disinterested pursuit of truth. It is more attractive to our culture to say that our science exists in the wake of Renaissance learning or brave Reformers than to say that science has roots in what it decries as superstition. For reasons that I will discuss below under the next heading, I suggest that what we now classify as the artificial intelligence movement is a further development of some of magic's major features.

There is a major qualitative shift between Newton's development of physics being considered by some to be a diversion from his alchemical and other occult endeavours, and 'spooky' topics today being taboo for scientific research. Yet it is still incomplete to enter a serious philosophical discussion of science without understanding the occult, just as it is incomplete to enter a serious discussion of Christianity without understanding Judaism. Lewis points out that the popular understanding of modern science displacing the magic of the middle ages is at least misleading; there was very little magic in the middle ages, and then science and magic flourished at the same time, for the same reason, often in the same people; the reason science became stronger than magic is purely Darwinian: it worked better.[30] One may say that medieval religion is the matrix from which Renaissance magic departed, and early modern magic is the matrix from which science departed.

What is the relationship between the mediaeval West and patristic Christianity? In this context, the practical difference is not yet a great one. The essential difference is that certain seeds have been sown—such as nominalism and the rediscovered Aristotelianism—which in the mediaeval West would grow into something significant, but had not in much of any practical sense affected the fabric of society. People still believed that the heavens told the glory of God; people lived a life oriented towards contemplation rather than consumption; monasteries and saints were assumed so strongly that they were present even—especially—as they retreated from society. Certain seeds had been sown in the mediaeval West, but they had not grown to any significant stature. For this discussion, I will treat mediaeval and patristic Christianity as more alike than different.

Renaissance and Early Modern Magic

Magic in this context is much more than a means of casting spells or otherwise manipulating supernatural powers to obtain results. That practice is the token of an entire worldview and enterprise, something that defines life's meaning and what one ought to seek. To illustrate this, I will look at some details of work by a characteristic figure, Leibniz. Then I will look at the distinctive way the Renaissance magus related to the world and the legacy this relationship has today. Alongside this I will look at a shift from understanding this life as a contemplative apprenticeship to Heaven, to understanding this life as something for us to make more pleasurable.

Leibniz, a 17th century mathematician and scientist who co-discovered calculus, appears to have been more than conversant with the occult memory tradition,[31] and calculus, as he understood it, was not, as it is today, a tool used by engineers to calculate volumes. Rather, it was part of an entire Utopian vision, which could encompass all knowledge and all thoughts, an apparently transcendent tool that would obviate the need for philosophical disagreements:

If we had this [calculus], there would be no more reason for disputes between philosophers than between accountants. It would be enough for them to take their quills and say, 'Let us calculate!'

Leibniz's 1690 Ars Combinatoria contains some material that is immediately accessible to a modern mathematician. It also contains material that is less accessible. Much of the second chapter (9-48) discusses combinations of the letters U, P, J, S, A, and N; these letters are tied to concepts ranging from philosophy to theology, jurisprudence and mathematics: another table links philosophical concepts with numbers (42-3). The apparent goal was to validly manipulate concepts through mechanical manipulations of words, but I was unable to readily tell what (mathematico-logical?) principle was supposed to make this work. (The principle is, at any rate, unfamiliar to me.) This may reflect the influence of Ramon Lull, thirteenth-century magician and doctor of the Catholic Church, who adapted a baptised Kaballah which involved manipulating combinations of (Latin) letters. Leibniz makes repeated reference to Lull (28, 31, 34, 46), and specifically mentions his occult ars magna (28). Like Lull, Leibniz is interested in the occult, and seeks to pioneer some new tool that will obviate the need for this world's troubles. He was an important figure in the creation of science, and his notation is still used for calculus today. Leibniz is not trying to be just another member of society, or to contribute to society's good the way members have always contributed to society's good: he stands above it, and his intended contribution is to reorder the fabric of society according to his endowed vision. Leibniz provides a characteristic glimpse of how early modern magic has left a lasting imprint.

If the person one should be in Orthodoxy is the member of Church and society, the figure in magic is the magus, a singular character who stands outside of the fabric of society and seeks to transform it. What is the difference? The member of the faithful is an integrated part of society, and lives in submission and organic connection to it. The magus, by contrast, stands above society, superior to it, having a relation to society as one whose right and perhaps duty is to tear apart and reconstruct society along better lines. We have a difference between humility and pride, between relating to society as to one's mother and treating society as raw material for one to transform. The magus is cut off from the common herd by two closely related endowments: a magic sword to cut through society's Gordian knots, and a messianic fantasy.[32] In Leibniz's case the magic sword is an artificial language which will make philosophical disagreements simply obsolete. For the artificial intelligence movement, the magic sword is artificial intelligence itself. The exact character of the sword, knot, and fantasy may differ, but their presence does not.

The character of the Renaissance magus may be seen as hinging on despair with the natural world. This mood seems to be woven into the Hermetic texts that were held in such esteem in the Renaissance, and which are invoked at the opening of the Oration on the Dignity of Man by the pre-eminent Renaissance neo-Platonist Pico della Mirandola.[33] If there is good to be had, it is not met in the mundane world of the hoi polloi. It must be very different from their reality, something hidden that is only accessible to an elite. The sense in which this spells out an interest in the occult means far more than carrying around a rabbit's foot. The specific supernatural contact was valued because the occult was far hidden from appearances and the unwashed masses. (The Christian claim that one can simply pray to God and be heard is thus profoundly uninteresting. Supernatural as it may be, it is ordinary, humble, and accessible in a way that the magus is trying to push past.) This desire for what is hidden or very different from the ordinary means that the ideal future must be very different from the present. Therefore Thomas More, Renaissance author, canonised saint, and strong devotee of Mirandola's writing, himself writes Utopia. In this work, the philosophic sailor Raphael establishes his own reason as judge over the appropriateness of executing thieves,[34] and describes a Utopia where society simply works better: there seem to be no unpleasant surprises or unintended consequences.[35] There is little sense of a complex inner logic to society that needs to be respected, or any kind of authority to submit to. Indeed, Raphael abhors authority and responds to the suggestion that he attach himself to a king's court by saying, 'Happier! Is that to follow a path that my soul abhors?' This Utopian vision, even if it is from a canonised Roman saint, captures something deep of the occult currents that would later feed into the development of political ideology. The content of an occult vision for constructing a better tomorrow may vary, but it is a vision that seeks to tear up the world as we now know it and reconstruct it along different lines.

Magic and science alike relate to what they are interested in via an I-It rather than an I-Thou relationship. Relating to society as to one's mother is an I-Thou relationship; treating society as raw material is an I-It relationship. An I-Thou relationship is receptive to quality. It can gain wisdom and insight. It can connect out of the whole person. The particular kind of I-It relationship that undergirds science has a powerful and narrow tool that deals in what can be mathematically represented. The difference between those two is misunderstood if one stops after saying, 'I-It can make technology available much better than I-Thou.' That is how things look through I-It eyes. But I-Thou allows a quality of relationship that does not exist with I-It. 'The fundamental word I-Thou can only be spoken with one's whole being. The fundamental word I-It can never be spoken with one's whole being.' I-Thou allows a quality-rich relationship that always has another layer of meaning. In the Romance languages there are two different words for knowledge: in French, connaissance and savoir. They both mean 'knowledge,' but in different ways: savoir is knowledge of fact (or know-how); one can savoir que ('know that') something is true. Connaissance is the kind of knowledge of a person, a 'knowledge of' rather than a 'knowledge that' or 'knowledge how.' It can never be a complete knowledge, and one cannot connaître que ('know-of that') something is true. It is personal in character. An I-It relationship is not just true of magic; as I will discuss below under the heading of 'Science, Psychology, and Behaviourism,' psychology seeks a baseline savoir of people where it might seek a connaissance, and its theories are meant to be abstracted from relationships with specific people. Like magic, the powers that are based on science are epiphenomenal to the relationship science is based on. Relating in an I-Thou rather than I-It fashion is not simply less like magic and science; it is richer, fuller, and more human.

In the patristic and medieval eras, the goal of living had been contemplation and the goal of moral instruction was to conform people to reality. Now there was a shift from conforming people to reality, towards conforming reality to people.[36] This set the stage, centuries later, for a major and resource-intensive effort to create an artificial mind, a goal that would not have fit well with a society oriented to contemplation. This is not to say that there is no faith today, nor that there was no technology in the middle ages, nor that there has been no shift between the early modern period and today. Rather, it is to say that a basic trajectory was established in magic that significantly shapes science today.

The difference between the Renaissance magus and the mediaeval member of the Church casts a significant shadow today. The scientist seems to live more in the shadow of the Renaissance magus than of the member of mediaeval society. This is not to say that scientists cannot be humble and moral, nor that they cannot hold wonder at what they study. But it is to say that there are a number of points of contact between the Renaissance magus's way of relating to the world and that of a scientist and those who live in science's shadow. Governments today consult social scientists before making policy decisions: the relationship seems to be that of working out how best to deal with raw material, rather than relating to society as to one's mother. We have more than a hint of secularised magic, in which substantial fragments of Renaissance and early modern magic have long outlived some magical practices.

Under the patristic and medieval conception, this life was an apprenticeship to the life in Heaven, the beginning of an eternal glory contemplating God. Magic retained a sense of supernatural reality and a larger world, but its goal was to improve this life, understood as largely self-contained and not as the beginning of the next. That was the new chief end of humanity. That shift is a shift towards the secular, magical as its beginning may be. Magic contains the seeds of its own secularisation, in other words of its becoming scientific. The shift from contemplation of the next world to power in this world is why the occult was associated with all sorts of Utopian visions to transform the world, a legacy reflected in our political ideologies. One of the tools developed in that magical milieu was science: a tool that, for Darwinian reasons, was to eclipse all the rest. The real magic that has emerged is science.

Science, Psychology, and Behaviourism

What is the niche science has carved out for itself? I'd like to look at an academic discipline that is working hard to be a science, psychology. I will more specifically look at behaviourism, as symptomatic within the history of psychology. Is it fair to look at behaviourism, which psychology itself rejected? It seems that behaviourism offers a valuable case study by demonstrating what is more subtly present elsewhere in psychology. Behaviourism makes some basic observations about reward and punishment and people repeating behaviours, and portrays this as a comprehensive psychological theory: behaviourism does not acknowledge beliefs, for instance. Nonetheless, I suggest that behaviourism is a conceivable development in modern psychology which would have been impossible in other settings. Behaviourism may be unusual in the extreme simplicity of its vision and its refusal to recognise internal states, but not in desiring a Newton who will make psychology a full-fledged science and let psychology know its material with the same kind of knowing as physics has for its material.

Newton and his kin provided a completely de-anthropomorphised account of natural phenomena, and behaviourism provided a de-anthropomorphised account of humans. In leading behaviourist B.F. Skinner's Walden Two (1948), we have a Utopian vision where every part of society seems to work better: artists raised under Skinner's conditioning produce work which is 'extraordinarily good,' the women are more beautiful,[37] and Skinner's alter ego expresses the hope of controlling the weather,[38] and compares himself with God.[39] Skinner seems to resemble a Renaissance magus more than a mediaeval member: society is raw material for him to transform. Skinner is, in a real sense, a Renaissance magus whose magic has become secularised. Quite a lot of the magus survives the secularisation of Skinner's magic.

Even without these more grandiose aspirations, psychology is symptomatic of something that is difficult to discern by looking at the hard sciences. Psychological experiments try to find ways in which the human person responds in terms comparable to a physics experiment—and by nature do not relate to their subjects as human agents. These experiments study one aspect of human personhood, good literature another, and literature offers a different kind of knowing from a psychological experiment. If we assume that psychology is the best way to understand people—and that the mind is a mechanism-driven thing—then the assumed burden of proof falls on anyone saying, 'But a human mind isn't the sort of thing you can duplicate on a computer.' The cultural place of science constitutes a powerful influence on how people conceive the question of artificial intelligence.

Behaviourism offers a very simple and very sharp magic sword to cut the Gordian knot of unscientific teleology, a knot that will be discussed under the heading of 'Intentionality and Teleology' below. It removes suspicion of the reason being attached to a spiritual intellect by refusing to acknowledge reason. It removes the suspicion of emotions having a spiritual dimension by refusing to acknowledge emotions. Skinner denies enough of the human person that even psychologists who share those goals would want to distance themselves from him. And yet Skinner does more than entertain messianic fantasies: Walden Two is a Utopia, and when Skinner's alter ego compares himself with God, God ends up second best.[40] I suggest that this is not a contradiction at all, or more properly it is a blatant contradiction as far as common sense is concerned, but as far as human phenomena go, we have two sides of the same coin. The magic sword and the messianic fantasy belong to one and the same magus.

There is in fact an intermediate step between the full-fledged magus and the mortal herd. One can be a magician's assistant, clearing away debris and performing menial tasks to support the real magi.[41] The proportion of the Western population who are scientists is enormous compared to what it was at science's founding, and the vast majority of the increase is in magician's assistants. If one meets a scientist at a social gathering, the scientist is in all probability not a full-fledged magus, but a magician's assistant, set midway between the magus and the commoner. The common scientist is below the magus in knowledge of science but well above most commoners. In place of a personal messianic fantasy is a more communal tendency to assume that the scientific enterprise is our best hope for the betterment of society. (Commoners may share this belief.) There is a significant difference between the magus and most assistants today. Nonetheless, the figure of the magus is alive today—secularised, in most cases, but alive and well. Paul Johnson's Augustinian account of Intellectuals includes such eminent twentieth century scientific figures as Bertrand Russell, Noam Chomsky, and Albert Einstein;[42] the figures one encounters in his pages are steeped in the relationship to society as to raw material instead of as to one's mother, the magic sword, and the messianic fantasy.

I-Thou and Humanness

I suggest that the most interesting critiques of artificial intelligence are not obtained by looking through I-It eyes in another direction, but by using other eyes to begin with, looking through I-Thou eyes. Let us consider Turing's 'Arguments from Various Disabilities'.[43] Perhaps the people who furnished Turing with these objections were speaking out of something deeper than they could explain:

Be kind, resourceful, beautiful, friendly, have initiative, have a sense of humour, tell right from wrong, make mistakes, fall in love, enjoy strawberries and cream, make some one fall in love with it, learn from experience, use words properly, be the subject of its own thought, have as much diversity of behaviour as a man, do something really new.

Be kind:
Kindness is listed by Paul among the fruit of the Spirit (Gal. 5:22), in other words, an outflow of a person living in the Spirit. Disregarding the question of whether all kindness is the fruit of the Spirit, in humans kindness is not merely following rules, but the outflow of a concern for the other person. Even counterfeit kindness is a counterfeit from someone who knows the genuine article. It thus uses some faculty of humanity other than the reasoning ability, which classical AI tries to duplicate and which is assumed to be the one thing necessary to duplicate human cognition.

Be resourceful:
The artificial intelligence assumption is that if something is non-deterministic, it is random, because deterministic and pseudo-random are the only options one can use in programming a computer. This leaves out a third possibility, that by non-computational faculties someone may think, not merely 'outside the box,' in a random direction, but above it. The creative spark comes neither from continuing a systematic approach, nor from simply picking something random ('because I can't get my computer to turn on, I'll pour coffee on it and see if that helps'), but from something that we don't know how to give a computer.
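
As a minimal illustration of the two options actually available to a programmer (this sketch in Python is only my own illustration of the point, not anything drawn from Turing or the AI literature):

    import random

    options = ["keep following the manual", "reboot", "check the cable", "pour coffee on it"]

    def deterministic_choice(step):
        # A fixed rule: the same input always yields the same output.
        return options[step % len(options)]

    def pseudo_random_choice(seed):
        # 'Random' only in appearance: the same seed always yields the same output.
        return random.Random(seed).choice(options)

    print(deterministic_choice(0))   # systematic
    print(pseudo_random_choice(42))  # arbitrary, but still mechanical

Both functions are mechanical; neither models the third possibility pointed to above, a creative act that is neither the continuation of a rule nor an arbitrary draw.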

Be beautiful:
Beauty is a spiritual quality that is not perceived by scientific enquiry and, given our time's interpretation of scientific enquiry, is in principle not recognised. Why not? If we push materialist assumptions to the extreme, it is almost a category error to look at a woman and say, 'She is beautiful.' What is really being said—if one is not making a category error—is, 'I have certain emotions when I look at her.' Even if there is not a connection between physical beauty and intelligence, there seems to be some peasant shrewdness involved. It is a genuine, if misapplied, appeal to look at something that has been overlooked.

Be friendly:
True as opposed to counterfeit friendliness is a manifestation of love, which has its home in the will, especially if the will is not understood as a quasi-muscular power of domination, but as part of the spirit which lets us turn towards another in love.

Remarks could easily be multiplied. What is meant to come through all this is that science is not magic, but science works in magic's wake. Among relevant features may be mentioned relating as a magus would (in many ways distilling an I-It relationship further), and seeking power over the world in this life rather than living an apprenticeship to the next.

Orthodox Anthropology in Maximus Confessor's Mystagogia

I will begin detailed enquiry in the Greek Fathers by considering an author who is foundational to Eastern Orthodoxy, the seventh century Greek Father Maximus Confessor. Out of the existing body of literature, I will focus on one work, his Mystagogia,[44] with some reference to the Capita Gnosticae. Maximus Confessor is a synthetic thinker, and the Mystagogia is an anthropological work; its discussion of Church mystagogy is dense in theological anthropology as the training for a medical doctor is dense in human biology.

Orthodox Christians have a different cosmology from the Protestant division of nature, sin, and grace. Nature is never un-graced, and the grace that restores from sin is the same grace that provides continued existence and that created nature in the first place. That is to say, grace flows from God's generosity, and is never alien to nature. The one God inhabits the whole creation: granted, in a more special and concentrated way in a person than in a rock, but the same God is really present in both.

Already, without having seriously engaged theological anthropology, we have differences with how AI looks at things. Not only are the answers different, but the questions themselves are posed in a different way. 'Cold matter,' such as is assumed by scientific materialism, doesn't exist, not because matter is denied in Berkeleyan fashion but because it is part of a spiritual cosmology and affirmed to be something more. It is mistaken to think of cold matter, just as it is mistaken to think of tepid fire. Even matter has spiritual attributes and is graced. Everything that exists, from God and the spiritual creation to the material creation, from seraphim to stone, is the sort of thing one connects to in an I-Thou relationship. An I-It relationship is out of place, and from this perspective magic and science look almost the same, different signposts in the process of establishing a progressively purer I-It relationship.

Intellect and Reason

Maximus' anthropology is threefold: the person is divided into soul and body, and the soul itself is divided into a higher part, the intellect, and a lower part, the reason:[45]

[Pseudo-Dionysius] used to teach that the whole person is a synthesis of soul and body joined together, and furthermore the soul itself can be examined by reason. (The person is an image which reflects teaching about the Holy Church.) Thus he said that the soul had an intellectual and living faculty that were essentially united, and described the moving, intellectual, authoritative power—with the living part described according to its will-less nature. And again, the whole mind deals with intelligible things, with the intelligible power being called intellect, whilst the sensible power is called reason.

This passage shows a one-word translation difficulty which is symptomatic of a difference between his theology and the quasi-theological assumptions of the artificial intelligence project. The word in question, which I have rendered as 'authoritative power,' is 'exousiastikws,' with root word 'exousia.' The root and its associated forms could be misconstrued today as having a double meaning of 'power' and 'authority,' with 'authority' as the basic sense. In both classical and patristic usage, it seems debatable whether 'exousia' is tied to any concept of power divorced from authority. In particular this passage's 'exousiastikws' is most immediately translated as power rather than any kind of authority that is separate from power. Yet Maximus Confessor's whole sense of power here is one that arises from a divine authorisation to know the truth. This sense of power is teleologically oriented and has intrinsic meaning. This is not to say that Maximus could only conceive of power in terms of authority. He repeatedly uses 'dunamis,' (proem.15-6, 26, 28, etc), a word for power without significant connotations of authority. However, he could conceive of power in terms of authority, and that is exactly what he does when describing the intellect's power.

What is the relationship between 'intellect'/'reason' and cognitive faculties? Which, if either, has cognitive faculties a computer can't duplicate? Here we run into another difficulty. It is hard to say that Maximus Confessor traded in cognitive faculties. For Maximus Confessor the core sense of 'cognitive faculties' is inadequate, as it is inadequate to define an eye as something that provides nerve impulses which the brain uses to generate other nerve impulses. What is missing from this picture? This definition does not provide any sense that the eye interacts with the external world, so that under normal circumstances its nerve impulses are sent because photons strike photoreceptors in an organ resembling a camera. Even this description hides most teleology and evaluative judgment. It does not say that an eye is an organ for perceiving the external world through an image reconstructed in the brain, and may be called 'good' if it sees clearly and 'bad' if it doesn't. This may be used as a point of departure to comment on Maximus Confessor and the conception of cognitive faculties.

Maximus Confessor does not, in an amoral or self-contained fashion, see faculties that operate on mental representations. He sees an intellect that is where one meets God, and where one encounters a Truth that is no more private than the world one sees with the eye is private.

Intellect and reason compete with today's cognitive faculties, but Maximus Confessor understands the intellect in particular as something fundamentally moral, spiritual, and connected to spiritual realities. His conception of morality is itself different from today's private choice of ethical code; morality had more public and more encompassing boundaries, and included such things as Jesus' admonition not to take the place of highest honour so as not to receive public humiliation (Luke 14:7-10): it embraced practical advice for social conduct, because the moral and spiritual were not separated from the practical. It is difficult to imagine Maximus Confessor conceiving of practicality as hampered by morality. In Maximus Confessor's day what we separate into cognitive, moral, spiritual, and practical domains were woven into a seamless tapestry.

Intellect, Principles, and Cosmology

Chapter twenty-three opens by emphasising that contemplation is more than looking at appearances (23.1-10), and discusses the Principles of things. The concept of a Principle is important to his cosmology. There is a foundational difference between the assumed cosmologies of artificial intelligence and Maximus Confessor. Maximus Confessor's cosmology is not the artificial intelligence cosmology with a spiritual dimension added, as a living organism is not a machine modified to use foodstuffs as fuel.

Why do I speak of the 'artificial intelligence cosmology'? Surely one can have a long debate about artificial intelligence without adding cosmology to the discussion. This is true, but it is true because cosmology has become invisible, part of the assumed backdrop of discussion. In America, one cultural assumption is that 'culture' and 'customs' are for far-off and exotic people, not for 'us'—'we' are just being human. It doesn't occur to most Americans to think of eating turkey on Thanksgiving Day or removing one's hat inside a building as customs, because 'custom' is a concept that only applies to exotic people. I suggest that Maximus Confessor has an interesting cosmology, not because he's exotic, but because he's human.

Artificial intelligence proponents and (most) critics do not differ on cosmology, but that is because it is an important assumption which is not questioned even by most people who deny the possibility of artificial intelligence. Searle may disagree with Fodor about what is implied by a materialist cosmology, but not about whether one should accept materialism. I suggest that some artificial intelligence critics miss the most interesting critiques of artificial intelligence because they share that project's cosmology. If AI is based on a cosmological error, then no amount of fine-tuning within the system will rectify the error. We need to consider cosmology if we are to have any hope of correcting an error that basic. (Bad metaphysics does not create good physics.) I will describe Maximus Confessor's cosmology in this section, not because he has cosmology and AI doesn't, but because his cosmology seems to suggest a correction to the artificial intelligence cosmology.

At the base of Maximus's cosmology is God. God holds the Principles in his heart, and they share something of his reality. Concrete beings (including us) are created through the Principles, and we share something of their reality and of God. The Principles are a more concrete realisation of God, and we are a more concrete realisation of the Principles. Thought (nohsis) means beholding God and the Principles (logoi) through the eye of the intellect. Thinking of a tree means connecting with something that is more tree-like than the tree itself.

It may be easier to see how important the Principles are in Maximus Confessor's cosmology if we see how they are being dismantled today. Without saying that Church Fathers simply grafted in Platonism, I believe it safe to say that Plato's thought resembles some of Church doctrine, and at any rate Plato's one finger pointing up to God offers a closer approximation to Christianity than Aristotle's fingers pointing down. I would suggest further that looking at Plato can suggest how Christianity differs from Aristotelianism's materialistic tendencies, tendencies that are still unfolding today. Edelman describes the assumptions accompanying Darwin's evolution as the 'death blow' to essentialism, the doctrine that there are fixed kinds of things, as taught by Plato and other idealists.[46] Edelman seems not to appreciate why so many biologists assent to punctuated equilibrium.[47] However, if we assume that there is solid evidence establishing that all life gradually evolved from a common ancestor, then this remark is both apropos and perceptive.

When we look around, we see organisms that fit neatly into different classes: human, housefly, oak. Beginning philosophy students may find it quaint to hear of Plato's Ideas, and the Ideal horse that is copied in all physical horses, but we tend to assume Platonism at least in that horses are similar 'as if' there were an Ideal horse: we don't believe in the Ideal horse any more, but we still treat physical horses as if they were the Ideal horse's shadowy copies.

Darwin's theory of evolution suggests that all organisms are connected via slow, continuous change to a common ancestor and therefore to each other. If this is true, there are dire implications for Platonism. It is as if we had pictures of wet clay pottery, and posited a sharp divide between discrete classes of plates, cups, and bowls. Then someone showed a movie of a potter deforming one and the same clay from one shape to another, so that the divisions are now shown to be arbitrary. There are no discrete classes of vessels, just one lump of clay being shaped into different things. Here we are pushing a picture to the other end of a spectrum, further away from Platonism. It is a push from tacitly assuming there is a shadow, to expunging the remnant of belief in the horse and its shadow.

But this doesn't mean we're perfect Platonists, or can effortlessly appreciate the Platonic mindset. There are things we have to understand before we can travel in the other direction. If anything, there is more work involved. We act as if the Ideas' shadows are real things, but we don't genuinely believe in the shadows qua shadows, let alone the Ideas. We've simply inherited the habit of treating shadows as a convenient fiction. But Maximus Confessor believed the Principles (Ideas) represented something fuller and deeper than concrete things.

This is foundational to why Maximus Confessor would not have understood thought as manipulating mental representations in the inescapable privacy of one's mind. Contemplation is not a matter of closing one's eyes and fantasising, but of opening one's eyes and beholding something deeper and more real than reality itself. The sensible reason can perceive the external physical world through the senses, but this appears in a very different light from Kant's view.

Maximus Confessor offers a genuinely interesting suggestion that we know things not only because of our power-to-know, but because of their power-to-be-known, an approach that I will explore later under the heading 'Knowledge of the Immanent.' The world is not purely transcendent, but immanent. For Kant the mind is a box that is hermetically sealed on top but has a few frustratingly small holes on the bottom: the senses. Maximus Confessor doesn't view the senses very differently, but the top of the box is open.

This means that the intellect is most basically where one meets God. Its powerful ability to know truth is connected to this, and it connects with the Principles of things, as the senses connect with mere things. Is it fair to the senses to compare the intellect's connection with Principles with the senses' experience of physical things? The real question is not that, but whether it is fair to the intellect, and the answer is 'no.' The Principles are deeper, richer, and fuller than the mere visible things, as a horse is richer than its shadow. The knowledge we have through the intellect's connection with the Principles is of a deeper and richer sort than what is merely inferred from the senses.

The Intelligible and the Sensible

Maximus Confessor lists, and connects, several linked pairs, which I have incorporated into a schema below. The first column of this schema relates to the second column along lines just illustrated: the first member of each pair is transcendent and eminent to the second, but also immanent to it.

Head / Body
Heaven / earth (3.1-6)
holy of holies / sanctuary (2.8-9)
intelligible / sensible (7.5-10)
contemplative / active (5.8-9)
intellect / reason (5.9-10)
spiritual wisdom / practical wisdom (5.13-15)
knowledge / virtue (5.58)
unforgettable knowledge / faith (5.58-60)
truth / goodness (5.58-9)
archetype / image (5.79-80)
New Testament / Old Testament (6.4-6)
spiritual meaning of a text / literal meaning of a text (6.14-5)
bishop's seating on throne / bishop's entrance into Church (8.5-6, 20-21)
Christ's return in glory / Christ's first coming, glory veiled (8.6-7, 18)

Maximus Confessor's cosmology sees neither a disparate collection of unconnected things, nor an undistinguished monism that denies differences. Instead, he sees a unity that sees natures (1.16-17) in which God not only limits differences, as a circle limits its radii (1.62-67), but transcends all differences. Things may be distinguished, but they are not divided. This is key to understanding both doctrine and method. He identifies the world with a person, and connects the Church with the image of God. Doctrine and method are alike synthetic, which suggests that passages about his cosmology and ecclesiology illuminate anthropology.

One recurring theme shows in his treatment of heaven and earth, the soul and the body, the intelligible (spiritual) and the sensible (material). The intelligible both transcends the sensible, and is immanent to it, present in it. The intelligible is what can be apprehended by the part of us that meets God; the sensible is what presents itself to the world of senses. (The senses are not our only connection with the world.) This is a different way of thinking about matter and spirit from the Cartesian model, which gives rise to the ghost in the machine problem. Maximus Confessor's understanding of spirit and matter does not make much room for this dilemma. Matter and spirit interpenetrate. This is true not just in us but in the cosmos, which is itself 'human': he considers '...the three people: the cosmos (let us say), the Holy Scriptures, and this is true with us' (7.40-1). The attempt to connect spirit and matter might have struck him like an attempt to forge a link between fire and heat, two things already linked.

Knowledge of the Immanent

The word which I here render 'thought' is 'nohsis', cognate to 'intellect' ('nous') which has been discussed as that which is inseparably the home of thought and of meeting God. We already have a hint of a conceptual cast in which thought will be understood in terms of connection and contemplation.

In contrast to understanding thought as a process within a mind, Maximus describes thought in terms of a relationship: a thought can exist because there is a power to think in the one thinking, and a power to be thought of in what is thought of.[48] We could no more know an absolutely transcendent creature than we could know an absolutely transcendent Creator. Even imperfect thought exists because we are dealing with something that 'holds power to be apprehended by the intellect' (I.82). We say something is purple because its manifest purpleness meets our ability to perceive purple. What about the claim that purple is a mental experience arising from a certain wavelength of light striking our retinas? One answer that might be given is that those are the mechanisms by which purple is delivered, not the nature of what purple is.[49] The distinction is important.

We may ask, what about capacity for fantasy and errors? The first response I would suggest is cultural. The birth of modernity was a major shift, and its abstraction introduced new things into the Western mind, including much of what supports our concept of fantasy (in literature, etc.). The category of fantasy is a basic category to our mindset but not to the patristic or medieval mind. Therefore, instead of speculating how Maximus Confessor would have replied to these objections, we can point out that they aren't the sort of thing that he would ever think of, or perhaps even understand.

But in fact a more positive reply can be given. It can be said of good and evil that good is the only real substance. Evil is not its own substance, but a blemish in good substance. This parallels error. Error is not something fundamentally new, but a blurred or distorted form of truth. Fantasy does not represent another fundamentally independent, if hypothetical, reality; it is a funhouse mirror refracting this world. We do not have a representation that exists in one's mind alone, but a dual relationship that arises both from apprehending intellect and an immanent thing. The possibility of errors and speculation makes for a longer explanation but need not make us discard this basic picture.

Intentionality and Teleology

One of the basic differences in cosmology between Maximus Confessor and our own day relates to intentionality. As it is described in cognitive science's philosophy of mind, 'intentionality' refers to an 'about-ness' of human mental states, such as beliefs and emotions. The word 'tree' is about an object outside the mind, and even the word 'pegasus' evokes something that one could imagine existing outside of the mind, even if it does not. Intentionality does not exist in computer programs: a computer chess program manipulates symbols in an entirely self-enclosed system, so 'queen' cannot refer to any external person or carry the web of associations we assume. Intentionality presents a philosophical problem for artificial intelligence. Human mental states and symbol manipulation are about something, reaching out to the external world, whilst computer symbol manipulation is purely internal. A computer may manipulate symbols that are meaningful to humans using it, but the computer has no more sense of what a webpage means than a physical book has a sense that its pages contain good or bad writing. Intentionality is a special feature of living minds, and does not exist outside of them. Something significant will be achieved if ever a computer program first embodies intentionality outside of a living mind.
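
A minimal sketch may make the self-enclosure concrete; this is only an illustration (a toy fragment of my own, not any actual chess engine), showing that to the program 'queen' is an uninterpreted string shuffled between uninterpreted keys:

    # Toy fragment for illustration only: the names below are arbitrary strings
    # to the program; nothing in it refers to anything outside its own data.
    board = {"d1": "white queen", "e8": "black king"}

    def move(board, source, destination):
        # Pure symbol manipulation: a value is popped from one key and stored under another.
        board[destination] = board.pop(source)
        return board

    print(move(board, "d1", "d4"))   # {'e8': 'black king', 'd4': 'white queen'}

The manipulation is perfectly lawful and perfectly blind; whatever 'queen' means, it means it to us, not to the program.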

Maximus Confessor would likely have had difficulty understanding this perspective as he would have had difficulty understanding the problem of the ghost in the machine: this perspective makes intentionality a special exception as the ghost in the machine made our minds' interaction with our bodies a special exception, and to him both 'exceptions' are in fact the crowning jewel of something which permeates the cosmos.

The theory of evolution is symptomatic of a difference between the post-Enlightenment West and the patristic era. This theory is on analytic grounds not a true answer to the question, 'Why is there life as we know it?' because it does not address the question, 'Why is there life as we know it?' At best it is a true answer to the question, 'How is there life as we know it?' which people often fail to distinguish from the very different question, 'Why is there life as we know it?' The Enlightenment contributed to an effort to expunge all trace of teleology from causality, all trace of 'Why?' from 'How?' Of Aristotle's four causes, only the efficient cause[50] is familiar; a beginning philosophy student is liable to misconstrue Aristotle's final cause[51] as being an efficient cause whose effect curiously precedes the cause. The heavy teleological scent to final causation is liable to be missed at first by a student in the wake of reducing 'why' to 'how'; in Maximus Confessor, causation is not simply mechanical, but tells what purpose something serves, what it embodies, what meaning and relationships define it, and why it exists.

Strictly speaking, one should speak of 'scientific mechanisms' rather than 'scientific explanations.' Why? 'Scientific proof' is an oxymoron: science does not deal in positive proof any more than mathematics deals in experiment, so talk of 'scientific proof' ordinarily signals a speaker who has more faith in science than understanding of what science really does. 'Scientific explanation' is a less blatant contradiction in terms, but it reflects a misunderstanding, perhaps one that is more widespread, as it is often present among people who would never speak of 'scientific proof.' Talk of 'scientific explanation' is not simply careless speech; there needs to be a widespread category error before there is any reason to write a book like Mary Midgley's Science as Salvation (1992). Science is an enterprise which provides mechanisms and has been given the cultural place of providing explanations. This discrepancy has the effect that people searching for explanations turn to scientific mechanisms, and may not be receptive when a genuine explanation is provided, because 'explanation' to them means 'something like what science gives.' This may not be the only factor, but it casts a long shadow. The burden of proof is borne by anyone who would present a non-scientific explanation as being as real as a scientific explanation. An even heavier burden of proof falls on the person who would claim that a non-scientific explanation—not just as a social construction, but as a real claim about the external world—offers something that science does not.

The distinction between mechanism and explanation is also relevant because the ways in which artificial intelligence has failed may reflect mechanisms made to do the work of explanations. In other words, the question of 'What is the nature of a human?' is answered by, 'We are able to discern these mental mechanisms in a human.' If this is true, the failure to duplicate a human mind in computers may be connected to researchers answering the wrong question in the first place. These are different, as the question, 'What literary devices can you find in The Merchant of Venice[52]?' is different from 'Why is The Merchant of Venice powerful drama?' The devices aren't irrelevant, but neither are they the whole picture.

Of the once great and beautiful land of teleology, a land once brimming with explanations, all has been conquered, all has been levelled, all has been razed and transformed by the power of I-It. All except two stubborn, embattled holdouts. The first holdout is intentionality: if it is a category error to project things in the human mind onto the outer world, nonetheless we recognise that intentionality exists in the mind—but the about-ness of intentionality is far less than the about-ness once believed to fill the cosmos. The second and last holdout is evolution: if there is to be no mythic story of origins that gives shape and meaning to human existence, if there cannot be an answer to 'Why is there life as we know it?' because there is no reason at all for life, because housefly, horse, and human are alike the by-product of mindless forces that did not have us in mind, nonetheless there is still an emaciated spectre, an evolutionary mechanism that does just enough work to keep away a teleological approach to origins questions. The land of teleology has been razed, but there is a similarity between these two remnants, placeholders which are granted special permission to do what even the I-It approach recognises it cannot completely remove of teleology. That is the official picture, at least. Midgley is liable to pester us with counterexamples of a teleology that is far more persistent than the official picture gives credit for: she looks at evolution doing the work of a myth instead of a placeholder that keeps myths away, for instance.[53] Let's ignore her for the moment and stick with the official version. Then looking at both intentionality and evolution can be instructive in seeing what has happened to teleology, and appreciating what teleology was and could be. Now Midgley offers us reasons why it may not be productive to pretend we can excise teleology: the examples of teleology she discusses do not seem to be improved by being driven underground and presented as non-teleological.

Maximus's picture, as well as being teleological, is moral and spiritual. As well as having intentions, we are living manifestations of a teleological, moral and spiritual Intention in God's heart. Maximus Confessor held a cosmology, and therefore an anthropology, that did not see the world in terms of disconnected and meaningless things. He exhibited a number of traits that the Enlightenment stripped out: in particular, a pervasive teleology in both cosmology and anthropology. He believed in a threefold anthropology of intellect/spirit, reason/soul, and body, all intimately tied together. What cognitive science accounts for through cognitive faculties, manipulating mental representations, was accounted for quite differently by an intellect that sees God and the Principles of beings, and a reason that works with the truths apprehended by intellect. The differences between the respective cosmologies and anthropologies are not the differences between two alternate answers to the same question, but answers to two different questions, differently conceived. They are alike in that they can collide because they are wrestling with the same thing: where they disagree, at least one of them must be wrong. They are different in that they are looking at the same aspect of personhood from two different cultures, and Maximus Confessor seems to have enough distance to provide a genuinely interesting critique.

Conclusion

Maximus Confessor was a synthetic thinker, and I suggest that his writings, which are synthetic both in method and in doctrine, are valuable not only because he was brilliant but because synthetic enquiry can be itself valuable. I have pursued a synthetic enquiry, not out of an attempt to be like Maximus Confessor, but because I think an approach that is sensitive to connections could be productive here. I'm not the only critic who has the resources to interpret AI as floundering in a way that may be symptomatic of a cosmological error. It's not hard to see that many religious cosmologies offer inhospitable climates to machines that think: Foerst's reinterpretation of the image of God[54] seems part of an effort to avoid seeing exactly this point. The interesting task is understanding and conveying an interconnected web. So I have connected science with magic, for instance, because although the official version is that they're completely unrelated, there is a strong historic link between them, and cultural factors today obscure the difference, and for that matter obscure several other things that interest us.

This dissertation falls under the heading of boundary issues between religion and science, and some readers may perceive me to approach boundary issues in a slightly different fashion. That perception is correct. One of the main ways that boundary issues are framed seems to be for Christian theologians to show the compatibility of their timeless doctrines with that minority of scientific theories which have already been accepted by the scientific community and which have not yet been rejected by that same community. With the question of origins, there has been a lot of work done to show that Christianity is far more compatible with evolutionary theory than a literal reading of Genesis 1 would suggest. It seems to have only been recently that gadflies within the intelligent design movement have suggested both that the scientific case for evolution is weaker than it has been made out to be, and that there seems to be good reason to believe that Christianity and evolution are incompatible at a deep enough level that the literal details of Genesis 1 are almost superfluous. Nobody conceives the boundary issues to mean that theologians should demonstrate the compatibility of Christianity with that silent majority of scientific theories which have either been both accepted and discredited (like spontaneous generation) or not yet accepted (like the cognitive-theoretic model of the universe). The minority is different, but not as different as people often assume.

One of the questions which is debated is whether it is best to understand subject-matter from within or without. I am an M.Phil. student in theology with a master's and an adjunct professorship in the sciences. I have worked to understand the sciences from within, and from that base to look at and understand science from without as well as within. Someone who only sees science from without may lack appreciation of certain things that come with experience of science, whilst someone who only sees science from within may not be able to question enough of science's self-portrayal. This composite view may not be available to all, nor is it needed, but I believe it has helped me in a rôle other than that of showing religion's compatibility with current science: namely, serving as a critical observer and raising important questions that science is itself unlikely to raise, sometimes turning a scientific assumption on its head. Theology may have other things to offer in its discussion with science than simply offering assent: instead of solely being the recipient of claims from science, it should be an agent which adds to the conversation.

Are there reasons why the position I propose is to be preferred? Science's interpretation of the matter is deeply entrenched, enough so that it seems strange to connect science with the occult. One response is that this perspective should at least be listened to, because it is challenging a now entrenched cultural force, and it may be a cue to how we could avoid some of our own blind spots. Even if it is wrong, it could be wrong in an interesting way. A more positive response would be to say that this is by my own admission far from a complete picture, but it makes sense of part of the historical record that is meaningless if one says that modern science just happened to be born whilst a magical movement waxed strong, and some of science's founders just happened to be magicians. A more robust picture would see the early modern era as an interlocking whole that encompassed a continuing Reformation, Descartes, magic, nascent science, and the wake of the Renaissance polymath. They all interconnect, even if none is fully determined. Lack of time and space preclude me from more than mentioning what that broader picture might be. There is also another reason to question the validity of science's basic picture:

Artificial intelligence doesn't work, at least not for producing a working copy of human intelligence.

Billions of dollars have been expended in the pursuit of artificial intelligence, so it is difficult to say the artificial intelligence project has failed through lack of funding. The project has attracted many of the world's most brilliant minds, so it is difficult to say that the project has failed through lack of talent. Technology has improved a thousandfold or a millionfold since a giant like Turing thought computer technology was powerful enough for artificial intelligence, so it is difficult to say that today's computers are too underpowered for artificial intelligence. Computer science has matured considerably, so it's hard to say that artificial intelligence hasn't had a chance to mature. In 1950, one could have posited a number of reasons for the lack of success then, but subsequent experience has made many of these possibilities difficult to maintain. This leaves open the possibility that artificial intelligence has failed because the whole enterprise is based on a false assumption, perhaps an error so deep as to be cosmological.

The power of science-based technology is a side effect of learning something significant about the natural world, and both scientific knowledge and technology are impressive cultural achievements. Yet science is not a complete picture—and I do not mean simply that we can have our own private fantasies—and science does not capture the spiritual qualities of matter, let alone a human being. The question of whether science understands mechanical properties of physical things has been put to the test, and the outcome is a resounding yes. The question of whether science understands enough about humans to duplicate human thought is also being put to the test, and when the rubber meets the road, the answer to that question looks a lot like, 'No.' It's not definitive (it couldn't be), but the picture so far is that science is trying something that can't work. It can't work because of spiritual principles, as a perpetual motion machine can't work because of physical principles. It's not a matter of insufficient resources available so far, or still needing to find the right approach. It doesn't seem to be the sort of thing which could work.

We miss something about the artificial intelligence project if we frame it as something that began after computer scientists saw that computers can manipulate symbols. People have been trying to make intelligent computers for half a century, but artificial intelligence is a phenomenon that has been centuries in the making. The fact that people saw the brain as a telephone switchboard, when that was the new technology, is more a symptom than a beginning. There is more here than artificial intelligence's surface resemblance to the alchemists' artificial person ('homunculus'). A repeated feature of the occult enterprise is that you do not have people giving to society in the ways that people have always given to society; you have exceptional figures trying to delve into unexplored recesses and forge some new creation, some new power—some new technology or method—to achieve something mythic that has simply not been achieved before. The magus is endowed with a magic sword to powerfully slice through his day's Gordian knots, and with a messianic fantasy. This is true of Leibniz's Ars Combinatoria and it is true of more than a little of artificial intelligence. To the reader who suggests, 'But magic doesn't really work!' I would point out that artificial intelligence also doesn't really work—although its researchers find it to work, like Renaissance magi and modern neo-pagans. The vast gap between magic and science that exists in our imagination is a cultural prejudice rather than a historical conclusion. Some puzzles which emerge from a non-historical picture of science—in particular, why a discipline with modest claims about falsifying hypotheses is held in such awe—seem to make a lot more sense if science is investigated as a historical phenomenon partly stemming from magic.

If there is one unexpected theme running through this enquiry, it is what has emerged about relationships. The question of whether one relates to society (or the natural world) as to one's mother or as to raw material, in I-Thou or I-It fashion, first crept in as a minor clarification. The more I have thought about it, the more significant it seems. The Renaissance magus distinguished himself from his medieval predecessors by converting I-Thou relationships into I-It. How is modern science different? To start with, it is much more consistent in pursuing I-It relationships. The fact that science gives mechanisms instead of explanations is connected; an explanation is an I-Thou thing, whilst a bare mechanism is I-It: if you are going to relate to the world in I-It fashion, there is every reason to replace explanations with mechanisms. An I-Thou relationship understands in a holistic, teleological fashion: if you are going to push an I-It relationship far enough, the obvious approach is to try to expunge teleology as the Enlightenment tried. A great many things about magus and scientist alike hinge on the rejection of Orthodoxy's I-Thou relationship.

In Arthurian legend, Merlin is a figure who holds magical powers, not by spells and incantations, but by something deeper and more fundamental. Merlin does not need spells and incantations because he relates to the natural world in a way that almost goes beyond I-Thou; he relates to nature as if it were human. I suggest that science provides the figure of an anti-Merlin who holds anti-magical powers, not by spells and incantations, but by something deeper and more fundamental. Science does not need spells and incantations because it relates to the natural world and humans in a way that almost goes beyond I-It; it relates to even the human as if it were inanimate. In both cases, the power hinges on a relationship, and the power is epiphenomenal to that relationship.

If this is a problem, what all is to be done? Let me say what is not to be done. What is not to be done is to engineer a programme to enlist people in an I-Thou ideology. Why not? 'I-Thou ideology' is a contradiction in terms. The standard response of starting a political programme treats society as raw material to be transformed according to one's vision—and I am not just disputing the specific content of some visions, but saying that's the wrong way to start. Many of the obvious ways of 'making a difference' that present themselves to the modern mind work through an I-It relationship, calculating how to obtain a response from people, and are therefore tainted from the start. Does that mean that nothing is to be done? No; there are many things, from a walk of faith as transforming communion with God, to learning to relate to God, people, and the entire cosmos in I-Thou fashion, to using forms of persuasion that appeal to a whole person acting in freedom. But that is another thesis to explore.

Epilogue, 2010

I look back at this piece six years later, and see both real strengths and things I wince at. This was one of my first major works after being chrismated Orthodox, and while I am enthusiastic for Orthodoxy there are misunderstandings. My focus on cosmology is just one step away from Western, and in particular scientific, roots, and such pressure to get cosmology right is not found in any good Orthodox theologian I know. That was one of several areas where I had a pretty Western way of trying to be Orthodox, and I do not blame people who raise eyebrows at my heavy use of the existentialist distinction between I-Thou and I-It relationships. And the amount of time and energy spent discussing magic almost deterred me from posting it on my website; for that reason alone, I spent time debating whether the piece was fit for human consumption. And it is possibly theology in the academic sense, but not so much in the Orthodox sense: lots of ideas, cleverly put together, with little invitation to worship.

But for all this, I am still posting it. The basic points it raises, and much of the terrain, are interesting. There may be fewer true believers among scientists who still chase an artificial intelligence pot o' gold, but it remains an element of the popular imagination and belief even as people's interests turn more and more to finding a magic sword that will slice through society's Gordian knots—which is to say that there may be something relevant in this thesis besides the artificial intelligence critique.

I am posting it because I believe it is interesting and adds something to the conversation. I am also posting it in the hope that it might serve as a sort of gateway drug to some of my more recent works, and provide a contrast: this is how I approached theology just after being received into Holy Orthodoxy, and other works show what I would present as theology after having had more time to steep in Orthodoxy, such as The Arena.

I pray that God will bless you.

Bibliography

Augustine, In Euangelium Ioannis Tractatus, in Nicene and Post-Nicene Fathers, Series I, Volume VII, Edinburgh: T&T Clark, 1888.

Bianchi, Massimo Luigi, Signatura Rerum: Segni, Magia e Conoscenza da Paracelso a Leibniz, Edizioni dell'Ateneo, 1987.

Buber, Martin, Ich und Du, in Werke, Erster Band: Schriften zur Philosophie, Heidelberg: Kösel-Verlag, 1962, 79-170.

Carroll, Lewis, Alice's Adventures in Wonderland, Cambridge: Candlewick Press, 2003.

Dixon, Thomas, 'Theology, Anti-Theology and Atheology: From Christian Passions to Secular Emotions,' in Modern Theology, Vol 15, No 3, Oxford: Blackwell, 1999, 297-330.

Dreyfus, Hubert L., What Computers Still Can't Do: A Critique of Artificial Reason, London: MIT Press, 1992.

Edelman, Gerald, Bright Air, Brilliant Fire, New York: BasicBooks, 1992.

Fodor, Jerry, In Critical Condition: Polemical Essays on Cognitive Science and the Philosophy of Mind, London: MIT Press, 1998.

Foerst, Anne, 'Cog, a Humanoid Robot, and the Question of the Image of God,' in Zygon 33, no. 1, 1998, 91-111.

Gibson, William, Neuromancer, New York: Ace, 2003.

Harman, Gilbert, 'Some Philosophical Issues in Cognitive Science: Qualia, Intentionality, and the Mind-Body Problem,' in Posner 1989, pp. 831-848.

Hebb, D.O. Organization of Behavior: A Neuropsychological Theory, New York: Wiley, 1949.

Johnson, Paul, Intellectuals, New York: Perennial, 1990.

Layton, Bentley, The Gnostic Scriptures: Ancient Wisdom for the New Age, London: Doubleday, 1987.

Lee, Philip J., Against the Protestant Gnostics, New York: Oxford University Press, 1987.

VanLehn, Kurt, 'Problem Solving and Cognitive Skill Acquisition,' in Posner 1989, pp. 527-580.

Leibniz, Gottfried Wilhelm, Freiherr von, Ars Combinatoria, Francofurti: Henri Christopher Cröckerum, 1690.

Lewis, C.S., The Abolition of Man, Oxford: Oxford University Press, 1950-6.

Lewis, C.S., That Hideous Strength, London: MacMillan, 1965.

Lewis, C.S., The Chronicles of Narnia, London: Harper Collins, 2001.

Adler, Margot, Drawing Down the Moon: Witches, Druids, Goddess Worshippers and Other Pagans in America Today (Revised and Expanded Edition), Boston: Beacon Press, 1986.

Maximus Confessor, Capita Gnosticae (Capita Theologiae et OEconomiae), in Patrologiae Graeca 90: Maximus Confessor, Tome I, Paris: Migne, 1860, 1083-1462.

Maximus Confessor; Berthold, George (tr.), Maximus Confessor: Selected Writings, New York: Paulist Press, 1985.

Maximus Confessor, Mystagogia, as published at Thesaurus Linguae Graecae, http://stephanus.tlg.uci.edu/inst/browser?uid=&lang=eng&work=2892049&context=21&rawescs=N&printable=N&betalink=Y&filepos=0&outline=N&GreekFont=Unicode. Citations from the Mystagogia will be referenced by chapter and line number as referenced by Thesaurus Linguae Graecae.

Midgley, Mary, Science as Salvation: A Modern Myth and Its Meaning, London: Routledge, 1992.

More, Thomas, Thomas More: Utopia, Digitale Rekonstruktion (online scan of 1516 Latin version), http://www.ub.uni-bielefeld.de/diglib/more/utopia/, as seen on 2 June 2004.

Norman, Donald, The Invisible Computer, London: MIT Press, 1998.

Norman, Donald, Things That Make Us Smart, Cambridge: Perseus, 1994.

Von Neumann, John, The Computer and the Brain, London: Yale University Press, 1958.

Polanyi, Michael, Personal Knowledge, Chicago: University of Chicago Press, 1974.

Posner, Michael I. (ed.), Foundations of Cognitive Science, London: MIT, 1989.

Pseudo-Dionysius; Luibheid, Colm (tr.), Pseudo-Dionysius: The Complete Works, New York: Paulist Press, 1987.

Puddefoot, John, God and the Mind Machine: Computers, Artificial Intelligence and the Human Soul, London: SPCK, 1996.

Read, John, 'Alchimia e magia e la "separazione delle due vie",' in Cesare Vasoli (ed.), Magia e scienza nella civiltà umanistica, Bologna: Società editrice il Mulino, 1976, 83-108.

Sacks, Oliver, The Man who Mistook his Wife for a Hat, Basingstoke: Picador, 1985.

Searle, John, Minds, Brains, and Science, London: British Broadcasting Corporation, 1984.

Searle, John, The Mystery of Consciousness, London: Granta Books, 1997.

Shakespeare, William, The Merchant of Venice, as seen on the Project Gutenberg archive at http://www.gutenberg.net/etext97/1ws1810.txt on 15 June 2004.

Skinner, B. F., Walden Two, New York: Macmillan, 1948.

Thomas, Keith, Religion and the Decline of Magic: Studies in Popular Beliefs in Sixteenth and Seventeenth Century England, Letchworth: Weidenfeld and Nicolson, 1971.

Turing, Alan M., 'Computing Machinery and Intelligence,' in Mind 49, 1950, pp. 433-60, as seen at http://cogprints.ecs.soton.ac.uk/archive/00000499/00/turing.html on 25 Feb 04.

Watts, Fraser, 'Artificial Intelligence,' in Psychology and Theology, Aldershot: Ashgate, 2002.

Webster, Charles, From Paracelsus to Newton: Magic and the Making of Modern Science, Cambridge: Cambridge University Press, 1982.

Yates, Frances A., The Occult Philosophy in the Elizabethan Age, London: Routledge, 1979.

Yates, Frances A., Selected Works, Volume III: The Art of Memory, London: Routledge, 1966, as reprinted 1999.

Footnotes

[1] These neural nets are modelled after biological neural nets but are organised differently and seem to take the concept of a neuron on something of a tangent from its organisation and function in a natural brain, be it insect or human.

[2] Cog, http://www.ai.mit.edu/projects/humanoid-robotics-group/cog/images/cog-rod-slinky.gif, as seen on 11 June 2004 (enlarged).

[3] 2002, 50-1.

[4] Searle 1998, Edelman 1992, etc., including some of Dreyfus 1992. Edelman lists Jerome Bruner, Alan Gauld, Claes von Hofsten, George Lakoff, Ronald Langacker, Ruth Garrett Millikan, Hilary Putnam, John Searle, and Benny Shannon as convergent members of a realist camp (1992, 220).

[5] Lee 1987, 6.

[6] 'Intentionality' is a philosophy of mind term for the 'about-ness' of mental states.

[7] By 'teleology' I understand in a somewhat inclusive sense that branch of theology and philosophy that deals with goals, ends, and ultimate meanings.

[8] 'Cognitive faculty' is a philosophy of mind conception of a feature of the human mind that operates on mental representations to perform a specific function.

[9] The spiritual 'intellect' is a patristic concept that embraces thought, conceived on different terms from 'cognitive science,' and is inseparably the place where a person meets God. Augustine locates the image of God in the intellect (In Euangelium Ioannis Tractatus, III.4), and compares the intellect to Christ as illuminating both itself and everything else (In Euangelium Ioannis Tractatus, XLVII, 3).

[10] Watts 2002, 57-8. See the World Transhumanist Association website at http://www.transhumanist.org for further information on transhumanism.

[11] C.S. Lewis critiques this project in The Abolition of Man (1943) and That Hideous Strength (1965). He does not address the question of whether this is a possible goal, but argues that it is not a desirable goal: the glorious future it heralds is in fact a horror compared to the present it so disparages.

[12] Encyclopedia Mythica, 'Rabbi Loeb,' http://www.pantheon.org/articles/r/rabbi_loeb.html, as seen on 26 Mar 04.

[13] Foerst 1998, 109 also brings up this archetypal tendency in her conclusion.

[14] United States Postal Service 2003 annual report, http://www.usps.com/history/anrpt03/html/realkind.htm, as seen on 6 May 2004.

[15] Cog, as seen on http://www.ai.mit.edu/projects/humanoid-robotics-group/cog/images/scaz-cog.gif, on 6 May 2004 (enlarged).

[16] 2002, 57.

[17] Cog, 'Theory of Mind for a Humanoid Robot,' http://www.ai.mit.edu/projects/humanoid-robotics/group/cog/Abstracts2000/scaz.pdf, as seen on 6 May 2004.

[18] Adler 1986, 319-321.

[19] 1992, 161-4.

[20] Utopias are often a satire more than a prescription literally conceived, but they are also far more prescriptive than one would gather from a simple statement that they are satire.

[21] Turing 1950.

[22] VanLehn 1989, in Posner 1989, 532.

[23] Ibid. in Posner 1989, 534.

[24] 1998, 101.

[25] 1992, 159.

[26] Foerst 1998, 103.

[27] Turing 1950.

[28] Hebb 1949, as quoted in the Linux 'fortune' program.

[29] Nominalism said that general categories are something in the mind drawn from real things, and not something things themselves arise from. This has profoundly shaped the course of Western culture.

[30] Lewis 1943, 46.

[31] Yates 1966, 380-382.

[32] Without submitting to the Church in the usual way, the magus is equal to its highest members (Webster 1982, 57).

[33] George Mason University's Modern & Classical Languages, 'Pico della Mirandola: Oratio de hominis dignitate,' http://www.gmu.edu/departments/fld/CLASSICS/mirandola.oratio.html, as seen on 18 May 2004. See Poim 27-9, CH7 1-2 in Layton 1987 for texts reflecting an understanding of the world as evil and associated contempt for the hoi polloi.

[34] Thomas More: Utopia, Digitale Rekonstruktion, http://www.ub.uni-bielefeld.de/cgi-bin/button.cgi?pfad=/diglib/more/utopia/jpeg/&seite=00000017.jpg&jump=1, http://www.ub.uni-bielefeld.de/cgi-bin/button.cgi?pfad=/diglib/more/utopia/jpeg/&seite=00000018.jpg&jump=1, etc. (pp. 35-6), as seen on 2 June 2004.

[35] Thomas More: Utopia, Digitale Rekonstruktion, http://www.ub.uni-bielefeld.de/cgi-bin/button.cgi?pfad=/diglib/more/utopia/jpeg/&seite=00000039.jpg&jump=1, http://www.ub.uni-bielefeld.de/cgi-bin/button.cgi?pfad=/diglib/more/utopia/jpeg/&seite=00000040.jpg&jump=1, etc., (pp. 79-86), as seen on 2 June 2004. This runs through most of the book.

[36] Lewis 1943, 46.

[37] Ibid., 33-35.

[38] Ibid., 23-24.

[39] Ibid., 295-299.

[40] Ibid.

[41] See Midgley, 1992, 80.

[42] 1990, 195, 197-224, 337-41.

[43] 1950.

[44] References will be to the online Greek version at Thesaurus Linguae Graecae, http://stephanus.tlg.uci.edu/inst/wsearch?wtitle=2892+049&uid=&GreekFont=Unicode&mode=c_search, according to chapter and line. Unless otherwise specified, references in this section will be to the Mystagogia.

[45] 5.1-10. 'Intellect' in particular is used as a scholarly rendering of the Greek 'nous,' and is not equivalent to the layman's use of 'intellect,' particularly not as cognate to 'intelligence.' The 'reason' ('logos') is closer to today's use of the term, but not as close as you might think. This basic conceptualisation is common to other patristic and medieval authors, such as Augustine.

[46] 1992, 239.

[47] 'Punctuated equilibrium' is a variant on Darwin's theory of (gradual) evolution. It tries to retain an essentially Darwinian mechanism whilst acknowledging a fossil record and other evidence which indicate long periods of stability interrupted by the abrupt appearance and disappearance of life forms. It is called 'punk eek' by the irreverent.

[48] I.82. Material from the Capita Gnosticae, not available in Thesaurus Linguae Graecae, will be referenced by century and chapter number, i.e. I.82 abbreviates Century I, Chapter 82.

[49] See Lewis 2001, 522.

[50] What we usually mean by 'cause' today: something which mechanically brings about its effect, as time and favourable conditions cause an acorn to grow into an oak.

[51] The 'final cause' is the goal something is progressing towards: thus a mature oak is the final cause of the acorn that would one day grow into it.

[52] As seen on the Project Gutenberg archive at http://www.gutenberg.net/etext97/1ws1810.txt on 15 June 2004.

[53] 1992, 147-165.

[54] 1998, 104-7.

Blessed Are the Peacemakers: Real Peace Through Real Strength

In chapel, a speaker spoke of a person who was asked "Do you know how to play golf?" and answered "Yes, I learned yesterday." He then went on to speak of one of the simplest of Jesus's lessons, and how to truly learn that lesson is the work of a lifetime. If I were to be asked if I understand what I am talking about, the best and most honest answer I could give would be "No, but I am beginning to." For all of my life, I have been shown and have seen that there is something horrible that occurs when a human life without Christ is extinguished, and believed that, if destruction is something God wishes humans to avoid, then he would not place them in situations where it is unavoidable. It is not God's nature to say "this is to be avoided" and then be unfaithful and not provide a way out: sin is to be avoided and minimized. God always provides a way out. When I sin, it is not because God allowed me to come to a situation where there is no way to act without sin, or even because there was a way out that was beyond my strength, but because I choose to disregard what God in his love and wisdom has provided, and bring pain and destruction to myself and to God. And so I have spent time questioning and studying, and in the past couple of years have stumbled across something that astounds me. At first I saw one means that can work when diplomacy fails, and does not say to any other human being "You are expendable. I will permit you to die." And then, looking deeper, I have seen that it is not only another way to avoid violence, but that it is the imitation of Christ, and a new understanding of what it means to imitate Christ, to suffer for him, to conquer in his name. From time to time, God has given me affirmations of what I am doing - showing me other Christians who before me have seen what I have discovered, bringing a new light to the darkness that is in causing suffering to another. I have no delusions of being a master of that of which I speak - while I learn, while I progress, I do not see how I will ever be other than a novice before I am in Heaven and no longer see darkly and through a glass - but, at the same time, God has shown me something that is awesome in the true meaning of the word, and it is something that I cannot keep to myself.

The most dangerous assumption is the one that is not realized as such. An assumption that is realized can be strengthened and improved in detail if it is true, and rejected if it is false. The one that is unstated offers the danger of not showing its full glory if it is true, and not offering itself for rejection if it is false. There is an often unrealized assumption that there are ultimately some situations where violence is the only way out (i.e., where God can't or won't use any other means), and furthermore that the choice is between violence and inaction (no other alternatives). Stating that it is an assumption neither proves nor disproves it, but does bring it to light - to consider and judge as an assumption.

The idea that the use of physical force is an evil is a presupposition that is carried throughout this work. All agree that violence is preferably avoided and not a desirable state, and that its means, deception and destruction, bear the mark of darkness rather than the mark of light.

I know fully that the sixth commandment, translated as "Thou shalt not kill" in the King James Version, used language that would better be translated "You shall not murder," a command that left open the possibility of killing in many cases. This does not mean that that moral avenue is still open. The ninth commandment, "Thou shalt not bear false witness against thy neighbor," was written in language that specifically spoke of lying in court. This does not mean that a court of law is the only place where a Christian is not permitted to lie. There are many things that were made complete when Christ came, one of which was shifting from inwardly attempting to maintain purity to outwardly evangelizing. In the Old Testament, the prophet had a role calling back the lost sheep of Israel, but toward the Gentiles there was no real sense of the Great Commission. Christ's coming changed that, so that one of the primary responsibilities given to Christians is to win souls. It is with knowledge of this that Paul spoke of becoming a servant to all, ending with "I have become all things to all men so that by all possible means I might save some." (I Cor 9:22)

Each person in this world is either ready to die or not ready to die. A person who is ready to die will not be serving someone who needs to be stopped. I know that there are many soldiers who would rather not fight, who would rather die than kill, and who bear no hatred towards their enemies. At the same time, if you would kill, I have this question for you: Can you consider it to be the best possible form of evangelism to look an enemy soldier in the eyes, say "Jesus loves you. He died so that you may be forgiven of your sins and go to Heaven. I love you." and then, pulling a trigger, send that soldier to Hell?

The early Christian church (before Constantine's vision) had a strong aversion to the shedding of blood, as reflected by people such as Athenagoras, who said in 180 AD, "We [Christians] cannot endure even to see a man put to death, though justly." When the Emperor attempted to create a Christian state, a part of the compromise that was introduced was the concept of just war theory: killing is undesirable and an evil under all circumstances, but there are some circumstances when it is not the greatest evil, and inaction and the damage it will cause is a greater evil. This thought is at the center of the misunderstanding of pacifism: that a pacifist sits back and does nothing, that pacifism is passivism. I will attempt here to outline the difference between pacifism and passivism. If I succeed, it is only by God's grace.

If Shadrach, Meshach, and Abednego had subscribed to the idea that it would be possible to know in advance what is the greater evil and what is the lesser evil, and to choose between them, then certainly the lesser of the two evils would have been to bow down _once_ and continue with their many other ministries. The story, however, glorifies their refusal to commit even the smallest evil, and reflects God's disregard for what is and isn't humanly possible. "Not by might, nor by power, but by my Spirit," says the Lord. (Zech. 4:6)

The new law is to love your enemy as yourself, and to forgive the one who injures you seventy times seven, as per Matthew 18:22.

Oftentimes people ask me "Well, God commanded not only defensive wars and even conquest but genocide in the Old Testament; what about those?" Please be assured that, were I to be born before Christ came, I would believe that violence is sometimes allowed. If I were to be born before Christ came, I would probably be an active member of the military, because that is what God commanded of many people and something that my gifts would be suited for. Jesus, however, said "You have heard that it was said: 'Love your neighbor and hate your enemy.' But I tell you: Love your enemies, bless those who curse you, do good to those who hate you, and pray for those who persecute you... Be perfect, therefore, as your heavenly father is perfect." (Matt. 5:43,44,48) Before this command, it would have been not only acceptable but a moral duty to strike at some enemies, just as it was not only acceptable but a moral duty to repay life for life, eye for eye, tooth for tooth, hand for hand, foot for foot, burning for burning, wound for wound, stripe for stripe (Ex. 21:23-25). With Christ, however, things were completely changed: "You have heard that it was said: 'Eye for eye and tooth for tooth.' But I tell you, do not resist an evil person. If someone strikes you on the right cheek, turn to him the other also." (Matt. 5:38-39) Any action taken in a war must be reconcilable with complete and absolute love for the enemies attacked: loving ("Love does no harm to its neighbor", Rom 13:10), doing good towards, praying for, blessing.

If you wish to become a warrior, then you will study and try to learn tactics and strategy. An attack that is lacking in planning will fall to a defense that is strategic, even if the attackers have better soldiers and better weapons.

If you wish to use the means of peace (whether or not you believe that they are always sufficient), then just as a warrior must study, you must study the concepts and principles of the means of peacemaking. You must study the tactics and strategy of making peace before even considering declaring it an insufficient tool for a situation where violence is necessary.

Once the men of a village came, running, and told Gandhi that they had run away while the police were raping and pillaging. When they told him that this was because of his instruction to be nonviolent, he hung his head in shame. He would not have been angry with them if they had defended their families by the power of a sword. He would have approved had they stood in harm's way, calling all injury to themselves without seeking to strike or to harm, to the point of death. But to run away like that and passively leave those who could not run was an act of great and terrible cowardice, the darkest possible answer to the problem. Gandhi - because the Hindu religion sees grey and dark_er_ and light_er_ courses of action (every action falling onto a spectrum) - believed that violence was necessary in many situations, and in any event infinitely superior to cowardice. I do not believe that God presents a situation that does not have some way out that is free of sin and evil, and so I believe that violence is completely unnecessary to the Christian. The point of this example still stands, however - that cowardice is diametrically opposed to peacemaking.

Random violence for its own sake is not farther from a just war than sitting back and doing nothing is from pacifism. Cowardice is the direct opposite of peacemaking, and a coward CANNOT learn to be a peacemaker without first learning bravery.

Long before one person _ever_ strikes another in a corporeal manner, peace has been breached. The first principle of peace is something that lies much stronger and much deeper than the absence of physical conflict. The Hebrew word "shalom" has come to have the meaning that peace should have - if you have not encountered the word shalom, take "harmony" or "accord" to be a rough English equivalent. When there is truly peace between two people, they love each other to the point of being ready to forfeit wealth, honor, and life. Such peace leaves no room for prejudice and misunderstanding, which scatter as cockroaches scatter at the appearance of light. To establish peace, you do not merely ensure a lack of physical violence (particularly not through intimidation at your own superior capability for violence - "peace through strength" destroys what it wishes to establish), but rather work to remove all traces of hatred and injustice. Peace is not an absence, but the presence of love.

"The greatest of these is love." I Cor 13:13 Establish love and there will be peace.

Just as a warrior must be ready to sacrifice the life of another by killing, so also, to live by peace you must be ready to sacrifice yourself by dying. This is the heart of the difference between passivism and pacifism. A passivist sits back and does nothing. A pacifist goes out on the battlefield, ready to die. To go out into a battle to kill, with the knowledge that you may die, requires great courage. To go out into a battle, not to kill, but to die, requires greater courage still.

It is obvious that there is a certain power which, in order to be harnessed, requires one to take up arms and be ready to kill if need be. What is not so obvious is that there is another power which, in order to be harnessed, requires one to put down arms and be ready to die if need be.

It is easy to return love to one who loves. It is not easy to give love to one who hates. And yet to do this impossible task is possible by the grace of God: "I can do everything in Christ who gives me strength." Phil. 4:13

Christ did not conquer us by threats of fire and brimstone. His message was not centered around "If you do not follow me, you will go to Hell." (although that is true) He did not torture us until we said "Ok, Ok, I believe." (although he has the power, the authority, and the right to do so) He rather said "Look how much I love you. Look at what I did for you. Look at what I want to do for you." He loved us who were his mortal enemies, and conquered us from the inside out: not by force, not by threat, but by love that knew no bounds. When we evangelize - conquering those who are God's mortal enemies - we do not threaten with Hell or use torture. We show our love, and by the power of the Holy Spirit conquer from _the_inside_out,_ making an ally of an enemy and bringing blessing where God wills. This nature, this love, this manner of conquering is the heart of peacemaking.

In the midst of a world where darkness has its dominion, the powers of light are not overcome. This is not because the power of Satan is weak, but because the power of God is stronger. If you master an enemy by violence, your victory is temporary. If you master an enemy by love, your victory is eternal.

In the study of war and peace, look not only at troubled individuals and nations in the time of war, but also when there is peace - and know, as much as what went wrong when there were battles, what went right when there was love. Formal elaboration of the principles of peacemaking is rare, but its practice is more common than you might think. When you use your body to shield another person from injury, when you place yourself in the path of harm - take the example of the king of Denmark shielding Jews from Hitler - that is peacemaking.

Brother Andrew, while speaking at a chapel here, recounted an excellent example of peacemaking. He was talking with the leader of a terrorist liberation front who was holding hostages. He reasoned with the leader for a while, talking about how he could not rest if a single brother or sister of his in Christ was in captivity, but did not succeed. Diplomacy failed, as it sometimes will. He did not break into a fistfight, or try to grab one of the guns in the room. What he did do was to ask, "Will you take me in his place? Will you let him go free, and chain me to the central radiator?" The leader was astonished, not believing at first that Andrew actually realized (let alone meant) what he said, and then seeing that Andrew's house was in order, and that he really was ready to be a hostage. That is acting in Christ's love.

Love is not weakened or limited by hostility of the ones loved. It would be hollow and worthless if it were only an effective means of dealing with people who love you and take you seriously. Christ came down and died, died not for perfect people who were worthy of salvation (such people would need no such thing), but for people who were walking in the darkness and hated the light. His manifest power is revealed in the ones who have been conquered and transformed by its strength, and so Billy Graham, Jeffrey Dahmer, and I, who were all repulsive in his sight and fully worthy of Hell, have come to be forgiven and made anew. We were God's enemies, conquered not by a show of force on God's part (which would have been easy - God could kill me as easily as I lift a finger), but by costly love. He came down in human form and, when he had shown his love in all other ways, showed his love by dying. And, as God conquered us who were his enemies by the power of his love, and made us to be his reconciled sons and daughters, so we must conquer those who are our enemies by the power of his love manifest in us, and make them to be our reconciled brothers and sisters.

Jesus said "If someone strikes you on the right cheek, turn to him the other also." (Matt. 5:39) This is not a command to act as if you have no rights and passively let yourself be regarded as subhuman, but rather an insistence on the fact that you do have rights. In the society of that time, a slap on the cheek was not intended as a physical injury but rather as an insult, putting an inferior back in his or her place. The strength of that insult depended greatly upon which hand dealt it: as the left hand was seen as unclean, a slap with the left hand was the insult far greater than one dealt with the right hand. This was reflected in the legal penalties for an inappropriate slap: the penalty for slapping a peer with your left hand was a fine one hundred times the penalty for slapping a peer with your right hand; the penalty for slapping a better with your right hand was a fine while the penalty for slapping a better with your left hand was death. The people Jesus was speaking to most directly were, by and large, slaves and the downtrodden. A slap on the right cheek was dealt with the left hand. To turn the other cheek would leave the master with two options. The first would be to slap the slave again, but this time with the right hand (therefore declaring the slave a peer). The second would be not to slap the slave again (therefore effectively rescinding the first slap). Now, such impudence and sauciness would often tend to bring punishment, but it none the less says "Hey, I'm a human. I have rights. You can't treat me like this." It is not an action without suffering for oneself, nor does it inflict suffering on the "enemy": but it does say and do something in a powerful way.

If you are to be a peacemaker, you must act against any evil - no matter how small it may appear (by human measure - there is _no_ small evil by God's measure) - whenever you see it. Even if it is not a breach of peace in the military sense, it is a breach of shalom, and should be stopped as soon as possible, so that it does not grow and multiply. If this is done, it will be rare if ever that violent intervention is even a question.

The power of violence is in what it can compel of the body. The power of peacemaking is in what it can compel of the soul. If someone commands you to do what is morally repugnant to you, and you use the force of arms to stop that person, then you will probably slay some, and you will certainly make enmity. If instead you use the force of peacemaking - by noncompliance, being disobedient and taking whatever the consequences must be, and by choosing your own suffering over the convenience of obedience - you will not see results as quickly, but your actions will command respect rather than enmity.

If you are to gain the power to successfully intervene with violence, then you must devote resources to equipment and time to training. Time and money thus spent are not spent on humanitarian ends. This is not to say that military technology and research do not have civilian spinoffs, or to say that the precision and discipline within military bodies are not something that can be very useful. Both of these benefits do exist, and are worth taking note (and advantage) of. At the same time, it is necessary to think: Is this really the most powerful and best way to spend this money? Love and active peacemaking are not limited to the well financed. Their power does not come from the investment of scarce monetary resources, but rather through the Holy Spirit, which is anything but a scarce resource. Money is freed to other ends.

Everyone in this discussion agrees that it is better to voluntarily suffer than to inflict suffering on others.

Diplomacy is a powerful thing. It becomes even more powerful if you study the positions of all parties involved, study both their stated desires and what is unstated: their culture, their experience, the motivation behind stating the desires and intentions that they state. Oftentimes goals that appear diametrically opposed will, when examined at the root, reveal a mutually beneficial way of resolution. The power of diplomacy is not, however, absolute, and it depends to an extent on the goodwill of both parties. When diplomacy fails, either one side must turn back, or the desire must be accomplished at the price of suffering. The usual method of waging wars uses physical force to conquer. The method of peacemaking - to stand in the way of the evil being done against you, and not dodge or resist the blows aimed at you - uses spiritual force which opens a hardened heart.

Love is not the exclusive domain or power of one group. Any individual can bring surprise by an act of love. The power of love, when applied to all ways so that there are no charges of incompletion or hypocrisy, is overwhelming.

Love wishes nothing that it would not accord to another. Greed, the placement of self at the center of the universe, is diametrically opposed to love.

Christ's resistance and even revulsion at our evil did not cause him to force that evil from us. He rather showed us the better way, and left us to choose between the paths of light and those of darkness. So it is with love that makes peace: it is not forced upon those who believe violence to be the greatest interventive power.

Proclaim Christ at all times, and use words if need be.

Morally, there is not a difference between directly and indirectly causing an action. The one who commissions an assassination is no less guilty than the one who murders in person. Be sure that the actions you support are as pure as the actions you would take in person.

Just as Jesus said not to murder either in body (by breaking the sixth commandment) or in mind (by harboring hatred), peacemaking and love must penetrate both the actions of the body and the actions of the mind completely.

If you oppose someone with peacemaking, you will call to yourself the love and respect of others. Your power is not dependent on the extent of your military might (which is dependent on the extent to which you sacrifice humanitarian ends), but only on the extent to which you love and to which the Holy Spirit has power. In other words, if it fails, it is because God sees more good in that momentary failure than its success.

Peacemaking is more the opposite of inaction than it is of violence. Violence consists of seeing an evil and trying to act to rectify it; the means are imperfect. Cowardice and inaction make no hint of an effort to rectify the situation, and in my view are more reproachable than well meant violence. I have no respect for cowards - including those who dodge military conscription because they are afraid to die or be maimed in battle - but do hold respect for soldiers who have the courage and the desire to rectify which is the heart of peacemaking.

The power of love to conquer a hostile person without harm is a mystery; I would be a great liar if I said that I have always treated others in love. I will say that, when I have acted in a manner that says "You are expendable", there is a seed of evil and poison, however small, that starts to grow. When I have acted in a manner that does not see the least (by the world's measure) as expendable, God's love acting in me has shown power that is beyond my comprehension.

At the heart of violent intervention is a presupposition that you know the hearts of your enemies and that you can predict what can happen, so that the slaughter you cause will be lesser than the slaughter you prevent, and that if you instead intervene with your own blood without physically incapacitating your enemy, God will not work through and bless your actions as much as if you had compromised. When this assumption comes to mind, I believe that God has answered it when he said "Satan is a liar and the father of all lies" (John 8:44), and that he can and will do "immeasurably more than all we ask or imagine." (Ephesians 3:20) I am personally offended by the idea that it is necessary to take evil in order to prevent evil, because it carries the implication that God is either a hypocrite (by telling us never to do evil, and having the power to keep us from a choice between acts of evil, but choosing not to) or incompetent (telling us never to do evil, but lacking the power to make this possible). At the heart of peacemaking is faith, faith that without committing any undesirable evil it is possible to conquer the darkness. I have taken too many leaps of faith and landed on solid ground too many times to think that God is unable or even unwilling to grant power to those that will not compromise.

It is said that it is more blessed to give than to receive. Whether or not you agree with that - I find a great blessing in both - it is evident that one of the marks of love is that it benefits the one who loves and the one who is loved. Violence does not "do no harm to its neighbor" (Rom. 13:10), but very regretfully does what it hopes to be a minimum of harm to its neighbor. The power of love and peacemaking is such that it brings blessings upon the one who uses it to oppose evil, and the person whose evil is opposed.

Civil disobedience must be loving and sincere in all regards. To hatefully scream while restraining your fists is not enough: you must act in complete love and not harm in the least the person who you are resisting.

When you take an action, always look at why you act.

Love that is ready to die leaves no room to be cowardly.

"Do not be overcome by evil, but overcome evil with good." Romans 12:21

I hope that, if God offers me the honor of becoming a martyr, I would have the courage to accept the honor. As Paul said in Philippians 1:21, "To live is Christ; to die is gain."

All Scriptural quotations (except for quotations from the ten commandments) NIV.

Dark Patterns / Anti-patterns and Cultural Context in Study of Scriptural Texts:

A Case Study in Craig Keener's Paul, Women, and Wives: Marriage and Women's Ministry in the Letters of Paul

Dark Patterns / Anti-patterns and Cultural Context in Study of Scriptural Texts:
A Case Study in Craig Keener's Paul, Women, and Wives:
Marriage and Women's Ministry in the Letters of Paul

Jonathan Hayward
christos.jonathan.hayward@gmail.com
CJSHayward.com

Diploma in Theology and Religious Studies, 2003
Faculty of Divinity
University of Cambridge
20 May 2003

Abstract

The author suggests how the concept of 'patterns' in architecture and computer science, or more specifically 'dark patterns' / 'anti-patterns', may provide a helpful vehicle to explicitly communicate tacit knowledge concerning problematic thought. The author also offers a pilot study providing a sample analysis that identifies indicators for the 'surprising cultural find' pattern, in which cultural context is misused to explain away offending Bible passages.

Introduction to Patterns, Dark Patterns, and Anti-patterns

The technical concept of pattern is used in architecture and computer science, and the synonymous dark patterns and anti-patterns refer to patterns that are not recurring best practices so much as recurring pathologies; my encounter with them has been as a computer programmer in connection with the book nicknamed 'GoF'[1]. Patterns do not directly provide new knowledge about how to program; what they do provide is a way to take knowledge that expert practitioners share on a tacit level, and enable them both to discuss this knowledge amongst themselves and effectively communicate it to novice programmers. It is my belief that the concept is useful to Biblical studies in providing a way to discuss knowledge that is also held on a tacit level and is also beneficial to be able to discuss explicitly, and furthermore that dark patterns or anti-patterns bear direct relevance. I hope to give a brief summary of the concept of patterns, explaining their application to Biblical studies, then give a pilot study exploring one pattern, before some closing remarks.

Each pattern consists of a threefold rule, describing:

  1. A context.
  2. A set of forces within that context.
  3. A resolution to those forces.

In the contexts of architecture and computer science, patterns are used to describe best practices which keep recurring and which embody a certain 'quality without a name'. I wish to make a different application, to identifying and describing certain recurring problematic ways of thought in Biblical or theological inquiry which may be understood as dark patterns, which often seem to be interlaced with sophistry and logical fallacy.
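For readers who have not met software patterns first-hand, a small illustration may make the threefold rule concrete. What follows is a minimal sketch of my own in Python, showing a toy version of one classic pattern (the Observer); the class names and scenario are illustrative assumptions, not material drawn from the 'GoF' book or from any text examined below.

    # A toy illustration of one classic software pattern (the Observer),
    # annotated with the threefold rule. The names and scenario are
    # illustrative assumptions, not an excerpt from the 'GoF' book.
    #
    # Context: one object's state changes, and several other objects care.
    # Forces: the changing object should not be hard-wired to each dependent,
    #         and the dependents should stay current without constant polling.
    # Resolution: the subject keeps a list of observers and notifies each one.

    class Subject:
        def __init__(self):
            self._observers = []

        def attach(self, observer):
            self._observers.append(observer)

        def notify(self, event):
            for observer in self._observers:
                observer.update(event)

    class LoggingObserver:
        def update(self, event):
            print(f"observed: {event}")

    subject = Subject()
    subject.attach(LoggingObserver())
    subject.notify("state changed")  # prints: observed: state changed

The point of such a pattern is not the handful of lines of code but the reusable resolution of recurring forces; it is that shape of reasoning, rather than any programming detail, that the following discussion carries over into Biblical studies.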

Two examples of what a dark pattern or anti-pattern might be are the consolation prize and the surprising cultural find. I would suggest that the following provide instances of the consolation prize: discussion of a spiritual resurrection, flowering words about the poetic truth of Genesis 1, and Calvin's eucharistic theology. If you speak of a spiritual resurrection that occurs instead of physical resurrection, you can draw Christians far more effectively than if you plainly say, 'I do not believe in Christ's physical resurrection.' The positive doctrine that is presented is a consolation prize meant to keep the audience from noticing what has been taken away. The context includes a text that (taken literally) a party wants to dismiss. The forces include the fact that Christians are normally hesitant to dismiss Scripture, and believe that insights can give them a changed and deepened understanding. The resolution is to dress up the dismissal of Scripture as a striking insight. Like other patterns, this need not be all reasoned out consciously; I suggest, via a quasi-Darwinian/meme propagation mechanism, that dismissals of Scripture that follow some such pattern are more likely to work (and therefore be encountered) than, for example, a dismissal of Scripture that is not merely undisguised but offensive.

In the surprising cultural find, a meticulous study is made of a passage's cultural context to find some basis to neutralise the passage so that its apparent meaning does not apply to us. The context is similar to that of the consolation prize, if more specific to a contemporary Western cultural setting. The forces, beyond those mentioned for the consolation prize, include ramifications of period awareness and the Standard Social Science Model: there is a very strong sense of how culture and period can influence people, and people readily believe claims about long ago and far away which would seem fishy if said about people of our time and place. The resolution is to use the passage's cultural setting to produce disinformation: the fruits of careful scholarly research have turned up a surprising cultural find, and the passage's apparent meaning does not apply to us. The passage may be presented, for instance, to mean something quite different from what it appears to mean, or to address a specific historical situation in a way that clearly does not apply to us.

It is the dark pattern of the surprising cultural find that I wish to investigate as a pilot case study in this thesis.

Case Study

Opening Comments

The aim of this case study is to provide a pilot study of how the surprising cultural find may be identified as a dark pattern. In so doing, I analyse one sample text closely, with reference to comparison texts when helpful.

I use the terms yielding to refer to analysis from scholars who presumably have interests but allow the text to contradict them, and unyielding to refer to analysis that will not allow the text to contradict the scholar's interests. Yielding analysis does not embody the surprising cultural find dark pattern, while unyielding analysis does. I consider the boundary to be encapsulated by the question, 'Is the text allowed to say "No!" to a proposed position?'

Ideally, one would compare two scholarly treatments that are alike in every fashion save that one is yielding and the other is unyielding. Finding a comparison text, I believe, is difficult because I was searching for a yielding text with the attributes of one that was unyielding. Lacking a perfect pair, I chose Peter T. O'Brien's The Letter to the Ephesians[2] and Bonnie Thurston's Reading Colossians, Ephesians & 2 Thessalonians: A Literary and Theological Commentary[3] to represent yielding analysis and Craig Keener's Paul, Women, and Wives: Marriage and Women's Ministry in the Letters of Paul[4] to represent unyielding analysis. I was interested in treatment of Ephesians 5:21-33. When I use Biblical references without a book, I will always be referring to Ephesians. All three secondary sources present themselves as making the fruits of scholarly research accessible to the layperson. O'Brien provides an in-depth, nonfeminist commentary. Thurston provides a concise, feminist commentary. Keener provides an in-depth, Biblical Egalitarian monograph. Unfortunately, the ordered copy of Thurston did not arrive before external circumstances precluded the incorporation of new materials (and may have been misidentified, meaning that my advisor and I both failed after extensive searching to find a yielding feminist or egalitarian treatment of the text). My study is focused on Keener with comparison to O'Brien where expedient.

There seems to be an interconnected web of distinguishing features to these dark patterns, laced with carefully woven sophistry, and there are several dimensions on which a text may be examined. The common-sense assumption that these features are all independent of each other seems to be debatable. One example of this lack of independence is the assumption that what an author believes is independent of whether the analysis is yielding: the suboptimal comparison texts were selected partly because of the difficulty a leading Christians for Biblical Equality scholar and I experienced trying to locate yielding feminist analyses other than Thurston in Tyndale's library. I do not attempt to seriously investigate the interconnections, beyond commenting that features seem interconnected and less independent of each other than most scholars would assume by default.

The substance of my inquiry focuses on observable attributes of the text. I believe that before that point, observing a combination of factors may provide cues. I will mention these factors, but not develop them; there are probably others:

There will be a decided imbalance between attention paid to Keener and O'Brien. Part of this is due to external constraints, and part is due to a difference between O'Brien and Keener. With one major exception, described shortly, O'Brien's analysis doesn't run afoul of the concern I am exploring. If I were writing cultural commentary for my texts as Keener and O'Brien write cultural commentary for their texts, I would ideally spend as much time explaining the backgrounds to what Keener and O'Brien said. I believe they are both thinkers who were shaped by, draw on, and are critical of their cultures and subcultures. Explaining what they said, as illuminated by their context, would require parity in treatment. However, I do not elaborate their teachings set in context, but explore a problem that is far more present in Keener than in O'Brien or Thurston. I have more of substance to say about how Keener exhibits a problem than how O'Brien doesn't. As such, after describing a problem, I might give a footnote reference to a passage in O'Brien which shows some analogy without seeming to exhibit the problem under discussion, but I will not systematically attempt to make references to O'Brien's yielding analysis as wordy as explanations of Keener's unyielding analysis.

The one significant example of unyielding analysis noted in O'Brien is in the comment on 5:21: O'Brien notes that reciprocal submission is not enjoined elsewhere in the Bible, points out that 'allelous' occurs in some contexts that do not lend themselves to reciprocal reading ('so that men should slay one another'[5]), and concludes that 'Believers, submit to one another,' means only that lower-status Christians should submit to those placed above them. This is as problematic as other instances of unyielding analysis, and arguably more disturbing as it lacks some of the common indicators alerting the careful reader to be suspicious. There is a point of contact between this treatment and Keener's: both assume that 5:21 and 5:22-6:9 are not merely connected but are saying the same thing, and it is one thing only. It is assumed that the text cannot enjoin of us both symmetrical and asymmetrical submission, so one must be the real commandment, and the other is explained away. Both Keener and O'Brien end up claiming that something is commanded in 5:21 with clarificatory examples following, without asserting that either 5:21 or 5:22-6:9 says something substantively different from the other about submission. I will not further analyse this passage beyond this mention: I consider it a clear example of unyielding analysis. This is the one part of O'Brien I have read of which I would not say, '...and this is an example of analogous concerns addressed by yielding scholarship.'

The introductions to O'Brien and Keener provided valuable cues as to the tone subsequently taken by the texts. Both are written to persuade the reader of a claim that some of their audience rejects, but the divergence in how they seek to persuade is significant. Keener's introduction is written to persuade the reader of Biblical Egalitarianism: in other words, of a position on one of today's current issues. The beginning of O'Brien's introduction tries to persuade the reader of Pauline authorship for Ephesians, which he acknowledges to be an unusual position among scholars today; the introduction is not in any direct sense about today's issues. O'Brien's introduction is written both to persuade and to introduce the reader to scholarly perspectives on background; while nontechnical, it is factually dense and heavy with footnotes. Keener's introduction seems to be written purely to persuade: he gives statistics[6] concerning recent treatment of women which are highly emotionally charged, no attempt being made to connect them to the text or setting of the Pauline letters. Keener's introduction uses emotion to bypass rationality, using loaded language and various other forms of questionable persuasion explored below; a naive reader first encountering this debate in Keener's introduction could well wonder how any compassionate person could be in the other camp. O'Brien works to paint a balanced picture, and gives a fair account of the opposing view before explaining why he considers it inadequate. O'Brien seeks to persuade through logical argument, and his book's pages persuade (or fail to persuade) as the reader finds his arguments to be sufficient (or insufficient) reason to accept its conclusions.

Emotional Disinformation

Among the potential indicators found in Keener, the first broad heading I found could be described as factual disinformation and emotional disinformation. 'Disinformation', as used in military intelligence, ordinarily denotes deception through careful presentation of true details; I distinguish 'factual disinformation' (close to 'disinformation' traditionally understood) from 'emotional disinformation', which is disinformation that acts on emotional and compassionate judgment as factual disinformation acts on factual judgment. While conceptually distinct, they seem tightly woven in the text, and I do not attempt to separate them.

An Emotional Plea

One distinguishing feature of Keener's introduction is that it closes off straightforward rebuttal. Unlike O'Brien, he tries to establish not only the content of debate but the terms of debate itself, and once Keener has established the terms of debate, it is difficult or impossible to argue the opposing view from within those terms. Rebuttal is possible, of course, but here it would seem to require pushing the discussion back one notch in the meta-level hierarchy and arguing at much greater length. O'Brien seems more than fair in his style of argument; Keener loads the dice before his reader knows what is going on.

One passage is worth citing for close study [7]:

There are issues where most Biblically conservative Christians, including myself, disagree with prominent elements of the feminist movement... But there are other concerns which nearly all Christians, including myself, and nearly the whole women's movement plainly share....

[Approximately two pages of alarming claims and statistics, including:] ...Although "bride-burning" is now illegal in India, it still happens frequently; a bride whose dowry is insufficient may be burned to death so that her husband can find a new partner. There is no investigation, of course, because it is said that she simply poured cooking oil over herself and set herself on fire accidentally.... A Rhode Island Rape Crisis Center study of 1700 teenagers, cited in a 1990 InterVarsity magazine, reported that 65% of the boys and 47% of the girls in sixth through ninth grades say that a man may force a woman to have sex with him if they've been dating for more than six months.... Wife-beating seems to have been a well-established practice in many patriarchal families of the 1800's....

But while some Christians may once have been content to cite proof-texts about women's subordination to justify ignoring this sort of oppression, virtually all of us would today recognise that oppression and exploitation of any sort are sinful violations of Jesus's commandment to love our neighbour as ourselves and to love fellow-Christians as Christ loved us. [Keener goes on to later conclude that we must choose between a feminist conception of equality and an un-Christian version of subordination.]

The text starts by presenting Keener as Biblically conservative, moves to a heart-wrenching list of wrongs against women, implicitly conflates nonfeminist Christians with those who condone rape and murder, and presents a choice crystallising the fallacy of the excluded middle that had been lurking in prior words. It has more than one attribute of emotional disinformation.

Keener both identifies himself as Biblically conservative and says that, among some Christians, the egalitarian position is the conservative one (contrast chapter 4, where 'conservative' means a reactionary misogynist). Why? People are more likely to listen to someone who is perceived to be of the same camp, and falsely claiming membership in your target's camp is a tool of deceptive persuasion.

The recitation of statistics is interesting for several reasons.

On a strictly logical level, it is a non sequitur. It has no direct logical bearing on either camp; even its rhetorical position assumes that conservative, as well as liberal, members of his audience believe that rape and murder are atrocities. This is a logical non sequitur, chosen for its emotional force and what impact that emotional recoil will have on susceptibility. The trusting reader will recoil from the oppression listed and be less guarded when Keener provides his way to oppose such oppression. The natural response to such a revolting account is to say, 'I'm not that! I'm the opposite!' and embrace what is offered when the fallacy of the excluded middle is made explicit, in the choice Keener later presents.

Once a presentation of injustice has aroused compassion to indignation, most people do not use their full critical faculties: they want to right a wrong, not sit and analyse. This means that a powerful account of injustice (with your claims presented as a way to fight the injustice) is a powerful way to get people to accept claims that would be rejected if presented on their logical merits. Keener's 'of course' is particularly significant; he builds the reader's sense of outrage by adding 'of course' with a (carefully studied but) seemingly casual manner. It is not obvious to a Western reader that a bride's murder would be left uninvestigated; adding 'of course' gives nothing to Keener's logical case but adds significantly to the emotional effect Keener seeks, more effectively and more manipulatively than were he to visibly write those words from outrage.

The sentence about proof-texts and loving one's neighbour is of particular interest. On a logical level, it is restrained and cannot really be attacked. The persuasive and emotional force—distinct from what is logically present—is closer to, 'Accepting those proof-texts is equivalent to supporting such oppression; following the Law of Love contradicts both.'

This is one instance of a broader phenomenon: a gap between what the author entails and implicates. Both 'entail' and 'implicate' are similar in meaning to 'imply', but illustrate opposite sides of a distinction. What a text entails is what is implied by the text in a strictly logical sense; what a text implicates is what is implied in the sense of what it leads the reader to believe. What is implicated includes what is entailed, and may often include other things. The entailed content of 'But while some Christians...' is modest and does not particularly advance a discussion of egalitarianism. The implicated content is much more significant; it takes a logically tight reading to recognise that the text does not entail a conflation claiming that nonfeminist Christians condone rape and murder. The text implicates much more than it entails, and I believe that this combination of restricted entailment with far-reaching implication is a valuable cue. It can be highly informative to read a text with an eye to the gap between what is entailed and what is implicated. The gap between entailment and implicature seemed noticeably more pronounced in Keener than in yielding materials I have read, including O'Brien. Another example of a gap between entailment and implicature is found close by[8]: '...the secular generalization that Christians (both men and women) who respect the Bible oppose women's rights is an inaccurate caricature of these Christians' admits a similar analysis: the entailment is almost unassailable, while the implicature establishes in the reader's mind that the conservative position is excisable from respect for the Bible, and that the nonfeminist position denies something basic to women that they should have. The term 'women's rights' is by entailment the sort of thing one would not want to oppose, and by implicature a shorthand for 'women's rights as understood and interpreted along feminist lines'. As well as showing a significant difference between entailment and implicature, this provides an example of a text which closes off the most obvious means of rebuttal, another rhetorical trait which may be produced by the same mindset as produces unyielding analysis.

What is left out of the cited text is also significant. The statistics given are incomplete (they focus on profound ways in which women suffer so that the reader will not think of profound ways in which men suffer), but as far as principles for discriminating yielding from unyielding analysis go, this seems to be privileged information: I do not see a way to let a reader compare the text against a complementary account written, as it were, in the margin. Also, a careful reading of the text may reveal a Biblical nonfeminist position as the middle fallaciously excluded earlier, one in which sexual distinction exists on some basis other than violence. All texts we are interested in—yielding or unyielding—must stop somewhere, but it is possible to exclude data that should have been included and try to conceal its absence. Lacunae that seem to have been chosen for persuasion rather than limitation of scope may signal unyielding analysis.

Further Examples

In a discussion[9] of the haustafel's (Ephesians 5:21 and following)[10] injunction that the husband love his wife based on Christ's love for the Church, Keener says, 'Indeed, Christ's love is explicitly defined in this passage in terms of self-sacrificial service, not in terms of his authority.' The passage does not mention that self-sacrificial service is a defining feature of Christ's model of authority, and in these pages the impression is created that the belief in servant love is a Biblical Egalitarian distinctive, so that the reader might be surprised to find the conservative O'Brien saying[11]:

...Paul does not here, or anywhere else for that matter, exhort husbands to rule over their wives. They are nowhere told, 'Exercise your headship!' Instead, they are urged repeatedly to love their wives (vv. 25, 28, and 33). This will involve each husband showing unceasing care and loving service for his wife's entire well-being...

O'Brien is emphatic that husbands must love their wives; examples could easily be multiplied. Keener argues for loving servanthood as if it were a claim which his opponents rejected. The trusting reader will believe that nonfeminists believe in submission and that egalitarians alone recognise that Paul calls husbands to servant love. I believe that this selective fact-telling is one of the more foundational indicators: some factual claims will be beyond a given reader's competence to evaluate, but so far as a reader can evaluate whether a fair picture is presented, the presence or absence of selective fact-telling may help.

Chapter 4 is interesting in that there are several thoughts that are very effectively conveyed without being explicitly stated. The account of 'conservatives' (i.e. misogynistic reactionaries) is never explicitly stated to apply to Christians who disagree with Keener, but works in a similar fashion (and for similar reasons) to the 'Green Book' which introduces the first major argument in The Abolition of Man.[12] By the same mechanism by which the Green Book leads the reader to believe that claims about the outer world are in fact only claims about ourselves, not the slightest obstacle is placed in the way of the reader believing that Keener exposes the true nature of 'conservatism', and that the picture of Graeco-Roman conservatism portrayed is a picture of conservatism, period, as true of conservatism today as ever.

A smaller signal may be found in that Keener investigates inconvenient verses in a way that never occurs for convenient ones. Keener explores the text, meaning, and setting of 5:22-33 in a way that never occurs for 5:21; a careless reader may get the impression that 5:21 doesn't have a cultural setting.

Drawing on Privileged Information

I would next like to outline a difference between men's and women's communication, state what Keener's Roman conservatives did with this, and state what Keener did with the Roman conservatives. One apparent gender difference in communication is that when a woman makes a claim, it is relatively likely to mean, 'I am in the process of thinking and here is where I am now,' while a man's claim is more likely to mean, 'I have thought. I have come to a conclusion. Here is my conclusion.' Caveats aside, there is room for considerable friction when men assume that women are stating conclusions and women assume that men are giving the current state of a developing thought. The conservatives described by Keener seem frustrated by this friction; Keener quotes Josephus[13]:

Put not trust in a single witness, but let there be three or at least two, whose evidence shall be accredited by their past lives. From women let no evidence be accepted, because of the levity and temerity of their sex; neither let slaves bear witness, because of the baseness of their soul.

This passage is introduced with the words, '...regards the prohibition of women's testimony as part of God's law, based in the moral inferiority inherent in their gender.' The reader is not likely to question whether it is purely misogyny for a man (frustrated by women apparently showing levity by changing their minds frequently) to find this perceived mutability a real reason why such witnesses should not be relied on when someone's life may be at stake. Keener has been working to portray conservatives as misogynistic. Two pages earlier[14], he tells us,

An early Jewish teacher whose work was undoubtedly known to Paul advised men not to sit among women, because evil comes from them like a moth emerging from clothes. A man's evil, this teacher went on to complain, is better than a woman's good, for she brings only shame and reproach.

This, and other examples which could be multiplied, deal with something crystallised on the previous page[15]. Keener writes,

Earlier philosophers were credited with a prayer of gratitude that they were not born women, and a century after Paul a Stoic emperor could differentiate a woman's soul from that of a man.

The moral of this story is that believing in nonphysical differences between men and women is tantamount to misogyny. This is a highly significant claim, given that the questions of women's ordination and headship in marriage are largely epiphenomenal to the question of whether we are created masculine and feminine at every level of our being, or ontologically neuter spirits in reproductively differentiated bodies. Keener produces a conclusion (i.e. that the human spirit is neuter) without ever stating it or drawing the reader to consciously consider whether this claim should be believed. In a text that is consistently polite, the opposing view is not merely negated but vilified: to hold this view (it is portrayed) is tantamount to taking a view of women which is extraordinarily reprehensible. Either of these traits may signal unyielding analysis; I believe the combination is particularly significant.

Tacit and Overt Communication

Although the full import of tacit versus overt communication is well beyond my competency to address, I would like to suggest something that merits further study.[16] Keener seemed, to a significant degree, to convey his most significant and most contestable conclusions tacitly rather than state and argue them overtly.

As an example of this kind of tacit communication, I would indicate two myths worked with in the introduction and subsequently implied. By 'myth' I do not specifically mean 'widespread misconception', but am using a semiotic term comparable in meaning to 'paradigm': '[M]yths act as scanning devices of a society's "possibles" and "pensables"'[17]. The two myths are: