A brush with the "Green Book"
I have been reading ArXiv AI articles, with particular attention to philosophy and ethics, and one philosophy article helped me understand what, exactly, I have to offer the conversation once I get my bearings on AI (a process which may never be complete). To give a couple of quotes:
All of the above five foundations can also be found in one passage of the medieval philosopher Thomas Aquinas in his very brief discussion of the foundations of naturalistic ethics. In this section he lays out a system built upon Aristotle's notion of there being three layers to the human body and mind: a vegetative soul, the type of which we share with all living things; a sensing soul, the type of which we share with all animals; and a rational soul, the type of which we share only with other rational beings [30]. Aquinas updates Aristotle to explain more carefully what would be entailed for sustainable survival of the human species [2]. While we would now consider both Aristotle and Aquinas to be out-of-date on many issues, at least on this one topic–the core aspects of human nature–Aquinas seems to have struck something significant.
Aristotle and Aquinas are both old, and an Orthodox might critique them, but not on grounds of being mostly out-of-date. The concerns I raised in An Open Letter to Catholics on Orthodoxy and Ecumenism, which might be among my top fighting words to Rome, may be critical enough of Thomas Aquinas, but the critiques never turn on his seniority among philosophers. They are critiques an Orthodox might have made while the ink was still wet on his pages: that he was wrong then, or that he stopped being wrong when he declared his works to be straw.
Interestingly, the text, co-authored by a Betty Li Hu, repeatedly quotes Confucius but never raises the question of whether Confucius is out-of-date. Confucius is treated as a source, not only on how things might have been lived in China before Christ, but on how we can live now, and that is the standard by which Aristotle and Aquinas should also be evaluated. Orthodox critiques of Aristotle and Aquinas never seem to complain that those figures are out of date.
It also echoes various other studies that name being unbiased as a criterion of desirable AI:
It is important that in these situations of well-intended AI use, we do not inadvertently create new problems from AI itself–unfair and biased systems [44], overreliance and resulting deskilling [27], various unintended consequences [1], etc.
On this point I would recall a classic hacker AI koan:
In the days when Sussman was a novice, Minsky once came to him as he sat hacking at the PDP-6.
"What are you doing?", asked Minsky.
"I am training a randomly wired neural net to play Tic-Tac-Toe" Sussman replied.
"Why is the net wired randomly?", asked Minsky.
"I do not want it to have any preconceptions of how to play", Sussman said.
Minsky then shut his eyes.
"Why do you close your eyes?", Sussman asked his teacher.
"So that the room will be empty."
At that moment, Sussman was enlightened.
And I would recall a point from C.S. Lewis, The Abolition of Man, about the authors of "the Green Book," a book that should be offering professional grammar when it instead offers amateur philosophy:
In actual fact Gaius and Titius will be found to hold, with complete uncritical dogmatism, the whole system of values which happened to be in vogue among moderately educated young men of the professional classes during the period between the two wars. Their scepticism about values is on the surface: it is for use on other people's values; about the values current in their own set they are not nearly sceptical enough. And this phenomenon is very usual. A great many of those who 'debunk' traditional or (as they would say) 'sentimental' values have in the background values of their own which they believe to be immune from the debunking process. [emphasis added]
The article opens one quote by Confucius as "'men (sic.)…," which is telling enough of what biases they assume in calling for unbiased systems. In my own time studying at Fordham, a nasty enough place for Orthodox, texts lauded by the theology department would place an editorial "[sic]" after citations referencing a generic "man" or "he," which is the original, naturally inclusive language. And this was only a couple of years after a winning Toastmasters competition speech had a woman speak, without vitriol and in a voice that invited sympathy, of another character in her story as "my fellow man"; after a TED talk repeated, without critique or any implied criticism, classic audio clips referring to mankind as "man"; and while the English Standard Version translates adelphoi, a standard Greek term for all Christians, as "brothers" with a footnote saying "Or 'brothers and sisters.'" I would say that this alone, even apart from other cues, points to a concept of "unbiased" that includes a "whole system of values which happen[s] to be in vogue." Or one that may be falling out of vogue but is still stuck in some of the more backward schools and departments.
The method question at Fordham
In the Fordham theology department's doctoral comprehensive exams, one of the questions is the "method question": you are asked a question and then, drawing on six assigned texts and four that you supply yourself, analyze your answer to it. The question was known in advance, and for that year it was, "Does the earth matter for theology?" My flaming liberal radical professor was horrified when she learned about Man and the Environment: A Study of St. Symeon the New Theologian and recognized that I could answer that question in its entirety out of the Orthodox Tradition. She defined competency as taking 10 or 20 points from each of the sources (the six assigned texts simply assumed that taking the earth seriously could only be a liberal concern); I was free to choose which 10 or 20 points to take, but after her horrified recognition she excluded any answer to the method question that would be confessionally Orthodox at all.
But the interest I brought to the department is one that is specific and neglected in the discussion I have read about AI. One distinction made in e.g. philosophy departments is that between problem-solving philosophy and philosophically informed history of ideas, and my area of interest (or rather a broad swath that would include my areas of interest) was theology that would be both historically grounded and represent a problem-solving interest. This was in distinction to, or perhaps in synthesis of, a basic historical theology interest that investigates the theology of previous eras from a historian's interests, and systematic theology that solves problems with the resources of today's systematic theology. I started in the historical theology program and switched to systematic once the methods deemed appropriate in history were clarified and I concluded that, under at least that department's division of labor, my interest fell under systematic theology. But my professor couldn't conceive, or at least very much did not want to conceive, of a problem-solving interest that drew mostly on older texts rather than only on recent ones. (I didn't even begin to try to address with her the point that all Orthodox theology is mystical theology…)
What I have to add to the AI conversation
I'm still getting my bearings on AI. I wrote a thesis on it almost twenty years ago, haven't kept up, and am in the process of catching up. However, my general approach and interest are a basis for writing much more than a post about AIlice in Wonderland, which talks about AI as a historically situated technology and looks at recent technological history.
And seeing the article mentioned above helped me realize that my basic perspective is not just one that was scarcely to be found at Fordham; it is one that is scarcely to be found in ArXiv's collection. When I get up to speed (or perhaps if I ever get up to speed on rapidly changing turf), this will likely make an imprint on what connections I have to offer. Which reminds me, I want to get around to reading Lewis Mumford's Technics and Civilization, written in 1934 and still salient, summarized on Wikipedia as arguing, "It is the moral, economic, and political choices we make, not the machines we use, Mumford argues, that have produced a capitalist industrialized machine-oriented economy, whose imperfect fruits serve the majority so imperfectly."
I shouldn't strictly say that I haven't been catching up. I have been, and I've gleaned significant insights; possibly even my realization that there's far more I don't know than I do know is a sign of maturing understanding, or at least of slowly maturing understanding. But while I am still wary of claiming I understand AI, I believe I have identified an area where my contribution can be significant.
Keep reading.