I Deleted My ChatGPT App

A passage in Plato's Phaedrus offers a critique of writing from the days when writing was the hot new technology, one that promised simply to improve thinking with no negative effects:

Socrates: At the Egyptian city of Naucratis, there was a famous old god, whose name was Theuth; the bird which is called the Ibis was sacred to him, and he was the inventor of many arts, such as arithmetic and calculation and geometry and astronomy and draughts and dice, but his great discovery was the use of letters. Now in those days Thamus was the king of the whole of Upper Egypt, which is in the district surrounding that great city which is called by the Hellenes Egyptian Thebes, and they call the god himself Ammon. To him came Theuth and showed his inventions, desiring that the other Egyptians might be allowed to have the benefit of them; he went through them, and Thamus inquired about their several uses, and praised some of them and censured others, as he approved or disapproved of them. There would be no use in repeating all that Thamus said to Theuth in praise or blame of the various arts. But when they came to letters, This, said Theuth, will make the Egyptians wiser and give them better memories; for this is the cure of forgetfulness and folly. Thamus replied: O most ingenious Theuth, he who has the gift of invention is not always the best judge of the utility or inutility of his own inventions to the users of them. And in this instance a paternal love of your own child has led you to say what is not the fact: for this invention of yours will create forgetfulness in the learners' souls, because they will not use their memories; they will trust to the external written characters. You have found a specific, not for memory but for reminiscence, and you give your disciples only the pretence of wisdom; they will be hearers of many things and will have learned nothing; they will appear to be omniscient and will generally know nothing; they will be tiresome, having the reputation of knowledge without the reality.

The more things change, it seems, the more they stay the same.

Advertising copy for ChatGPT claimed that it could stimulate the imagination. I looked at it for a second and said that it could probably do that if used a certain way, but that the more likely outcome would be that people would have it do their thinking for them.

Not much later, I heard of YouTube videos of boyfriends copying and pasting ChatGPT responses because they didn't know how to console their girlfriends. I am unsure of the timeline; the YouTube videos may have been live well before I made my "prediction."

I read Nicholas Carr's The Shallows: What the Internet Is Doing to Our Brains, and it was nice to have a relatively up-to-date statement of things I mostly already knew. Then I read another of his books, The Glass Cage: Automation and Us, and found a serious challenge that left me reconsidering a fairly deeply held belief.

I have long been interested in UX ("User eXperience," the "Let's not forget the person who actually uses this" discipline within Information Technology). I have labored hard at good UX for my main site, and inwardly winced at what Substack didn't allow me to do for UX on my Substack. I couldn't make visited and unvisited links look different, despite this being a top recommendation for good UX, and one commonly violated on the Web. My writing may be challenging to read; I prefer not to add, on top of that difficulty, people having trouble figuring out how to use my site.

In The Glass Cage, Nicholas Carr says essentially that a high level of UX in software tools used to develop a skill dumbs down people's performance and learning for that skill. For a classic puzzle, people given a tool with highly polished UX that showed, for instance, which moves were legal and which were not learned and retained much less than people given a more basic user interface that required them to master the legal moves themselves.

The most devastating critique in the book is what Electronic Medical Records have done, and are doing, to the medical profession, and I will leave you to read The Glass Cage for that. However, a related find was what Integrated Development Environments do to people's programming skills. Before that, when programmers wrote, "I'd crawl over a mile of Integrated this and Visual that to get to Emacs and a good copy of gcc," I had simply assumed it was a chauvinism for known and familiar tools. Another person much more crassly and much more scathingly denounced IDE-induced skill atrophy by saying, "Most programmers today couldn't find their d*cks if you took away their Visual M*st*rb*t**n Kit ++." The older command line tools (I use vim instead of Emacs) required the programmer to know what he was programming and keep it in his head. Emacs is a complex and capable system, but in a way that encourages development of expert skills ("…and with 'evil' mode, the operating system includes an editor."). A distinction has been made between "novice-friendly" and "expert-friendly" systems, and Unix and Linux are both expert-friendly systems. (In Linux Mint, a novice-friendly desktop metaphor is built on top of an expert-friendly chassis.) It has been said, perhaps insultingly, "Unix is a very friendly operating system; it's just very selective about who it is friendly with." I do not ask you to like that last statement, or for that matter any of these statements, but Unix is a classic example of an expert-friendly system that fosters the development and refinement of expert skill.

With older tools that fostered the development of expert skills, the individual contributor functions as an expert individual contributor. In the case of Integrated Development Environments, and especially those with the most recent advances, much more of the work is done by the tools, and the individual contributor functions more like an ersatz manager, monitoring electronic tools more than doing the main work. I had naively assumed that Integrated Development Environments were simply more advanced tools whose strengths I had not learned to take advantage of. It turns out I had, by accident, been right to stay with vim and the Linux command line.

I read a recent newsletter about AI tools for jobhunters, and some of them seemed awfully sweet. Or at least sounded sweet at first. One would automate most or all of the drudge-work side of applying for a job online. It didn't mention an obvious consequence of mass use of such tools: using the tool would allow a candidate to fill out more applications, faster, and with more convenience than doing things the old-fashioned way, which sounds like a great win until you realize that on the employer's side, it means your application will be buried under a pile of a great many other hastily made applications. Filling out the manual data entry portions of an online job application, however boring and unintelligent the work may be, functioned before such tools as a costly signal that you genuinely wanted the job enough to fill out all the fields as they existed on the form. Furthermore, even before such tools, employers were deluged by piles of applications, so that the first chore for an employer was to get the pile down to a manageable size. Now your resume will be buried in an even larger pile of applications, and almost all of the resumes will be possibly slightly tweaked outputs of generative AI. Under the old-school way, the bulk of a jobhunter's work was to research a company and communicate in ways tightly tailored to a given job application; the manual data entry component was actually only a small part of the work, if perhaps the most chore-like to some jobseekers. Now the AI advantage has what I have called a "damned backswing": it will be all the more difficult to stand out, and employers will be looking all the more, not to see if your application looks like a match, but to get an overwhelming flood of applications down to some kind of manageable size. (Perhaps AI tools for employers automate much of this process, too.)

The overall picture of automation is that the person using computer tools is not functioning primarily as the intelligence doing the work; he is functioning as someone who monitors and manages the computer programs that do most or all of the work. In what the book says both about automated doctoring in the wake of Electronic Medical Records and about automated piloting in the wake of the glass cockpit (which has been called a "glass cage," providing the title of the book), human competency is reduced and stunted, and what is called [human] automaticity, the feature of expert performance in which people perform advanced skilled work in a way that leaves them productively absorbed, cannot develop.

I'm sure, if I wanted to, I could get ChatGPT to do some amazing writing for me. But I believe in a human, internal basis for power, perhaps more in the divine synergy spoken of by the Orthodox, but not in managing artificial intelligences. Possibly I will be harder to find as ChatGPT and generative AI produce interesting writing, made to order, for the majority of people who still read. However, I want to develop my talents and not function as a manager to generative AI writing and living for me. And opting out of the brave new world of using my intelligence to manage AI as the real worker is a way for me to retain a unique power at a time when AI is on the rise and woke classics programs not only drop the expectation that students learn Latin and Greek, but also the expectation that they read texts even in translation, perhaps doing a little classics name-dropping while projecting today's gender euphoria onto the world of the classics. I'm learning to read Greek better, through old education and skills that still work today, and through older technologies such as an interlinear text and the memory techniques Thamus expected writing to push into the background.

I have commented in a previous post that the Amish may seem "quaint," but they may seem a good deal less quaint when supply chain breakdowns are affecting almost everything else and they still have the living and active skills to continue functioning during other people's supply chain failures. I foresee a time, possibly during my lifetime (though God only knows if I will live to die of old age), when keeping custody of my native intelligence and my variegated education may leave me something like royalty, after a damned backswing lets people rely on artificial intelligence and that reliance is then confiscated by economic breakdown and/or cascading systems failure. The Glass Cage talks about how GPS may mean that in one or two generations the Inuit will lose forever their ancient skill of navigating a shifting snowy landscape, before GPS itself becomes a casualty of collapsing systems. And I will, or at least may, be pursuing my work in contrast no longer really to people who have a liberal arts education, but to people whose education was entirely woke. The life of someone with an old-fashioned liberal arts education may itself tower among the woke who have AI do their thinking for them, though I would recall a line from Plato: The Allegory of the… Flickering Screen?: "In the land of the blind, the one-eyed man is crucified."

Hard Lessons from Israel's High-Tech Border Failure, written about how Hamas terrorists mostly disabled Israel's $1.2 billion USD wall at Gaza, says a great deal about escalating complexity and complex systems failure. (The comments are well worth reading, too.) One military figure, quoted as a medicine to those who would feel safe relying on the Gaza wall, said, "People first, ideas second, machines third," a lesson put in a non-military context in Good to Great. Increasingly complex systems put us at risk of cascading systems failure, and a great many things exist at a level of complexity people cannot really grasp. One of the comments on Hard Lessons from Israel's High-Tech Border Failure is written by someone responsible for responding when Amazon's website goes down, and says that Amazon's system is really too complex for people to get their arms around. The trend is toward increasingly brittle systems; a great many technological advances move from something less brittle to something more brittle. Some poorer nations have no concept of obsolescence and have donkey-drawn carts alongside sometimes new consumer electronics. The USA, with its Protestant heritage, has a mentality of "Out with the old, in with the new," and if some newer technology like cellphones or credit cards became unavailable, the cascading systems failure would be poised to destroy the country. Other, poorer nations without a concept of obsolescence will have less of their infrastructure and support neutralized if cascading systems failure takes down a pillar of technological society. AI researchers, after allowing AI to improve itself, increasingly do not understand how it does what it does, and we may have a vulnerability to cascading systems failure beyond what was even possible with slightly older technologies like the smartphone.


The "When I Become an Evil Overlord" list ("4. Shooting is not too good for my enemies.") includes,

  1. I will keep a special cache of low-tech weapons and train my troops in their use. That way — even if the heroes manage to neutralize my power generator and/or render the standard-issue energy weapons useless — my troops will not be overrun by a handful of savages armed with spears and rocks.

All of us outsource a great deal of our thinking, and this is necessary and even good. Another name for this outsourcing of our thinking is "appropriate trust in authority," and I emphatically believe in right trust of right authority.

However, there is another level of liability altogether in going woke, learning gender theory rather than the traditional contents of mathematics or classics, and using AI whenever thinking is needed. I write under the authority of the Orthodox Church, or rather somewhat in the authority of the Orthodox Church, pre-eminent among the authorities by which my work is rightly judged.

We need authority and we need technology, and my own contribution to broader society critically hinges on multiple websites. I do not particularly expect my own web presence to survive the Great Reset, but the copyright status of my works is intended to let my books survive me if anybody is there to pick them up. There is such a thing as planning for others' benefit.

But let us not simply offload our thinking to AI.