You Are Not an Algorithm

“[A] well-read man will at once begin to yawn with boredom when one speaks to him of a new 'good book,' because he imagines a sort of composite of all the good books that he has read, whereas a good book is something special, something unforeseeable, and is made up not of the sum of all previous masterpieces but of something which the most thorough assimilation... would not enable him to discover.” — Marcel Proust

Episode 1: Is that you?

A 2013 episode of the science-fiction show Black Mirror depicted a near future in which a young woman, grieving the tragic death of her partner, receives a strange message from a social media platform. It seems the young man had been sufficiently active on the site for the platform’s algorithms to construct a chatbot version of him, one with whom the young woman can have a text conversation indistinguishable from texting with him before his passing. The young woman isn’t immediately comfortable with the technology, but in the end succumbs to a mix of grief and curiosity, and subscribes. Then, another message: the technology has improved, and can now use the young man’s voice recordings to simulate a phone conversation.  It improves again: the woman can video chat with her departed partner.  Another improvement: a perfectly lifelike physical replica of the young man shows up on her doorstep, as though he had never been gone. And of course, that’s the question the show is pushing on: at that point, is he gone, after all? If the algorithm governing the simulated actions and words and minute facial expressions can perfectly replicate what he would have done in real life, how is that different from his still being alive, present, even conscious? On what grounds would we say that the replica is just a replica?

Episode 2: We know how you feel

The data-rich world in which we live encourages us to hand over more and more of our decisions to AI that claims to know us better than we know ourselves.  Netflix recommends our next binge-watch; Spotify provides a feed of musical suggestions (though one that always seems to regress back toward the mean, retreating to the safety of the chart-toppers).  There are algorithms to tell us what to purchase, where to vacation, where to eat, whom to date.

In her book God, Human, Animal, Machine, Meghan O’Gieblyn quotes Yuval Noah Harari arguing that handing over our decisions to the algorithms “would officially mark the end of liberal humanism, which depends on the assumption that an individual knows what is best for herself and can make rational decisions about her best interest.  ‘Dataism,’ which he believes is already succeeding humanism as a ruling ideology, invalidates the assumption that individual feelings, convictions, and beliefs constitute a legitimate source of truth.  ‘Whereas humanism commanded, “Listen to your feelings!”’ he writes, ‘Dataism now commands: “Listen to the algorithms! They know how you feel.”’”

In politics, of course, there are algorithms funneling us toward different conversation partners in our social media feeds, shuttling us into our red or blue echo chambers, exposing us to countervailing views only when they are most likely to generate outrage, the better to motivate the sort of engagement that advertisers value so highly.  Increasingly sophisticated large language models are now able not only to predict the content that you are most likely to engage with, but to serve up new content, much of it incendiary or misleading, from chatbots masquerading as real human beings.  Writing in the MIT Technology Review, Karen Hao explains:  “Because of their fluency, [large language models] easily confuse people into thinking a human wrote their outputs, which experts warn could enable the mass production of misinformation.” Bots sow confusion and division into debates over politics, Covid-19, the war in Ukraine, and other complicated issues, frustrating the cultivation of an informed public that can engage in civil discourse and democratic governance. The prevalence of these fake accounts is unknown, sufficiently uncertain to have thrown a wrench into Elon Musk’s proposed $44 billion purchase of Twitter in 2022.

As Bradley Honigberg wrote for Just Security, a publication of the Reiss Center on Law and Security at New York University School of Law, “New AI capabilities are rapidly increasing the volume, velocity, and virality of disinformation operations. As they continue to improve and diffuse, they further threaten to erode trust in democratic governance and encourage citizens to doubt the possibility of truth in public life. The profound cynicism introduced by AI-enhanced disinformation can be used to fuel mob majoritarianism and create new opportunities for illiberal politicians to campaign on promises to restore ‘order’ and ‘certainty’ by curtailing free speech and other civil rights. Such an outcome would hasten what Timothy Snyder has dubbed a ‘politics of eternity’ in which malicious actors ‘deny truth and seek to reduce life to spectacle and feeling.’” One could even imagine a scenario in which the bots could simply converse with one another, having rendered the human discussants quite unnecessary.

Episode 3: Garbage in, garbage out

It’s a well-documented fact that AI, far from offering neutral and objective insights, often ends up infected with the worst of human biases. Karen Hao again: “Studies have already shown how racist, sexist, and abusive ideas are embedded in these models. They associate categories like doctors with men and nurses with women; good words with white people and bad ones with Black people. Probe them with the right prompts, and they also begin to encourage things like genocide, self-harm, and child sexual abuse.” One of the most striking examples of this comes from an algorithmic image generator called Midjourney.  Type in a phrase, and Midjourney returns an image, often eerily specific, and most certainly weighted down with cultural biases.  Doctors are men. Lawyers are men. Men are White; women are White; pretty much everyone is White unless you specifically query “Black man,” in which case you still sometimes get a White man in goth attire.

The reason for this isn’t complicated. The models underlying AI and machine learning are simply recognizing patterns in enormous quantities of data and then using them to generate new predictions.  If you want to train a model to generate a picture of a fish on request, for instance, you just throw massive quantities of images at it, tell it which ones are fish and which ones aren’t, and then let it learn exactly what patterns - pixel by pixel, if you like - distinguish “fish” from “not a fish.” (Incidentally, when this exact exercise was run at the University of Tuebingen and the emergent patterns were examined, something interesting happened.  It turned out that one of the strongest indicators that a picture contained a fish was a set of strange finger-like protuberances along the fish’s underside.  Only, they weren’t finger-like: they were fingers - the fingers of people holding up the fish they had caught - because that turns out to be a feature of the vast majority of pictures of fish on the internet.) As you can imagine, then, the predictions generated by an AI model are only as good as the data it takes in. Garbage in, garbage out, as they say - or in this case, bias in, bias out.
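For readers curious what this kind of pattern-learning looks like in practice, here is a minimal sketch - a toy illustration, not the Tübingen researchers’ actual setup. A tiny logistic-regression “fish detector” is trained on synthetic four-pixel “images” in which one pixel stands in for the spurious fingers cue; the model dutifully learns that this pixel, not anything intrinsically fishy, is its strongest signal.

```python
import math
import random

random.seed(0)

# Toy "images" of 4 pixels each. Pixel 3 plays the role of the fingers:
# bright in nearly every fish photo simply because anglers hold up their
# catch, yet causally unrelated to fishiness.
def make_image(is_fish):
    pixels = [random.random() for _ in range(4)]
    pixels[3] = 0.9 if is_fish else 0.1  # spurious but highly predictive
    return pixels

data = [(make_image(label), label) for label in [1, 0] * 200]

# Logistic regression trained by plain gradient descent.
w, b, lr = [0.0] * 4, 0.0, 0.5
for _ in range(100):
    for x, y in data:
        z = sum(wi * xi for wi, xi in zip(w, x)) + b
        p = 1 / (1 + math.exp(-z))  # predicted probability of "fish"
        err = p - y
        w = [wi - lr * err * xi for wi, xi in zip(w, x)]
        b -= lr * err

# The strongest learned "fish indicator" is the fingers pixel (index 3).
strongest = max(range(4), key=lambda i: abs(w[i]))
print("strongest feature:", strongest)
```

The model never decides that fingers are irrelevant to fish; it has no concept of either. It only knows which pixel patterns co-occur with the label “fish” in its training data.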

These biases matter.  In early 2022, NPR reported on an algorithmic tool known as Pattern, used by the Justice Department to identify prison inmates eligible for an early release program; a report issued by the department in late 2021 found that the results were uneven, disproportionately forecasting higher recidivism rates for prisoners of color and therefore deeming them ineligible for the program. O’Gieblyn tells a similar story about Eric Loomis, a Wisconsin man whose six-year prison sentence for resisting arrest was determined, in part, by an AI model called COMPAS that forecasts recidivism rates.  Loomis attempted to challenge the algorithmic determination, demanding to know what inputs the model had relied on in setting his sentence, and was told that under Wisconsin law he could not do so.  The decision was upheld by the Wisconsin Supreme Court.  O’Gieblyn also relates the story of Darnell Gates, a Philadelphia man on probation who learned that the frequency of his probation check-ins was in part determined by an algorithm that continually predicted his level of risk.  She quotes a New York Times interview in which Gates asks about the predictive model, “How is it going to understand me as it is dictating everything that I have to do? How do you win against a computer that is built to stop you? How do you stop something that predetermines your fate?”

Interlude: Common thread

What’s the common thread here?  Simply this:  the most sophisticated AI systems - neural nets, large language models, foundation models, and the like - are currently exercises in machine learning.  We aren’t yet approaching AGI, artificial general intelligence - nothing out there approximates true intelligence, certainly not consciousness.  Our most advanced AIs are extremely good at recognizing patterns, but nothing more. A conversation with a bot isn’t an interaction with a conscious agent that can pop out of the conversation and ask whether it’s worth having: the bot does one thing and one thing only, and that is to predict the most likely string of English words to follow whatever you say to it.  Midjourney receives text and predicts, pixel by pixel, the most likely visual image associated with it.  It has no idea what “red” is; it doesn’t understand the concept of negative space or know how to draw the eye to a particular region of an image.  It simply projects the query it receives onto the space of images in its inventory. The best chess player in the world is a human being - unquestionably so, because even when champion human chess players lose to Deep Blue or AlphaZero, the AI isn’t really playing chess.  It’s predicting how each successive move enhances its probability of winning, and doing so incredibly effectively, but it doesn’t know what a pawn is, or what a game is, or what it might be doing instead of playing chess.

AI can never leave the space of the data that it’s fed.  It can generate predictions based on assimilating past information, but its outputs are always functions of its inputs.  Everything that it generates is derivable from what it’s given, even if the derivation process becomes inscrutable even to those who program it. For AI, there can never be anything new under the sun.
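The point that outputs are always functions of inputs can be made concrete with the simplest possible language model - a bigram predictor, sketched below. (Real large language models are vastly more flexible, but the dependence on training data is the same in kind.) The model can only ever emit words it has already seen, in orders its training text licenses; ask it about a word it has never seen, and it has nothing at all to say.

```python
from collections import Counter, defaultdict

# A miniature training corpus.
corpus = "the cat sat on the mat and the cat slept".split()

# Count which word follows which: an estimate of P(next | current)
# built entirely from the data.
follows = defaultdict(Counter)
for cur, nxt in zip(corpus, corpus[1:]):
    follows[cur][nxt] += 1

def predict(word):
    """Return the most likely next word seen in training, or None."""
    if word not in follows:
        return None  # the model has no basis for any prediction
    return follows[word].most_common(1)[0][0]

print(predict("the"))  # "cat" - it followed "the" twice in the corpus
print(predict("dog"))  # None - "dog" never appeared in the data
```

Every word the model can ever produce is already present in its training text; nothing new under the sun, by construction.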

Conclusion: You are not an algorithm

And this is where I think the Christian story has something important to say about why AI could never be truly human.  Christian anthropology is often thought to begin with the doctrine of the imago Dei.  It might be better to say that the doctrine of imago Dei begins with the one who is himself not merely made in the image of God, but who is the very image of God himself - the man, Jesus Christ, eternal logos made flesh.

Let’s pause there to recognize the scandal of that phrase; let’s not let it pass easily by because we’ve grown used to confessing it weekly or daily in the words of the Nicene Creed. The eternal logos made flesh; God made man.  Within a certain logic, this cannot be.  The divine cannot become human; the infinite cannot become finite; the eternal cannot be fit into a temporal history.  Finitum non capax infiniti: the finite cannot contain the infinite. This has always and everywhere been a scandal; many of the great heresies repudiated by the church over the centuries have been nothing other than brilliant, well-intentioned attempts to smooth over the scandal.  Arius reduced Jesus to the first and greatest of God’s creatures; Nestorius united the worship but held apart the natures for fear that human suffering would contaminate divine immutability; again and again, the greatest theological minds of each generation stumbled against the rock of those four simple words: that for us and for our salvation, he was made man.

Christians confess a mystery:  not a problem to be solved by logical derivation, but a miraculous wonder to confess and adore.  Dana Gioia writes, “Christianity is not animated by rules or reverence; it is inspired by supernatural mystery. ‘Certum est quia impossibile,’ said the Church Father Tertullian about Christ’s resurrection.  He believed it not because it made sense, but just the opposite: ‘It is certain because it is impossible.’ The truths of Christianity, from the Incarnation to the Resurrection, are mysteries beyond rational explanation. The Trinity is both three and one. Christ is both human and divine. A virgin gave birth to a son.”  As John Webster has written, “For the Christian confession, God is capax finiti - precisely because he is the true infinite who can call creaturely forms and acts into his service without compromise either to his own freedom or to the integrity of the creature.” What God has done is not derivable from what has come before. It was not, and could not have been, predicted.

In becoming incarnate, Jesus did not take an abstraction of human nature to himself. Jesus of Nazareth was a particular, specific human - a man, born at a specific time and place, son of Mary, carrying her DNA across the waves, to the cross, and up to the right hand of the Father Almighty.  He was an Aramaic-speaking Jew, and while we can’t be certain what he looked like, we can be certain that he wasn’t the blond-haired, blue-eyed Jesus of children’s storybook Bibles. This, by the way, is the image Midjourney returns for the query ‘Jesus,’ because most of the images it can find of Jesus have normalized his humanity - have depicted him as an abstract man, as imagined by the (predominantly White) artists rendering the depiction. This is where racial bias becomes truly insidious: not where it overtly claims that one’s own race is superior, but where it assumes that one’s own race is simply normal, and all others are variations on the mean.

The incarnation pushes back against the idolatry of normalization.  The incarnation says that the particular, specific, and concrete, in all its full diversity, matters.  The fact that Jesus was incarnate gives a theological grounding to the fact that Revelation depicts the worship of the new heavens and new earth ringing out with the praises of “every tongue, tribe, and nation,” not flattened out to some universal culture, but eternally singing God’s praises as each one is uniquely and vitally able. One thing binding together a world handed over to the algorithms and the ideal world imagined by White supremacy turns out to be that both are, quite simply, boring: normalized, flattened, homogenous.  Both are outgrowths of the same idolatry - of sameness, of familiarity, of control.

The real world, by contrast, is the creation of the perfectly simple triune God; the one who gives rise to difference in order to communicate his goodness to what is not himself: another mystery! To bear the image of God is to be made in the image of him who was the very image of God.  It is therefore to be concrete, specific, particular. You are not a norm. You are not derivable. You are not an algorithm, and neither is your neighbor. Get to know her: you’ll be surprised.
