How curious that these very different applications have all become similar tasks in the world of AI, approached with much the same engineering. Comprehension questions are treated as translation questions, and translation models are further simplified into language models: essentially, predict the next word given the previous words. How could this interchangeability possibly work? How could next-word prediction approximate moral judgement?

Da turns to Vladimir Vapnik’s concept of transductive inference to name what makes this possible:

Classical philosophy usually considers two types of inference: deduction, describing the movement from general to particular, and induction, describing the movement from particular to general. The model of estimating the value of a function at a given point of interest describes a new concept of inference: moving from particular to particular. We call this type of inference transductive inference.

— Vladimir Vapnik, The Nature of Statistical Learning Theory (1995)
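
In schematic terms (a gloss in my own notation, not Vapnik's): induction first fits a general rule from particulars and only then applies it, while transduction estimates the value at the point of interest directly.

$$
(x_1, y_1), \dots, (x_\ell, y_\ell) \;\longrightarrow\; \hat{f} \;\longrightarrow\; \hat{f}(x^{\ast}) \qquad \text{(induce a general rule, then deduce)}
$$

$$
(x_1, y_1), \dots, (x_\ell, y_\ell),\; x^{\ast} \;\longrightarrow\; \hat{y}^{\ast} \qquad \text{(transduce: particular to particular)}
$$

Next-word prediction follows the second pattern: the "points of interest" are simply the continuations of the particular texts at hand.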

Comprehension, translation, summarization, even moral reasoning have all become the same task: predict the next word given the previous words. This is a purely empirical mode of inference, from particular to particular. Comprehension of particular experience is not touched by general understanding, and the abstract is not informed by particular experience. It’s the empiricist’s fantasy of the self-sufficiency of “data.”
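To make "particular to particular" concrete, here is a toy sketch, illustrative only: the corpus, function names, and the bigram method are mine, and a real language model replaces this lookup table with a neural network over token probabilities. The point is that prediction here is pure precedent, with no general rule ever formed.

```python
from collections import Counter, defaultdict

def train(text):
    """Record which particular words followed which in the training text."""
    follows = defaultdict(Counter)
    words = text.split()
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1
    return follows

def predict_next(follows, word):
    """Predict by lookup: the word most often observed after `word`.
    No grammar, no semantics, no abstraction intervenes."""
    if word not in follows:
        return None  # no particular precedent, and nothing general to fall back on
    return follows[word].most_common(1)[0][0]

model = train("the punished are guilty and the punished are men")
print(predict_next(model, "the"))      # -> 'punished'
print(predict_next(model, "justice"))  # -> None
```

A modern language model is, in caricature, this table scaled up and smoothed by a neural network: incomparably more powerful, but still moving from observed particulars to predicted particulars.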

Da points out that Locke, the empiricist, saw civil society as resting on the moment of inference. “[C]onsider the Idea of Justice, placed as an intermediate Idea between the punishment of Men, and the guilt of the punished.” Da writes: “Justice might as well be another word for inference.” But Locke worried about unregulated inference as a source of bias, about whether our ideas were not based on some “modifications, or combinations, or correlations of the primary data of external and internal sense.”

What happens to civil society, Da is asking, when institutions currently staffed by humans are transformed by the integration of inference machines that can satisfactorily reduce most problems to a world-historically powerful form of sequential inference, yet have no sense of inferential validity or variety?

AI may not care about any of this, of course, because, like Death, it cannot suffer the downstream consequences of faked comprehension and wrongful inferences in its body or its life. But that would mean that we humans have to care about those consequences, of fakery and wrongful inference, on other people’s bodies and lives.

I welcome feedback, so please write to me via email.