I learned about the XOR function, and pretty much everything I know about logic functions, from modular synthesis. Modular synthesis, like AI or any other media technology, works on a set of conventions ensconced in a set of standards. A modular synthesizer is basically an analog computer (that's a whole other post, which I will write up at some point) that separates sound from control (yes, you can mix them up, but let's not worry about that for now), and works according to a set of standard voltages. So on my synthesizer, if I'm controlling pitch, the pitch rises one octave per volt. This is purely an agreed-upon convention. A media standard. If a voltage is controlling a gate, say, to decide whether we hear a pitched sound or not, the module generally looks for the difference between 0 and some threshold, maybe 1, 3, or 5 volts. So if the threshold is 3 volts, every time the gate receives 5 volts, it makes a sound; every time it receives 2 volts, nothing happens. Yet of course 5 and 2 are different numbers, as are those voltages.
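A toy sketch of these two conventions, with the caveat that the function names and the choice of 0 V as middle C are my own illustrative assumptions, not any hardware spec:

```python
def cv_to_frequency(volts, base_hz=261.63):
    """1 V/octave: pitch doubles with each added volt.
    Assumes 0 V maps to middle C (~261.63 Hz), an arbitrary anchor."""
    return base_hz * 2 ** volts

def gate_fires(volts, threshold=3.0):
    """A gate input asks only one question: is the voltage past the threshold?
    5 V and 2 V differ as numbers, but the gate hears 'sound' and 'nothing'."""
    return volts >= threshold
```

So `cv_to_frequency(1.0)` lands exactly one octave above `cv_to_frequency(0.0)`, while `gate_fires(5.0)` is true and `gate_fires(2.0)` is false, whatever the gap between the two voltages.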
Now, one can imagine controlling our synth sound with an XOR logic gate. Send in continuously varying voltages, say a pair of sine waves of different phase, one into input A and one into input B, and our logic module compares them. Set to XOR, every time the two inputs are different, no matter by how much, the gate outputs a 1 and you get sound; every time they are the same, it outputs a 0 and you get no sound. With this kind of XOR setup, you'll have sound most of the time, with an occasional silence. Switch it to XNOR, which outputs a 1 only when the inputs are the same, and you have the opposite scenario: sound only once in a while. But again, the voltages could be any numbers, varying quite wildly in different ways.
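A minimal sketch of this patch, under one assumption worth naming: each input is first reduced to a binary high/low by a threshold comparator, which is how hardware logic modules typically work, and only then do the boolean gates compare the two states. (XNOR is the conventional name for the complement of XOR; all function names here are mine, not any module's.)

```python
import math

def comparator(volts, threshold=3.0):
    """Is this voltage 'high'? How far above or below doesn't matter."""
    return volts >= threshold

def xor_gate(a, b):
    """Output 1 (sound) whenever the two states differ."""
    return a != b

def xnor_gate(a, b):
    """The complement of XOR: output 1 only when the two states agree."""
    return a == b

def gate_pattern(gate, steps=8, phase=1.0):
    """Sample two phase-shifted sine LFOs (scaled 0-5 V) and gate them."""
    pattern = []
    for step in range(steps):
        t = 2 * math.pi * step / steps
        a = 2.5 + 2.5 * math.sin(t)           # input A
        b = 2.5 + 2.5 * math.sin(t + phase)   # input B, phase-shifted
        pattern.append(int(gate(comparator(a), comparator(b))))
    return pattern
```

The XOR and XNOR patterns are exact complements: wherever one gives sound, the other gives silence.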
So in essence, the whole point of a binary logic gate is to reduce the blooming, buzzing confusion of reality to two states: same or different. This is not a problem in modular synthesis; reduction and quantization are useful for all sorts of things, for instance turning a set of continuously rising and falling tones into a melody that makes musical sense, much as a double bassist knows where to put their finger on the fingerboard to play a note in tune. Quantization in audio is also incredibly useful and important, and the sampling theorem means that we can reconstruct continuous waves from discrete data points, as well as store big sounds in small places.
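The pitch-quantizer idea above fits in a couple of lines: under the 1 V/octave convention, an equal-tempered semitone is 1/12 of a volt, so snapping to the nearest semitone is just rounding (the function name is my own, for illustration):

```python
def quantize_to_semitone(volts):
    """Snap a continuous control voltage to the nearest 1/12 V step,
    i.e. the nearest equal-tempered semitone under 1 V/octave."""
    return round(volts * 12) / 12
```

A slowly rising voltage passed through this becomes a chromatic staircase rather than a glissando.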
But when those binary operations are judgments about people, processes, or things that matter to people, the issue becomes something else entirely.
Sound or no sound is very different from qualified or not qualified, threat or no threat, human or gorilla. This is why a critique of quantification, or of quantization as such, is never enough.