Dominic Hughes (Stanford), Is linear logic two-thirds wrong (for classical first-order logic)?


Abstract

Linear logic decomposes classical logic into multiplicatives, additives and exponentials. Multiplicative proof nets are remarkably elegant; additive and exponential nets have struggled in comparison. Could this struggle mean that additives and exponentials are simply the wrong way to decompose classical logic?

I present a decomposition of classical first-order logic that retains multiplicatives at its core and replaces additives and exponentials with a naive notion: a function f serving as the ideal parallel representation of contraction and weakening. Contraction occurs where f(x) = f(y) for distinct x and y, and weakening occurs outside the image of f.
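To make the reading of f concrete, here is a minimal illustrative sketch, not the talk's formalism: a finite function f is encoded as a Python dict, and we simply read off which outputs are hit more than once (contraction) and which codomain elements are never hit (weakening). The function name, the encoding, and the example are assumptions chosen only to mirror the description above.

```python
from collections import defaultdict

def contraction_and_weakening(f: dict, codomain: set):
    """For a finite function f : domain -> codomain (as a dict), return:

    contractions: fibres of f with more than one element, i.e. the places
                  where f(x) = f(y) for distinct x and y;
    weakenings:   elements of the codomain outside the image of f.
    """
    fibres = defaultdict(set)
    for x, y in f.items():
        fibres[y].add(x)
    contractions = {y: xs for y, xs in fibres.items() if len(xs) > 1}
    weakenings = codomain - set(f.values())
    return contractions, weakenings

# Hypothetical example: f collapses a1 and a2 onto A (a contraction)
# and never reaches B (a weakening).
f = {"a1": "A", "a2": "A", "b1": "C"}
print(contraction_and_weakening(f, {"A", "B", "C"}))
# -> ({'A': {'a1', 'a2'}}, {'B'})
```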

The approach clarifies the units: the positive multiplicative unit (1) is retained, while the three problematic units (⊥, ⊤ and 0) are rejected. Quantifiers are treated broadly as in multiplicative proof nets.

[Note: additives and exponentials are interesting and elegant in their own right; here I reject them only from the perspective of decomposing classical first-order logic.]