AI and Secure Messaging Don't Mix
Over on the ACLU's Free Future blog, I just published an article titled "AI and Secure Messaging Don't Mix."
The blogpost assumes, for the sake of argument, that people might actually want an AI involved in their personal conversations, and explores why Meta's Private Processing doesn't deliver the assurance it promises.
In short, the promises of "confidential cloud computing" are built on shaky foundations, especially against adversaries as powerful as Meta themselves.
If you really want AI in your chat, the baseline step for privacy preservation is to include it in your local compute base, not to use a network service! But these operators clearly don't value private communication as much as they value binding you to their services.
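To make that distinction concrete, here is a rough sketch of what "local compute" means in practice: the message text is handed to a model running on your own device, and no inference request ever crosses the network. The library and model name below are purely illustrative assumptions, not anything a particular messenger actually ships.

```python
# Minimal sketch of on-device summarization, assuming the Hugging Face
# "transformers" library and a small publicly available model.
from transformers import pipeline

# Model weights are assumed to already be cached locally; after that,
# inference happens entirely on this machine.
summarizer = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")

def summarize_locally(message: str) -> str:
    # The message text never leaves this process -- no network service
    # ever sees it, which is the whole point of keeping AI in the
    # local compute base.
    result = summarizer(message, max_length=60, min_length=10, do_sample=False)
    return result[0]["summary_text"]
```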
But let's imagine some secure messenger that actually does put message confidentiality first -- and imagine its developers had integrated some sort of AI capability that runs entirely on your own device. That at least sidesteps the privacy questions around AI use.
Would you really want to talk with your friends, as augmented by their local AI, though? Would you want an AI, even one running locally with perfect privacy, intervening in your social connections?
What if it summarized your friend's messages to you in a way that led you to misunderstand (or ignore) an important point your friend had made? What if it encouraged you to make an edgy joke that comes across wrong? Or to say something that seriously upsets a friend? How would you respond? How would you even know that it had happened?
My handle is dkg.
More times than i can count, i've had someone address me in a chat as "dog" and then cringe and apologize and blame their spellchecker/autocorrect.
I can laugh these off because the failure mode is so obvious and transparent -- and repeatable.
(also, dogs are awesome, so i don't really mind!)
But when these plausibility engines are shaping and molding our attention (and our responses!), how will we even know when mistakes are being made? What if the plausibility engine you've hooked into your messenger embeds subtle (or unsubtle!) bias?
Don't we owe it to each other to engage with actual human attention?