‘Claude,’ attorney at law? AI platforms don’t have attorney-client privilege
Tech aficionados know “Claude” as a generative artificial intelligence (AI) platform operated by a private company. When former CEO Bradley Heppner was indicted for criminal fraud, he turned to, you guessed it, “Claude” for legal advice and counsel. Question: Were his discussions with “Claude” protected from disclosure by the attorney-client privilege, or could the government obtain them?
Thank goodness, no privilege!
Talk about being replaced by AI! The court held that Claude is not a lawyer, so Heppner’s discussions with Claude must be disclosed. Why?
First, Claude is a program, not a human being, and it takes “a trusting human relationship . . . with a licensed professional who owes a fiduciary duty and is subject to discipline” to create the attorney-client privilege.
Second, the privilege requires that the communication be made in confidence and kept in confidence. But here a third party, the company Anthropic, maintained Claude, and the agreement Heppner signed to use Claude required him to consent to Anthropic using his “inputs” and Claude’s “outputs” to train the model. As the saying goes, two’s company, three’s a crowd.
Third, Heppner might have won had a flesh-and-blood lawyer directed him to consult Claude with certain questions. Then, per the court, “Claude might arguably be said to have functioned [as a lawyer’s agent] in a manner akin to a highly trained professional” like a junior lawyer or a paralegal.