There was a great conversation on LinkedIn the other day. A lawyer shared why they refused to sign a consent form allowing their therapist to record sessions and use AI to generate notes. Their concerns centered on accuracy and inference: how these AI-generated summaries might be misused or misunderstood in a legal context. And honestly, they were right to be skeptical. (We hadn't previously considered how incorrect assumptions from the AI could enter the official record, potentially carrying future legal implications!)
The replies were all over the place. Some argued that AI saves time; others said it's fine as long as it's HIPAA-compliant. But HIPAA compliance just means the platform follows certain rules. It says nothing about whether the output is accurate, responsible, or safe. (And, like we've mentioned previously, HIPAA compliance does not guarantee that there will never be a data breach...)
That's why we built Quill differently. No recordings. No transcripts. No raw session audio. No inference or assumptions from the AI. The therapist gives us their summary -- in their own words -- and we help format it into a structured note. That's it. No data is stored. Nothing is used for training. The therapist stays in control and determines which details are important and relevant, not the AI.
Not all AI tools are created equal. That LinkedIn thread reminded us how important it is to keep talking about how these tools actually work -- and what therapists and clients are being asked to trade in exchange for a little convenience.
It also made us think about how everyone comes at this from a different perspective. Therapists have a viewpoint, therapy clients have a viewpoint, software developers have a viewpoint, lawyers have a viewpoint, and of course you have a viewpoint too!
And hey, if you want to connect with us on LinkedIn to be a part of this conversation, let's do it! Send a connection request to Jon and also follow our LinkedIn page!