At Quill, we believe that the ethical use of AI starts with respecting the boundaries of both therapists and their clients. That’s why none of the data submitted to Quill -- not the session summaries you provide, and not the documentation we generate for you -- is ever stored. And absolutely none of it is used to train large language models (LLMs). Our AI does not learn from your notes, it doesn’t get smarter from your session summaries, and it doesn’t remember anything at all.
There’s a growing concern across the mental health field about where sensitive information goes when it passes through AI tools. And that concern is absolutely justified! Some platforms quietly use client data to improve their AI models, which means those models are effectively trained on real clients’ details -- and, in the case of the many AI tools that record therapy sessions, on real, private conversations.
At Quill, this "training AI" practice is not just discouraged -- it’s structurally impossible. We never record your sessions, so that data simply doesn’t exist. And because we never store the session summaries you do provide, there’s nothing for us to send to anyone else and nothing for our models to learn from. This is a deliberate design choice rooted in ethics and privacy: our AI helps write your notes, and that’s it. Your data goes no further.