ImMemora Labs
Cognitive Memory Graph
At the start, the memory is empty. As the user types, text becomes structured memory: nodes form around Emily Johnson, a fictional patient with a complex medical history.
A moment later the scene expands: two people appear, Anna Smith and John Doe, with events, tests, and decisions. The doctor notes that they are married and that they contracted similar infections during a trip to India. The Memory Layer puts the pieces together: not only a romantic relationship, but also shared causes and effects across their stories.
When a question about John arrives, the memory doesn't dump everything into the model: it activates only the necessary path, the one that actually justifies the answer. Nothing more, nothing less.
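The "activate only the necessary path" behavior can be sketched as a bounded traversal over a toy memory graph. Everything here is illustrative: the entities mirror the demo narrative, while the dictionary structure, relation names, and the `activate_path` helper are assumptions, not the actual Memory Layer API.

```python
from collections import deque

# Toy memory graph: entity -> list of (relation, target) edges.
# Entity names follow the demo story; the schema itself is hypothetical.
GRAPH = {
    "John Doe": [("married_to", "Anna Smith"), ("diagnosed_with", "Infection X")],
    "Anna Smith": [("diagnosed_with", "Infection X")],
    "Infection X": [("contracted_in", "India trip")],
}

def activate_path(graph, start, max_hops=2):
    """Collect only the facts reachable from the queried entity within max_hops,
    instead of handing the model the entire memory."""
    seen = {start}
    facts = []
    queue = deque([(start, 0)])
    while queue:
        node, depth = queue.popleft()
        if depth == max_hops:
            continue  # stop expanding beyond the hop budget
        for relation, target in graph.get(node, []):
            facts.append((node, relation, target))
            if target not in seen:
                seen.add(target)
                queue.append((target, depth + 1))
    return facts
```

A question about John would then activate only the subgraph around "John Doe": the marriage, the shared infection, and its cause, rather than every node in memory.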
Why does this matter? Because every answer has a reason: you can see where it comes from, which facts support it, and where there are contradictions or policies to follow. And because it's efficient: the model reads less text, but the right text. In practice: safer decisions, less noise, lower costs.
We transform conversations and documents into a persistent memory: a graph of facts, events, entities, decisions, and evidence, paired with vector embeddings. Multi-factor weighting (salience, freshness, reliability, time, etc.) guides conflict-aware retrieval and produces an explainable trace. With each use, the memory updates and learns.
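One way to picture the multi-factor weighting is as a scored combination of the factors named above. This is a minimal sketch under stated assumptions: the `MemoryNode` fields, the exponential freshness decay, the 30-day half-life, and the weight values are all illustrative choices, not the product's actual scoring function.

```python
import time
from dataclasses import dataclass

@dataclass
class MemoryNode:
    text: str
    salience: float      # 0..1, importance assigned when the memory is written
    reliability: float   # 0..1, trust in the source of the memory
    created_at: float    # unix timestamp

def freshness(node: MemoryNode, now: float, half_life_days: float = 30.0) -> float:
    """Exponential decay: a node loses half its freshness every half_life_days."""
    age_days = (now - node.created_at) / 86400.0
    return 0.5 ** (age_days / half_life_days)

def score(node: MemoryNode, now: float, w=(0.4, 0.3, 0.3)) -> float:
    """Weighted combination of salience, freshness, and reliability (weights are assumptions)."""
    return w[0] * node.salience + w[1] * freshness(node, now) + w[2] * node.reliability

# Hypothetical usage: rank candidate memories before handing them to the model.
now = time.time()
candidates = [
    MemoryNode("John and Anna contracted similar infections in India", 0.9, 0.9, now),
    MemoryNode("Routine check-up, no findings", 0.2, 0.8, now - 400 * 86400),
]
ranked = sorted(candidates, key=lambda n: score(n, now), reverse=True)
```

Under this scheme, a fresh, salient, reliable fact outranks a stale, low-salience one, which is what lets retrieval read "less text, but the right one."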
Note — POC demo with synthetic data. Illustrative purpose, not medical advice.