While I’m actively looking for a deep-tech co-founder for GlyphAI, most of my time right now is occupied by academic work and ongoing research.
That work keeps reinforcing why GlyphAI must exist.
During my current research cycle, I came across several publications that I found genuinely exciting, because they independently validate ideas we are building into GlyphAI at the architectural level, not just the conceptual one.
A few highlights worth sharing:
• Mathematical foundations of AI that formalize intelligence beyond heuristics and scale
• Cognitive linguistics and biocognitive science showing that meaning is not stored as raw symbols, but as structured, layered concepts shaped by culture and context
• Recent work arguing that modern language models are injective and therefore invertible: their internal representations preserve inputs losslessly, a result with serious implications for privacy, transparency, and reconstruction risk
Taken together, these works point to something important:
AI systems do not “forget” as much as we assume.
They preserve meaning—even when we think they don’t.
This is exactly why semantic-level protection matters more than surface-level filtering or after-the-fact moderation.
GlyphAI is being designed around this insight:
• Work with meaning, not raw data
• Encode intent, not personal content
• Enable memory without exposure
• Build protection into the architecture, not the policy layer
I’ll share more thoughts as this research progresses. For now, I’m grateful that independent academic work continues to converge on the same conclusion:
If AI systems preserve meaning by design,
then safety, privacy, and accountability must also be designed at the semantic level.
If you’re working in symbolic AI, model interpretability, privacy-preserving ML, or foundational AI theory, and you’re curious about building something deep and difficult, feel free to reach out.
— Building GlyphAI
Protecting meaning. Protecting humans.
