Show HN: I'm 13 and built an AI that remembers context across conversations
ai.nityasha.com

Hi HN! I'm Amber (13), and together with my dad Raj I built Nityasha AI from Guna, India. After my dad's 12 years of failed startups (2012-2023), we created a personal AI assistant that handles email, coding help, research, and planning in one conversational interface.
I started coding at 9 on a 4GB RAM laptop. We failed 8 times before this—coupon sites, freelancing platforms, consulting. Nityasha is different: it uses Thesys generative UI for visual charts, includes Study Mode with Socratic teaching, and integrates everything so you don't need 10 tabs open.
500+ active users now. We just launched Nityasha Connect where businesses can integrate services directly into the AI.
Would love your feedback!
I don't think you should announce your age on a public forum. Generally, no one should at any age (I break this rule, but no one is trying to groom middle-aged men). At your age, definitely don't.
Kill your account and start fresh with better opsec. You're inviting predators.
Oh, I only just scanned the description and it sounds like you're likely female (to a westerner, anyway). Holy crap, kill this account and never link back to it for the next decade or more.
Oh cool another "I did something with existing technology and I'm THIS MANY YEARS OLD! LOOK AT ME"
[stub]
So a wrapper around an existing LLM? Even ChatGPT can reference previous conversations and remember context.
Also, the fake comments (or friends' comments?) are not really appreciated on Show HNs.
Congrats on building it.
Fair point on the "wrapper" label. Let me clarify what we're building on top of base LLMs:
Yes, we use the OpenAI/Anthropic APIs; we're not training models from scratch (neither do Perplexity, Jasper, or most other AI tools).
What we add (technical details):
1. Persistent Memory Architecture
- Vector embeddings of user context stored in Pinecone
- Semantic search across past conversations (not just in-session)
- Retrieval pipeline: query → embed → cosine similarity → top-k memories → inject into prompt
- Challenge: managing token costs while maintaining context
2. Socratic Teaching System (Study Mode)
- Question analysis: detect knowledge gaps
- Progressive hint generation (not just Q&A)
- Tracks learning progression
- Example: instead of "here's binary search code", it asks "what property of sorted arrays makes this possible?"
3. Unified Workflow Integration
- Email parsing + calendar sync + task extraction
- Single interface reduces context switching
- Memory persists across all tools
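To make the "progressive hint generation" idea in Study Mode concrete, here is a minimal sketch of how hint escalation could be wired up. The function names, hint levels, and prompt wording are illustrative assumptions, not Nityasha's actual implementation:

```python
# Hedged sketch: progressive Socratic hints that escalate as the
# student's failed attempts accumulate. All wording is hypothetical.

HINT_LEVELS = [
    "Ask one guiding question that points at the key property, without naming it.",
    "Name the key concept and ask the student how it applies here.",
    "Walk through the first step of the solution, then ask for the next step.",
]

def build_hint_prompt(question: str, attempts: int) -> str:
    """Escalate from pure questions toward partial answers as attempts grow."""
    level = min(attempts, len(HINT_LEVELS) - 1)
    return (
        "You are a Socratic tutor. Never give the full answer outright.\n"
        f"Student question: {question}\n"
        f"Instruction: {HINT_LEVELS[level]}"
    )

# First attempt gets only a guiding question; repeated attempts get more help.
prompt = build_hint_prompt("Why does binary search need a sorted array?", attempts=0)
```

The escalation ladder is the design choice that distinguishes this from plain Q&A: the model is constrained to tutor rather than answer, and the constraint loosens only as the student demonstrates they are stuck.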
Architecture overhead:
- User sends query
- Retrieve relevant memories (vector search)
- Build augmented context window
- Send to LLM with enriched prompt
- Generate + store new embeddings
- ~200ms additional latency for memory operations
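The retrieval flow above (embed query → cosine similarity → top-k → inject into prompt) can be sketched in a few lines. The `embed()` stub below is a toy stand-in for a real embedding model, and the in-memory list stands in for Pinecone; only the shape of the pipeline reflects the description:

```python
# Hedged sketch of the memory retrieval pipeline described above.
# embed() is a deterministic toy; a real system would call an embedding
# API and query a vector store such as Pinecone instead of a Python list.
import math
from typing import List

def embed(text: str, dim: int = 8) -> List[float]:
    """Toy unit-normalized embedding based on character codes."""
    vec = [0.0] * dim
    for i, ch in enumerate(text.lower()):
        vec[i % dim] += ord(ch)
    norm = math.sqrt(sum(x * x for x in vec)) or 1.0
    return [x / norm for x in vec]

def cosine(a: List[float], b: List[float]) -> float:
    # Vectors are already unit-normalized, so the dot product is the cosine.
    return sum(x * y for x, y in zip(a, b))

def top_k_memories(query: str, memories: List[str], k: int = 2) -> List[str]:
    """Rank stored memories by similarity to the query and keep the top k."""
    q = embed(query)
    ranked = sorted(memories, key=lambda m: cosine(q, embed(m)), reverse=True)
    return ranked[:k]

def build_prompt(query: str, memories: List[str], k: int = 2) -> str:
    """Inject the top-k recalled memories into the context sent to the LLM."""
    context = "\n".join(f"- {m}" for m in top_k_memories(query, memories, k))
    return f"Relevant past context:\n{context}\n\nUser: {query}"
```

Capping injection at top-k is also where the token-cost trade-off mentioned above lives: a larger k recalls more context but inflates every prompt.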
You're right that the base intelligence is GPT-4/Claude. But saying "wrapper" feels like saying Notion is "just a wrapper around PostgreSQL" or Stripe is "just a wrapper around payment processors."
The value is in the layer we built, not the underlying model.
That said - we should've been clearer about this upfront. Our first comment didn't explain the technical depth. That's on us.
Re: the fake comments - you're absolutely right to call that out. Those were friends/early users we asked to support the launch. That was a mistake and goes against HN's culture of authentic discussion.
I'm 13 and this is our first HN launch. We didn't understand how much the community values genuine engagement over orchestrated support. Won't happen again.
Apologies to the HN community for trying to game the system. Should've let the product speak for itself.
Appreciate you taking time to give honest feedback instead of just downvoting. This is exactly why we launched here - to learn from people who know better.
What would make this feel genuinely useful vs "just another wrapper" to you?
We've moved all the comments from new accounts to a stub to prevent them from gumming up the thread.
Thank you so much for using Nityasha, Abir! Really appreciate you being an early user.
We spent a lot of time on the UI trying to make it feel natural and not overwhelming. My dad kept saying, "if a 13-year-old can't understand it instantly, we failed."
Out of curiosity - what features do you use most? And what would you like to see added?
Also, when you say "better than some new gen slms" - are there specific areas where you find it more helpful? Always trying to understand what's working so we can double down on it.
Thanks again for the support!
Thanks Ayish! Glad it's working well for you.
Quick clarification though - Nityasha isn't actually an SLM (Small Language Model) itself. We use existing large models (like GPT-4, Claude) but add a persistent memory layer on top + specialized features like Study Mode.
The "showing results" part is what we're most proud of - we tried to focus on actually being useful for daily tasks rather than just being impressive in demos.
What kind of tasks are you using it for? Would love to hear more about your workflow!