Context Management

by fp32

Understanding Context

Context is NOT like computer RAM - increasing it doesn't mean better memory.

What Context Includes:

  • Bot Definition

  • System Prompt

  • Custom Prompt

  • Scenario

  • Persona Definition

  • Chat Memory

  • Chat Messages

Think of context like a cup - as you pour more in, older content spills out. Some content is "permanent" (prompts, memory), leaving less room for chat messages.
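The cup analogy can be sketched in code. This is a minimal illustration, not any platform's real algorithm; `build_context`, the ~4-characters-per-token heuristic, and the 16k default budget are all assumptions:

```python
# Sketch of the "cup" analogy: permanent content is reserved first,
# and the oldest chat messages spill out when the budget is exceeded.
# All names and numbers here are illustrative, not a real API.

def estimate_tokens(text: str) -> int:
    # Rough heuristic: ~4 characters per token for English text.
    return max(1, len(text) // 4)

def build_context(permanent: list[str], messages: list[str],
                  budget: int = 16_000) -> list[str]:
    # Permanent blocks (prompts, persona, memory) always stay in the cup.
    used = sum(estimate_tokens(p) for p in permanent)
    kept: list[str] = []
    # Walk messages newest-first; stop once the cup is full.
    for msg in reversed(messages):
        cost = estimate_tokens(msg)
        if used + cost > budget:
            break  # older messages "spill out"
        kept.append(msg)
        used += cost
    return permanent + list(reversed(kept))
```

Note that the more tokens the permanent blocks consume, the fewer messages survive the loop, which is exactly why heavy prompts shrink your effective chat memory.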


Context Size Guidelines

16,000 tokens is all you need!

  1. LLM Memory is U-Shaped

    • Remembers beginning (prompts/personas) and end (recent messages)

    • Forgets middle content

    • Doesn't distinguish important vs. unimportant information

  2. Large Context = Dumber Model

    • 40% comprehension decrease from 16k to 120k tokens

    • Models forget more often

    • Can't respond to basic questions about recent messages

  3. Large Context = Slower & More Expensive

    • Takes minutes to process

    • Costs multiply (sending 30k-120k tokens per message)

    • "No one on Discord is an oil prince"
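The cost multiplication is simple arithmetic: because the full context is resent with every message, the per-message price scales linearly with context size. A rough sketch, assuming a hypothetical price of $1 per million input tokens (check your provider's actual rates):

```python
# Hypothetical flat price; real providers charge different rates.
PRICE_PER_TOKEN = 1.00 / 1_000_000

for context_tokens in (16_000, 30_000, 120_000):
    cost = context_tokens * PRICE_PER_TOKEN
    print(f"{context_tokens:>7} tokens -> ${cost:.4f} per message")
```

At these assumed rates a 120k context costs 7.5 times more per message than a 16k one, for every single reply.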

Memory Management Tips

If your AI keeps forgetting:

  • Use Chat Memory as bullet points (not narrative style)

  • Monitor token usage in prompts/personas/bots

  • Make every token count at 16k!
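To see why bullet points beat narrative style, compare the same facts written both ways using a rough ~4-characters-per-token estimate (the example text and the heuristic are illustrative; real tokenizers vary):

```python
# Same facts, two styles. Bullet points strip filler words,
# so they cost far fewer tokens in Chat Memory.

def estimate_tokens(text: str) -> int:
    # Rough heuristic: ~4 characters per token.
    return max(1, len(text) // 4)

narrative = (
    "Over the course of their long journey together, Alice and the bot "
    "eventually discovered that the hidden key was buried beneath the old "
    "oak tree on the northern edge of the village."
)
bullets = "- Alice found key\n- Buried under oak tree, north of village"

print(estimate_tokens(narrative), estimate_tokens(bullets))
```

The bullet version carries the same facts at roughly a third of the token cost, leaving more room in the cup for actual chat messages.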


Advanced Tools

Memory Management Prompt

This tool helps maintain narrative continuity and world-building consistency.

Prompt Critique Tool

Steps:

  1. Click the "Deep Thinking" button at the bottom

  2. Copy the prompt below and paste your own prompt after it

Works with any LLM - just change "DeepSeek" to your model name.

Note: If the model self-censors, copy the output quickly before it deletes itself.


Page information

Last updated: 01 Aug 2025 00:39 UTC, originally from here.

Maintained by: Corpses (Thanks FP <3)

Notes: Excerpted the Context Management and Advanced Tools sections. This guide is community-maintained and may evolve over time.
