Security protocols for Telegram AI agents
When building AI agents connected to open channels like Telegram, stringent security protocols must be applied to prevent prompt injection and unauthorized use.
The Risks on Open Platforms
Telegram bots are often public by default. If your OpenClaw agent isn't securely restricted, anyone who discovers the bot's username can spam queries to exhaust your token budget or attempt to manipulate its core instructions with jailbreak prompts.
1. Authenticating Users
Clawz deployments allow you to restrict bot usage via explicit allowlists. In your configuration, specify the authorized Telegram user IDs. IDs not on the allowlist receive a generic "I am not authorized to talk to you" response, saving bandwidth and tokens.
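The check itself is simple. A minimal sketch in Python, assuming a hypothetical `ALLOWED_USER_IDS` setting and a placeholder `run_agent` function (actual Clawz configuration keys may differ):

```python
# Hypothetical allowlist of Telegram user IDs; in a real deployment
# this would come from your Clawz configuration, not a literal.
ALLOWED_USER_IDS = {123456789, 987654321}

def run_agent(text: str) -> str:
    # Placeholder for the actual agent invocation.
    return f"agent reply to: {text}"

def handle_message(user_id: int, text: str) -> str:
    if user_id not in ALLOWED_USER_IDS:
        # Fixed rejection reply: no model call is made, so no tokens
        # are spent on unauthorized traffic.
        return "I am not authorized to talk to you"
    return run_agent(text)
```

Because the rejection happens before any model call, unauthorized traffic costs you nothing beyond the Telegram round trip.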
2. Prompt Injection Defense
AI agents can be tricked into ignoring their `SOUL.md` prompt.
- System Prompt Reinforcement: Always wrap user input in a strict system prompt before passing it to the language model.
- Content Filtering: Detect keywords or formatting indicative of common jailbreaks (e.g., "Ignore previous instructions").
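Both defenses can be sketched together. This is a naive illustration, not a complete defense: the phrase list is illustrative, and real deployments would pair it with more robust detection. The function names are hypothetical.

```python
# Naive keyword filter for common jailbreak phrasing. Matching is
# case-insensitive; the list is illustrative, not exhaustive.
SUSPICIOUS_PATTERNS = [
    "ignore previous instructions",
    "disregard your system prompt",
    "you are now",
]

def looks_like_injection(text: str) -> bool:
    lowered = text.lower()
    return any(p in lowered for p in SUSPICIOUS_PATTERNS)

def build_messages(system_prompt: str, user_text: str) -> list[dict]:
    # Keep user input strictly in the user role so the model never
    # treats it as part of the system instructions.
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_text},
    ]
```

Keeping untrusted input out of the system role is the key design choice: the model is then free to treat "Ignore previous instructions" as content to respond to, not an instruction to obey.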
3. API Key Isolation on VPS
A core feature of Clawz is strict VPS isolation. Your OpenAI, Anthropic, or Gemini keys are securely injected during Hetzner server initialization and never exposed to the public internet. They exist only within the runtime memory of your OpenClaw agent container, preventing accidental leaks.
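The same pattern applies to any code you run alongside the agent: read keys from the environment at startup and never write them to logs. A hedged sketch, assuming a standard `ANTHROPIC_API_KEY`-style environment variable (the `redact` helper is hypothetical):

```python
import os

def load_api_key(var: str = "ANTHROPIC_API_KEY") -> str:
    # The key is injected as an environment variable at server
    # initialization and lives only in process memory.
    key = os.environ.get(var)
    if not key:
        raise RuntimeError(f"{var} is not set; check server initialization")
    return key

def redact(key: str) -> str:
    # When logging, show only a short prefix, never the full secret.
    return key[:6] + "..." if len(key) > 6 else "***"
```

Combined with the container boundary, this keeps the secret out of source control, out of logs, and off disk.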