Last updated: March 2026 · Source: agent-time-awareness
Smart Scheduler & Deadline Watch
Time Context Service (TCS): a lightweight Python service that gives LLM agents accurate temporal awareness, background task tracking with timeout detection, and a persistent event log that survives context compaction.
What
TCS solves three related problems LLM agents have with time:
1. Temporal Context Injection
Generates a ready-made time context block for system prompts: current timestamp, day of week, relative timezone info, and contact quiet-hours. An agent that starts every session with this context knows exactly when it is.
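As a sketch of what such a block might look like (illustrative only, not the actual TCS implementation; the `quiet_hours` parameter is a hypothetical stand-in for per-contact configuration):

```python
from datetime import datetime, timezone

def build_time_context(now=None, tz_label="UTC", quiet_hours=None):
    """Render a human-readable time context block for a system prompt."""
    now = now or datetime.now(timezone.utc)
    lines = [
        f"Current time: {now.isoformat(timespec='seconds')} ({tz_label})",
        f"Day of week: {now.strftime('%A')}",
    ]
    if quiet_hours:
        lines.append(f"Contact quiet hours: {quiet_hours}")
    return "\n".join(lines)
```

The result is prose-like text the agent can read directly, rather than raw JSON it would have to interpret.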
2. Task Lifecycle Tracker
Register background tasks, poll them at configurable intervals, detect timeouts, and mark them complete. Prevents agents from starting duplicate work or forgetting to follow up on async operations.
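A minimal sketch of that lifecycle, kept in memory here for brevity (TCS itself persists this state to SQLite; all names are illustrative):

```python
import time
import uuid

class TaskTracker:
    """Minimal lifecycle: register a task, list by status, mark done."""

    def __init__(self):
        self.tasks = {}

    def start_task(self, name, timeout_s=300, interval_s=30):
        task_id = str(uuid.uuid4())
        self.tasks[task_id] = {
            "name": name, "status": "running", "started": time.time(),
            "timeout_s": timeout_s, "interval_s": interval_s,
        }
        return task_id

    def list_tasks(self, status=None):
        # Filter by status; status=None returns everything
        return [tid for tid, t in self.tasks.items()
                if status is None or t["status"] == status]

    def finish_task(self, task_id, status="completed"):
        self.tasks[task_id]["status"] = status
```

Because each task has an id and a status, an agent can check `list_tasks(status="running")` before starting work and avoid duplicates.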
3. Persistent Event Log
A SQLite-backed timeline of agent events. Because it lives outside the LLM context window, it survives context compaction. Agents can query "what happened in the last 2 hours" even after their context was trimmed.
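A sketch of how such a time-range query might work against SQLite (illustrative schema and helper names, not the actual TCS code):

```python
import sqlite3
import time

def open_log(path=":memory:"):
    db = sqlite3.connect(path)
    db.execute("""CREATE TABLE IF NOT EXISTS events (
        ts REAL NOT NULL, type TEXT NOT NULL, summary TEXT NOT NULL)""")
    return db

def log_event(db, type_, summary, ts=None):
    db.execute("INSERT INTO events VALUES (?, ?, ?)",
               (ts if ts is not None else time.time(), type_, summary))
    db.commit()

def events_since(db, seconds_ago, now=None):
    """Answer 'what happened in the last N seconds', e.g. 7200 for 2 hours."""
    cutoff = (now or time.time()) - seconds_ago
    return db.execute(
        "SELECT type, summary FROM events WHERE ts >= ? ORDER BY ts",
        (cutoff,)).fetchall()
```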
Why
LLMs have no real-time clock. They know the world up to a training cutoff, not the current moment. Without explicit time injection, an agent cannot correctly answer "what day is it?" or reason about deadlines and scheduling. This is easily fixed at session start, but doing it reliably requires a dedicated service.
Background task tracking is harder. An agent that kicks off a long-running process (a build, a test run, a data pipeline) needs to check back on it. Without external state, the agent either polls every turn, creating noise, or forgets entirely after context compaction. TCS keeps task state in SQLite outside the context window, so it persists across session resets.
The event log addresses the same problem for history: context compaction silently removes earlier events. A SQLite event log outside the context window provides a durable timeline that any session can query, regardless of how many times the context has been trimmed.
Architecture
TCS runs as an MCP server, exposing all three layers as tools that agents call natively via the Model Context Protocol.
| Tool | Description |
|---|---|
| get_temporal_context | Current time context (text or JSON) for system prompt injection |
| start_task | Register a background task for tracking |
| poll_task | Check if a task should be polled now (based on configured interval) |
| finish_task | Mark a task completed or cancelled |
| list_tasks | List tasks, optionally filtered by status |
| check_timeouts | Scan all running tasks for timeout |
| log_event | Append an event to the persistent timeline |
| query_timeline | Query events by time range or type |
| search_events | Full-text search across event summaries |
| get_stats | Activity statistics |
The SQLite database is stored outside the agent's workspace, on a path that survives session restarts and context compaction. WAL mode enables concurrent reads alongside the server's writes. Task state and the event log share the same database but use separate tables.
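An illustrative sketch of that layout (the actual TCS schema may differ):

```python
import sqlite3

def open_db(path):
    db = sqlite3.connect(path)
    db.execute("PRAGMA journal_mode=WAL")  # readers don't block the writer
    # Task state and event log in the same file, separate tables
    db.execute("""CREATE TABLE IF NOT EXISTS tasks (
        id TEXT PRIMARY KEY, name TEXT, status TEXT,
        started REAL, timeout_s REAL, interval_s REAL, last_polled REAL)""")
    db.execute("""CREATE TABLE IF NOT EXISTS events (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        ts REAL, type TEXT, summary TEXT, metadata TEXT)""")
    return db
```

Note that WAL mode only takes effect for file-backed databases, not `:memory:` connections.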
Key Design Decisions
External process, not in-context state
TCS runs as a separate service. State lives in SQLite, not in the agent's context. This is the fundamental design choice: context compaction cannot erase task state or event history because they're stored outside the context window entirely.
Smart polling: "should I poll now?", not raw timestamps
The poll_task tool returns a boolean: should the agent check on this task right now? This abstracts the interval logic away from the agent. The agent doesn't need to track last-polled timestamps itself; TCS handles it.
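The gate can be sketched as follows (illustrative; TCS stores `last_polled` in SQLite rather than a dict):

```python
import time

def should_poll(task, now=None):
    """Gate: True only once the configured interval has elapsed.

    `task` is a dict with 'interval_s' and 'last_polled' keys (illustrative).
    """
    now = now or time.time()
    if now - task["last_polled"] >= task["interval_s"]:
        task["last_polled"] = now  # record this poll so the next call gates again
        return True
    return False
```

The agent can call this every turn without consequence; it only gets a green light when the interval has actually elapsed.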
MCP as the interface
Exposing TCS via MCP means any MCP-compatible agent can use it with no custom integration. The agent treats TCS tools the same as any other tool in its toolbox. This also makes it easy to add new tools without changing the agent configuration.
Time context injected at session start, not on every message
The temporal context block belongs in the system prompt, not repeated in every user message. Calling get_temporal_context once at session start and including the result in the system prompt is sufficient; the timestamp is accurate enough for scheduling purposes.
How to Build Your Own
1. Put temporal state outside the context window
The core insight: any state that needs to survive context compaction must live in an external store. SQLite is a good choice: lightweight, file-based, no server required. WAL mode allows concurrent reads without blocking writes.
2. Inject a time context block into every session-start system prompt
Generate a structured block at session start: current ISO timestamp, day of week, local timezone offset, and any relevant contact quiet-hours. Format it as human-readable prose, not raw JSON; the agent's reasoning about time will be more reliable.
3. Design poll_task as a gate, not a data source
The agent should call poll_task on every turn that might involve a tracked task, but the tool should return "yes, check now" or "not yet", not the task status itself. This prevents the agent from polling an external system too frequently.
4. Event log schema: timestamp + type + summary + optional metadata
Keep the event log schema minimal. The summary field should be a short human-readable string that's searchable by FTS. Store structured metadata separately. Never delete events β archive or flag them instead. Agents querying "what happened" need the full history.
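One way to sketch this schema with SQLite's FTS5 extension (illustrative; assumes an SQLite build with FTS5 enabled, which standard Python builds generally include):

```python
import sqlite3

def open_event_log(path=":memory:"):
    db = sqlite3.connect(path)
    db.execute("""CREATE TABLE IF NOT EXISTS events (
        id INTEGER PRIMARY KEY, ts REAL, type TEXT,
        summary TEXT, metadata TEXT)""")
    # FTS5 index over summaries only; structured metadata stays in `events`
    db.execute("""CREATE VIRTUAL TABLE IF NOT EXISTS events_fts
                  USING fts5(summary, content='events', content_rowid='id')""")
    return db

def record_event(db, ts, type_, summary, metadata=None):
    cur = db.execute(
        "INSERT INTO events (ts, type, summary, metadata) VALUES (?, ?, ?, ?)",
        (ts, type_, summary, metadata))
    # External-content FTS tables must be kept in sync manually
    db.execute("INSERT INTO events_fts (rowid, summary) VALUES (?, ?)",
               (cur.lastrowid, summary))
    db.commit()

def search_events(db, query):
    return db.execute(
        """SELECT e.ts, e.type, e.summary
           FROM events_fts JOIN events e ON e.id = events_fts.rowid
           WHERE events_fts MATCH ? ORDER BY e.ts""", (query,)).fetchall()
```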
5. Per-task timeouts, not just global ones
Different tasks have different expected durations. Store a timeout value per task at registration time. The check_timeouts tool scans all running tasks and flags those that have exceeded their individual timeout, not a single global limit.
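A sketch of the per-task scan, assuming a `tasks` table with `started`, `timeout_s`, and `status` columns (illustrative, not the actual TCS query):

```python
import sqlite3

def check_timeouts(db, now):
    """Flag running tasks whose own timeout has elapsed; return their ids."""
    rows = db.execute(
        """SELECT id FROM tasks
           WHERE status = 'running' AND ? - started > timeout_s""",
        (now,)).fetchall()
    ids = [r[0] for r in rows]
    if ids:
        # Each task is judged against its own timeout_s, not a global limit
        db.executemany("UPDATE tasks SET status = 'timed_out' WHERE id = ?",
                       [(i,) for i in ids])
        db.commit()
    return ids
```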
Frequently Asked Questions
Why does an LLM agent need a separate time service?
LLMs have no real-time clock and no persistent memory between context windows. TCS solves both: it provides the current timestamp on demand, and its SQLite event log records what happened even after the context is compacted or the session restarted.
What is "smart polling" for tasks?
The poll_task tool returns whether enough time has elapsed since the last poll, based on the configured interval. This prevents an agent from polling a background job every message turn when it only needs to check every 30 seconds.
Does the event log survive an OpenClaw session restart?
Yes. Events are stored in a SQLite file outside any session state. As long as that file is present, historical events are queryable from any session.
Authors: Qiushi Wu & Orange