# Handbook

## What is Forge LCDL?
Forge LCDL is a private Python library for synchronous, governed calls to OpenAI-compatible chat APIs (/v1/chat/completions): versioned Markdown contracts per task, stable task_id + v1 dispatch, Result[Ok, Err], and…
Forge LCDL does not run Playwright, Docker, or Forge Fleet jobs. Consumers own the browser, process, container image, or HTTP service; LCDL stays a callable library wired in via pip.
## Problems it solves
- Typed failures instead of naked exceptions: transport (`TransportFailure`), gateway-shaped bodies (`GatewayFailure`), JSON repair (`ParseFailure`), contract/schema mismatch (`SchemaFailure`, `ConfigFailure`).
- Repeatable prompts and outputs: each governed task binds input/output shape + operator instructions in `src/forge_lcdl/contracts/<task_id>/v1/contract.md` (optional sidecar `CONTRACT-SPEC.md`).
- Composable control flow (`forge_lcdl.operators`): `seq`, `fallback_chain`, `until_ok`, `branch`, `try_catch`, `optional_step`, `for_each`, `repeat`.
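For intuition, this composition style can be sketched with local stand-ins. The `Ok`/`Err` dataclasses and the `fallback_chain` below are illustrative assumptions, not the actual `forge_lcdl.result` or `forge_lcdl.operators` implementations:

```python
from dataclasses import dataclass
from typing import Any

# Illustrative stand-ins for the library's Result types (assumed shapes).
@dataclass
class Ok:
    value: Any

@dataclass
class Err:
    error: Any

def fallback_chain(*steps):
    """Try each Result-returning step in order; first Ok wins, else last Err."""
    def run(payload):
        last = Err("no steps")
        for step in steps:
            last = step(payload)
            if isinstance(last, Ok):
                return last
        return last
    return run

flaky = lambda p: Err("ParseFailure")    # first attempt fails
repair = lambda p: Ok({"repaired": p})   # fallback succeeds

print(fallback_chain(flaky, repair)("x"))  # Ok(value={'repaired': 'x'})
```

The point is that failures stay values: callers branch on `Ok`/`Err` instead of wrapping every call in try/except.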
## Mental model for agents
- `run_task(task_id, "v1", payload, profile=...)` — load contract, build messages, call chat, parse/verify, return `Ok(payload)` or `Err(...)`.
- `LlmEnvProfile` (`read_certificator_profile` / `read_taxonomy_profile`) — base URL, model, timeout; matches how forge-certificators configures Granite-style gateways (consumer env).
- `TaskRunner(chat=...)` — swap the transport for a `fake_chat` in tests (`ChatResult(ok, body)`).
Snippet (tests / offline tooling — same pattern as the README):

```python
from forge_lcdl import TaskRunner
from forge_lcdl.types import ChatResult

def fake_chat(messages, **kwargs):
    # Canned gateway body; no network call.
    return ChatResult(True, '{"chunk_results":[{"chunk_id":"a","is_question_block":true,"confidence":0.9,"reason":"mcq"}]}')

runner = TaskRunner(chat=fake_chat)
runner.run("pw_chunk_classify", "v1", {...}, profile=profile)
```
## Core API map (where to start)
| Module / entry | Purpose |
|---|---|
| `run_task`, `TaskRunner`, `contracts_root` | Task dispatch and overrides |
| `forge_lcdl.generic` | `chat_with_json_mode_then_plain`, `parse_json_object_lenient`, truncation, URLs |
| `forge_lcdl.transport` | Blocking urllib `chat_completion_sync` |
| `forge_lcdl.operators` | Sequential / retry / branching composition |
| `forge_lcdl.execution` | `LcdlClient`, `ExecutionEngine`, `ExecutionPolicy`, RAG + routing — CLIENT-API.md, EXECUTION-ENGINE.md |
| `forge_lcdl.retrieval`, `forge_lcdl.inference` | Evidence packs, `Retriever`, planner tasks — RAG.md |
| `forge_lcdl.prompts` | Stable prefixes, `prompt_cache_key` helpers — PROMPT-CACHING.md |
| `forge_lcdl.tasks.packs` | `FORGE_LCDL_TASK_PACKS` — TASK-PACKS.md |
| `forge_lcdl.verification`, `forge_lcdl.repair` | Post-task checks and structured retry hints (VERIFICATION.md, REPAIR-LOOPS.md) |
| `forge_lcdl.mcp_client` | Optional MCP client hub + Playwright adapter (MCP-CLIENT.md); extra `pip install 'forge-lcdl[mcp]'`. LCDL as MCP server in Cursor: MCP-SIDECAR.md. |
| `forge_lcdl.graph`, `forge_lcdl.context` | DecisionPack DAG + bounded repo context without LLM (GRAPH.md, CONTEXT-PACKS.md) |
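To see what lenient JSON parsing means in practice, here is the kind of repair such a helper performs: pulling the first balanced JSON object out of chatty model output. This is a hedged sketch, not the actual `parse_json_object_lenient` implementation, which may handle more cases (code fences, trailing commas, braces inside strings):

```python
import json

def parse_first_json_object(text):
    """Extract and decode the first balanced {...} object embedded in text.

    Illustrative only: naive depth counting, so braces inside string
    values would confuse it. Returns None when nothing decodes.
    """
    start = text.find("{")
    if start == -1:
        return None
    depth = 0
    for i, ch in enumerate(text[start:], start):
        if ch == "{":
            depth += 1
        elif ch == "}":
            depth -= 1
            if depth == 0:
                try:
                    return json.loads(text[start:i + 1])
                except json.JSONDecodeError:
                    return None
    return None

print(parse_first_json_object('Sure! Here you go: {"ok": true} Hope that helps.'))
# {'ok': True}
```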
Depth topics: PLAYWRIGHT-DISCOVERY.md, PAGE-MECHANICS.md, GAME-ENGINE.md, BENCHMARKS.md, ALPHA-ROADMAP.md.
## Relationship to Forge Certificators
forge-certificators is today’s main consumer in this workspace: it runs Playwright locally, gathers probes/chunks, and calls run_task for pw_* catalog tasks (PLAYWRIGHT-DISCOVERY.md).
Example — `pw_chunk_classify`, from the sibling checkout `forge-certificators/src/forge_certificators/source_ingest/playwright_llm_page_discovery.py`:

```python
res = run_task(
    "pw_chunk_classify",
    "v1",
    {"url": url, "chunks": chunks, "temperature": temperature},
    profile=prof,
    chat=chat,
    pre_chat=pre_chat,
)
```
Phase A page-kind routing can use either the catalog task `pw_page_kind_route` or a compact phasea-json path built on `run_json_contract_task` + `chat_once` — imports in forge-certificators (`forge-certificators/src/forge_certificators/source_ingest/core/phase_a.py`):

```python
from forge_lcdl.generic.chat_policy import chat_once
from forge_lcdl.generic.json_task import run_json_contract_task
from forge_lcdl.result import Err, Ok, Result
```
Pipeline scripts pass `run_task` into `run_phase_a_scan_route_sync` (as `run_task_fn=`). Example (forge-certificators `scripts/pipeline/phase_a/run_fixture_bundle_http.py`):

```python
run_phase_a_scan_route_sync(
    page,
    url=url,
    profile=profile,
    run_task_fn=run_task,
    operator_hints=ns.operator_hints,
    ...
)
```
The same `run_task_fn=run_task` pattern appears in experimentation drivers (forge-certificators `scripts/pipeline/experiments/monte/mc_phase_a_strategy_seek.py`):

```python
from forge_lcdl import run_task
from forge_certificators.source_ingest.core.phase_a import run_phase_a_scan_route_sync

res = run_phase_a_scan_route_sync(
    page,
    url=ns.url,
    profile=profile,
    run_task_fn=run_task,
    operator_hints="mc_phase_a_strategy_seek; authorized crawl.",
)
```
CLI acknowledgement: source_ingest discovery requires `--allow-lcdl`, so operators explicitly opt in to LCDL-backed flows (see `forge_certificators/source_ingest/cli.py`).
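The opt-in gate can be pictured as a plain argparse flag. This is a sketch of the pattern only; the real wiring in `cli.py` may differ:

```python
import argparse

parser = argparse.ArgumentParser(prog="source_ingest")
# Operators must pass --allow-lcdl to enable LLM-backed discovery flows.
parser.add_argument(
    "--allow-lcdl",
    action="store_true",
    help="explicitly opt in to LCDL-backed discovery",
)

ns = parser.parse_args(["--allow-lcdl"])
if not ns.allow_lcdl:
    raise SystemExit("discovery requires --allow-lcdl")
print("LCDL-backed discovery enabled")
```

Gating behind an explicit flag keeps LLM calls (and their cost and data exposure) a deliberate operator decision rather than a silent default.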
## forge-lcdl-runtime (separate wheel)
forge-lcdl-runtime is an optional sibling wheel: disk-backed `ChatSession`, RAG-ish helpers, and DecisionPack execution helpers. Example: prose MCQ extraction in forge-certificators uses `forge_lcdl_runtime` `PackExecutor` paths while still returning structured items for extractor hints — see `lcdl_prose_mcq_incremental.py`. Do not conflate `forge_lcdl` (contracts + runner) with `forge_lcdl_runtime` (orchestration/RAG adjuncts).
## Private repository hygiene
Treat forge-lcdl as private. Do not commit API keys, live gateway URLs, or customer content into docs or tests. See CONTRIBUTING.md.
## Further reading
- CLIENT-API.md — `LcdlClient`, `execute_graph`, `ExecutionPolicy` examples.
- EXECUTION-ENGINE.md — orchestration phases and traces.
- RAG.md — retrieval modes, evidence packs, citations.
- PROMPT-CACHING.md — `cached_tokens`, `extra_body`, `PromptPlan`.
- TASK-PACKS.md — `FORGE_LCDL_TASK_PACKS`.
- ADOPTION.md — dependency install + as-built consumer index (paths into forge-certificators).
- MCP-CLIENT.md — `McpHub`, `PlaywrightAdapter`, snapshot/evidence helpers, policy defaults, live-test gate.
- MCP-SIDECAR.md — expose LCDL operations as MCP tools in Cursor.