Log Formatting and Highlighting
Format job logs so the Logs tab can parse, color, and search them cleanly.
Durabull's Logs tab now applies structured highlighting by parsing common log patterns.
If your workers emit logs in the formats below, users get:
- timestamp detection
- level badges (DEBUG, INFO, WARN, ERROR, etc.)
- context tags
- key/value coloring
- reliable keyword search
- automatic long-line truncation for readability
Recommended Log Line Format
Use this shape whenever possible:
[<timestamp>] [<LEVEL>] [<OPTIONAL_CONTEXT>] message | key=value | key2=value2
Example:
[2026-02-13T21:25:43.232Z] [INFO] [SYNC] Processing adjustment | jobId=sync-adjustment-123 | queue=shipment-rebill | attempt=3
Parsing Rules Used by the Logs Viewer
The viewer makes these assumptions, in order:
- Timestamp:
  - first token in brackets, like [2026-02-13T21:25:43.232Z]
  - or a plain timestamp prefix like 2026-02-13T21:25:43.232Z ...
- Level:
  - first bracket token after the timestamp, if it matches a known level
  - or a plain prefix, like INFO ... or ERROR: ...
Recognized levels:
TRACE, DEBUG, INFO, WARN/WARNING, ERROR, FATAL, SUCCESS, CONTEXT
- Context tags:
  - additional bracket tokens after the timestamp/level are treated as context tags
  - examples: [Worker-1], [SyncService], [CONTEXT]
- Message body:
  - remaining text is treated as the message
  - | delimiters split visual segments
  - key=value segments are highlighted as structured fields
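Taken together, these rules can be sketched as a small parser. This is a hypothetical illustration of the order of assumptions described above; `parseLogLine`, the regexes, and the field names here are assumptions for the sketch, not the viewer's actual implementation:

```typescript
// Hypothetical sketch of a parser following the rules above.
type ParsedLine = {
  timestamp?: string
  level?: string
  contexts: string[]
  segments: { text: string; key?: string; value?: string }[]
}

const LEVELS = new Set([
  'TRACE', 'DEBUG', 'INFO', 'WARN', 'WARNING',
  'ERROR', 'FATAL', 'SUCCESS', 'CONTEXT',
])

function parseLogLine(raw: string): ParsedLine {
  let rest = raw
  const result: ParsedLine = { contexts: [], segments: [] }

  // 1. Timestamp: a bracketed token that parses as a date, or a plain prefix.
  const bracketTs = rest.match(/^\[([^\]]+)\]\s*/)
  const plainTs = rest.match(/^(\d{4}[-/]\d{2}[-/]\d{2}[T ][\d:.]+Z?)\s*/)
  if (bracketTs && !Number.isNaN(Date.parse(bracketTs[1]))) {
    result.timestamp = bracketTs[1]
    rest = rest.slice(bracketTs[0].length)
  } else if (plainTs) {
    result.timestamp = plainTs[1]
    rest = rest.slice(plainTs[0].length)
  }

  // 2. Level: first bracket token after the timestamp, if recognized.
  const levelMatch = rest.match(/^\[([A-Z]+)\]\s*/)
  if (levelMatch && LEVELS.has(levelMatch[1])) {
    result.level = levelMatch[1]
    rest = rest.slice(levelMatch[0].length)
  }

  // 3. Context tags: any further bracket tokens.
  let ctx: RegExpMatchArray | null
  while ((ctx = rest.match(/^\[([^\]]+)\]\s*/))) {
    result.contexts.push(ctx[1])
    rest = rest.slice(ctx[0].length)
  }

  // 4. Message body: split on "|" delimiters, detect key=value segments.
  for (const part of rest.split('|').map((s) => s.trim())) {
    const kv = part.match(/^(\w+)=(.*)$/)
    if (kv) result.segments.push({ text: part, key: kv[1], value: kv[2] })
    else result.segments.push({ text: part })
  }
  return result
}
```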
Timestamp Guidance
Preferred:
[2026-02-13T21:25:43.232Z]
Also supported:
2026-02-13T21:25:43.232Z
2026-02-13 21:25:43.232Z
2026/02/13 21:25:43
Recommendation:
- always include a timezone (UTC Z preferred)
- use ISO 8601 for machine-friendly parsing
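In JavaScript/TypeScript workers, Date.prototype.toISOString already produces the preferred shape (ISO 8601, millisecond precision, UTC Z suffix), so no custom formatting is needed:

```typescript
// toISOString() always emits UTC with a trailing "Z" and millisecond
// precision, e.g. "2026-02-13T21:25:43.232Z".
const stamp = new Date().toISOString()
const line = `[${stamp}] [INFO] Worker started`
```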
Structured Segment Guidance
Use | as a stable delimiter between logical parts:
[2026-02-13T21:25:43.232Z] [DEBUG] Recomputing totals | orderId=ord_123 | attempt=2 | source=worker
Why:
- each segment is easier to scan
- key/value pairs are highlighted cleanly
- search results are easier to understand
JSON Payloads in Logs
JSON fragments in the message are highlighted, but still treated as text lines:
[2026-02-13T21:25:43.232Z] [INFO] Payload received | data={"adjustmentId":"abc123","mode":"dry-run"}
Recommendation:
- keep JSON compact (single-line) in logs
- put large payloads in job data/trace storage, not inline logs
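As a sketch of the first recommendation: calling JSON.stringify with no spacing argument yields the compact single-line form, so pretty-printed payloads never reach the log stream.

```typescript
const payload = { adjustmentId: 'abc123', mode: 'dry-run' }
// No third (spacing) argument, so the output stays on a single line.
const data = JSON.stringify(payload)
const line = `[${new Date().toISOString()}] [INFO] Payload received | data=${data}`
```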
Long Line Behavior
To keep logs readable in the UI:
- very long message bodies are automatically truncated
- truncated lines are intentionally not expandable in the viewer
- full raw line remains available via copy/export behavior
Implication:
- put critical fields early in the line
- avoid large stack traces or huge JSON blobs in a single log line
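A minimal sketch of the viewer-side behavior, assuming a hypothetical length limit (the actual limit is internal to the viewer and not part of this document):

```typescript
// MAX_VISIBLE_LENGTH is an assumed value for illustration only.
const MAX_VISIBLE_LENGTH = 500

function truncateForDisplay(line: string): string {
  if (line.length <= MAX_VISIBLE_LENGTH) return line
  // Only the rendered text is shortened; the full raw line remains
  // available via copy/export.
  return line.slice(0, MAX_VISIBLE_LENGTH) + '…'
}
```

Because truncation keeps the head of the line, fields placed early (level, context, key IDs) always stay visible.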
Search Behavior
The Logs tab search is case-insensitive and scans the full raw line text.
Best practices:
- include stable field names (jobId, requestId, queue, attempt)
- prefer consistent level tags and context tags
- avoid free-form variants of the same field (reqId vs requestId)
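The described search behavior amounts to a case-insensitive substring scan of each raw line. A minimal sketch (matchingLines is a hypothetical name, not a Durabull API):

```typescript
// Case-insensitive substring match over raw line text, as described above.
function matchingLines(lines: string[], query: string): string[] {
  const q = query.toLowerCase()
  return lines.filter((line) => line.toLowerCase().includes(q))
}
```

This is why stable field names matter: searching for jobId finds every line that emits the field consistently, but misses lines that used a variant spelling.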
Good vs Bad Examples
Good:
[2026-02-13T21:25:43.232Z] [ERROR] [SYNC] Request failed | jobId=sync-adjustment-123 | requestId=req_516c2008d9054dc2 | status=404
Bad:
something failed maybe 404 id 123 retrying ??? {"huge":"payload", ...}
Worker Helper Pattern (TypeScript)
```typescript
type LogLevel = 'TRACE' | 'DEBUG' | 'INFO' | 'WARN' | 'ERROR' | 'FATAL' | 'SUCCESS'

// Emits one log line in the recommended shape:
// [<timestamp>] [<LEVEL>] [<OPTIONAL_CONTEXT>] message | key=value | ...
export function emitJobLog(
  log: (line: string) => void,
  level: LogLevel,
  message: string,
  fields: Record<string, string | number | boolean> = {},
  context?: string
) {
  const timestamp = new Date().toISOString()
  const contextPart = context ? ` [${context}]` : ''
  const fieldText = Object.entries(fields)
    .map(([key, value]) => `${key}=${String(value)}`)
    .join(' | ')
  const suffix = fieldText ? ` | ${fieldText}` : ''
  log(`[${timestamp}] [${level}]${contextPart} ${message}${suffix}`)
}
```
This keeps log output consistent with viewer parsing and highlighting rules.