Coding Practices

This page documents the project's coding conventions, testing philosophy, and LLM prompt principles. These are enforced across all contributions.

Java Code Conventions

  • Java 25 — leverage new language features (records, pattern matching, sealed classes) when they fit
  • Lombok everywhere — no manual getters, setters, builders, equals, or hashCode. Use @RequiredArgsConstructor for dependency injection (never @Autowired)
  • Static imports — use them wherever possible
  • Single-line method declarations — never break method signatures across lines
  • No Javadoc — use one-liner comments only when needed for clarity
  • No FQCN — never use fully qualified class names in logic unless absolutely necessary
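A minimal sketch of these conventions in action (all names here are hypothetical, not actual project classes). It shows records, a sealed hierarchy, a pattern-matching switch, single-line method signatures, and one-liner comments instead of Javadoc; the Lombok rules (`@RequiredArgsConstructor` over `@Autowired`) are omitted because they need the Lombok dependency:

```java
import java.time.LocalDate;

// Sealed interface + records replace a hand-written class hierarchy
sealed interface FieldValue permits TextValue, DateValue {}
record TextValue(String text) implements FieldValue {}
record DateValue(LocalDate date) implements FieldValue {}

class FieldRenderer {
    // Single-line signature; exhaustive pattern-matching switch, no default branch
    static String render(FieldValue value) {
        return switch (value) {
            case TextValue t -> t.text();
            case DateValue d -> d.date().toString();
        };
    }
}
```

Because the switch is exhaustive over the sealed hierarchy, adding a new `FieldValue` variant fails compilation until every switch is updated.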

Fail-Fast Philosophy

  • No fallbacks whatsoever — when a service fails (LLM, config load, etc.), fail immediately. Do not return defaults or alternate values
  • Only exception: i18n locale fallback (missing Swedish prompt → use English)
  • Propagate exceptions — never catch and silently degrade. Errors must be visible
  • No error-hiding — bugs must surface early, not be masked
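The rule and its single exception can be sketched as follows (hypothetical names, assuming prompts are keyed by locale). A missing prompt throws immediately; only the locale lookup is allowed to fall back to English:

```java
import java.util.Locale;
import java.util.Map;

class PromptStore {
    private final Map<Locale, String> prompts;
    PromptStore(Map<Locale, String> prompts) { this.prompts = prompts; }

    String promptFor(Locale locale) {
        // The one sanctioned fallback: missing locale -> English
        String prompt = prompts.getOrDefault(locale, prompts.get(Locale.ENGLISH));
        // Everything else fails fast: no defaults, no silent degradation
        if (prompt == null) throw new IllegalStateException("No prompt for " + locale + " and no English fallback");
        return prompt;
    }
}
```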

Configuration-Driven Design

  • All form fields, prompts, and behavior come from YAML configuration
  • No hardcoded domain-specific logic — the system is generic
  • No hardcoded user-facing text — everything through translation keys
  • All timeouts, models, and thresholds must be in YAML configuration
  • When adding to application.yml, always update src/main/docker/template.env too
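As an illustration of the pairing rule (the keys below are hypothetical, not the project's actual configuration):

```yaml
# application.yml — timeouts and model names live here, never in Java code
app:
  llm:
    model: gpt-4o
    timeout-seconds: 30
```

Each new key would get a matching entry in src/main/docker/template.env (e.g. `APP_LLM_MODEL=` and `APP_LLM_TIMEOUT_SECONDS=` under Spring's relaxed binding), so the Docker template never drifts behind the YAML.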

LLM Prompt Principles

These principles were established through iterative development and apply to all prompt work:

Trust the LLM

Provide rich context (conversation history, field info, schema), not excessive rules. Don't over-steer with many examples or edge-case instructions. Keep prompts concise and context-rich. The LLM is intelligent — let it use that intelligence.

General-Purpose Solutions

All prompt fixes must work for any field type, not just the one being debugged. If a fix works for dates, it must also work for selects, times, text, etc. The system is generic — form fields come from YAML configs.

Dynamic Derivation

Never hardcode lists of capabilities, categories, or knowledge base types in prompt descriptions. Instead, derive them dynamically from registered components at runtime so the prompt auto-updates when components are added or removed.
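A sketch of what dynamic derivation looks like, assuming registered components expose a name and description (the `Capability` record and `PromptBuilder` are illustrative, not project classes). The prompt section is built from whatever is registered, so it updates itself:

```java
import java.util.List;
import java.util.stream.Collectors;

// Illustrative stand-in for a registered component's self-description
record Capability(String name, String description) {}

class PromptBuilder {
    // Derive the capability list from the live registry — never a hardcoded string
    static String capabilitySection(List<Capability> registered) {
        return registered.stream()
                .map(c -> "- " + c.name() + ": " + c.description())
                .collect(Collectors.joining("\n"));
    }
}
```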

No Keyword Matching

Prompts must focus on semantic meaning and intent understanding. Never instruct the LLM to look for specific words or phrases.

Verbatim Message Passing

The root LLM must pass user messages verbatim to the form-filling tool (maximum once per turn). The inner form-filling LLM has its own detailed logic for handling multiple answers in a single message. The knowledge tool may rephrase the user's message into a direct question.

Tool-Calling Architecture

The system uses a two-level LLM architecture:

Root LLM (Orchestration)

  • Receives user message + phase-gated tools via RootOrchestrationTools
  • Selects which tool(s) to call based on conversation phase and message
  • Synthesizes final response from tool results
  • Tools are inner record classes with @Tool annotations
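The shape of such a tool record might look like this. The `@Tool` annotation below is a minimal stand-in defined locally so the sketch is self-contained — in the project it would come from the LLM framework — and the record and method names are hypothetical:

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

// Stand-in for the framework's tool annotation (illustrative only)
@Retention(RetentionPolicy.RUNTIME)
@interface Tool { String value(); }

class RootOrchestrationTools {
    // Tools are inner records; the annotation's description is what the root LLM
    // sees when deciding which tool to call
    record KnowledgeTool() {
        @Tool("Answer the user's question from the knowledge base")
        String answer(String question) { return "stub answer for: " + question; }
    }
}
```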

Inner LLM (Form Filling)

  • Receives form schema + form state + user message
  • Calls FormTools methods to interact with the form engine
  • Drives form completion through conversational interaction
  • Extracts values inline — no separate extraction step

Key Patterns

  • Phase tracking: AtomicReference<ConversationPhase> shared between orchestrator and tool records
  • UserContext propagation: Reactor worker threads lose ThreadLocal context, so tool records call setContext(userContext) and UserContextHolder.clear() inside try/finally blocks
  • Form schema: FormSchemaRenderer produces a flat, DFS-ordered schema with show_when conditions — designed for LLM consumption
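The UserContext pattern can be sketched as follows (a simplified `UserContextHolder` — the real one presumably holds more than a string). The try/finally guarantees the ThreadLocal is cleared even when the tool throws, so a pooled worker thread never leaks one user's context into another's request:

```java
class UserContextHolder {
    private static final ThreadLocal<String> CONTEXT = new ThreadLocal<>();
    static void setContext(String userContext) { CONTEXT.set(userContext); }
    static String get() { return CONTEXT.get(); }
    static void clear() { CONTEXT.remove(); }
}

class ToolInvocation {
    // Set context for this worker thread, do the work, always clear afterwards
    static String runWithContext(String userContext) {
        UserContextHolder.setContext(userContext);
        try {
            return "acting as user=" + UserContextHolder.get();
        } finally {
            UserContextHolder.clear();
        }
    }
}
```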

Testing Rules

Core Philosophy

  1. Never accept failures to make tests pass — always fix the root cause
  2. Never create shortcuts or workarounds just to make a test pass
  3. Never hardcode values — this is explicitly forbidden
  4. If a test fails, investigate: would this happen in production? If yes, fix the root cause
  5. Tests must test actual production logic — not simplified or mocked versions

Assertions

  • Every method call must have an assertion — never ignore return values
  • No weak assertions (isNotEmpty(), anyMatch()) without verifying exact content
  • Exact assertions required — verify exact content, exact size, exact values
  • Both branches — every if-else/conditional must assert both paths
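To make the weak-vs-exact distinction concrete, here is a hedged sketch using plain `AssertionError` throws (the project would use its own test framework's assertions; the data is illustrative):

```java
import java.util.List;

class AssertionExamples {
    static void verify(List<String> names) {
        // Weak (forbidden): asserting only !names.isEmpty() would pass
        // for any non-empty list, hiding wrong size and wrong content.
        // Exact (required): exact size AND exact content, in order.
        if (names.size() != 2) throw new AssertionError("expected exactly 2 names, got " + names.size());
        if (!names.equals(List.of("alice", "bob"))) throw new AssertionError("unexpected content: " + names);
    }
}
```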

Architecture

  • All LLM calls MUST be mocked in unit/integration tests. Only tests tagged @Tag("external-api") may use a real LLM
  • Mock external dependencies (LLM, ChatModel) but never mock what you're testing
  • Use test resources (src/test/resources), not main resources
  • Extend AbstractIntegrationTest for integration tests — never recreate Testcontainers setup
  • No hardcoded dates — use dynamic dates (e.g., LocalDate.now().plusDays(N))
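The dynamic-date rule in one line (method name is illustrative): derive every test date from today so the test never starts failing as the calendar moves past a hardcoded value.

```java
import java.time.LocalDate;

class TestDates {
    // Always relative to now — a hardcoded LocalDate.of(2024, ...) would rot
    static LocalDate deadlineInDays(long n) { return LocalDate.now().plusDays(n); }
}
```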