Development
Logging
The application uses standard Python logging. The basic configuration is in app/main.py. Adjust the level (DEBUG, INFO, WARNING, ERROR) to control log verbosity.
Prompt Management
AI prompt templates are stored in app/llm/prompts/prompt_storage.json. Each entry is identified by a name (corresponding to a LangGraph node) and can have multiple versions.
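To illustrate the name-plus-versions layout, here is a hypothetical shape for an entry in prompt_storage.json and a small loader; the real file's schema and field names may differ.

```python
import json

# Hypothetical prompt-storage entry: one name (a LangGraph node),
# several versions, and a pointer to the active one.
raw = """
{
  "generate_fsm": {
    "versions": {
      "v1": "Generate an FSM for: {user_request}",
      "v2": "You are an FSM designer. Build an FSM for: {user_request}"
    },
    "active": "v2"
  }
}
"""

storage = json.loads(raw)

def get_prompt(name: str) -> str:
    """Return the active version of the named prompt template."""
    entry = storage[name]
    return entry["versions"][entry["active"]]

print(get_prompt("generate_fsm"))
```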
Extending FSM Components
To extend the FSM structure (e.g., to add new block types or properties):
1. Define new Pydantic models in app/models/models.py for the new FSM components.
2. Update the ValidSchemaName Literal type in app/models/models.py to include the names of any new top-level schemas (e.g., image_with_buttons).
3. Update the LLM prompts (via the add-prompt API or by directly editing prompt_storage.json) to instruct the LLM on generating the new components.
4. Adjust the LangGraph nodes in app/llm/actions.py if the new schemas require specific extraction or generation logic.
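Steps 1 and 2 above can be sketched as follows. The schema names are illustrative, and the model is shown as a plain dataclass here to keep the example self-contained; in the real code it would be a Pydantic BaseModel in app/models/models.py.

```python
from dataclasses import dataclass, field
from typing import Literal, get_args

# Step 2: extend the Literal with the new top-level schema name
# (names here are illustrative, not the project's actual values).
ValidSchemaName = Literal["message", "image_with_buttons"]

# Step 1: a model for the new component (Pydantic BaseModel in the real code).
@dataclass
class ImageWithButtons:
    image_url: str
    buttons: list[str] = field(default_factory=list)

def is_valid_schema(name: str) -> bool:
    """Check a schema name against the Literal's allowed values."""
    return name in get_args(ValidSchemaName)

print(is_valid_schema("image_with_buttons"))  # True
```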
Future Work
- Configurable LLM Backends: Implementing a modular design that allows users to easily switch between different LLM providers (e.g., OpenAI, Google Gemini, Anthropic) by updating configuration settings. Currently only Mistral AI is supported.
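One possible shape for such a modular design is a provider registry keyed by a configuration value. This is only a sketch of the idea; none of these names exist in the codebase yet, and the provider functions stand in for real API calls.

```python
from typing import Callable

# Hypothetical registry mapping a config value to a completion function.
PROVIDERS: dict[str, Callable[[str], str]] = {}

def register(name: str):
    """Decorator that adds a provider function to the registry."""
    def wrap(fn: Callable[[str], str]):
        PROVIDERS[name] = fn
        return fn
    return wrap

@register("mistralai")
def call_mistral(prompt: str) -> str:
    return f"[mistral] {prompt}"  # placeholder for a real API call

@register("openai")
def call_openai(prompt: str) -> str:
    return f"[openai] {prompt}"  # placeholder for a real API call

def complete(prompt: str, provider: str = "mistralai") -> str:
    """Dispatch to the provider selected by configuration."""
    return PROVIDERS[provider](prompt)

print(complete("hello"))  # → "[mistral] hello"
```

Switching backends then becomes a configuration change rather than a code change.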
- Model-Specific Prompt Optimization: Investigating and developing prompt templates tailored for optimal performance with various LLMs, recognizing that different models may respond best to different prompting strategies.
- Performance Testing: Implementing tests that measure generation speed and resource consumption across various prompt complexities and FSM sizes.
- Integration Tests: Developing more comprehensive integration tests that simulate complex multi-step workflows within LangGraph, verifying the interaction between different nodes and LLM calls.