Testing

The test/bruteforce_json_generation.py script facilitates testing the /generate-json endpoint with multiple user prompts and various AI prompt versions.

How to Run Tests:

  1. Ensure the FastAPI server is running (see "Running the Application" above).

  2. Place your test prompts in the ./test/user_prompts folder. Each prompt should be a separate .txt file.

Example: ./test/user_prompts/chatbot_intro.txt

Create a chatbot for a tutor: introduction, lesson schedule, reminders, and FAQ.

  3. Run the test script:

     python test/bruteforce_json_generation.py
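Before running the full script, a single request to the endpoint can be sent by hand. The sketch below assumes the server listens on http://localhost:8000 and that /generate-json accepts a JSON body with a "prompt" field; the host, port, and field name are assumptions, so check the endpoint's request model before relying on them.

```python
# Minimal manual check of the /generate-json endpoint.
# API_URL and the "prompt" field name are assumptions, not confirmed by the docs.
import json
from urllib import request

API_URL = "http://localhost:8000/generate-json"  # assumed host/port


def build_request(prompt_text: str) -> request.Request:
    """Build a POST request carrying one user prompt as a JSON body."""
    body = json.dumps({"prompt": prompt_text}).encode("utf-8")
    return request.Request(
        API_URL, data=body, headers={"Content-Type": "application/json"}
    )


def send(prompt_text: str) -> str:
    """Send one prompt to the running server and return the raw response body."""
    with request.urlopen(build_request(prompt_text)) as resp:
        return resp.read().decode("utf-8")
```

With the FastAPI server running, `send("Create a chatbot for a tutor: ...")` should return the generated JSON (or an error payload) as a string.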

This script will:

  1. Load prompts from ./test/user_prompts.

  2. Iterate through predefined AI prompt versions (currently v2 for create_json_generation_prompt, and v3 and v4 for extract_state_names_node).

  3. Send requests to the local API.

  4. Save generated JSON responses (or error messages) into the ./json_generations_v2 folder (or another folder named in the script's constants). Files are named with the prompt name, the versions used, and a timestamp.
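The steps above can be sketched as the loop below. The version lists, folder names, and filename pattern mirror the description; the exact constants live in test/bruteforce_json_generation.py and may differ, so treat every name here as an assumption.

```python
# Sketch of the bruteforce loop: enumerate (prompt, version, version) runs
# and compose output filenames. Constants are assumptions based on the README.
from datetime import datetime
from pathlib import Path

PROMPTS_DIR = Path("test/user_prompts")   # one .txt file per prompt
OUTPUT_DIR = Path("json_generations_v2")  # change via the script's constants
JSON_PROMPT_VERSIONS = ["v2"]             # create_json_generation_prompt
STATE_PROMPT_VERSIONS = ["v3", "v4"]      # extract_state_names_node


def output_filename(prompt_name: str, json_ver: str, state_ver: str,
                    timestamp: str) -> str:
    """Compose a result filename from the prompt name, versions, and timestamp."""
    return f"{prompt_name}_{json_ver}_{state_ver}_{timestamp}.json"


def iter_runs():
    """Yield one (prompt_file, json_ver, state_ver) tuple per API call."""
    for prompt_file in sorted(PROMPTS_DIR.glob("*.txt")):
        for json_ver in JSON_PROMPT_VERSIONS:
            for state_ver in STATE_PROMPT_VERSIONS:
                yield prompt_file, json_ver, state_ver
```

A driver would then call the API once per tuple from `iter_runs()` and write each response to `OUTPUT_DIR / output_filename(prompt_file.stem, json_ver, state_ver, datetime.now().strftime("%Y%m%d_%H%M%S"))`.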