LLM API

How to Use the Chat Module

This module lets you run multiple chat sessions in parallel with an AI model (such as gpt-3.5-turbo, deepseek-chat, or qwen-plus). Each session belongs to one user, but a user can have multiple sessions at the same time.

Think of it like this: you are the user, and each session is a separate conversation thread. The module handles running them all at once and can save the results for you.

1. Initialize the Chat

```python
chat = Chat(MODEL, try_mode=True)
```

`MODEL` is the name of the model you want to use. You can also set options such as `temperature`, or add a filter function if you want to change the output style.

2. Add a Prompt

```python
chat.use_prompt("Use email format to answer the following question: {question}")
```

This sets a system prompt for all sessions. The `{question}` placeholder is replaced by each question you send. If the input is a plain string and the prompt contains a placeholder, the placeholder is ignored and the input is appended to the prompt.

3. Choose the Output Format

Save results to a file as JSONL:

```python
chat.save_to("Translation/translation2.jsonl", disable_return=True).to_json()
```

This writes all answers into a file. ...
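Putting the steps together, here is a minimal end-to-end sketch. It uses only the `Chat`, `use_prompt`, and `save_to` calls shown above; the `run` method and the list-of-dicts input format are assumptions added for illustration, since this excerpt does not show how sessions are actually dispatched.

```python
# Minimal sketch combining the three steps above.
# Assumption (not confirmed by the post): the module exposes a
# `run` method that takes one input per session and dispatches
# them in parallel; the real method name may differ.

chat = Chat("deepseek-chat", try_mode=True)  # model name taken from the post's examples

# System prompt shared by all sessions; {question} is filled per input.
chat.use_prompt("Use email format to answer the following question: {question}")

# Hypothetical input format: one entry per parallel session.
questions = [
    {"question": "How do I reset my password?"},
    {"question": "When does my subscription renew?"},
]

# Configure output: write answers to a JSONL file instead of returning them.
chat.save_to("Translation/translation2.jsonl", disable_return=True).to_json()

chat.run(questions)  # hypothetical dispatch call
```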

September 10, 2025 · 3 min · 460 words · Anton