# GEMINI.md - Project Context for Gemini CLI
This document provides a comprehensive overview of the "Rapport-automatique" project for the Gemini CLI, enabling it to understand the project's purpose, architecture, and key components for effective collaboration.
## Project Overview
The goal of this project is to create an AI agent that can automatically write an internship report. The agent uses a Retrieval-Augmented Generation (RAG) system to source information from a collection of notes (weekly internship reports) provided in the `documents_projet/` directory.
The project is built in Python and leverages the LangChain and LangGraph frameworks to create a sophisticated agent.
Key Technologies:

- Orchestration: LangChain & LangGraph
- LLM: `mistral-large-latest` via `ChatMistralAI`
- Vector Database (RAG): ChromaDB (persisted in `chroma_db/`)
- Embeddings: `jinaai/jina-embeddings-v3` from HuggingFace
- Document Loading: `Unstructured` (for `.txt` files)
- Web Search: Tavily
- Experiment Tracking: MLflow
## Architecture
The system is designed as a LangGraph agent with a clear, cyclical flow:

- Start (LLM Call): The agent starts by calling the Mistral LLM (`reponse_question` node) with the current conversation history.
- Tool Decision: The LLM decides whether to generate a direct response or use one of its available tools.
- Conditional Routing: The `should_continue` function checks the LLM's output. If tool calls are present, the graph transitions to the `tool_node`. Otherwise, the session ends.
- Tool Execution: The `tool_node` executes the requested tools (e.g., `search_in_files` for RAG, `internet_search`, file I/O).
- Loop: The output of the tools is passed back to the LLM (`reponse_question` node) for it to process the results and decide the next action, continuing the cycle.
The agent's state (`CustomState`) is explicitly managed and includes conversation history, a todo list for task management, and the query and results from the RAG system.
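The cyclical flow above can be sketched in plain Python, without the LangGraph dependency. The `should_continue` name and the `reponse_question`/`tool_node` roles come from this document; the stub LLM and tool registry are hypothetical stand-ins for the real Mistral calls and project tools:

```python
# Plain-Python sketch of the agent loop. `should_continue` is the routing
# function named above; the stub LLM and tools are illustrative stand-ins.

def should_continue(message: dict) -> str:
    """Route to the tool node if the LLM requested tools, else end."""
    return "tool_node" if message.get("tool_calls") else "end"

def stub_llm(messages: list[dict]) -> dict:
    # First turn: pretend the LLM wants to search the notes; then answer.
    if not any(m["role"] == "tool" for m in messages):
        return {"role": "ai",
                "tool_calls": [{"name": "search_in_files",
                                "args": {"query": "week 1"}}]}
    return {"role": "ai", "content": "Here is the report section."}

TOOLS = {"search_in_files": lambda args: f"notes matching {args['query']!r}"}

def run_agent(question: str) -> list[dict]:
    state = {"messages": [{"role": "user", "content": question}]}
    while True:
        reply = stub_llm(state["messages"])        # reponse_question node
        state["messages"].append(reply)
        if should_continue(reply) == "end":
            return state["messages"]
        for call in reply["tool_calls"]:           # tool_node
            result = TOOLS[call["name"]](call["args"])
            state["messages"].append({"role": "tool", "content": result})

messages = run_agent("Summarize week 1")
```

In the real graph, LangGraph's conditional edges play the role of the `if should_continue(...)` branch, and the state also carries the todo list and RAG query/results.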
## Building and Running
### 1. Setup
a. Install Dependencies: First, set up and activate a Python virtual environment. Then, install the required packages.
```bash
python -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
```
b. Configure Environment Variables:
Copy the `.env.template` file and fill in your API keys (e.g., for Mistral, Tavily).

```bash
cp AgentReact/.env.template AgentReact/.env
# Edit AgentReact/.env with your credentials
```
### 2. Data Ingestion (RAG Setup)
Place your source documents (as `.txt` files) into the `documents_projet/` directory at the project root. Then, run the initialization script to populate the Chroma vector database.
```bash
python RAG/init.py
```
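Under the hood, ingestion scripts like this typically split each `.txt` file into overlapping chunks before embedding them into the vector store. A minimal stand-alone sketch of such a chunker follows; the function name and the size/overlap values are illustrative assumptions, not the actual settings in `RAG/init.py`:

```python
# Sketch of the chunking step an ingestion script typically performs before
# embedding. Sizes are illustrative, not the settings used in RAG/init.py.

def chunk_text(text: str, size: int = 500, overlap: int = 100) -> list[str]:
    """Split text into overlapping character windows for embedding."""
    if overlap >= size:
        raise ValueError("overlap must be smaller than chunk size")
    step = size - overlap
    # Overlap keeps context that straddles a chunk boundary retrievable.
    return [text[i:i + size]
            for i in range(0, max(len(text) - overlap, 1), step)]

chunks = chunk_text("x" * 1200, size=500, overlap=100)
```

Each chunk would then be embedded (here, with `jinaai/jina-embeddings-v3`) and written to the persisted Chroma collection in `chroma_db/`.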
### 3. Running the Agent
The main entry point for the agent is `AgentReact/start.py`.

```bash
python AgentReact/start.py
```
This script will invoke the agent graph with a hardcoded sample question and print the resulting messages.
## Development Conventions
- Modularity: The code is well-structured into directories for the agent (`AgentReact`), RAG components (`RAG`), and data (`documents_projet`). The agent's logic is further divided into `agent.py` (graph), `nodes.py`, `state.py`, and `tools.py`.
- Singleton Pattern: The `VectorDatabase` is implemented as a Singleton to ensure a single, shared instance throughout the application.
- State Management: The agent's state is explicitly defined in `AgentReact/utils/state.py`, making it clear what information is tracked across turns.
- Human-in-the-Loop: The `ask_human` tool provides a mechanism for the agent to request user input, although the full "supervised tools" workflow from the diagram is not yet implemented.
- Roadmap: The `roadmap.md` file tracks the project's progress and outlines future development goals, such as moving from ChromaDB to PG Vector and adding PDF generation.
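The Singleton convention noted above can be implemented in a few lines. This sketch uses `__new__`; the class name matches the document, but the attributes are illustrative assumptions, since the project's real class wraps ChromaDB:

```python
# Sketch of the Singleton pattern for the vector database. The attributes
# here are illustrative; the project's real VectorDatabase wraps ChromaDB.

class VectorDatabase:
    _instance = None

    def __new__(cls, *args, **kwargs):
        # Create the instance once; later calls return the same object.
        if cls._instance is None:
            cls._instance = super().__new__(cls)
            cls._instance._initialized = False
        return cls._instance

    def __init__(self, persist_dir: str = "chroma_db/"):
        if self._initialized:   # skip re-init on repeated construction
            return
        self.persist_dir = persist_dir
        self._initialized = True

db_a = VectorDatabase()
db_b = VectorDatabase(persist_dir="elsewhere/")
```

Because `__init__` runs on every construction in Python, the `_initialized` guard ensures the persisted directory set by the first caller is not silently overwritten by later ones.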