Metadata-Version: 2.4
Name: arklex
Version: 0.0.16rc2
Summary: The official Python library for the arklex API
Project-URL: Homepage, https://github.com/arklexai/Agent-First-Organization
Project-URL: Issues, https://github.com/arklexai/Agent-First-Organization/issues
Author-email: "Arklex.AI" <support@arklex.ai>
License-Expression: MIT
License-File: LICENSE.md
Classifier: Operating System :: OS Independent
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Requires-Python: >=3.10
Requires-Dist: email-validator<3.0.0,>=2.2.0
Requires-Dist: faiss-cpu<2.0.0,>=1.10.0
Requires-Dist: fastapi-cli<1.0.0,>=0.0.5
Requires-Dist: fastapi<1.0.0,>=0.115.3
Requires-Dist: greenlet<4.0.0,>=3.1.1
Requires-Dist: httptools<1.0.0,>=0.6.4
Requires-Dist: janus<2.0.0,>=1.0.0
Requires-Dist: langchain-anthropic<1.0.0,>=0.3.5
Requires-Dist: langchain-community<1.0.0,>=0.3.3
Requires-Dist: langchain-google-genai<3.0.0,>=2.0.9
Requires-Dist: langchain-huggingface<1.0.0,>=0.1.2
Requires-Dist: langchain-openai<1.0.0,>=0.2.3
Requires-Dist: langgraph<1.0.0,>=0.2.39
Requires-Dist: linkify-it-py<3.0.0,>=2.0.3
Requires-Dist: litellm<2.0.0,>=1.59.0
Requires-Dist: markdown>=3.8
Requires-Dist: mdit-py-plugins<1.0.0,>=0.4.2
Requires-Dist: networkx<4.0.0,>=3.4.2
Requires-Dist: parsedatetime<3.0,>=2.6
Requires-Dist: phonenumbers<9.0.0,>=8.13.48
Requires-Dist: pydantic-ai<1.0.0,>=0.1.10
Requires-Dist: pysocks<2.0.0,>=1.7.1
Requires-Dist: pytest<9.0.0,>=8.3.5
Requires-Dist: python-levenshtein<1.0.0,>=0.26.0
Requires-Dist: python-multipart<1.0.0,>=0.0.12
Requires-Dist: scikit-learn<2.0.0,>=1.7.0
Requires-Dist: scipy<2.0.0,>=1.14.1
Requires-Dist: selenium<5.0.0,>=4.26.1
Requires-Dist: tavily-python<1.0.0,>=0.5.0
Requires-Dist: textual<1.0.0,>=0.85.2
Requires-Dist: unstructured-client<1.0.0,>=0.30.0
Requires-Dist: unstructured[docx]<1.0.0,>=0.16.4
Requires-Dist: watchfiles<1.0.0,>=0.24.0
Requires-Dist: webdriver-manager==4.0.2
Requires-Dist: websockets<14.0,>=13.1
Provides-Extra: hubspot
Requires-Dist: hubspot-api-client<12.0.0,>=11.1.0; extra == 'hubspot'
Provides-Extra: milvus
Requires-Dist: mysql-connector-python<9.0.0,>=8.3.0; extra == 'milvus'
Requires-Dist: pymilvus<3.0.0,>=2.4.7; extra == 'milvus'
Requires-Dist: pymysql<2.0.0,>=1.1.0; extra == 'milvus'
Provides-Extra: shopify
Requires-Dist: shopifyapi<13.0.0,>=12.7.0; extra == 'shopify'
Provides-Extra: strict-versions
Requires-Dist: email-validator==2.2.0; extra == 'strict-versions'
Requires-Dist: faiss-cpu==1.10.0; extra == 'strict-versions'
Requires-Dist: fastapi-cli==0.0.5; extra == 'strict-versions'
Requires-Dist: fastapi==0.115.3; extra == 'strict-versions'
Requires-Dist: greenlet==3.1.1; extra == 'strict-versions'
Requires-Dist: httptools==0.6.4; extra == 'strict-versions'
Requires-Dist: hubspot-api-client==11.1.0; extra == 'strict-versions'
Requires-Dist: janus==1.0.0; extra == 'strict-versions'
Requires-Dist: langchain-anthropic==0.3.5; extra == 'strict-versions'
Requires-Dist: langchain-community==0.3.3; extra == 'strict-versions'
Requires-Dist: langchain-google-genai==2.0.9; extra == 'strict-versions'
Requires-Dist: langchain-huggingface==0.1.2; extra == 'strict-versions'
Requires-Dist: langchain-openai==0.2.3; extra == 'strict-versions'
Requires-Dist: langgraph==0.2.39; extra == 'strict-versions'
Requires-Dist: linkify-it-py==2.0.3; extra == 'strict-versions'
Requires-Dist: litellm==1.59.0; extra == 'strict-versions'
Requires-Dist: markdown==3.8; extra == 'strict-versions'
Requires-Dist: mdit-py-plugins==0.4.2; extra == 'strict-versions'
Requires-Dist: mysql-connector-python==8.3.0; extra == 'strict-versions'
Requires-Dist: networkx==3.4.2; extra == 'strict-versions'
Requires-Dist: parsedatetime==2.6; extra == 'strict-versions'
Requires-Dist: phonenumbers==8.13.48; extra == 'strict-versions'
Requires-Dist: pydantic-ai==0.1.10; extra == 'strict-versions'
Requires-Dist: pymilvus==2.4.7; extra == 'strict-versions'
Requires-Dist: pymysql==1.1.0; extra == 'strict-versions'
Requires-Dist: pysocks==1.7.1; extra == 'strict-versions'
Requires-Dist: pytest==8.3.5; extra == 'strict-versions'
Requires-Dist: python-levenshtein==0.26.0; extra == 'strict-versions'
Requires-Dist: python-multipart==0.0.12; extra == 'strict-versions'
Requires-Dist: scipy==1.14.1; extra == 'strict-versions'
Requires-Dist: selenium==4.26.1; extra == 'strict-versions'
Requires-Dist: shopifyapi==12.7.0; extra == 'strict-versions'
Requires-Dist: tavily-python==0.5.0; extra == 'strict-versions'
Requires-Dist: textual==0.85.2; extra == 'strict-versions'
Requires-Dist: unstructured-client==0.30.0; extra == 'strict-versions'
Requires-Dist: unstructured[docx]==0.16.4; extra == 'strict-versions'
Requires-Dist: watchfiles==0.24.0; extra == 'strict-versions'
Requires-Dist: webdriver-manager==4.0.2; extra == 'strict-versions'
Requires-Dist: websockets==13.1; extra == 'strict-versions'
Description-Content-Type: text/markdown

<p align="left">
  <img src="https://raw.githubusercontent.com/arklexai/Agent-First-Organization/main/assets/static/img/arklexai.png" alt="Package Logo" style="vertical-align: middle; margin-right: 10px;">
</p>

![Release](https://img.shields.io/github/release/arklexai/Agent-First-Organization?logo=github)
[![PyPI version](https://img.shields.io/pypi/v/arklex.svg)](https://pypi.org/project/arklex)
![Python version](https://img.shields.io/pypi/pyversions/arklex)

Arklex Agent First Organization provides a framework for developing LLM-powered **AI agents** that complete complex tasks. The framework is modular and extensible: developers can customize workers and tools that interact with each other in a variety of ways under the supervision of an orchestrator driven by the *Taskgraph*.

## 📖 Documentation

Please see [here](https://www.arklex.ai/qa/open-source) for full documentation, which includes:

* [Introduction](https://arklexai.github.io/Agent-First-Organization/docs/intro): Overview of the Arklex AI agent framework and structure of the docs.
* [Tutorials](https://arklexai.github.io/Agent-First-Organization/docs/tutorials/intro): If you're looking to build a customer service agent or booking service bot, check out our tutorials. This is the best place to get started.

## 💻 Installation

```
pip install arklex
```

Optional extras are declared for specific integrations, e.g. `pip install "arklex[milvus]"` for Milvus retrieval support, `pip install "arklex[shopify]"`, or `pip install "arklex[hubspot]"`.

## 🛠️ Build A Demo Customer Service Agent

Watch the tutorial on [YouTube](https://youtu.be/y1P2Ethvy0I) to learn how to build a customer service AI agent with Arklex.AI in just 20 minutes.

<a href="https://youtu.be/y1P2Ethvy0I" target="_blank">
  <img src="https://raw.githubusercontent.com/arklexai/Agent-First-Organization/main/assets/static/img/youtube_screenshot.png" alt="Build a customer service AI agent with Arklex.AI in 20 min" width="400">
</a>

***

**⚙️ 0. Preparation**

* 📂 Environment Setup

  * Create a `.env` file in the root directory with the following information:

    ```
    OPENAI_API_KEY=<your-openai-api-key>
    GEMINI_API_KEY=<your-gemini-api-key>
    GOOGLE_API_KEY=<your-gemini-api-key>
    ANTHROPIC_API_KEY=<your-anthropic-api-key>
    HUGGINGFACE_API_KEY=<your-huggingface-api-key>
    MISTRAL_API_KEY=<your-mistral-api-key>

    LANGCHAIN_TRACING_V2=false
    LANGCHAIN_PROJECT=AgentOrg
    LANGCHAIN_API_KEY=<your-langchain-api-key>

    TAVILY_API_KEY=<your-tavily-api-key>

    MYSQL_USERNAME=<your-mysql-db-username>
    MYSQL_PASSWORD=<your-mysql-db-password>
    MYSQL_HOSTNAME=<your-mysql-db-hostname>
    MYSQL_PORT=<your-mysql-db-port>
    MYSQL_DB_NAME=<your-mysql-db-name>
    MYSQL_CONNECTION_TIMEOUT=<your-mysql-db-timeout>

    MILVUS_URI=<your-milvus-db-uri>
    ```

  * Optionally, enable LangSmith tracing (`LANGCHAIN_TRACING_V2=true`) for debugging.
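
The `.env` file uses plain `KEY=value` lines. As a minimal, stdlib-only sketch (not the loader the framework actually uses), you can parse the file and check that the keys required by your chosen provider are present before launching anything:

```python
import os

def parse_dotenv(path=".env"):
    """Parse KEY=value lines from a dotenv file; blanks and '#' comments are skipped."""
    env = {}
    try:
        lines = open(path).read().splitlines()
    except FileNotFoundError:
        return env
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip()
    return env

def missing_keys(env, required=("OPENAI_API_KEY",)):
    """Return required keys absent from both the parsed file and os.environ."""
    return [k for k in required if not env.get(k) and not os.environ.get(k)]
```

For example, `missing_keys(parse_dotenv())` returns an empty list when the default OpenAI provider is fully configured; adjust `required` if you use Gemini or Anthropic instead.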

* 📄 Configuration File

  * Create a chatbot config file similar to `customer_service_config.json`.

  * Define chatbot parameters, including role, objectives, domain, introduction, and relevant documents.

  * Specify tasks, workers, and tools to enhance chatbot functionality.

* Workers and tools should be pre-defined in `arklex/env/workers` and `arklex/env/tools`, respectively.
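
The authoritative schema for the config file is the example shipped with the repo (`customer_service_config.json`); the sketch below only illustrates the kinds of fields described above, and every key name in it is hypothetical — copy a real example config rather than this skeleton:

```python
import json

# Hypothetical skeleton: key names are illustrative, not the actual schema.
# Use a real example config (e.g. customer_service_config.json) as your template.
config = {
    "role": "customer service assistant",        # who the chatbot is
    "objectives": ["answer product questions"],  # what it should accomplish
    "domain": "e-commerce",                      # business domain
    "intro": "A short introduction shown to users.",
    "docs": [{"source": "https://example.com/faq"}],  # relevant documents
    "tasks": [],                                 # optional pre-defined tasks
    "workers": ["RAGWorker", "DataBaseWorker"],  # from arklex/env/workers
    "tools": [],                                 # from arklex/env/tools
}

# Write the config to disk so create.py can consume it via --config.
with open("my_config.json", "w") as f:
    json.dump(config, f, indent=2)
```

Whatever the real field names are, the config is plain JSON, so validating it with `json.load` before running `create.py` catches syntax errors early.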

**📊 1. Create Taskgraph and Initialize Worker**

> **:bulb:** The `--output-dir`, `--input-dir`, and `--documents_dir` flags below can all point to the same directory: the generated files are saved there, and the chatbot reads them from there at runtime, e.g. `--output-dir ./examples/customer_service`. The following commands use the *customer_service* chatbot as an example.

```
python create.py --config ./examples/customer_service/customer_service_config.json --output-dir ./examples/customer_service
```

* Fields:
  * `--config`: The path to the config file
  * `--output-dir`: The directory to save the generated files
  * `--llm_provider`: The LLM provider you wish to use.
    * Options: `openai` (default), `gemini`, `anthropic`
  * `--model`: The model type used to generate the taskgraph. The default is `gpt-4o`.
    * You can change this to other models like:
      * `gpt-4o-mini`
      * `gemini-2.0-flash`
      * `claude-3-5-haiku-20241022`

* It first generates a task plan based on the config file, which you can modify interactively from the command line. Make any necessary changes, then press `s` to save the task plan under the `--output-dir` folder and continue the task graph generation process.
* It then generates the task graph from the task plan and saves it under the `--output-dir` folder as well.
* It also initializes the workers listed in the config file to prepare the documents each worker needs. The `init_worker(args)` function is customizable based on the workers you define; currently it builds `RAGWorker` and `DataBaseWorker` automatically via `build_rag()` and `build_database()`, respectively. The resulting documents are saved under the `--output-dir` folder.

**💬 2. Start Chatting**

```
python run.py --input-dir ./examples/customer_service
```

* Fields:
  * `--input-dir`: The directory that contains the generated files
  * `--llm_provider`: The LLM provider you wish to use.
    * Options: `openai` (default), `gemini`, `anthropic`
  * `--model`: The model type used to generate bot response. The default is `gpt-4o`.
    * You can change this to other models like:
      * `gpt-4o-mini`
      * `gemini-2.0-flash`
      * `claude-3-5-haiku-20241022`
  
* It first starts the NLU and slot-filling API services automatically via the `start_apis()` function. By default, this launches the `NLUModelAPI` and `SlotFillModelAPI` services defined in `./arklex/orchestrator/NLU/api.py`. You can customize this function based on the NLU and slot-filling models you trained.
* It then starts the agent, and you can chat with it from the command line.

**🔍 3. Evaluation**

* First, create an API for the chatbot you built in the previous steps. This starts an API server on port 8000 by default.

    ```
    python model_api.py --input-dir ./examples/customer_service
    ```

  * Fields:
    * `--input-dir`: The directory that contains the generated files
    * `--llm_provider`: The LLM provider you wish to use.
      * Options: `openai` (default), `gemini`, `anthropic`
    * `--model`: The model type used to generate bot response. The default is `gpt-4o`.
      * You can change this to other models like:
        * `gpt-4o-mini`, `gemini-2.0-flash`, `claude-3-5-haiku-20241022`
    * `--port`: The port number to start the api. Default is 8000.

* Then, start the evaluation process:

    ```
    python eval.py \
    --model_api http://127.0.0.1:8000/eval/chat \
    --config ./examples/customer_service/customer_service_config.json \
    --documents_dir ./examples/customer_service \
    --output-dir ./examples/customer_service
    ```

  * Fields:
    * `--model_api`: The API URL that you created in the previous step
    * `--config`: The path to the config file
    * `--documents_dir`: The directory that contains the generated files
    * `--output-dir`: The directory to save the evaluation results
    * `--num_convos`: Number of synthetic conversations to simulate. Default is 5.
    * `--num_goals`: Number of goals/tasks to simulate. Default is 5.
    * `--max_turns`: Maximum number of turns per conversation. Default is 5.
    * `--llm_provider`: The LLM provider you wish to use.
      * Options: `openai` (default), `gemini`, `anthropic`
    * `--model`: The model type used to generate bot response. The default is `gpt-4o`.
      * You can change this to other models like:
        * `gpt-4o-mini`, `gemini-2.0-flash`, `claude-3-5-haiku-20241022`
  
    📄 For more details, check out the [Evaluation README](https://github.com/arklexai/Agent-First-Organization/blob/main/arklex/evaluation/README.md).

