Metadata-Version: 2.4
Name: agentic-tools
Version: 0.2.0
Summary: A toolset for agentic workflows
Home-page: https://github.com/AxelGard/agentic-tools
Author: Axel Gard
Author-email: axel.gard@tutanota.com
Classifier: Programming Language :: Python :: 3
Classifier: License :: OSI Approved :: MIT License
Classifier: Operating System :: OS Independent
Requires-Python: >=3.7
Description-Content-Type: text/markdown
License-File: LICENSE
Requires-Dist: langchain-ollama
Requires-Dist: langchain
Provides-Extra: dev
Requires-Dist: pytest; extra == "dev"
Requires-Dist: black; extra == "dev"
Requires-Dist: build; extra == "dev"
Dynamic: author
Dynamic: author-email
Dynamic: classifier
Dynamic: description
Dynamic: description-content-type
Dynamic: home-page
Dynamic: license-file
Dynamic: provides-extra
Dynamic: requires-dist
Dynamic: requires-python
Dynamic: summary

# agentic-tools
A Python package that makes it easy to set up agents with tools.

<img src="https://raw.githubusercontent.com/AxelGard/agentic-tools/master/docs/icon.png" alt="drawing" style="width:300px;"/>


All you need to do is add a decorator to each function that should be made accessible to the LLM.

```python
from agentic_tools import ai_tool

def llm_can_not_use():
    pass 

@ai_tool
def llm_can_use():
    pass

```

Be aware that this will consume more input tokens, since you are adding the context of the functions to the prompt.


## Context that gets added

- [x] Function signature, including arguments and type hints
- [x] Docstring
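
The checklist above can be illustrated with the standard `inspect` module. This is just a sketch of the kind of context that gets extracted, not the package's actual implementation:

```python
import inspect

def get_stock_price(symbol: str, period: str = "1d") -> str:
    """Return the closing price of a stock."""
    ...

# the function signature, with argument names and type hints
signature = f"{get_stock_price.__name__}{inspect.signature(get_stock_price)}"
# the docstring, with indentation cleaned up
docstring = inspect.getdoc(get_stock_price)

print(signature)  # get_stock_price(symbol: str, period: str = '1d') -> str
print(docstring)  # Return the closing price of a stock.
```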


## install 

```bash 
pip install agentic-tools 
```

agentic-tools needs a [LangChain](https://python.langchain.com/docs/integrations/chat/) chat model object.


### build 

```bash 
git clone git@github.com:AxelGard/agentic-tools.git
cd agentic-tools
python3 -m venv env 
source ./env/bin/activate
pip install -e .
# or 
# pip install -e .[dev] 
# if you want the dev dependencies
```


## DEMO 

You can check out [my experiment notebook](https://github.com/AxelGard/agentic-tools/blob/master/expr.ipynb),

or

suppose you have some functions that you want your LLM to be able to call.
agentic-tools lets you just add a decorator to them, and the `agent` will then have access to those tools.

All [LangChain](https://python.langchain.com/docs/integrations/chat/) models are supported, but for this demo I will use [Ollama](https://ollama.com/) with [llama3.1](https://ollama.com/library/llama3.1).

### install 

Install Ollama:

```bash 
curl -fsSL https://ollama.com/install.sh | sh
```
and then run Ollama:
```bash 
ollama run llama3.1
```

Install the Python dependencies:

```bash 
pip install agentic-tools langchain-ollama yfinance
```

### Example 

```python

from langchain_ollama.chat_models import ChatOllama
from agentic_tools import ai_tool, Agent
import yfinance as yf


@ai_tool
def get_stock_price(symbol: str, period: str = "1d") -> str:
    stock = yf.Ticker(symbol)
    price = stock.history(period=period)["Close"].iloc[0]
    return f"The stock price of {symbol} was ${price:.2f} {period} ago"

@ai_tool
def get_market_cap(symbol: str) -> str:
    stock = yf.Ticker(symbol)
    cap = stock.info.get("marketCap", None)
    if cap:
        return f"The market cap of {symbol} is ${cap:,}"
    return f"Market cap data for {symbol} is not available."

@ai_tool
def get_pe_ratio(symbol: str) -> str:
    """ Will return P/E ratrio of the company """
    stock = yf.Ticker(symbol)
    pe = stock.info.get("trailingPE", None)
    if pe:
        return f"The P/E ratio of {symbol} is {pe:.2f}"
    return f"P/E ratio data for {symbol} is not available."


llm = ChatOllama(model="llama3.1", temperature=0)
agent = Agent(llm_chat_model=llm)

query = "what was apples stock price a 5 days ago?"
print(f"{agent.query(question=query)}\n") 
# The stock price of AAPL was $210.16 5d ago

query="who are you?"
print(f"{agent.query(question=query)}\n") 
# I'm an AI assistant. I don't have a personal identity or emotions, but I can provide information and help with tasks to the best of my abilities. How can I assist you today?

```


```python
import contextlib
import io

from langchain_ollama.chat_models import ChatOllama
from agentic_tools import ai_tool, Agent

llm = ChatOllama(model="llama3.1", temperature=0)
agent = Agent(llm_chat_model=llm)

@ai_tool
def execute_python_code(python_code_as_string: str) -> str:
    """
    This tool lets you execute Python code and returns the printed output as a string.
    So if you want the result of an expression, remember to print it.
    If you need more than one line of code, separate the lines with `;`.
    If you want to use this tool, call it with the Python code and NOTHING ELSE.
    When you get a result that you are satisfied with, respond with that result."""
    print(python_code_as_string)
    try:
        # exec() always returns None, so capture what the code prints instead
        buffer = io.StringIO()
        with contextlib.redirect_stdout(buffer):
            exec(python_code_as_string)
        return buffer.getvalue()
    except Exception as e:
        return str(e)


query="write the needed code that you need to calculate 2 to the power of 88"
r = "None" 
while True:
    r = agent.query(question=f"Question:{query}; result:{r}")
    print(r)

```
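
Note that `exec` runs the model-generated code inside the agent's own process with no isolation. A safer pattern (not part of agentic-tools, just a sketch) is to run the code in a separate interpreter process with a timeout, so a crash or infinite loop cannot take down the agent:

```python
import subprocess
import sys

def run_python_safely(code: str, timeout: float = 5.0) -> str:
    """Run code in a fresh Python subprocess and return its output.

    A non-zero exit returns the error text instead, and `timeout`
    kills runaway code so the agent loop cannot hang.
    """
    result = subprocess.run(
        [sys.executable, "-c", code],
        capture_output=True, text=True, timeout=timeout,
    )
    return result.stdout if result.returncode == 0 else result.stderr

print(run_python_safely("print(2 ** 88)"))  # prints 309485009821345068724781056
```

A subprocess still shares the filesystem and network with the host, so for untrusted models a container or other sandbox is the more robust choice.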
