Metadata-Version: 2.1
Name: alectio-sdk
Version: 0.7.3
Summary: Integrate customer side ML application with the Alectio Platform
Home-page: https://github.com/alectio/SDK
Author: Alectio
Author-email: admin@alectio.com
Classifier: Programming Language :: Python :: 3
Classifier: Intended Audience :: Developers
Classifier: Intended Audience :: Science/Research
Classifier: License :: OSI Approved :: MIT License
Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
Classifier: Topic :: Software Development :: Libraries :: Python Modules
Classifier: Topic :: System :: Logging
Classifier: Topic :: System :: Monitoring
Requires-Python: >=3.6
Description-Content-Type: text/markdown
Requires-Dist: python-dateutil (>=2.8.1)
Requires-Dist: urllib3 (>=1.25.9)
Requires-Dist: scikit-learn
Requires-Dist: wheel
Requires-Dist: pickle-mixin
Requires-Dist: opencv-python-headless
Requires-Dist: rich
Requires-Dist: typer
Requires-Dist: pandas
Requires-Dist: google-cloud-storage
Requires-Dist: google-auth
Requires-Dist: scikit-build
Requires-Dist: codecarbon

# Requirements
* Python3 (Required)
* PIP3    (Required)
* Ubuntu 16.04+ / MacOS / Windows 10
* GCC / C++ build tools (depends on your OS. Ubuntu and macOS include them by default, but some Linux distributions such as Amazon Linux or Red Hat Linux might not have GCC or the related C++ libraries installed)

For this tutorial, we assume you are using Python3 and PIP3. Also, make sure you have the necessary build tools installed (these vary from OS to OS). If you get errors while installing any dependent packages, feel free to reach out to us, but most of them can be resolved quickly with a simple web search.  

# Alectio SDK

AlectioSDK is a package that enables developers to build an ML pipeline as a Flask app that interacts with Alectio's platform.
It is designed for Alectio's clients who prefer to keep their model and data on their own servers.

The package is currently under active development. More functionality aimed at enhancing robustness will be added soon, but for now the package provides a class `alectio_sdk.flask_wrapper.Pipeline` that interfaces with customer-side
processes in a consistent manner. Customers need to implement 4 processes as Python functions:

* A process to train the model
* A process to test the model
* A process to apply the model to infer on unlabeled data
* A process to assign each data point in the dataset a unique index (see one of the examples for how)

### Train the Model
The logic for training the model should be implemented in this process. The function should look like:

```python
def train(payload):
    # get indices of the data to be trained
    labeled = payload['labeled']

    # get checkpoint to resume from
    resume_from = payload['resume_from']

    # get checkpoint to save for this loop
    ckpt_file = payload['ckpt_file']

    # implement your logic to train the model
    # with the selected data indexed by `labeled`
    return

```

The name of the function can be anything you like. It takes an argument `payload`, which is a
dictionary with 3 keys:

| key | value |
| --- | ----- |
| resume_from | a string that specifies which checkpoint to resume from |
| ckpt_file | a string that specifies the name of checkpoint to be saved for the current loop |
| labeled | a list of indices of selected samples used to train the model in this loop |

Depending on your situation, the samples indicated in `labeled` might not be labeled (despite the variable
name). We call it `labeled` because in the active learning setting, this list represents the pool of
samples iteratively labeled by the human oracle.
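To make the payload handling concrete, here is a minimal sketch of a train process. It assumes a scikit-learn classifier with pickle checkpoints and a synthetic dataset standing in for your training pool; your actual model, data, and checkpoint format will differ.

```python
import os
import pickle
import tempfile

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Illustrative training pool standing in for your dataset
X, y = make_classification(n_samples=200, n_features=10, random_state=0)

def train(payload):
    labeled = payload['labeled']          # indices selected for this loop
    resume_from = payload['resume_from']  # checkpoint to resume from, if any
    ckpt_file = payload['ckpt_file']      # checkpoint to save for this loop

    # Warm-start from the previous checkpoint when one exists
    if resume_from and os.path.exists(resume_from):
        with open(resume_from, 'rb') as f:
            model = pickle.load(f)
    else:
        model = LogisticRegression(max_iter=1000)

    # Fit only on the samples indexed by `labeled`
    model.fit(X[labeled], y[labeled])

    # Save the checkpoint for this loop
    with open(ckpt_file, 'wb') as f:
        pickle.dump(model, f)
    return
```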


### Test the Model
The logic for testing the model should be implemented in this process. The function representing this
process should look like:

```python
def test(payload):
    # the checkpoint to test
    ckpt_file = payload['ckpt_file']

    # implement your testing logic here


    # put the predictions and labels into
    # two dictionaries

    # lbs <- dictionary of indices of test data and their ground-truth

    # prd <- dictionary of indices of test data and their prediction

    return {'predictions': prd, 'labels': lbs}
```
The test function takes an argument `payload`, which is a dictionary with 1 key:

| key | value |
| --- | ----- |
| ckpt_file | a string that specifies which checkpoint to test |

The test function needs to return a dictionary with two keys:

| key | value |
| --- | ----- |
| predictions | a dictionary of an index and a prediction for each test sample|
| labels | a dictionary of an index and a ground truth label for each test sample|

The format of the values depends on the type of ML problem. Please refer to the [examples](./examples) directory for details.

### Apply Inference
The logic for applying the model to infer on the unlabeled data should be implemented in this process.
The function representing this process should look like:
```python
def infer(payload):
    # get the indices of unlabeled data
    unlabeled = payload['unlabeled']

    # get the checkpoint file to be used for applying inference
    ckpt_file = payload['ckpt_file']

    # implement your inference logic here


    # outputs <- save the output from the model on the unlabeled data as a dictionary
    return {'outputs': outputs}
```

The infer function takes an argument `payload`, which is a dictionary with 2 keys:

| key | value |
| --- | ----  |
| ckpt_file | a string that specifies which checkpoint to use to infer on the unlabeled data |
| unlabeled | a list of indices of unlabeled data in the training set |


The `infer` function needs to return a dictionary with one key:

| key | value |
| --- | ----- |
| outputs | a dictionary of indices mapped to the model's output before an activation function is applied |

For example, if it is a classification problem, return the output **before** applying softmax.
For more details about the format of the output, please refer to the [examples](./examples) directory.

While recording your infer outputs, you must use our SQL client, available at
`alectio_sdk.sdk.sql_client`, which provides the functions `add_index()` and `create_database()`.
Outputs are stored in a local SQLite database at the path you specify in your `config.yaml` file. Before you
begin recording your infer outputs, create a connection to this local database, then
add rows to it. When you are done, close the connection, and that is all that is needed.

The best reference for this is the `infer()` function in the object detection example's `processes.py` file.

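The overall pattern (open a connection, insert one row per infer output, commit, close) can be sketched with Python's built-in `sqlite3` module. The table name, schema, and sample scores below are hypothetical stand-ins; in a real pipeline you would use the `alectio_sdk.sdk.sql_client` helpers and the database path from your `config.yaml`.

```python
import json
import sqlite3
import tempfile

# Stand-in for the database path you would set in config.yaml
db_path = tempfile.mktemp(suffix='.db')

# Open a connection and create a table for the infer outputs
# (hypothetical schema for illustration)
conn = sqlite3.connect(db_path)
conn.execute(
    'CREATE TABLE IF NOT EXISTS infer_outputs (idx INTEGER PRIMARY KEY, output TEXT)'
)

# Add one row per unlabeled sample (made-up pre-softmax scores here)
for idx, logits in {0: [0.2, -1.3], 1: [1.7, 0.4]}.items():
    conn.execute('INSERT INTO infer_outputs VALUES (?, ?)',
                 (idx, json.dumps(logits)))

# Commit and close once all outputs are recorded
conn.commit()
conn.close()
```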
## Installation
### 0. Key Management

If you have not already created your Client ID and Client Secret, do so as follows:
1. Open https://portal.alectio.com.
2. Log in, then create a project and an experiment.
3. An experiment token will be generated.
4. Enter your experiment token in `main.py` to authenticate.

Please visit https://github.com/alectio/AlectioExamples for detailed examples.

### 1. Set up a virtual environment
We recommend setting up a virtual environment.

For example, you can use python's built-in virtual environment via:

```
python3 -m venv env
source env/bin/activate
```
### 2. Install AlectioSDK and its requirements
```
pip install .
pip install -r requirements.txt
```
### 3. Configure AWS credentials
You will need to [configure the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html) by running
```
aws configure
```
Fill in your credentials as prompted in your terminal.
### 4. Run Examples

The remaining installation instructions are detailed in the [examples](./examples) directory. We cover one example for [topic classification](./examples/topic_classification), one example for [image classification](./examples/image_classification) and one example for [object detection](./examples/object_detection).
