Metadata-Version: 2.1
Name: aiwaifu-vts-controller
Version: 0.1.0
Summary: 
License: MIT
Author: hrnph
Author-email: hrnph@protomail.com
Requires-Python: >=3.10,<4.0
Classifier: License :: OSI Approved :: MIT License
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Requires-Dist: asyncio (>=3.4.3,<4.0.0)
Requires-Dist: fastdtw (>=0.3.4,<0.4.0)
Requires-Dist: librosa (>=0.10.2.post1,<0.11.0)
Requires-Dist: numpy (>=2.0.1,<3.0.0)
Requires-Dist: pyvts (>=0.3.2,<0.4.0)
Requires-Dist: requests (>=2.32.3,<3.0.0)
Requires-Dist: scipy (>=1.14.0,<2.0.0)
Requires-Dist: soundcard (>=0.4.3,<0.5.0)
Requires-Dist: soundfile (>=0.12.1,<0.13.0)
Requires-Dist: websocket-client (>=1.8.0,<2.0.0)
Description-Content-Type: text/markdown

# AIwaifuVTSController
A VTube Studio (VTS) plugin for AIwaifu, written in Python, providing:
- Voice-buffer input for VTS lip sync
- An expression/animation control API

# Usage
```python
import asyncio
from waifu_vts_controller import VTSController

async def main():
    # Reference phoneme clips used to drive lip-sync mouth shapes
    phoneme_paths = {
        "a": "./local/mels/0_a.mp3",
        "i": "./local/mels/1_i.mp3",
        "u": "./local/mels/2_u.mp3",
        "e": "./local/mels/3_e.mp3",
        "o": "./local/mels/4_o.mp3",
        "n": "./local/mels/5_n.mp3"
    }

    # Plugin registration info for VTube Studio authentication
    plugin_info = {
        "plugin_name": "AIWaifuController",
        "developer": "HRNPH",
        "authentication_token_path": "./token.txt",
    }
    audio_file_path = "./local/samples/0_rachel.mp3"

    # Create the controller and connect to VTube Studio
    controller = VTSController(plugin_info=plugin_info)
    await controller.connect()
    
    # Play the audio file while driving the model's mouth movement
    await controller.audio_controller.play_audio_with_mouth_movement(audio_file_path, phoneme_paths)

if __name__ == "__main__":
    asyncio.run(main())

```
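
The lip sync works by matching incoming audio against the reference phoneme clips (the package depends on `fastdtw` and `librosa` for this). As a rough illustration of the underlying idea only, here is a minimal pure-Python dynamic-time-warping sketch; the template values, function names, and feature representation below are hypothetical, not the package's actual implementation.

```python
import math

def dtw_distance(seq_a, seq_b):
    """Classic dynamic-time-warping distance between two 1-D feature sequences."""
    n, m = len(seq_a), len(seq_b)
    # cost[i][j] = minimal cumulative cost of aligning seq_a[:i] with seq_b[:j]
    cost = [[math.inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(seq_a[i - 1] - seq_b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # skip a frame in seq_b
                                 cost[i][j - 1],      # skip a frame in seq_a
                                 cost[i - 1][j - 1])  # match the two frames
    return cost[n][m]

# Hypothetical energy envelopes: "a" (wide-open mouth) vs. "n" (nearly closed)
templates = {
    "a": [0.1, 0.8, 0.9, 0.8, 0.2],
    "n": [0.1, 0.2, 0.2, 0.1, 0.1],
}

def classify_frame_window(window):
    """Pick the phoneme whose template is closest to the window under DTW."""
    return min(templates, key=lambda p: dtw_distance(window, templates[p]))
```

Because DTW tolerates stretching and compression in time, a slightly faster or slower utterance of the same vowel still matches its template, which is why it suits aligning live audio to fixed reference clips.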
