GitHub - oobabooga/text-generation-webui: A Gradio web UI for Large Language Models.
GitHub - SillyTavern/SillyTavern: LLM Frontend for Power Users.
You can run LLMs locally (such as Pygmalion or a higher-end model) in oobabooga's UI and chat with them via SillyTavern.
It is very possible to train a character that reliably returns valid SDT syntax, including the syntax of the add-ons (e.g. character actions and the like).
Better yet, you could use ElevenLabs or a locally run TTS model to add real voices directly into the game (something SillyTavern already supports).
Long story short, here's what this would need to happen:
1) A menu in SDT to input the API credentials, including your SillyTavern username/password
2) Dialogues would instead call the API, and each call would include a "prompt" explaining what is needed from the AI (e.g. a greeting message from a succubus character)
3) A system prompt (probably universal and user-editable) that lays out what the AI is capable of: the dialogue message types and the allowed actions
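The steps above can be sketched as a single request to an OpenAI-compatible endpoint (text-generation-webui exposes one when launched with its API enabled; the port, the generation settings, and the prompt text below are assumptions for illustration, not actual mod code):

```javascript
// Assemble the chat payload from the universal system prompt (step 3)
// and the per-dialogue prompt (step 2). Pure function, no network.
function buildChatRequest(systemPrompt, dialoguePrompt) {
  return {
    messages: [
      { role: "system", content: systemPrompt },
      { role: "user", content: dialoguePrompt },
    ],
    max_tokens: 200,   // keep replies short enough for a dialogue line
    temperature: 0.8,
  };
}

// Example: the system prompt carries the SDT syntax rules,
// the user message describes the situation to respond to.
const payload = buildChatRequest(
  "You write single lines of SDT dialogue. Output only valid SDT syntax.",
  "Write a greeting from a succubus character."
);

// Sending it (requires a running local backend; the URL is an assumption):
// fetch("http://127.0.0.1:5000/v1/chat/completions", {
//   method: "POST",
//   headers: { "Content-Type": "application/json" },
//   body: JSON.stringify(payload),
// }).then(r => r.json())
//   .then(d => console.log(d.choices[0].message.content));
```

The point of keeping the payload builder pure is that the mod's dialogue layer only has to decide *what* to ask for; everything about the transport stays in one place.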
I do this already locally on my machine (without SDT integration), and the results are insane.
Any SDT-familiar developers know whether Flash would support something of the sort? See: Adobe Flash Platform, "Basics of using the external API".
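For context, the "external API" on the Flash side means ExternalInterface: the SWF registers a callback with ExternalInterface.addCallback and calls out to the hosting page with ExternalInterface.call. Here's a rough sketch of the page-side glue, where every name (requestAiLine, receiveAiLine, the element id) is hypothetical:

```javascript
// Pure helper: pull the dialogue text out of an OpenAI-style
// chat-completions response. Testable without a running backend.
function extractDialogue(responseJson) {
  return responseJson.choices[0].message.content.trim();
}

// Called BY the SWF via ExternalInterface.call("requestAiLine", prompt).
// Forwards the prompt to the local backend, then pushes the reply back
// into the SWF through a callback the SWF would have registered with
// ExternalInterface.addCallback. URL, element id, and callback name
// are all assumptions.
async function requestAiLine(prompt) {
  const res = await fetch("http://127.0.0.1:5000/v1/chat/completions", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      messages: [{ role: "user", content: prompt }],
      max_tokens: 200,
    }),
  });
  const line = extractDialogue(await res.json());
  document.getElementById("sdt-swf").receiveAiLine(line);
}
```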
Looks like this would be done via a proxy? I have never touched modding SDT, but I'd try if I knew what I was getting into. I'd imagine you'd need an entirely new dialogue engine, or a feature added to the enhanced dialogues mod.
I am familiar with prompt engineering, so if anyone wants to take a stab at a basic mod that changes how the dialogue system works, I'm open to trying to get an example dialogue running (I can run local uncensored LLMs).