Home
BobMaster edited this page 2023-04-20 10:06:36 +08:00
If you have any questions, feel free to open an issue or join our matrix room:
https://matrix.to/#/#public:matrix.qqs.tw
Glossary
*: required
parameter | description
---|---
homeserver* | your Matrix server address
user_id* | your Matrix account name, e.g. @xxx:matrix.org
password | account password
device_id* | your device ID, e.g. GMIAZSVFF
access_token | account access_token
room_id | if not set, the bot works in all rooms it has joined
import_keys_path | location of the E2E room keys file
import_keys_password | password for the E2E room keys
model_size* | size of the model to use: tiny, tiny.en, base, base.en, small, small.en, medium, medium.en, large-v1, or large-v2
device | device to use for computation ("cpu", "cuda", "auto"); default is cpu
compute_type | quantization type to use for computation; see https://opennmt.net/CTranslate2/quantization.html
cpu_threads | number of threads to use when running on CPU (default 4)
num_workers | when transcribe() is called from multiple Python threads, multiple workers enable true parallelism (concurrent calls to self.model.generate() run in parallel); this can improve overall throughput at the cost of increased memory usage
download_root | directory where the model is saved; default is ./models
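The model-related parameters above match the keyword arguments of faster-whisper's `WhisperModel` constructor, which the bot appears to wrap. A minimal sketch of how they could be mapped, applying the defaults stated in the table — the helper name `build_model_kwargs` is hypothetical, not part of the bot:

```python
# Hypothetical helper: maps the glossary's model parameters onto keyword
# arguments, applying the defaults stated in the table above.
def build_model_kwargs(config: dict) -> dict:
    if "model_size" not in config:
        raise ValueError("model_size is required")
    return {
        "model_size_or_path": config["model_size"],
        "device": config.get("device", "cpu"),        # "cpu", "cuda", or "auto"
        "compute_type": config.get("compute_type", "default"),
        "cpu_threads": config.get("cpu_threads", 4),  # default 4 per the table
        "num_workers": config.get("num_workers", 1),
        "download_root": config.get("download_root", "./models"),
    }

# Usage sketch:
#   from faster_whisper import WhisperModel
#   model = WhisperModel(**build_model_kwargs({"model_size": "base"}))
```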
Use either access_token or user_id + password (recommended) to log in.
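Putting the glossary together, a configuration might look like the sketch below. The file format and example values are assumptions based on the parameter names above; supply either access_token or the user_id/password pair, and add the optional keys only as needed:

```json
{
  "homeserver": "https://matrix.qqs.tw",
  "user_id": "@xxx:matrix.qqs.tw",
  "password": "account password",
  "device_id": "GMIAZSVFF",
  "model_size": "base",
  "device": "cpu",
  "cpu_threads": 4,
  "download_root": "./models"
}
```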
Want to chat with ChatGPT, Bing AI, or Google Bard? Try our matrix_chatgpt_bot.