commit e6476e486e8b0a9e56cc40e3fa049ba4a36bf6f7
Author: BobMaster <32976627+hibobmaster@users.noreply.github.com>
Date:   Thu Apr 20 00:06:28 2023 +0800

    Initial Home page

diff --git a/Home.md b/Home.md
new file mode 100644
index 0000000..2daa2d2
--- /dev/null
+++ b/Home.md
@@ -0,0 +1,23 @@
+If you have any questions, feel free to open an issue or join our matrix room:
+https://matrix.to/#/#public:matrix.qqs.tw
+
+Glossary
+*: required
+
+| parameter | description |
+|---|---|
+| homeserver* | your matrix server address |
+| user_id* | your matrix account name, like @xxx:matrix.org |
+| password | account password |
+| device_id* | your device id, like GMIAZSVFF |
+| access_token | account access_token |
+| room_id | if not set, the bot will work in all rooms it has joined |
+| import_keys_path | location of the E2E room keys file |
+| import_keys_password | password of the E2E room keys file |
+| model_size* | size of the model to use: tiny, tiny.en, base, base.en, small, small.en, medium, medium.en, large-v1, or large-v2 |
+| device | device to use for computation ("cpu", "cuda", "auto"), default is cpu |
+| compute_type | type to use for computation, see https://opennmt.net/CTranslate2/quantization.html |
+| cpu_threads | number of threads to use when running on CPU (4 by default) |
+| num_workers | when transcribe() is called from multiple Python threads, having multiple workers enables true parallelism when running the model (concurrent calls to self.model.generate() will run in parallel); this can improve global throughput at the cost of increased memory usage |
+| download_root | directory where the model should be saved, default is ./models |
+
+Use either `access_token` or `user_id` + `password` (recommended) to log in.
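+
+Below is a minimal sketch of the two login paths. It assumes the bot is built on matrix-nio (whose `import_keys` also matches the `import_keys_path`/`import_keys_password` parameters above); the values are placeholders, and the bot's actual wiring may differ:
+
+```python
+import asyncio
+
+from nio import AsyncClient, LoginResponse
+
+
+async def main() -> None:
+    # Placeholder values corresponding to the glossary above.
+    client = AsyncClient(
+        "https://matrix.example.org",  # homeserver
+        "@xxx:matrix.example.org",     # user_id
+        device_id="GMIAZSVFF",         # device_id
+    )
+
+    # Recommended path: log in with user_id + password.
+    resp = await client.login(password="account password")
+    if not isinstance(resp, LoginResponse):
+        raise RuntimeError(f"login failed: {resp}")
+
+    # Alternative path: reuse an existing access_token instead of logging in again.
+    # client.restore_login("@xxx:matrix.example.org", "GMIAZSVFF", "access_token")
+
+    await client.close()
+
+
+asyncio.run(main())
+```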
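+
+The model-related parameters in the glossary above (`model_size`, `device`, `compute_type`, `cpu_threads`, `num_workers`, `download_root`) mirror the arguments of faster-whisper's `WhisperModel` constructor. A minimal sketch of how they fit together, where the chosen values and file name are only examples:
+
+```python
+from faster_whisper import WhisperModel
+
+# Glossary values mapped onto faster-whisper's WhisperModel arguments.
+model = WhisperModel(
+    "base",                    # model_size*: any size listed in the glossary
+    device="cpu",              # device: "cpu", "cuda", or "auto"
+    compute_type="default",    # compute_type: see the CTranslate2 quantization docs
+    cpu_threads=4,             # cpu_threads: threads used when running on CPU
+    num_workers=1,             # num_workers: parallel transcribe() callers
+    download_root="./models",  # download_root: where the model is cached
+)
+
+# Transcribe an audio file (path is illustrative).
+segments, info = model.transcribe("voice_message.ogg")
+for segment in segments:
+    print(f"[{segment.start:.2f}s -> {segment.end:.2f}s] {segment.text}")
+```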