Initial Home page
commit e6476e486e
1 changed file with 23 additions and 0 deletions
Home.md (Normal file, 23 additions)
If you have any questions, feel free to open an issue or join our matrix room: <br>
https://matrix.to/#/#public:matrix.qqs.tw
Glossary<br>
*: required

| parameter | description |
|---|---|
| homeserver* | your matrix server address |
| user_id* | your matrix account name, like @xxx:matrix.org |
| password | account password |
| device_id* | your device id, like GMIAZSVFF |
| access_token | account access_token |
| room_id | if not set, the bot will work in the rooms it is in |
| import_keys_path | location of the E2E room keys |
| import_keys_password | E2E room keys password |
| model_size* | size of the model to use: tiny, tiny.en, base, base.en, small, small.en, medium, medium.en, large-v1, or large-v2 |
| device | device to use for computation ("cpu", "cuda", "auto"); default is cpu |
| compute_type | type to use for computation; see https://opennmt.net/CTranslate2/quantization.html |
| cpu_threads | number of threads to use when running on CPU (4 by default) |
| num_workers | when transcribe() is called from multiple Python threads, multiple workers enable true parallelism (concurrent calls to self.model.generate() run in parallel); this can improve global throughput at the cost of increased memory usage |
| download_root | directory where the model should be saved; default is ./models |
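As a sketch, the starred (required) parameters from the table can be validated before the bot starts. The config format, loader, and helper name below are assumptions for illustration; only the parameter names and which ones are required come from the glossary.

```python
# Sketch: validate the starred (required) parameters from the glossary.
# The dict-based config and the helper name are hypothetical; the parameter
# names and the required set come from the table above.
REQUIRED = ("homeserver", "user_id", "device_id", "model_size")

def check_config(config: dict) -> list:
    """Return the names of required parameters that are missing or empty."""
    return [name for name in REQUIRED if not config.get(name)]

# Illustrative values only; the bot's real configuration is not shown on this page.
example = {
    "homeserver": "https://matrix.example.org",  # your matrix server address
    "user_id": "@bot:matrix.example.org",        # matrix account name
    "device_id": "GMIAZSVFF",                    # device id (example from the table)
    "model_size": "base",                        # one of tiny ... large-v2
}
missing = check_config(example)                  # empty list when config is complete
```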
Use either `access_token` or `user_id` + `password` (recommended) to log in.
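The login rule above can be sketched as a small helper. The function name is hypothetical; the parameter names are from the glossary, and "recommended" is taken to mean password login is preferred when both methods are configured.

```python
# Hypothetical helper for the login rule stated above: prefer user_id + password
# (recommended) when both are set, otherwise fall back to access_token.
def choose_login(config: dict) -> str:
    if config.get("user_id") and config.get("password"):
        return "password"   # log in with user_id + password (recommended)
    if config.get("access_token"):
        return "token"      # reuse an existing access_token
    raise ValueError("set either access_token or user_id + password")
```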