Open-source AI large models like DeepSeek and Qwen perform excellently. With tools such as Ollama and LM Studio, we can easily set up large model services locally and integrate them into various AI applications, such as video translation software.
However, constrained by the limited VRAM of personal computers, locally deployed models are usually much smaller, such as 1.5B, 7B, 14B, or 32B parameters.
The official DeepSeek online AI service, by contrast, runs the DeepSeek-R1 model with as many as 671B parameters. This huge gap means local models have relatively limited capability and cannot be used as freely as online models; if you treat them the same way, you may run into various strange issues, such as prompt text appearing in the translation results, original text mixed in with the translation, or even garbled output.
The root cause is that smaller models lack sufficient intelligence and have weaker ability to understand and execute complex prompts.
Therefore, when using local large models for video translation, pay attention to the following points to achieve better translation results:
1. Correctly Configure the Video Translation Software's API Settings
Enter the API address of the locally deployed model into the API Interface Address under Translation Settings --> Compatible AI & Local Large Models in the video translation software. Typically, the API interface address should end with /v1.
- If your API interface requires an API Key, enter it in the SK text box. If no key is set, fill in any value, such as 1234, but do not leave it blank.
- Enter the model name in the Fill in All Available Models text box. Note: some model names include size information, such as deepseek-r1:8b; the :8b suffix must also be included.
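Before configuring the software, you can quickly confirm that the local endpoint responds as expected. Below is a minimal sketch using the openai Python client; it assumes Ollama's default address http://127.0.0.1:11434/v1, a placeholder key of 1234, and the deepseek-r1:8b model. Adjust the address, key, and model name to match your own deployment.

```python
# Quick check that the locally deployed model answers on its OpenAI-compatible endpoint.
# Assumptions: Ollama's default address and the deepseek-r1:8b model; change both as needed.
from openai import OpenAI

client = OpenAI(
    base_url="http://127.0.0.1:11434/v1",  # API interface address, ending with /v1
    api_key="1234",                         # placeholder value when no real SK is configured
)

resp = client.chat.completions.create(
    model="deepseek-r1:8b",                 # model name, including the :8b size suffix
    messages=[{"role": "user", "content": "Reply with the single word: ok"}],
)
print(resp.choices[0].message.content)
```

If this prints a sensible reply, the same address, key, and model name can be entered into the video translation software.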


2. Prioritize Models with More Parameters and Newer Versions
- It is recommended to choose a model with at least 7B parameters. If possible, try to select a model larger than 14B. Of course, the larger the model, the better the performance, provided your computer can handle it.
- If using the Tongyi Qianwen series models, prioritize the qwen2.5 series over the 1.5 or 2.0 series.

3. Uncheck the "Send Complete Subtitles" Option in the Video Translation Software
Unless the model you deploy is 70B or larger, checking "Send Complete Subtitles" may cause errors in the subtitle translation results.

4. Reasonably Set the Subtitle Line Number Parameter
Set both Traditional Translation Subtitle Lines and AI Translation Subtitle Lines in the video translation software to smaller values, such as 1, 5, or 10. This helps avoid issues with too many blank lines and improves translation reliability.
The smaller the value, the lower the chance of translation errors, though translation quality may drop slightly; the larger the value, the better the quality when everything works, but errors become more likely.
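To make the trade-off concrete, here is a small illustrative sketch of what this setting controls: how many subtitle lines are packed into each translation request. This is a hypothetical helper for explanation only, not the software's actual code.

```python
# Illustration of the subtitle-lines setting: it decides how many lines go into one request.
# Hypothetical helper, not the video translation software's implementation.
def batch_subtitles(lines: list[str], lines_per_request: int = 5) -> list[list[str]]:
    """Split subtitle lines into batches of at most `lines_per_request` lines."""
    return [lines[i:i + lines_per_request] for i in range(0, len(lines), lines_per_request)]

subtitles = ["Line 1", "Line 2", "Line 3", "Line 4", "Line 5", "Line 6", "Line 7"]
for batch in batch_subtitles(subtitles, lines_per_request=5):
    # Each batch becomes one prompt: smaller batches are easier for small models to
    # translate line-for-line, while larger batches give the model more context.
    print(batch)
```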

5. Simplify the Prompt
When the model is small, it may not understand or follow instructions well. In such cases, simplify the prompt to make it clear and straightforward.
For example, the default prompt stored in the videotrans/localllm.txt file under the software directory may be fairly complex. If the translation results are unsatisfactory, try simplifying it.
Simplified Example One:
# Role
You are a translation assistant capable of translating the text within the <INPUT> tags into {lang}.
## Requirements
- The number of lines in the translation must equal the number of lines in the original text.
- Translate literally; do not explain the original text.
- Return only the translation; do not return the original text.
- If translation is not possible, return empty lines without apologizing or explaining the reason.
## Output Format:
Output the translation directly; do not output any other prompts, such as explanations or guiding characters.
<INPUT></INPUT>
Translation Result:

Simplified Example Two:
You are a translation assistant. Translate the following text into {lang}, keep the number of lines unchanged, and return only the translation. If translation is not possible, return empty lines.
Text to Translate:
<INPUT></INPUT>
Translation Result:

Simplified Example Three:
Translate the following text into {lang}, keeping the number of lines consistent. If translation is not possible, leave it blank.
<INPUT></INPUT>
Translation Result:

You can further simplify and optimize the prompt based on your actual situation.
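The sketch below shows how such a simplified prompt might be filled in and sent to the local model, with a line-count check afterward. It reuses the assumed local endpoint and model from the earlier example; the template, function name, and temperature value are illustrative, not the software's own code.

```python
# Sketch: send one subtitle batch with a simplified prompt and verify the line count.
# Assumes the same local OpenAI-compatible endpoint as above; all names are illustrative.
from openai import OpenAI

PROMPT_TEMPLATE = (
    "You are a translation assistant. Translate the following text into {lang}, "
    "keep the number of lines unchanged, and return only the translation. "
    "If translation is not possible, return empty lines.\n"
    "Text to Translate:\n<INPUT>{text}</INPUT>\nTranslation Result:"
)

client = OpenAI(base_url="http://127.0.0.1:11434/v1", api_key="1234")

def translate_batch(lines: list[str], lang: str = "English") -> list[str]:
    prompt = PROMPT_TEMPLATE.format(lang=lang, text="\n".join(lines))
    resp = client.chat.completions.create(
        model="deepseek-r1:8b",
        messages=[{"role": "user", "content": prompt}],
        temperature=0.2,  # a low temperature helps small models stay on-format
    )
    translated = resp.choices[0].message.content.strip().splitlines()
    if len(translated) != len(lines):
        # A line-count mismatch is the typical small-model failure; retry with fewer lines.
        raise ValueError(f"expected {len(lines)} lines, got {len(translated)}")
    return translated
```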
By optimizing the above points, even smaller local large models can play a greater role in video translation, reduce errors, improve translation quality, and provide a better local AI experience.
