Translator Copilot is Unbabel’s new AI assistant built directly into our CAT tool. It leverages large language models (LLMs) and Unbabel’s proprietary Quality Estimation (QE) technology to act as a smart second pair of eyes for every translation. From checking whether customer instructions are followed to flagging potential errors in real time, Translator Copilot strengthens the connection between customers and translators, ensuring translations are not only accurate but fully aligned with expectations.
Why We Built Translator Copilot
Translators at Unbabel receive instructions in two ways:
- General instructions defined at the workflow level (e.g., formality or formatting preferences)
- Project-specific instructions that apply to particular files or content (e.g., “Don’t translate brand names”)
These appear in the CAT tool and are essential for maintaining accuracy and brand consistency. But under tight deadlines or with complex guidance, it’s possible for these instructions to be missed.
That’s where Translator Copilot comes in. It was created to close that gap by providing automated, real-time assistance. It checks compliance with instructions and flags any issues as the translator works. Beyond instruction checks, it also highlights grammar issues, omissions, and incorrect terminology, all as part of a seamless workflow.
How Translator Copilot Helps
The feature is designed to deliver value in three core areas:
- Improved compliance: Reduces the risk of missed instructions
- Higher translation quality: Flags potential issues early
- Reduced cost and rework: Minimizes the need for manual revisions
Together, these benefits make Translator Copilot an essential tool for quality-conscious translation teams.
From Idea to Integration: How We Built It
We began in a controlled playground environment, testing whether LLMs could reliably assess instruction compliance using different prompts and models. Once we identified the best-performing setup, we integrated it into Polyglot, our internal translator platform.
But identifying a working setup was just the start. We ran further evaluations to understand how the solution performed within the real translator experience, collecting feedback and refining the feature before full rollout.
From there, we brought everything together: LLM-based instruction checks and QE-powered error detection were merged into a single, unified experience in our CAT tool.
What Translators See
Translator Copilot analyzes each segment and uses visual cues (small colored dots) to indicate issues. Clicking on a flagged segment reveals two types of feedback:
- AI Suggestions: LLM-powered compliance checks that highlight deviations from customer instructions
- Potential Errors: Flagged by QE models, including grammar issues, mistranslations, or omissions
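Conceptually, each flagged segment bundles those two feedback streams into a single payload. The sketch below models that structure in Python; the class and field names are illustrative assumptions, not Unbabel’s actual schema.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class AISuggestion:
    """LLM-powered compliance check result for one segment."""
    instruction: str      # the customer instruction being checked
    finding: str          # how the current translation deviates from it
    suggested_fix: str    # proposed rewrite the translator can accept in one click

@dataclass
class PotentialError:
    """QE-flagged issue such as a grammar error, mistranslation, or omission."""
    category: str         # e.g. "grammar", "omission", "terminology"
    span: str             # the target-text span the QE model flagged
    severity: str         # e.g. "minor", "major"

@dataclass
class SegmentFeedback:
    """Everything shown when a translator clicks a flagged segment."""
    segment_id: int
    ai_suggestions: List[AISuggestion] = field(default_factory=list)
    potential_errors: List[PotentialError] = field(default_factory=list)

    @property
    def is_flagged(self) -> bool:
        # A colored dot appears whenever either source reports something.
        return bool(self.ai_suggestions or self.potential_errors)
```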
To support translator workflows and ensure smooth adoption, we added several usability features:
- One-click acceptance of suggestions
- Ability to report false positives or incorrect suggestions
- Quick navigation between flagged segments
- End-of-task feedback collection to gather user insights
The Technical Challenges We Solved
Bringing Translator Copilot to life involved solving several tough challenges:
Low initial success rate: In early tests, the LLM correctly identified instruction compliance only 30% of the time. Through extensive prompt engineering and provider experimentation, we raised that to 78% before full rollout.
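For illustration, a compliance check of this kind can be as simple as a structured prompt sent to a chat model. The sketch below assumes an OpenAI-style client and an illustrative model name; it is not the exact prompt or provider we settled on.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

COMPLIANCE_PROMPT = """You are reviewing a translation against customer instructions.

Instructions:
{instructions}

Source ({source_lang}): {source}
Translation ({target_lang}): {target}

For each instruction, decide whether the translation complies.
Respond in JSON: {{"violations": [{{"instruction": "...", "explanation": "..."}}]}}
If everything complies, return {{"violations": []}}."""

def check_instruction_compliance(instructions, source, target, source_lang, target_lang):
    """Ask the LLM whether a translated segment follows the customer instructions."""
    prompt = COMPLIANCE_PROMPT.format(
        instructions="\n".join(f"- {i}" for i in instructions),
        source=source, target=target,
        source_lang=source_lang, target_lang=target_lang,
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",   # illustrative model choice, not our production setup
        messages=[{"role": "user", "content": prompt}],
        response_format={"type": "json_object"},
        temperature=0,          # deterministic output for compliance checks
    )
    return response.choices[0].message.content
```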
HTML formatting: Translator instructions are written in HTML for readability, but this introduced a new issue: HTML degraded LLM performance. We resolved it by stripping HTML before sending instructions to the model, which required careful prompt design to preserve meaning and structure.
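As a rough illustration of that preprocessing step (using BeautifulSoup as an assumed dependency; our production pipeline may differ), the instructions can be flattened to plain text while keeping bullets and line breaks intact:

```python
from bs4 import BeautifulSoup

def html_instructions_to_plain_text(html: str) -> str:
    """Strip HTML from customer instructions while keeping their structure readable."""
    soup = BeautifulSoup(html, "html.parser")
    # Turn explicit line breaks into newlines instead of losing them.
    for br in soup.find_all("br"):
        br.replace_with("\n")
    # Keep bullets distinct and end each block-level element on its own line.
    for li in soup.find_all("li"):
        li.insert(0, "- ")
    for block in soup.find_all(["p", "li", "div"]):
        block.append("\n")
    text = soup.get_text()
    # Collapse the blank lines left behind by nested block tags.
    lines = [line.rstrip() for line in text.splitlines()]
    return "\n".join(line for line in lines if line.strip())
```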
Glossary alignment: Another early problem was that some model suggestions contradicted customer glossaries. To fix this, we refined prompts to incorporate glossary context, reducing conflicts and boosting trust in AI Suggestions.
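One way to do this, sketched below with an illustrative helper name and prompt wording, is to serialize the glossary into the prompt as an explicit constraint before asking for suggestions:

```python
def glossary_block(glossary: dict[str, str]) -> str:
    """Format customer glossary entries so the LLM treats them as hard constraints."""
    if not glossary:
        return ""
    entries = "\n".join(f"- {src} => {tgt}" for src, tgt in glossary.items())
    return (
        "Customer glossary (must be respected; never suggest changes that "
        "contradict these term pairs):\n" + entries
    )

# Example: prepend the glossary block to the compliance prompt, so suggestions
# cannot conflict with approved (or do-not-translate) terminology.
prompt_context = glossary_block({"Unbabel": "Unbabel", "Translator Copilot": "Translator Copilot"})
```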
How We Measure Success
To evaluate Translator Copilot’s impact, we implemented several metrics:
- Error delta: Comparing the number of issues flagged at the start vs. the end of each task. A positive error reduction rate indicates that translators are using Copilot to improve quality (see the sketch after this list).
- AI Suggestions versus Potential Errors: AI Suggestions led to a 66% error reduction rate, versus 57% for Potential Errors alone.
- User behavior: In 60% of tasks, the number of flagged issues decreased. In 15%, there was no change, likely cases where suggestions were ignored. We also track suggestion reports to improve model behavior.
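As a rough sketch of how the error delta translates into an error reduction rate for a single task (the exact formula we use may differ slightly):

```python
def error_reduction_rate(issues_at_start: int, issues_at_end: int) -> float:
    """Share of initially flagged issues resolved by the end of the task.

    Positive values mean the translator resolved flags while working;
    zero means nothing changed (e.g. all suggestions were ignored).
    """
    if issues_at_start == 0:
        return 0.0  # nothing was flagged, so there is nothing to reduce
    return (issues_at_start - issues_at_end) / issues_at_start

# Example: 6 issues flagged when the task opened and 2 remaining at submission
# gives a 0.67 (67%) error reduction rate for that task.
print(error_reduction_rate(6, 2))
```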
An interesting insight emerged from our data: LLM performance varies by language pair. For example, error reporting is higher in German-English, Portuguese-Italian, and Portuguese-German, and lower in English-source language pairs such as English-Spanish or English-Norwegian, an area we’re continuing to investigate.
Looking Ahead
Translator Copilot is a big step forward in combining GenAI and linguist workflows. It brings instruction compliance, error detection, and user feedback into one cohesive experience. Most importantly, it helps translators deliver better results, faster.
We’re excited by the early results, and even more excited about what’s next! This is just the beginning.