You will annotate and evaluate content in both English and Arabic, applying linguistic precision and cultural sensitivity in safety-critical contexts. Your evaluations will directly contribute to preventing large language models from generating unsafe, biased, or adversarial outputs, including content related to hate, violence, sexual material, self-harm, misinformation, and other sensitive domains.
This role is part of a fast-growing AI data services organization that provides high-quality training data to leading AI companies and foundation model labs, helping shape the safety and reliability of next-generation AI systems.
Candidate Profile
