Company Description
MillionLogics is a global IT solutions provider and trusted Oracle Partner, headquartered in London, UK, with a dedicated development hub in Hyderabad, India. Focused on delivering innovative, scalable, and future-ready IT services, MillionLogics specialises in Data & AI, Cloud Solutions, IT consulting, and Oracle Cloud technologies. Backed by a team of more than 55 technology experts, the company tailors every solution to client needs and measurable outcomes. With a mission to empower organisations to lead and adapt, MillionLogics is committed to transforming businesses through cutting-edge technology and strategic expertise.
Role Overview:
As an AI Quality Analyst, you will evaluate a new personalisation feature for Gemini. You will assess how well the model uses information from your past Gemini conversations, Gmail, Google Search, and YouTube activity to make responses more relevant and helpful. This role requires a unique blend of creativity and analytical rigour. You will actively design prompts from the perspective of your own personal experiences. You will then use your analytical skills to assess the quality of the model's personalised responses, evaluating dimensions like Grounding, Integration, and Helpfulness.
Offer Details:
- Duration of Contract: 12 months
- Mode of Work: Fully Remote
- Pay: $1200 per month (take-home)
- Number of positions: 10
Key Qualifications
- Thai Proficiency: Ability to read and write in Thai with a high degree of comprehension, as Thai is the focus language for this project.
- Personal Account Usage: Willingness to use your primary personal Google account (not a testing account) and enable personal data sources for a genuine assessment.
- Schedule Flexibility: Full-time availability in your local time zone is required. We are staffing a global, 24-hour operations team.
- Exceptional Analytical Thinking: Demonstrate ability to evaluate nuanced and ambiguous AI responses, specifically assessing personalisation quality.
- Creative Prompt Engineering: Experience in designing creative, multi-turn starting prompts based on personal context to thoroughly test the model's capabilities.
- Strong Evaluation Acumen: Understanding of personalisation concepts, including the ability to identify incorrect personalisation, poor inferences, and forced connections.
- Meticulous Attention to Detail: The ability to review Side-by-Side (SxS) model responses and spot subtle differences in naturalness and overnarrating.
- Excellent Written Communication: Superior ability to write clear, concise, and structured rationales for model rankings, explicitly referencing specific turn numbers.
- Feedback: Ability to provide constructive feedback and detailed annotations.
- Communication: Excellent communication and collaboration skills.
- Independence: Self-motivated and able to work independently in a remote setting.
- Technical Setup: Desktop/Laptop set up with a good internet connection.
Description:
In this role, you will be part of a dynamic team focused on evaluating the quality of personalised AI interactions. Your day-to-day work will involve:
- Designing and executing multi-turn conversational prompts (typically 1-5 turns) that require the AI to utilise your personal information and experiences.
- Evaluating model responses based on your intent from the starting prompt, checking if the personalisation was appropriately applied.
- Analysing responses for Grounding issues, ensuring claims about you are supported by evidence and not flawed inferences or hallucinations.
- Assessing Integration quality to ensure personal data is woven naturally into the response without robotic overnarrating.
- Rigorously evaluating and stack-ranking two model responses side-by-side (SxS) to determine which is overall more helpful, easy to use, and enjoyable.
- Writing clear, defensible rationales for your comparisons, explicitly referencing where issues or positive aspects occurred in the conversation.
- Extracting and verifying Debug Info from the model to confirm that chat summaries and data sources were properly utilised.
- Maintaining strict data hygiene by deleting evaluation conversations to prevent them from polluting your future chat history.
Education & Experience
- BS/BA degree or equivalent experience in a relevant field (e.g., Policy, Law, Ethics, Linguistics, Journalism, Computer Science, or a related analytical field).
- Experience in data annotation, AI quality evaluation, content moderation, or a related role is strongly preferred.
Engagement Details:
- Commitment Required: at least 4 hours per day, a minimum of 30 hours per week, and 4 hours of daily overlap with PST. (Two time-commitment options are available: 30 hrs/week or 40 hrs/week.)
- Engagement type: Contractor
Evaluation Process
- Shortlisted candidates will be sent a Job Interest Form.
- After the profile review, an assessment will be shared, which must be completed within 24 hours.
- Based on the assessment outcomes, shortlisted candidates will be contacted to discuss the pre‑onboarding requirements.
How to Apply
Please send your updated CV to [Confidential Information] with THAI as your email subject line.