Job Title:
- Junior AI Engineer (Agentic AI & LLMs) (New Grads / 0-2 Years Experience)
- AI Platform Engineer (New Grads / 1+ Year Internship Experience also welcome)
About Us
We are a specialized AI product team building a production-grade, Cloud Native Agentic AI Platform. Unlike traditional software teams, we treat Generative AI as a core architectural component. We work with highly experienced Senior AI Engineers and Solution Architects to deliver intelligent, scalable, and robust solutions that bridge the gap between complex AI models and real-world user applications.
The AI Engineer's goal is to make our AI smart and autonomous. While the Lead Engineer ensures the system is scalable and deployed on AWS, AI Engineers focus on behavior: designing how agents think, cleaning the data they learn from, and proving they work through rigorous testing.
Job Responsibilities
1. Agent Orchestration & Logic (40%)
- Build Thinking Agents: Use LangChain and LangGraph to design agents that can plan, remember context (Multi-turn conversational memory), and execute tasks.
- Tool Integration: Write the Python logic that allows the AI to use tools (e.g., calling an API, querying a database, or searching the web).
- Prompt Engineering: Design and optimize system prompts to control the persona and guardrails of the AI, ensuring it doesn't hallucinate or break character.
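To give a flavor of the tool-integration work above, here is a minimal sketch of the dispatch pattern (plain Python, not the LangChain API): the model emits a tool name plus arguments, and our code routes the call. The tool functions here are hypothetical stubs.

```python
def get_weather(city: str) -> str:
    # Stub: a real tool would call a weather API.
    return f"Sunny in {city}"

def search_db(query: str) -> str:
    # Stub: a real tool would run a SQL query.
    return f"0 rows for {query!r}"

# Registry mapping the names the model may emit to Python callables.
TOOLS = {"get_weather": get_weather, "search_db": search_db}

def dispatch(tool_call: dict) -> str:
    """Route a model-emitted tool call to the matching function."""
    fn = TOOLS.get(tool_call["name"])
    if fn is None:
        return f"unknown tool: {tool_call['name']}"
    return fn(**tool_call["args"])
```

In production frameworks the registry and argument validation are handled for you, but the underlying shape is the same: structured output in, Python call out.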
2. Data Orchestration (35%)
- Data Preparation: You are the gatekeeper of data quality. You will write scripts (Python/Pandas) to scrape, clean, and structure unstructured data (PDFs, Websites) before it enters our system.
- Vector Pipeline: Manage the ingestion of data into our Vector Database (pgvector/Weaviate), experimenting with different chunking strategies to see what yields the best search results.
- Exploratory Data Analysis (EDA): Analyze incoming data to understand edge cases that might confuse the model.
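The chunking experiments mentioned above start from something like this fixed-size chunker with overlap (a minimal sketch, not our production pipeline); `size` and `overlap` are the knobs tuned against retrieval quality.

```python
def chunk_text(text: str, size: int = 200, overlap: int = 50) -> list[str]:
    """Split text into overlapping fixed-size chunks.

    Overlap keeps content that straddles a chunk boundary
    retrievable from at least one chunk.
    """
    if overlap >= size:
        raise ValueError("overlap must be smaller than chunk size")
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + size])
        start += size - overlap
    return chunks
```

Real experiments swap in sentence- or structure-aware splitting and measure the effect on search results rather than eyeballing chunks.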
3. Evaluation & Experimentation (25%)
- Benchmarking: Stop guessing. Implement evaluation loops (using Ragas, DeepEval, or custom scripts) to measure the accuracy and retrieval quality of our agents.
- Model Selection: Help the Lead Engineer test new models (e.g., comparing Llama 3 vs. Mistral vs. Claude) to find the best balance of cost vs. intelligence for specific tasks.
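An evaluation loop in its simplest custom-script form looks like the sketch below: exact-match accuracy over a tiny eval set, with a hypothetical stub standing in for the agent. Real evaluations use semantic metrics (e.g. from Ragas or DeepEval) instead of string equality.

```python
def evaluate(agent, eval_set) -> float:
    """Score an agent on (question, expected_answer) pairs."""
    correct = sum(1 for q, expected in eval_set if agent(q).strip() == expected)
    return correct / len(eval_set)

def toy_agent(question: str) -> str:
    # Hypothetical stub standing in for an LLM call.
    return {"capital of France?": "Paris"}.get(question, "I don't know")

eval_set = [("capital of France?", "Paris"),
            ("capital of Spain?", "Madrid")]
```

Running `evaluate(toy_agent, eval_set)` turns "the agent seems fine" into a number you can track across prompt and model changes.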
Background / Experiences
- Engineering Foundation: B.Eng. or B.Sc. in Computer Engineering, Computer Science, or a related field. We look for a solid grasp of core concepts like Data Structures and Algorithms.
- Project Portfolio: 0-2 years of experience (internships included). We are looking for candidates who have applied their learning to build practical, end-to-end projects, whether it's a Capstone Project, a Hackathon solution, or a product built during an internship.
- Python Proficiency: Strong command of Python. We appreciate candidates who write clean, modular code and are familiar with modern practices like Async I/O and Type Hinting.
- Collaborative Mindset: Experience working in teams, participating in code reviews, or collaborating on shared repositories.
Knowledge & Skills
- AI Frameworks: Hands-on experience with tools like LangChain, OpenAI SDK, or Hugging Face. You should be familiar with the concepts of building a RAG workflow or an AI Agent.
- Database Fundamentals: Proficiency in SQL (PostgreSQL or BigQuery) for data retrieval. Familiarity with Vector Databases (like Weaviate or pgvector) is highly relevant to our work.
- Version Control: Comfortable using Git for daily development and collaboration.
- Data Handling: Ability to work with unstructured data (JSON, text files) and basic data manipulation libraries.
Bonus Points (Nice to Have)
- Backend & API Development: Familiarity with FastAPI or Flask to wrap AI logic into usable API endpoints.
- Cloud Exposure: Basic understanding of cloud environments like AWS or GCP (e.g., knowing how code runs on a server vs. a local machine).
- DevOps Tools: Experience with Docker for containerization or writing basic CI/CD scripts.
- Workflow Orchestration: Knowledge of tools like Airflow or Prefect for managing data pipelines.
- Web Scraping: Experience using tools like Selenium, Scrapy, or Firecrawl to gather training data.
- Local LLMs: Curiosity-driven: you have experimented with running models locally using Ollama or vLLM.
Drop your CV: [Confidential Information]