Summary
The Data Engineer builds and optimizes Azure-based data pipelines using Databricks, Azure Data Factory (ADF), Data Lake, and Synapse. The role involves hands-on development in PySpark, SQL, and Python, supporting analytics, DevOps, and real-time solutions such as Kafka.
Key Responsibilities
- Design and maintain data pipelines (Databricks, ADF, Data Lake)
- Develop data models and warehouses (Synapse, SQL Server, T-SQL/U-SQL)
- Write efficient code (PySpark, SQL, Python)
- Manage DevOps and CI/CD workflows
- Work with Kafka for streaming data
- Gather and clarify data requirements with stakeholders
- Apply dimensional modeling
- Ensure data quality and security
- Communicate technical concepts to varied audiences
Qualifications
- Bachelor's degree in relevant field
- 3+ years of hands-on experience with Databricks, ADF, Synapse, Data Lake, SQL Server
- Strong PySpark, SQL, Python skills
- CI/CD and DevOps experience
- Cloud experience (Azure/AWS)
- Data warehousing knowledge
- Familiarity with Kafka
Must Have Skills
- Databricks
- ADF
- Data Lake
- Synapse
- SQL Server
- T-SQL/U-SQL
- PySpark
- Python
- SQL
- CI/CD
- DevOps
- Kafka
- Dimensional modeling
- Cloud platform experience
- Native Thai speaker with fluent English
Good to Have Skills
- Kafka streaming experience
- Product mindset
- Strong stakeholder management
- Strong communication and collaboration
Hybrid Work - 4 days per week in the office
Office Location - Gemopolis Industrial Estate, Prawet, Bangkok
Contract Duration - 1 year
Career Stage - Associate L2