Job description
We are seeking a skilled Data Engineer with 2+ years of experience designing and maintaining data pipelines. The ideal candidate will have strong expertise in PySpark, Python, SQL, and ETL orchestration tools, and will play a key role in building, optimizing, and managing scalable data workflows across cloud environments. This role offers the opportunity to work on high-impact data engineering projects and contribute to the development of robust, efficient, and reliable data solutions.
Key Responsibilities
- Develop and manage PySpark-based notebooks in Microsoft Fabric.
- Write efficient SQL queries for data transformation.
- Automate and orchestrate data pipelines using tools such as Dagster or Azure Data Factory (ADF).
- Work with Microsoft Fabric for integration and analytics.
- (Optional) Build reports and dashboards in Power BI.
Requirements:
- 2+ years in data engineering.
- Strong hands-on experience with PySpark, Python, and SQL.
- Experience with any orchestration tool (Dagster, ADF, etc.).
- Familiarity with Microsoft Fabric or other cloud data platforms.
Perks & Benefits:
- Competitive salary
- Opportunities for advancement
- Professional training and certifications
- Food, travel, and gym allowances
- Paid time off and holidays
- Bi-annual increments and bonuses
- Work with modern technologies
- Flexible working hours
How to Apply:
Interested candidates are invited to submit their resume and a cover letter detailing their relevant experience and why they are a good fit for this position to hr@datumlabs.io, or fill out the application form.
Datum Labs is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees.
Take the first step toward giving your career a major push forward.