Experience: 5 - 10 Years
Job Location:
Education: Bachelor of Science (Computers)
Nationality: Any Nationality
Gender: Not Mentioned
Vacancy: 1 Vacancy
Job Description
Roles & Responsibilities
Key Responsibilities
- Build and maintain batch and streaming data pipelines with a strong emphasis on reliability, performance, and cost efficiency.
- Develop SQL, Python, and Spark/PySpark transformations to support analytics, reporting, and ML workloads.
- Contribute to data model design and ensure datasets adhere to high standards of quality, structure, and governance.
- Support integrations with internal and external systems, ensuring accuracy and resilience of data flows.
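To illustrate the kind of batch transformation described above, here is a minimal, self-contained Python sketch. The schema and table names are hypothetical; a real pipeline at this scale would run the SQL against Redshift or Snowflake under Dagster orchestration rather than in-process SQLite.

```python
import sqlite3

# Hypothetical raw table standing in for a warehouse source
# (in production this would be Redshift/Snowflake, not SQLite).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE raw_events (user_id TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO raw_events VALUES (?, ?)",
    [("u1", 10.0), ("u1", 5.0), ("u2", 7.5)],
)

# Batch transformation: aggregate raw events into a reporting table.
conn.execute(
    """
    CREATE TABLE user_totals AS
    SELECT user_id, SUM(amount) AS total_amount, COUNT(*) AS n_events
    FROM raw_events
    GROUP BY user_id
    """
)
rows = conn.execute("SELECT * FROM user_totals ORDER BY user_id").fetchall()
print(rows)  # [('u1', 15.0, 2), ('u2', 7.5, 1)]
```

The same SELECT-into-a-model pattern is what a dbt model encapsulates, with testing and documentation layered on top.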
GenAI & Advanced Data Use Cases
- Build and maintain data flows that support GenAI workloads (e.g., embedding generation, vector pipelines, data preparation for LLM training and inference).
- Collaborate with ML/GenAI teams to enable high-quality training and inference datasets.
- Contribute to the development of retrieval pipelines, enrichment workflows, or AI-powered data quality checks.
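As a rough sketch of the data-preparation side of a vector pipeline, the snippet below normalizes document text and attaches an embedding to each record. The `embed` function is a deterministic stand-in for a real embedding model (hypothetical, chosen only so the pipeline shape can be run offline); in practice this step would call an actual model and write to a vector store.

```python
import hashlib
import math

def embed(text: str, dim: int = 8) -> list[float]:
    """Stand-in for a real embedding model (hypothetical): maps text to a
    deterministic unit vector so the pipeline can be exercised offline."""
    digest = hashlib.sha256(text.encode()).digest()
    vec = [b - 128 for b in digest[:dim]]
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def build_vector_records(docs: list[dict]) -> list[dict]:
    """Data-preparation step: clean text and attach embeddings, as a
    pipeline feeding a vector index might."""
    records = []
    for doc in docs:
        text = " ".join(doc["text"].split())  # normalize whitespace
        records.append({"id": doc["id"], "text": text, "embedding": embed(text)})
    return records

records = build_vector_records([{"id": "a", "text": "  two  bed  apartment "}])
print(len(records[0]["embedding"]))  # 8
```

A production version would batch model calls, track lineage, and handle retries, but the clean-then-embed-then-load shape is the same.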
Collaboration & Delivery
- Work with Data Science, Analytics, Product, and Engineering teams to translate data requirements into reliable solutions.
- Participate in design reviews and provide input toward scalable and maintainable engineering practices.
- Uphold strong data quality, testing, and documentation standards.
- Support deployments, troubleshooting, and operational stability of the pipelines you own.
Professional Growth & Team Contribution
- Demonstrate ownership of well-scoped components of the data platform.
- Share knowledge with peers and contribute to team learning through code reviews, documentation, and pairing.
- Show strong execution skills, delivering high-quality work on time with clarity and reliability.
Impact of the Role
In this role, you will help extend and strengthen the data foundation that powers analytics, AI/ML, and GenAI initiatives across the company. Your contributions will improve data availability, tooling, and performance, enabling teams to build intelligent, data-driven experiences.
Tech Stack
- Languages: Python, SQL, Java/Scala
- Streaming: Kafka, Kinesis
- Data Stores: Redshift, Snowflake, ClickHouse, S3
- Orchestration: Dagster (Airflow legacy)
- Platforms: Docker, Kubernetes
- AWS: DMS, Glue, Athena, ECS/EKS, S3, Kinesis
- ETL/ELT: Fivetran, dbt
- IaC: Terraform + Terragrunt
Keywords
- Professional Data Engineer
Property Finder Group
Property Finder is the leading property portal in the Middle East and North Africa (MENA) region, dedicated to shaping an inclusive future for real estate while spearheading the region's growing tech ecosystem. At its core is a clear and powerful purpose: To change living for good in the region. Founded on the value of great ambitions, Property Finder connects millions of property seekers with thousands of real estate professionals every day. The platform offers a seamless and enriching experience, empowering both buyers and renters to make informed decisions. Since its inception in 2007, Property Finder has evolved into a trusted partner for developers, brokers, and home seekers. As a lighthouse tech company, it continues to create an environment where people can thrive and contribute meaningfully to the transformation of real estate in MENA.
https://boards.greenhouse.io/propertyfinder/jobs/7537303003?gh_jid=7537303003