We are working with a tech giant based in the Middle East, focused on AI and cloud computing. The company builds and manages cloud computing infrastructure, performs fundamental and applied AI research, and deploys AI-centric solutions across industries.
Their mission is to build the largest cloud in the UAE and to offer Artificial Intelligence capabilities in Data, Cloud, Oil & Gas, Security, Intelligence, and Defense.
The subsidiary is a leading AI company focused on Smart Government programs, public safety, economic planning, and national happiness. It delivers actionable intelligence for safer and smarter cities across these product offerings: data science, data management, intelligence, and advisory services.
As a Data Engineer on our team, you are responsible for assessing complex data sources and quickly turning them into business insights. You are expected to visually demonstrate data concepts and use cases that provide business value to our customers. The Data Engineer also supports the implementation and integration of these new data sources into our platform.
Who you are:
• You manage Data Warehouse/Data Lake plans for a product or a group of products.
• You are a Team-Player, collaborating with engineers, product managers and product analysts to understand data needs.
• You build data expertise and own data quality for allocated areas of ownership.
• You design, build and launch new Data Models in production.
• You design, build and launch new data extraction, transformation and loading processes in production.
• You are detail-oriented, defining and managing SLAs for all data sets in allocated areas of ownership.
• You work with data infrastructure teams to triage infrastructure issues and drive them to resolution.
• You transform ambiguity into clarity.
• You enjoy collaborating in a multicultural and diverse environment that expands to include various geographic locations.
• You have stellar communication skills, conveying and receiving information in a clear, credible, and consistent manner.
What you'll need:
• Bachelor's degree in CS, Statistics, Information Systems, or another quantitative field, with a minimum of 5 years of experience working on Big Data technologies.
• 2+ years' experience with Data Warehouses/Data Lakes.
• 2+ years' experience in custom ETL design, implementation, and maintenance.
• 2+ years' experience with MapReduce, Spark, or an MPP system.
• 2+ years' experience with schema design, dimensional data modeling, or NoSQL data modeling.
• 2+ years' experience writing SQL statements.
• Experience working with Python, Java, Scala, or Go.
• Experience working with version control tools such as GitHub or GitLab.
Bonus if you have:
• Experience working in an Agile environment (CSD, CSM, SA, ASE).
• Knowledge of or experience with a cloud stack.
• Experience with or knowledge of telecom and/or government data sets.
If this resonates with you, please apply.