Data Engineer

Computers/IT
Posted on 19 Sep, 2022
Closing on 03 Oct, 2022

Job Description

Job Title: Data Engineer
Job Location: Yemen Kuwait Bank, Sana’a, Yemen
Application Deadline: 03 October 2022

Job Summary

Yemen Kuwait Bank (YKB) is a leading financial services firm, helping Yemeni individuals and businesses achieve their financial goals through a broad range of financial products. YKB relies on the confidentiality, integrity, and availability of its data and information to conduct its operations successfully, meet customer and staff expectations, and deliver its services. That is why data, and assembling the right team of people in the Data & Analytics domain, sits at the heart of our digital transformation.
As part of this transformation, we are looking for a talented Data Engineer (among other roles; see our other job postings) to join our newly formed Data & Analytics unit. As a Data Engineer you’ll provision and set up data platform technologies, both on-premises and in the cloud. You’ll build and manage the flow of structured and unstructured data from multiple sources into analytics data stores, and ensure that data services integrate securely and seamlessly across the data platform.

Responsibilities

  • Use on-premises and cloud data services and tools to ingest, egress, and transform data from multiple sources.
  • Collaborate with business stakeholders to identify data requirements, and work with the IT team to determine and understand appropriate data sources that meet stakeholder requirements.
  • Design and build high-performance, secure, and scalable data pipelines to support data analytics projects, following software engineering best practices (modularity and automated testing).
  • Maintain high data quality by applying data quality, standardization, and enrichment transformations within data pipelines (an illustrative sketch follows this list).
  • Harmonise and simplify disparate data sources by combining data sets into unified, business-domain analytical stores (such as lake databases and data warehouses/marts) that can be reused across analytics and reporting use cases.
  • Monitor and optimize data storage and data processing by implementing appropriate observability tools and optimization methods.
  • Design and implement data security including data auditing, masking, encryption and role-based access control.
  • Maintain a reliable metadata layer by collecting technical and business metadata into the data governance tool and metastores.
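
To give a concrete, purely illustrative sense of the pipeline and data quality work described above, here is a minimal sketch of a PySpark batch job that ingests raw data, applies simple quality and standardisation rules, and writes a partitioned curated table. The paths, table, and column names are hypothetical and do not refer to actual YKB systems.

```python
# Illustrative sketch only: a minimal PySpark batch pipeline that ingests raw
# transaction data, applies basic quality/standardisation rules, and writes a
# curated, partitioned table. Paths and column names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("curate_transactions").getOrCreate()

# Ingest from a raw landing zone (source path is an assumption)
raw = spark.read.parquet("/data/landing/transactions/")

curated = (
    raw
    # Data quality: drop records missing mandatory keys
    .dropna(subset=["transaction_id", "account_id"])
    # Standardisation: normalise currency codes and trim free-text fields
    .withColumn("currency", F.upper(F.col("currency")))
    .withColumn("branch_name", F.trim(F.col("branch_name")))
    # Enrichment: derive a business date used for partitioning
    .withColumn("business_date", F.to_date("transaction_ts"))
    .dropDuplicates(["transaction_id"])
)

# Load into a partitioned analytical store (lake database / warehouse layer)
(curated.write
    .mode("overwrite")
    .partitionBy("business_date")
    .parquet("/data/curated/transactions/"))
```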

Knowledge Areas

  • You know Python is more than a snake! You have the programming skills needed for big data processing (e.g., complex SQL scripts with PySpark / Jupyter Notebooks; a short example follows this list).
  • Demonstrated skill in batch processing tools (e.g. SSIS, ODI, Data Factory, Data Lake, Spark, Azure Synapse Pipelines, PolyBase, Azure Databricks, Informatica, Talend)
  • Demonstrated skill in stream processing tools (e.g. Stream Analytics, Azure Databricks, Azure Event Hubs, Spark Streaming, Kafka, Flink, Storm, Oracle GoldenGate)
  • Demonstrated skill in Data Quality tools (e.g. Informatica, Talend, CluedIn, Ataccama One, Great Expectations)
  • Demonstrated skill in Master Data Management solutions (e.g. Profisee, CluedIn, MDS, SAP master data governance, Semarchy)
  • Demonstrated skill in analytical stores design (e.g. Data Lake, Data Warehouse, MPP, Dimensional Modeling, partitioning)
  • Knowledge of Relational and non-relational Databases (e.g. SQL Server, Oracle, MongoDB, CosmosDB, Azure SQL)
  • Knowledge of data preparation and enrichment tools (e.g. Web Scraping, Data API services, Power Query M, Power BI Dataflows)
  • Awareness of DevOps and CI/CD principles.
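
As a rough indication of what "complex SQL scripts with PySpark / Jupyter Notebooks" means in practice (see the first item in this list), the sketch below runs a CTE plus a window function from a notebook-style Spark session. The curated.transactions table and its columns are assumptions made purely for illustration.

```python
# Illustrative sketch only: a "complex SQL" aggregation executed from PySpark.
# The curated.transactions table and its columns are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("sql_example").getOrCreate()

# Rolling 7-day average of daily spend per account, assuming the curated
# table is already registered in the metastore.
spark.sql("""
    WITH daily AS (
        SELECT account_id,
               business_date,
               SUM(amount) AS daily_amount
        FROM curated.transactions
        GROUP BY account_id, business_date
    )
    SELECT account_id,
           business_date,
           daily_amount,
           AVG(daily_amount) OVER (
               PARTITION BY account_id
               ORDER BY business_date
               ROWS BETWEEN 6 PRECEDING AND CURRENT ROW
           ) AS rolling_7d_avg
    FROM daily
""").show(10)
```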

Experience

  • 4+ years of experience in a data-driven role
  • 2+ years of hands-on experience building data pipelines in production and ability to work across structured, semi-structured and unstructured data
  • Minimum of 3 years of hands-on experience in big data programming, such as Python, Scala/Spark, and Spark SQL
  • Experience in the Banking/Financial sector will be an added advantage

Education

  • Bachelor’s degree in Computer Science, Management Information Systems, or a related field.

Certification

  • Any data engineering certification is a plus (Spark Developer Certification, Databricks Certified Data Engineer)
  • Azure certifications (e.g. Azure Data Engineer Associate) will be an added advantage

Competencies

  • Technology design and programming (Level 4): Capacity to use programming to design machines or technological systems that fit user needs, as well as to understand how others use tools, determine the cause of operating errors, and fix them. This includes writing computer programs for various purposes, especially in the big data space. You'll need expertise in at least one (and ideally all) of these languages: Python, PySpark, Scala, Java, R, SQL, or occasionally C++.
  • Coding (code quality) (Level 4): Optimises, challenges and follows the coding and documentation standards and takes ownership of features by specifying, designing and writing code and documentation. Continues to partner with and performs code reviews for peers and others.
  • Data APIs (Level 4): Interacting with data APIs is an essential skill for any technical data engineer. These days, the majority of tools and platforms expose RESTful APIs, and you'll need to be able to interact with these services to build solutions (see the sketch after this list).
  • Functional Knowledge (Level 3): Able to fully understand the bank's business processes across the various domains and to design related products.
  • Analytical thinking and innovation (Level 3): Capacity to analyze information and use logic to address issues and problems, apply alternative thinking to develop new, original ideas and answers. 
  • Attention to detail, trustworthiness (Level 4): Dependability, commitment to doing the job correctly and carefully, being trustworthy and accountable, and paying attention to detail.
  • Self-Development (Level 5): Actively seeks new ways to grow and be challenged using both formal and informal development channels. For example, shows commitment to own leadership development and encourages others to expand their expertise and skills; uses stretch opportunities to broaden capabilities aligned with organizational needs. 
  • Collaborates (Level 5): Builds partnerships and works collaboratively with others to meet shared objectives. For example, encourages coworkers and external partners to work together as a team, and makes sure they get credit for doing so. Encourages people to share their honest views, responds in a non-defensive way when they do.
  • English Language Proficiency (Level 4): Can understand the main ideas of complex text on both concrete and abstract topics, including technical discussions in his/her field of specialisation. Can interact with a degree of fluency and spontaneity that makes regular interaction with native speakers quite possible without strain for either party. Can produce clear, detailed text on a wide range of subjects and explain a viewpoint on a topical issue giving the advantages and disadvantages of various options.
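
As a small, hedged illustration of the Data APIs competency above, the sketch below pulls reference data from a hypothetical RESTful service using Python's requests library. The endpoint, token handling, and response shape are assumptions, not an actual YKB or vendor API.

```python
# Illustrative sketch only: interacting with a hypothetical RESTful data API.
# The URL, token, and response structure are assumptions for illustration.
import requests

BASE_URL = "https://example.com/api/v1"  # placeholder endpoint (hypothetical)
TOKEN = "..."                            # credential supplied out of band

def fetch_exchange_rates(date: str) -> list[dict]:
    """Fetch daily exchange rates and return them as a list of records."""
    response = requests.get(
        f"{BASE_URL}/exchange-rates",
        params={"date": date},
        headers={"Authorization": f"Bearer {TOKEN}"},
        timeout=30,
    )
    response.raise_for_status()  # fail fast on HTTP errors
    return response.json()["rates"]

if __name__ == "__main__":
    for rate in fetch_exchange_rates("2022-09-19"):
        print(rate)
```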
