<p>Join our data engineering team as a Big Data Engineer (Hadoop/Spark) based in Jeddah, Saudi Arabia. You'll leverage Apache Spark, Hadoop, Python, and Scala to build scalable data pipelines that process millions of records daily.</p>
<p>This position involves working closely with cross-functional teams to deliver projects on schedule and within scope. You'll take ownership of assigned tasks, communicate progress regularly, and proactively identify potential roadblocks. You'll also contribute to process improvements and help establish technical standards.</p>
<p>We pride ourselves on maintaining a supportive workplace where diverse perspectives are welcomed and everyone can thrive. Our organization provides the resources and autonomy you need to succeed, along with mentorship from experienced professionals. You'll have the opportunity to work with cutting-edge technologies while solving meaningful problems.</p>
Big Data Engineer (Hadoop/Spark) - Jeddah, Saudi Arabia
Confidential Company
Job Description
Requirements
<ul>
<li>5-8 years of progressive professional experience</li>
<li>Bachelor's degree in relevant field; Master's degree preferred</li>
<li>Extensive experience leading complex projects and initiatives</li>
<li>Strong proficiency in Apache Spark with hands-on project experience</li>
<li>Expert knowledge of Hadoop and related best practices</li>
<li>Demonstrated experience working with Python in production environments</li>
<li>Deep understanding of Scala and its practical applications</li>
<li>Advanced skills in Hive with ability to solve complex problems</li>
<li>Detail-oriented approach with commitment to quality and accuracy</li>
<li>Ability to adapt quickly to changing priorities and requirements</li>
</ul>
Responsibilities
<ul>
<li>Build real-time data streaming pipelines for operational analytics</li>
<li>Develop data quality frameworks to identify and resolve data issues</li>
<li>Create self-service analytics tools for business users</li>
<li>Perform exploratory data analysis to identify trends and patterns</li>
<li>Implement data security measures and access controls</li>
<li>Optimize data storage costs through partitioning and compression</li>
<li>Collaborate with engineering teams to improve data collection methods</li>
<li>Maintain metadata repositories and data catalogs</li>
</ul>
Required Skills
Apache Spark
Hadoop
Python
Scala
Hive
Kafka
Data Lake
AWS
Benefits
Health Insurance
Annual Flight Tickets
End of Service Benefits
Housing Allowance
Transportation Allowance
Professional Development Budget