
Full Time Job

Staff Data Engineer

HBO

Culver City, CA | 01-21-2022
 
  • Paid
  • Full Time
  • Senior (5-10 years) Experience
Job Description
The Job

We have created a new Data Insights and Operations (DIO) group within the HBOMAX Direct-To-Consumer Organization to make our streaming products more data-driven. This team is looking to hire a motivated Staff Data Engineer who will work closely with a team of highly motivated data engineers to build a state-of-the-art data platform that solves data-driven use cases across the organization. This platform will host data products such as, but not limited to, Subscription, Content, and Product Analytics; Personalization and Recommendation; and Marketing & Ad-Sales enablement. You will be charged with building a new core data platform in the cloud that handles both streaming and batch data processing and is capable of supporting the big data initiatives in scope now as well as those that evolve in the future. You will help data engineers, analysts, and scientists perform their functions by building highly scalable capabilities across the platform. A strong focus will be placed on building data quality solutions and ensuring the delivery of reliable data.

This individual will bring expertise in a wide variety of big data processing frameworks (both open source and proprietary), large-scale database systems (OLAP and OLTP), stream data processing, API development, machine learning operationalization, and cloud automation to build and support all data needs across the HBOMAX platform.

The Daily
• Take a lead role in translating business requirements into engineering architecture.
• Design and develop the data platform to efficiently and cost-effectively address data needs across the business.
• Build software across our entire data platform, including event-driven data processing, storage, and serving through scalable, highly available APIs, using cutting-edge technologies.
• Change how we think about, act on, and utilize our data by performing exploratory and quantitative analytics, data mining, and discovery.
• Think of new ways to make our data platform more scalable, resilient, and reliable, then work across our team to put your ideas into action.
• Ensure performance of the product by implementing and refining robust data processing, REST services, and caching technologies.
• Help us stay ahead of the curve by working closely with data architects, stream processing specialists, API developers, our DevOps team, and analysts to design systems that scale elastically in ways that make other groups jealous.
• Work closely with data analysts and business stakeholders to make data easily accessible and understandable to them.
• Ensure data quality by implementing reusable data quality frameworks.
• Work closely with various other data engineering teams to roll out new capabilities.
• Help build and maintain foundational data products such as, but not limited to, conformed datasets, Consumer 360, and data marts.
• Build processes and tools to maintain machine learning pipelines in production.
• Develop and enforce data engineering, security, and data quality standards through automation.
• Participate in 24x7 support of the platform.
• Be passionate about growing the team: hire and mentor engineers and analysts.
• Be responsible for cloud costs and for improving efficiency.

The Essentials
• Bachelor's degree in Computer Science or a similar discipline.
• 8+ years of experience in software engineering
• 3+ years of experience in data engineering
• Expertise in programming and scripting languages such as Java, Python, or similar.
• Expertise in building microservices and managing containerized deployments, preferably using Kubernetes.
• Expertise in large-volume, scalable, reliable, high-quality data processing platforms, both streaming (Kafka, Kinesis, …) and batch (Spark, Flink, …), is a must.
• Expertise in different types of databases (Redshift, Snowflake, …) and query engines (SQL, Spark SQL, Hive, …).

Nice to Have
• Cloud (AWS) experience is preferred.
• NoSQL experience (Apache Cassandra, DynamoDB, or similar) is a huge plus.
• Experience in operationalizing and scaling machine learning models is a huge plus.
• Experience with a variety of data tools and frameworks (e.g., Apache Airflow, Druid) is a huge plus.
• Direct-to-consumer digital business experience is preferred.
• Experience with analytics and visualization tools such as Looker or Tableau is preferred.
• Ability to work in a fast-paced, high-visibility, agile environment and to take initiative.
• Ability and willingness to learn new technologies and apply them at work in order to stay ahead of the curve.
• Ability and desire to understand the meaning behind the data.
• Strong interpersonal, communication, and presentation skills.
• Strong team focus with outstanding organizational and resource management skills.
• Excellent data analysis skills.

Jobcode: Reference SBJ-g4okxq-3-17-184-90-42 in your application.