McD Tech Labs

CA, US

Tech Lead, Data Infrastructure

We are looking for a talented tech lead / manager to head our data infrastructure department. This role reports directly to the McD Tech Labs Director of Engineering. The tech lead will manage an engineering team focused on the infrastructure powering transformative AI/ML products that reach tens of millions of customers per day and feed billions of McD customers worldwide. The department covers data infrastructure, data pipelines, analysis, and performance optimization.

The ideal candidate has experience leading engineers who build robust, scalable, large-scale datastores, and a passion for building agile, high-performing teams.

Responsibilities:

  • Build, lead, and mentor a team of infrastructure engineers in a fast-moving, quickly growing organization
  • Create big data and batch/real-time analytical solutions that leverage hybrid datastore technologies operating at petabyte scale
  • Collaborate with ML/AI teams to develop features and APIs to support use cases spanning analytics, research, triage, and monitoring
  • Develop and maintain pipelines that manage resilient, idempotent coordination with external databases, APIs, and systems
  • Identify and address functionality, stability, and scalability design decisions across the stack

Qualifications:

  • BS degree in Computer Science or a similar technical field, or equivalent experience
  • 5+ years of experience in data infrastructure with proven experience leading a team of engineers
  • 5+ years of Java experience in a Linux/Unix environment
  • 3+ years of Python and scripting experience preferred
  • Strong experience developing infrastructure that powers products and services built on high-performance, distributed systems in large-scale, open-source Linux/Unix environments
  • Strong understanding of databases, distributed systems, data lakes, data warehouses, workflow systems, and event streaming
  • Experience with modern, large-scale data processing technologies such as Hadoop, Hive, Pig, Spark, Presto, and Impala
  • Experience with pub/sub pipelines and dataflows such as Kafka, SQS, and MQTT
  • Experience with Elasticsearch and search engine capabilities
  • Experience with API design and gRPC
  • Experience with AWS technologies such as Redshift, Athena, S3, RDS, ECS, and SNS
  • Demonstrated ability to facilitate and coordinate complex API design and implementation activities across teams in agile sprints with minimal direction