
Lead Data Engineer – Platform
Posted 1 day ago

Responsibilities

• Design, develop, and maintain scalable frameworks and platform utilities used by engineering teams.
• Create reusable patterns, templates, and abstractions to standardize and expedite delivery.
• Define and evolve the platform architecture, ensuring scalability, maintainability, and consistency.
• Design and implement CI/CD pipelines and automation frameworks to boost engineering efficiency.
• Establish and enforce engineering standards related to testing, code quality, deployment, and documentation.
• Identify and eliminate manual or repetitive tasks through automation and tooling improvements.
• Integrate AI-assisted development tools into engineering processes to enhance productivity.
• Develop and maintain AI engineering resources, such as coding guidelines, prompt frameworks, and reusable agent configurations.
• Lead the development and operational support of core data transformation frameworks, including dbt Core at an enterprise level.
• Investigate and resolve framework-level issues, such as deployment failures, dependency conflicts, and production incidents.
• Support the onboarding and enablement of engineering teams as they adopt platform tools.
• Serve as the primary technical contact for inquiries related to the platform and frameworks.
• Collaborate with engineering teams to identify challenges and translate them into platform enhancements.
• Ensure that platform tools adhere to security, compliance, and operational standards.
• Conduct and facilitate code and design reviews across various platform components.
• Monitor the health, performance, and adoption of the platform, making adjustments based on feedback and metrics.
• Contribute to documentation, developer guides, and enablement materials to enhance usability and adoption.
Requirements

• Strong proficiency in Python for building frameworks, automation tools, and reusable components.
• Practical experience with Databricks, including notebooks, workflows, jobs, and Unity Catalog.
• Solid SQL skills and experience with distributed processing frameworks like Apache Spark.
• Extensive experience with dbt Core, including project structure, models, tests, macros, and large-scale deployment.
• Proven track record in designing and maintaining CI/CD pipelines (e.g., GitHub Actions, Azure DevOps, or GitLab CI).
• Experience designing data engineering platforms, including scalable pipeline and workflow architectures.
• Strong grasp of software engineering principles (DRY, SOLID, modular design).
• Familiarity with cloud platforms (preferably AWS) and infrastructure-as-code concepts (e.g., Terraform).
• Experience implementing automated testing strategies (unit, integration, data quality).
• Strong understanding of platform monitoring, logging, and alerting practices.
• Experience writing technical documentation and guidance for developers.
• Experience working in complex, multi-team engineering environments.
Benefits

• 24 paid vacation days annually.
• Coverage for national holidays.
• Sick leave (up to 20 days per year).
• Unpaid leave (up to 20 days per year).
• Medical insurance coverage.
• Multisport card or Multikafeteria benefits.
• Support for maternity and paternity leave.
• Access to internal workshops and learning initiatives.
• Reimbursement for professional certifications.
• Opportunities to participate in local and global professional communities.
• Growth Framework to help manage expectations and outline steps for career advancement.
• Mentoring program in which you can serve as a mentor or mentee to support career progression.
• Flexible remote and hybrid work options (depending on country).
• Opportunities for career advancement, including international mobility and professional development programs.
• Access to cutting-edge tools, training, and industry experts for learning and development.