
Senior Data Engineer
Posted 1 hour ago

Responsibilities
• Design, implement, and maintain robust, scalable data pipelines using AWS, Azure, and containerization technologies.
• Develop and manage ETL/ELT processes to extract, transform, and load data from diverse sources into data warehouses and data lakes.
• Collaborate with data scientists, analysts, and fellow engineers to guarantee seamless data flow and accessibility throughout the organization.
• Enhance data storage and retrieval performance by leveraging cloud services such as AWS Redshift, Azure Synapse, or other pertinent technologies.
• Utilize containerization tools like Docker and Kubernetes to ensure efficient deployment, scalability, and management of data pipelines.
• Monitor, troubleshoot, and optimize data processing pipelines for performance, reliability, and cost-effectiveness.
• Automate manual data processing tasks and improve data quality by implementing data validation and monitoring systems.
• Establish and maintain CI/CD pipelines for automation and deployment of data workflows.
• Ensure adherence to data governance, security, and privacy regulations across all data systems.
• Participate in code reviews and uphold best practices and documentation standards for data engineering solutions.
• Stay current with the latest trends in data engineering and cloud technologies to continually improve system performance and capabilities.

Requirements
• Exceptional communication skills, including fluent spoken English, are essential for explaining complex technical concepts to non-technical stakeholders and collaborating across teams.
• Demonstrated experience as a Data Engineer, with practical expertise in building and managing data pipelines.
• Strong knowledge of cloud technologies, specifically AWS (e.g., S3, Redshift, Glue) and Azure (e.g., Data Lake, Azure Synapse).
• Experience with containerization and orchestration tools such as Docker and Kubernetes.
• Proficiency in data engineering programming languages, including Python, Java, or Scala.
• Solid experience with SQL and NoSQL databases (e.g., PostgreSQL, MongoDB, Cassandra).
• Familiarity with data processing frameworks such as Apache Spark, Apache Kafka, or similar tools.
• Experience with workflow orchestration and transformation tools such as Apache Airflow, dbt, or comparable options.
• Understanding of data warehousing concepts and technologies (e.g., Snowflake, Amazon Redshift, or Google BigQuery).
• Comprehensive understanding of ETL/ELT processes and best practices.
• Experience with version control systems like Git.
• Strong problem-solving abilities and a proactive approach to troubleshooting and optimization.

Benefits
• Competitive salary and flexible payment options.
• Opportunities for growth and professional advancement.
• Flexible working hours and the option for full remote work.
• Collaborate in an innovative, inclusive, and supportive environment.
• Be part of a data-driven culture that leads the way in innovation.