Senior Data Integration Engineer

November 23, 2024
$192,000 / year

Job Description

City: Santa Monica, CA / Glendale, CA

Onsite / Hybrid / Remote: Onsite

Duration: 12 months

Rate: $96/hr on W2, depending on experience (no C2C, 1099, or subcontract arrangements)

Work Authorization: GC, USC, all valid EADs except H-1B

Description

We are looking for a highly skilled and proactive Data Integration Engineer to join our dynamic team. The ideal candidate will have expertise in AWS architecture, ETL orchestration, Python programming, and data pipeline management, with a focus on minimizing data quality issues and building safeguards against pipeline failures. This role requires a detail-oriented, organized, and disciplined individual who excels at collaboration and communication and delivers high-quality solutions on time.

Key Responsibilities

Design, implement, and maintain scalable, resilient, and reliable data integration solutions using AWS services, including Lambda functions and ETL orchestration.

Develop, monitor, and troubleshoot Kafka messaging streams, ensuring reliable message consumption and production.

Write clean, efficient, and reusable Python code, applying design patterns to create maintainable data pipelines.

Build and manage data pipelines with strong safeguards to prevent failures and ensure data quality, using monitoring tools such as Datadog.

Collaborate with cross-functional teams, including data engineers, analysts, and product managers, to enhance and scale our data platform.

Implement processes to identify and mitigate data quality issues across multiple sources.

Participate in Scrum ceremonies and contribute to the agile process by following best practices, maintaining transparency, and tracking progress in JIRA.

Take a proactive approach to problem-solving and deliver solutions within defined timelines, ensuring the highest levels of quality.

Continuously seek new opportunities to optimize the data platform for performance, scalability, and resilience.

Contribute to team knowledge-sharing and process improvement initiatives.

Qualifications

Strong expertise in AWS architecture (e.g., Lambda, EC2, S3, Glue, RDS).

Proficiency in Python programming, with a focus on applying design patterns to create efficient, scalable, and reusable code.

Experience with Kafka messaging systems for data integration.

Deep understanding of ETL orchestration and pipeline management.

Strong logical, analytical, and problem-solving skills.

Experience building and maintaining data pipelines, with an understanding of failure modes and strategies to mitigate data quality issues using monitoring tools like Datadog.

Excellent organizational skills and attention to detail, with the ability to manage multiple priorities and deliver on time.

Strong communication skills to collaborate effectively with cross-functional teams.

Familiarity with Scrum/Agile methodologies and experience using JIRA to manage work.

A proactive, disciplined, and results-oriented approach to work.

Preferred Qualifications

Familiarity with CI/CD practices and DevOps tools.

Knowledge of data governance and security best practices.