Job Description
Job Title: Azure Databricks Engineer (Healthcare Data)
Visa: USC, GC, H1B transfer, EAD
Duration: 12+ Months
Location: 100% Remote (Must work in EST time zone)
Positions: 3 Openings
Job Summary
We are seeking an experienced Azure Databricks Engineer with a strong background in ETL development and healthcare data processing. The ideal candidate has hands-on experience with Databricks, Azure cloud services, data pipelines, and ETL automation. This role requires excellent communication skills and the ability to collaborate with cross-functional teams to ensure smooth data integration and transformation.
Key Responsibilities
* Design, develop, and optimize robust data pipelines and ETL automation using Databricks on Azure.
* Collaborate with data architects, DBAs, and product owners to align project goals, requirements, and solutions.
* Build and maintain scalable data pipelines in Azure using Python and Databricks, ensuring efficient processing of large datasets.
* Develop and maintain backend data architectures to support seamless data extraction, transformation, and loading (ETL) processes.
* Work with healthcare data (claims, pharma, PHI/PII) while ensuring compliance with data privacy regulations and implementing security measures like tokenization.
* Utilize Azure cloud technologies, particularly Databricks, to implement large-scale data processing and analytics solutions.
* Integrate middleware solutions like MuleSoft to facilitate data processing between SAP and Databricks.
* Lead and execute data transformation and automation tasks, ensuring scalability and efficiency within the engineering team.
* Ensure data quality, governance, and compliance with HIPAA and other healthcare industry regulations.
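For context on the PHI tokenization responsibility above, here is a minimal, self-contained Python sketch of deterministic tokenization. It is not Databricks-specific, and the record fields, key source, and function names are hypothetical illustrations, not part of the actual role or codebase:

```python
import hashlib
import hmac

# Hypothetical secret; in a real pipeline this would come from a managed
# store such as Azure Key Vault, never a hard-coded constant.
SECRET_KEY = b"replace-with-managed-secret"

def tokenize(value: str, key: bytes = SECRET_KEY) -> str:
    """Replace a PHI value with an opaque, deterministic token.

    A keyed hash (HMAC-SHA256) keeps tokens stable across runs, so
    joins on the tokenized column still work, but the original value
    cannot be recovered without the key.
    """
    return hmac.new(key, value.encode("utf-8"), hashlib.sha256).hexdigest()

def tokenize_records(records, phi_fields):
    """Return copies of the records with the named PHI fields tokenized."""
    out = []
    for rec in records:
        cleaned = dict(rec)
        for field in phi_fields:
            if field in cleaned and cleaned[field] is not None:
                cleaned[field] = tokenize(str(cleaned[field]))
        out.append(cleaned)
    return out

# Hypothetical claims records for illustration only.
claims = [
    {"claim_id": "C-1001", "member_ssn": "123-45-6789", "amount": 250.0},
    {"claim_id": "C-1002", "member_ssn": "987-65-4321", "amount": 90.5},
]
safe = tokenize_records(claims, phi_fields=["member_ssn"])
```

In a Databricks pipeline, equivalent logic would typically run as a Spark UDF or a column expression inside an ETL job, with the key held in Azure Key Vault.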
Required Qualifications
* 5+ years of experience in Databricks and ETL development.
* Strong expertise in Azure Databricks, Python, and cloud-based ETL processes.
* Experience in healthcare or pharmaceutical data processing, including handling sensitive data (PHI/PII) and implementing security measures.
* Hands-on experience with Azure cloud services and big data processing.
* Knowledge of data warehousing concepts, data modeling, and ETL best practices.
* Experience integrating middleware solutions like MuleSoft for data migration and processing.
* Excellent communication and interpersonal skills; this is a hard requirement.
* Ability to work effectively in a fully remote setup on EST hours.
Preferred Qualifications
* Experience working with SAP data integration.
* Familiarity with DevOps practices for data engineering.
* Knowledge of data lake architectures and optimization techniques.
Thanks & Regards
Sudha Mamidi | Manager, Recruitment
T 913 871 6522 | sudham@hyrglobalsource.com