Job Description
Role: Palantir Senior Data Engineer
Location: Dallas, TX (Onsite)
Duration: Contract
Work Authorization: (H1B/H4 EAD/L2 EAD/USC/GC)
Mandatory: Data Engineer, Palantir, AWS, Python, PySpark
Years of Experience: 10+ years
Job Duties and Responsibilities:
* Use PySpark to build data pipelines in AWS environments
* Write design documents and independently build data pipelines based on the defined source-to-target mappings
* Convert logic from complex stored procedures, SQL triggers, and similar database objects into PySpark on the cloud platform
* Be open to learning new technologies and implementing solutions quickly in the cloud platform
* Communicate with program key stakeholders to keep the project aligned with their goals
* Interact effectively with the QA and UAT teams for code testing and migrate code across regions
* Spearhead data engineering initiatives targeting moderately to highly complex data and analytics challenges, delivering impactful outcomes through thorough analysis and problem-solving
* Identify, conceptualize, and execute internal process enhancements, including scalable infrastructure redesign, optimized data distribution, and automation of manual workflows
* Address broad application programming and analysis problems within defined procedural guidelines, offering resolutions of wide-ranging scope
* Actively engage in agile/scrum methodologies, participating in ceremonies such as stand-ups, planning sessions, and retrospectives
* Orchestrate the development and execution of automated and user acceptance tests as an integral part of the iterative development lifecycle
* Foster the maturation of broader data systems and architecture, assessing individual data pipelines and suggesting/implementing enhancements to align with project and enterprise maturity objectives
* Envision and build infrastructure that facilitates access to and analysis of vast datasets while ensuring data quality and metadata accuracy through systematic cataloging
Position Qualifications
* 10+ years of total experience, including 3+ years in data engineering/ETL ecosystems with Palantir Foundry, Python, PySpark, and Java
* Required skills: Palantir
* Nice-to-have skills: PySpark and Python
* Expert in writing shell scripts to execute jobs via various job schedulers
* Hands-on experience with Palantir and PySpark to build data pipelines in AWS environments
* Good knowledge of Palantir components
* Good exposure to RDBMS
* Basic understanding of Data Mappings and Workflows
* Any knowledge of the Palantir Foundry Platform will be a big plus
* Experience implementing projects in the Energy and Utility space is a plus