Senior Data Engineer (8+ Years) || AWS || Onsite Interview Only || H1B

Job Description

Contract: W2 / C2C

Interview: You must attend the interview onsite in Malvern, PA. An onsite interview is mandatory for this position, so please do not apply if you are unable to commit to one.

Note: You must have genuine, deep hands-on experience with AWS, Big Data, and Python (PySpark), along with certifications. Please do not apply if you do not have at least 8 years of experience in these areas.

Job Title: Data Engineer (AWS certification required)

Location: Malvern, PA

Visa: H1B and CPT (CPT only, not OPT); passport number is mandatory.

LinkedIn: Your LinkedIn profile must have been created before 2020 (please verify before applying).

Client: Direct client requirement; further details will be shared once you are shortlisted.

Job Description: (This is an onsite interview, so please review the JD carefully before applying; deep hands-on experience is required for each technology mentioned.)

Pay: $55/hr on C2C (max)

We seek a highly skilled Data Engineer with expertise in cloud-based data processing, ETL development, and big data technologies. The ideal candidate will have strong problem-solving abilities and experience working with AWS services, CI/CD pipelines, and SQL development.

Role : Senior Data Engineer (8+ Years, AWS & Big Data)

Responsibilities:

* Architect, develop, and optimize scalable ETL pipelines using Python, PySpark, and Apache Spark.
* Design and implement high-performance data solutions on AWS (EMR, Glue, Athena, Redshift, Lambda, Step Functions, S3, SNS, IAM, CloudFormation).
* Automate deployments with CI/CD pipelines (Bamboo, Bitbucket, GitHub, AWS CodePipeline).
* Enforce Test-Driven Development (TDD) and optimize data workflows for performance.
* Develop advanced SQL-based data transformations and manage large-scale datasets.
* Work with Kafka, Kinesis, Airflow, and Kubernetes for real-time and batch processing.
* Manage Unix/Linux environments for system optimization.
* Collaborate with analytics teams using Tableau, Hive, and Presto for reporting solutions.
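For candidates gauging the expected depth, the "advanced SQL-based data transformations" bullet above typically means window-function work along these lines. This is a minimal, hypothetical sketch: the table and column names (`sales`, `region`, `amount`) are illustrative, and SQLite is used here only for portability, though the role itself targets Redshift and Athena.

```python
import sqlite3

# In-memory SQLite database standing in for a warehouse table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?)",
    [("east", 100.0), ("east", 300.0), ("west", 200.0)],
)

# Rank each sale within its region by amount, descending --
# a typical window-function transformation.
rows = conn.execute(
    """
    SELECT region, amount,
           RANK() OVER (PARTITION BY region ORDER BY amount DESC) AS rnk
    FROM sales
    ORDER BY region, rnk
    """
).fetchall()

for region, amount, rnk in rows:
    print(region, amount, rnk)
```

The same `PARTITION BY ... ORDER BY` pattern carries over directly to Redshift SQL and to PySpark's `Window` API.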

Qualifications:

* 8+ years of hands-on experience in Big Data Engineering, ETL, and AWS.
* Expertise in distributed computing, parallel processing, and data lake architectures.
* Deep knowledge of data modeling, performance tuning, and schema design.
* Strong problem-solving, system optimization, and debugging skills.
