Job Description
Dice is the leading career destination for tech experts at every stage of their careers. Our client, Xcelo Group Inc, is seeking the following. Apply via Dice today!
Big Data Engineer – W2 (H1B/OPT Accepted)
Location: Austin, TX (Remote)
Duration: Long Term
We are seeking a highly experienced Big Data Architect (Hadoop) to join our growing big data team. You will play a pivotal role in architecting, designing, and implementing robust and scalable big data solutions leveraging the Apache Hadoop ecosystem.
Responsibilities:
* Lead the design and implementation of enterprise-grade Hadoop architectures to meet evolving big data storage, processing, and analytics needs.
* Apply in-depth knowledge of core Hadoop components (HDFS, YARN, MapReduce, Hive, Pig, HBase) and their functionalities to solution design.
* Design and optimize data pipelines for efficient data ingestion, processing, and analysis.
* Develop and implement MapReduce jobs and Pig scripts for parallel data processing on large datasets.
* Manage and configure Hadoop clusters for optimal performance, scalability, and high availability.
* Integrate Hadoop with other big data tools and technologies (Spark, Kafka, etc.) to create a comprehensive data ecosystem (Preferred).
* Implement security best practices to ensure data privacy and access control within the Hadoop environment.
* Monitor and maintain the health of the Hadoop ecosystem, proactively identifying and resolving issues.
* Stay up-to-date with the latest advancements in big data technologies and best practices.
* Collaborate with data analysts, data scientists, and other stakeholders to understand data requirements and translate them into technical solutions.
* Document technical designs, architecture decisions, and deployment procedures for future reference.
* Mentor and guide junior Hadoop developers within the team (Preferred).
Qualifications:
* 6+ years of demonstrated success as a Hadoop Developer, Big Data Architect, or in a related role.
* Proven track record of designing and implementing scalable big data architectures using the Apache Hadoop ecosystem.
* In-depth knowledge of Hadoop components (HDFS, YARN, MapReduce, Hive, Pig, HBase) and their configurations.
* Proficiency in programming languages like Java or Scala for developing Hadoop applications.
* Experience with scripting languages like Pig for high-level data manipulation (Preferred).
* Experience with SQL for data querying and analysis (Preferred).
* Familiarity with Linux operating systems for working with Hadoop clusters.
* Experience with cloud platforms (AWS, Azure, Google Cloud Platform) for deploying Hadoop clusters (Preferred).
* Excellent problem-solving, analytical, and critical thinking skills.
* Strong communication, collaboration, and leadership skills.
We look forward to hearing from you!
Regards,
Xcelo Group