Job Description
Dice is the leading career destination for tech experts at every stage of their careers. Our client, Multivision Inc-IL, is seeking the following. Apply via Dice today!
Scala Developer with Azure using Databricks
St. Louis, Missouri, United States
Information Technology (IT)
Job Type: Contract – C2C
Work Mode: Remote
Work Authorization: Any, prefer H1/USC
Rate: $65/hr on C2C
Communication: excellent English communication skills (able to interact with native English speakers)
Do you like building complex, secure platforms at the touch of a button? Are you passionate about developing automated infrastructure as code that is successfully rolled out across a global implementation? Do you have what it takes to build robust solutions that aid data engineers in delivering their data pipelines?
ERC, the Group Compliance & Regulatory Governance (GCRG) Technology team, is looking for a hands-on Data Engineer on Azure, leveraging Databricks with Scala, to:
engineer reliable data pipelines for sourcing, processing, distributing, and storing data in different ways, using Databricks and Airflow
craft complex transformation pipelines over multiple datasets, producing valuable insights that inform business decisions, making use of our internal data platforms, and educate others about best practices for big data analytics
develop, train, and apply data engineering techniques to automate manual processes and solve challenging business problems; ensure the quality, security, reliability, and compliance of our solutions by applying our digital principles and implementing both functional and non-functional requirements
build observability into our solutions, monitor production health, help to resolve incidents, and remediate the root cause of risks and issues
leverage Airflow to build complex, branching, data-driven pipelines, and leverage Databricks to build the Spark layer of data pipelines (see the sketch after this list)
leverage Scala for low-level, complex data operations; codify best practices and methodology, and share knowledge with other engineers
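For illustration only, not part of the client's requirements: a minimal sketch of the kind of Databricks/Scala pipeline described above. The object name, mount paths, column names (event_ts, notional), and the reportability rule are all hypothetical.

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object TradeEventPipeline {
  def main(args: Array[String]): Unit = {
    // On Databricks a SparkSession already exists; getOrCreate reuses it.
    val spark = SparkSession.builder().appName("TradeEventPipeline").getOrCreate()

    // Source, process, store: read raw Parquet, derive a business date and a
    // compliance flag, and write the result as a Delta table partitioned by date.
    val raw = spark.read.parquet("/mnt/raw/trade_events") // hypothetical input path

    val curated = raw
      .filter(col("event_ts").isNotNull)
      .withColumn("business_date", to_date(col("event_ts")))
      .withColumn("is_reportable", col("notional") > lit(1000000)) // hypothetical rule

    curated.write
      .format("delta")
      .mode("overwrite")
      .partitionBy("business_date")
      .save("/mnt/curated/trade_events") // hypothetical output path
  }
}

In practice, a job like this would typically be scheduled as a task in an Airflow DAG (for example via the Airflow Databricks provider's operators), with the DAG handling branching and dependencies between pipeline stages.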
Your team
You will be working as part of the Group Compliance Regulatory Governance Technology stream that focuses on Data Analytics and Reporting. Our crew uses the latest data platforms to further the group's data strategy and realize the true potential of data as an asset, utilizing data lakes and data virtualization for use with advanced analytics and AI/ML. The crew also ensures the data is managed with strategic sourcing and data quality tooling. Your team will be responsible for building the central GCRG data lake, developing data pipelines to strategically source data from master and authoritative sources, creating a data virtualization layer, and building connectivity for advanced analytics and Elasticsearch capabilities with the aid of cloud computing.
Your expertise
Bachelor's or master's degree in computer science or a similar engineering discipline is highly desired
ideally 13+ years of total IT experience in software development or engineering, and ideally 3+ years of hands-on experience designing and building scalable data pipelines for large datasets on cloud data platforms
ideally 3+ years of hands-on experience in distributed processing using Databricks, Apache Spark, and Kafka, and leveraging the Airflow scheduler/executor framework
ideally 2+ years of hands-on programming experience in Scala (must have); Python and Java preferred
ideally 2+ years of hands-on experience streaming data to Azure, including using Azure Event Hubs, Azure SQL Edge, and Azure Stream Analytics
experience with monitoring solutions such as Spark cluster logs, Azure Logs, AppInsights, and Grafana to optimize pipelines, and knowledge of Azure-capable languages such as Scala or Java
proficiency working with large and complex codebase management systems such as GitHub/GitLab and Gitflow as a project committer, at both the command line and in IDEs, using tools like IntelliJ/AzureStudio
experience working with Agile development methodologies, delivering within Azure DevOps, and automated testing with tools that support CI and release management
expertise in optimized dataset structures in Parquet and Delta Lake formats, with the ability to design and implement complex transformations between datasets (see the sketch after this list)
expertise in optimizing Airflow DAGs and branching logic for tasks to implement complex pipelines and outcomes, and expertise in authoring both traditional SQL and NoSQL
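Likewise for illustration only: a minimal Scala sketch of a Parquet-to-Delta Lake transformation of the kind the points above describe, upserting incremental updates into a curated table with the Delta Lake merge API. The paths and the customer_id key are hypothetical.

import io.delta.tables.DeltaTable
import org.apache.spark.sql.SparkSession

object CustomerUpsert {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("CustomerUpsert").getOrCreate()

    // Incremental updates arriving as Parquet from an upstream source.
    val updates = spark.read.parquet("/mnt/staging/customer_updates") // hypothetical path

    // Upsert into the curated Delta table: update matched rows, insert new ones.
    DeltaTable.forPath(spark, "/mnt/curated/customers") // hypothetical path
      .as("t")
      .merge(updates.as("u"), "t.customer_id = u.customer_id")
      .whenMatched().updateAll()
      .whenNotMatched().insertAll()
      .execute()
  }
}

Delta's merge keeps the upsert atomic, which is what makes incremental sourcing from master and authoritative sources reliable at scale.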
How we hire
This role requires an assessment on application. We can arrange an interview today or tomorrow.
Email address should be associated with your LinkedIn profile.
Phone number must be a service-provider number, not VoIP/Google Voice.
Share an additional phone number where you can be reached in case of emergency.
Please attach your resume in MS Word format, along with H1B, I-94, DL, and educational certificates.