Location: Sydney, New South Wales, Australia

AWS Data Engineer, City People Solutions. Sydney NSW. Engineering: Software (Information & Communication Technology). Full time.

About CITY PEOPLE SOLUTIONS (CPS): City People Solutions is a leading IT consulting, multi-specialist, and managed recruitment and talent-hire services provider. We are recognized for our expertise and success in providing businesses with a flexible, motivated, and performance-based workforce within the Australian marketplace.

Since our beginnings in 2016, CPS has grown in the spirit of providing professional services with a personal touch and exceeding the expectations of our customers. We provide quality labor solutions across a number of industry sectors, including industrial and commercial recruitment, and we focus on helping our customers meet their staffing needs. City People Solutions has built a strong company with a solid reputation based upon our values: professionalism with a personal touch, respect for our customers and community, efficiency, and responsiveness.

CPS provides training and on-the-job mentoring for all our staff to instill our values and help exceed our customers’ expectations in the work that they do. We help businesses build what is genuinely meaningful to their customers. We find the best way to enable change from the inside and ensure that the positive changes we make are sustained long after we leave.

Multiple AWS opportunities to join a leading global company based in Sydney. Permanent and contract roles, Sydney based. AWS Data Engineer. Project Role: Data Engineer (Scale Migration Specialist).

Role: To drive the successful migration of on-premise Hadoop big data infrastructure to AWS, ensuring scalability, performance, and data integrity throughout the process.

Description: The Data Engineer will be a key player in our Ninja Team, responsible for designing, developing, and implementing data processing and migration strategies. The role involves working with various big data technologies and AWS services to facilitate a seamless transition from on-premise Hadoop to a cloud-based environment. This includes data migration, optimization, and integration tasks to ensure robust and scalable data solutions.

Key Responsibilities:
- Design and implement data migration strategies from on-premise Hadoop to AWS.
- Develop and optimize Spark jobs for data processing on AWS EMR.
- Manage and maintain S3 storage solutions for a data lake architecture.
- Configure and manage AWS Glue for ETL processes and data cataloging.
- Ensure data quality and integrity throughout the migration process.
- Collaborate with stakeholders to understand and document data requirements.
- Perform data mapping and transformation tasks as part of the migration strategy.
- Implement and manage AWS Lake Formation for data governance and security.
- Troubleshoot and resolve data migration issues and performance bottlenecks.
- Monitor and optimize data workflows for performance and cost efficiency.
- Develop and maintain documentation for data processes and architecture.
- Coordinate with cross-functional teams to align migration activities with project goals.
- Provide technical guidance and support to junior team members.
- Conduct performance tuning and optimization of data processing tasks.
- Ensure compliance with data security and privacy policies.
- Design and implement automated data pipelines using AWS services.
- Perform regular data validation and verification checks.
- Collaborate with AWS support and engineering teams as needed.
- Stay up to date with the latest developments in big data and cloud technologies.
- Assist in training and knowledge-transfer activities for team members.
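To illustrate the kind of migration tooling the responsibilities above describe, here is a minimal sketch of translating on-premise HDFS paths into S3 URIs while preserving the directory layout (so Hive/Glue partition discovery keeps working). The bucket name and path layout are hypothetical, not from any actual CPS project.

```python
def hdfs_to_s3_uri(hdfs_path: str, bucket: str = "example-data-lake") -> str:
    """Translate an hdfs:// path into an s3:// URI, keeping the directory
    structure intact so partitioned tables map over unchanged."""
    prefix = "hdfs://"
    if hdfs_path.startswith(prefix):
        # Drop the scheme and the NameNode host:port component.
        _, _, key = hdfs_path[len(prefix):].partition("/")
    else:
        # Bare HDFS paths have no scheme; just strip the leading slash.
        key = hdfs_path.lstrip("/")
    return f"s3://{bucket}/{key}"

print(hdfs_to_s3_uri("hdfs://namenode:8020/warehouse/sales/dt=2024-08-01/part-0000"))
```

In a real migration this mapping would feed a bulk-copy tool such as S3DistCp or AWS DataSync; the point of preserving the `dt=...` partition segments is that the Glue crawler can then rebuild the same partition scheme in the catalog.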

Education & Primary Experience: Bachelor's or Master's degree in Engineering, Computer Science, or a related field. 6–10 years of experience with the following technologies: Spark, Oracle 11g, Hive, HDFS, Hadoop, and AWS services.

Primary Skillset (must-have): Expertise in Apache Spark and AWS EMR. Strong experience with AWS services such as S3, Glue, and Lake Formation. Proficiency in HDFS, Hive, and data migration strategies. Solid understanding of data processing and optimization techniques.

Project Role: Senior Cloud Data Engineers

Description: As a Data Engineer, you will be instrumental in designing, implementing, and maintaining data ingestion and lake formation pipelines on AWS Cloud Data Services. You will work closely with data scientists, analysts, data engineers, and DevOps engineers to ensure the smooth flow and processing of data within the AWS ecosystem. You will be expected to use your expert knowledge to build scalable, reliable, high-performance, reusable data pipelines. Your prime responsibility will be to understand the complexity of the pipelines, frameworks, and components implemented in the Cloudera Hadoop system and migrate them to AWS Data Services and frameworks, identifying the existing data pipeline patterns and implementing them in the AWS S3, Spark, and EMR ecosystem.

Key Responsibilities:
- Interact with business stakeholders and designers to understand and implement business requirements.
- Hadoop development and implementation, including loading from disparate data sets.
- Experience with Cloudera system components is desirable.
- Working experience in IntelliJ IDEA, AutoSys, WinSCP, PuTTY, and GitHub is a must.
- Design, build, install, configure, and support a Hadoop data warehouse on AWS S3/Spark-EMR.
- Ingest and transform data using Spark and Scala.
- Advanced understanding of ETL processes and practices, ideally having implemented an ETL system before.
- Translate complex functional and technical requirements into detailed designs.
- Perform analysis of vast data stores and uncover insights.
- Maintain security and data privacy; familiarity with data governance, data privacy, and data security practices.
- Create scalable, high-performance data services on the cloud, including high-speed querying and metadata management.
- Test prototypes and oversee handover to operational teams.
- Strong knowledge of data structures, including time-variant data, dimensional models, and algorithms.
- Propose best practices and standards.
- Strong communication skills and experience working in Agile.

Education & Primary Experience: Bachelor's or Master's degree in Engineering. 6–10 years of experience writing data pipelines using Spark on AWS Data Services.
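The time-variant, dimensional-model work mentioned above typically means maintaining slowly changing dimensions. As a hedged illustration (pure Python for clarity; in practice this would be a Spark job, and all field names here are hypothetical), a Type 2 update expires the current row when attributes change and appends a new current version:

```python
def scd2_upsert(dim_rows, incoming, key, as_of):
    """Type 2 slowly-changing-dimension update.

    dim_rows: list of dicts, each with 'valid_from' and 'valid_to'
              (valid_to is None for the current version of a key).
    incoming: dict mapping key value -> new attribute dict.
    """
    current_keys = {r[key] for r in dim_rows if r["valid_to"] is None}
    out = []
    for row in dim_rows:
        k = row[key]
        if row["valid_to"] is None and k in incoming:
            new_attrs = incoming[k]
            if any(row.get(a) != v for a, v in new_attrs.items()):
                out.append(dict(row, valid_to=as_of))  # expire old version
                out.append({key: k, **new_attrs,
                            "valid_from": as_of, "valid_to": None})
                continue
        out.append(row)  # unchanged or historical row passes through
    for k, attrs in incoming.items():  # brand-new keys become current rows
        if k not in current_keys:
            out.append({key: k, **attrs, "valid_from": as_of, "valid_to": None})
    return out

dim = [{"cust_id": 1, "city": "Sydney",
        "valid_from": "2024-01-01", "valid_to": None}]
result = scd2_upsert(dim, {1: {"city": "Melbourne"}, 2: {"city": "Perth"}},
                     key="cust_id", as_of="2024-08-24")
```

Here the Sydney row is closed out with `valid_to="2024-08-24"`, a new current Melbourne row is appended for the same key, and the previously unseen key 2 gets a fresh current row, so the history of every key remains queryable.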

Primary Skillset (must-have): AWS Data Services (RDS Hive Metastore; data processing using EMR/Hadoop/Spark; S3 storage; Glue and the Glue Catalog; Spark on EMR; RMS; Lake Formation; Redshift/Snowflake; AWS Zero-ETL; etc.). Knowledge of Hadoop system components. Hands-on experience in Spark.

Applications may close early depending on the volume of applications received.

This is an opportunity to work somewhere you can truly be proud of your contribution. If this sounds like the role for you, tell us how you fit the specific skills, and apply today with your cover letter and CV.

CPS is an equal opportunity employer that actively embraces diversity in its workforce through accurate community representation of gender, culture, thought, and work arrangements.

Posted: 24-08-2024
Salary: Attractive packages with fringe benefits

NOTE: Never make payment to any employer, person, company, contractor or agency to get hired for a Job.

How to apply?

Email kish.c@citypeoplesolutions.com.au for further details.
