Position: Big Data Developer with strong ETL skills
Location: Basking Ridge, NJ
Duration: 6-12 months
What Project/Projects will the candidate be working on while on assignment?
Project: tum Big Data platform Data Lake
Is this person a sole contributor or part of a team?
Part of a team of 5 developers.
If so, please describe the team? (Name of team, size of team, etc.)
What are the top 5-10 responsibilities for this position? (Please be detailed as to what the candidate is expected to do or complete on a daily basis)
Design and develop ETL data flows using Hive / Pig.
Load data from disparate data sets.
Pre-process data using Hive and Pig.
Translate complex functional and technical requirements into detailed design.
Perform analysis of data sets and uncover insights.
Maintain security and data privacy.
Implement data flow scripts using Unix / Hive / Pig scripting (a sample script follows this list).
Propose best practices/standards.
Work with the System Analyst and development manager on a day-to-day basis.
Work with other team members to accomplish key development tasks.
Work with the service delivery (support) team on transition and stabilization.
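For illustration only, below is a minimal sketch of the kind of Unix / Pig / Hive data flow script these responsibilities describe; every path, schema, and table name in it is a hypothetical assumption rather than a detail of this project.

#!/usr/bin/env bash
# Minimal daily ETL sketch: pre-process a raw feed with Pig, then expose the
# cleaned output as a Hive partition. All paths and names are hypothetical.
set -euo pipefail

RUN_DATE="${1:?usage: etl_daily.sh YYYY-MM-DD}"
RAW_DIR="/data/raw/feed/${RUN_DATE}"       # hypothetical HDFS input directory
CLEAN_DIR="/data/clean/feed/${RUN_DATE}"   # hypothetical HDFS output directory

# Step 1: Pig pre-processing - drop malformed rows and keep only needed columns.
cat > /tmp/clean_feed.pig <<'PIG'
raw   = LOAD '$RAW_DIR' USING PigStorage(',')
        AS (record_id:chararray, member_id:chararray, amount:double, svc_date:chararray);
valid = FILTER raw BY record_id IS NOT NULL AND amount IS NOT NULL;
slim  = FOREACH valid GENERATE record_id, member_id, amount, svc_date;
STORE slim INTO '$CLEAN_DIR' USING PigStorage(',');
PIG
pig -param RAW_DIR="${RAW_DIR}" -param CLEAN_DIR="${CLEAN_DIR}" -f /tmp/clean_feed.pig

# Step 2: register the cleaned data as a partition of an external Hive table.
hive -e "ALTER TABLE feed_clean ADD IF NOT EXISTS PARTITION (load_date='${RUN_DATE}') LOCATION '${CLEAN_DIR}';"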
What software tools/skills are needed to perform these daily responsibilities?
We are looking for a mid-level Big Data Developer who will lead and work on collecting, storing, processing, and analyzing huge data sets.
The primary focus will be on choosing optimal solutions for these purposes, then implementing, maintaining, and monitoring them.
You will also be responsible for integrating them with the architecture used across the company.
Expertise and at least 5 years of hands-on experience in the Java Enterprise ecosystem (design, development, test, and deployment to production) are required.
At least 4 years of hands-on experience with Hadoop/Hive/MapReduce/Pig/Sqoop/Flume (design, development, test, and deployment on a production Hadoop cluster), a demonstrated ability to segment and organize data from disparate sources, and knowledge of data security and encryption models are desirable.
Experience working with Hadoop/HBase/Hive/MRv1/MRv2 and ETL processing with Hive/Pig scripts is required.
Demonstrated experience in Unix scripting is required.
Any experience in the health care insurance industry is a plus.
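Purely as an illustrative sketch of the Sqoop-to-Hive hand-off mentioned above (the connection string, credentials file, and table names are assumptions, not client details):

#!/usr/bin/env bash
# Sketch: import one relational table into Hive with Sqoop, then run a quick
# profiling query. Every connection detail and name below is hypothetical.
set -euo pipefail

sqoop import \
  --connect jdbc:mysql://dbhost.example.com/source_db \
  --username etl_user \
  --password-file /user/etl/.db_password \
  --table member_claims \
  --hive-import \
  --hive-table member_claims_stg \
  --num-mappers 4

# Quick sanity/profiling query on the imported data.
hive -e "SELECT COUNT(*) AS row_cnt FROM member_claims_stg;"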
What skills/attributes are a must have?
Pig / Hive / Hadoop MapReduce / Unix scripting
What skills/attributes are nice to have?
Experience with HBase, Talend, and NoSQL databases; experience with Apache Spark or other streaming big data processing is preferred.
Previous experience in the insurance industry.
Ashutosh Singh
Sr. IT Recruiter- LEAD
Contact No.: 614-662-1007
Office ID: ashutosh@technocraftsol.com
Yahoo/ Gmail ID: ashu.technocraft
LinkedIn: https://www.linkedin.com/home?trk=nav_responsive_tab_home
Note: Technocraft Solutions LLC works with direct clients and preferred vendors nationwide.
Your confirmation means that you understand the nature of Technocraft Solutions LLC's association with the mentioned project and will not approach Technocraft Solutions LLC's client directly.