Title: ETL Data Engineer With SQL And Python
Duration: 12-18+ month contract, extendable
Location: Menlo Park, CA
· Manage reporting/dashboard plans, develop SQL pipelines, design and own data models for a product or group of products.
· Interface with engineers, product managers, and product analysts to understand data and reporting needs.
· Build data expertise and own data quality in allocated areas of ownership.
· Design, develop, test and launch new reports, pipelines, data aggregations and dashboards into production.
· Provide support for reports, pipelines and dashboards running in production.
· Design and manage SLAs for all data sets (existing and new) in allocated areas of ownership.
· Experience with Python coding and scripting.
· Work with data infrastructure to triage infra issues and drive them to resolution.
· We would like someone with strong SQL/pipeline experience, hands-on experience with Hive or HBase, a demonstrated ability and drive to learn new technologies, and experience with Tableau or another visualization tool.
· In effect, we are looking for a solid full-stack engineer with 7-10 years of experience who has applied their skills to data engineering and, in part, to visualization.
We are looking for Structured Query Language (SQL) developers; that is not the same as a SQL Server developer.
80% of the work is SQL development:
· Temporary tables, table variables and Common Table Expressions (CTEs)
· Stored procedures, triggers and user-defined functions
· Performance tuning, query optimization and reading query execution plans
· Creating, altering and managing databases, tables, views, indexes and constraints with business rules, using T-SQL and PL/SQL
· Incremental data loads and Slowly Changing Dimensions
· Normalization, constraints, querying, joins, keys, and clustered and non-clustered indexes
· Data import/export, subqueries and cursors
· Thorough knowledge of addressing performance issues, including query tuning, index tuning and data profiling
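For illustration, here is a minimal sketch of the kind of day-to-day SQL described above, combining a CTE, a temporary table and a reporting aggregation. All table and column names are hypothetical, not taken from any actual project.

-- Hypothetical T-SQL sketch: stage the last 30 days of revenue per region
-- into a temp table, then aggregate it for a report.
WITH daily_sales AS (
    SELECT
        s.region_id,
        CAST(s.sold_at AS DATE) AS sale_date,
        SUM(s.amount)           AS daily_revenue
    FROM dbo.sales_fact AS s
    WHERE s.sold_at >= DATEADD(DAY, -30, GETDATE())
    GROUP BY s.region_id, CAST(s.sold_at AS DATE)
)
SELECT d.sale_date,
       r.region_name,
       d.daily_revenue
INTO #daily_revenue                      -- temp table feeding the report layer
FROM daily_sales AS d
JOIN dbo.region_dim AS r
  ON r.region_id = d.region_id;

SELECT region_name,
       AVG(daily_revenue) AS avg_daily_revenue
FROM #daily_revenue
GROUP BY region_name;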
Writing complex SQL queries, stored procedures, triggers, views, DDL, DML and UDFs to implement business logic.
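As a hypothetical sketch of that kind of business logic (the function, view and table names below are invented for illustration only), a scalar UDF wrapped in a view might look like this:

-- Hypothetical sketch: encode an order-status rule once in a UDF,
-- then expose it to reporting through a view.
CREATE FUNCTION dbo.fn_order_status (@shipped_at DATETIME, @cancelled BIT)
RETURNS VARCHAR(20)
AS
BEGIN
    -- Business rule: cancelled takes precedence, otherwise shipped vs. pending.
    RETURN CASE
        WHEN @cancelled = 1      THEN 'CANCELLED'
        WHEN @shipped_at IS NULL THEN 'PENDING'
        ELSE 'SHIPPED'
    END;
END;
GO

CREATE VIEW dbo.v_order_status AS
SELECT o.order_id,
       o.customer_id,
       dbo.fn_order_status(o.shipped_at, o.cancelled) AS order_status
FROM dbo.orders AS o;
GO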
Wrote PL/SQL scripts for pre- and post-session processes and to automate daily loads.
Wrote complex stored procedures using dynamic SQL to populate temp tables from fact and dimension tables for reporting purposes.
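A minimal sketch of that dynamic-SQL pattern, assuming a SQL Server environment; every table, column and procedure name here is hypothetical:

-- Hypothetical sketch: build a query against a fact table chosen at run time
-- and load the result into a temp table for the report layer.
CREATE PROCEDURE dbo.usp_load_report_snapshot
    @fact_table SYSNAME,   -- e.g. 'sales_fact' or 'returns_fact' (illustrative)
    @from_date  DATE
AS
BEGIN
    SET NOCOUNT ON;

    -- A temp table created in the outer scope is visible to the dynamic batch.
    CREATE TABLE #report_snapshot (
        region_name  VARCHAR(100),
        total_amount DECIMAL(18, 2)
    );

    DECLARE @sql NVARCHAR(MAX) =
        N'INSERT INTO #report_snapshot (region_name, total_amount)
          SELECT r.region_name, SUM(f.amount)
          FROM dbo.' + QUOTENAME(@fact_table) + N' AS f
          JOIN dbo.region_dim AS r ON r.region_id = f.region_id
          WHERE f.txn_date >= @from_date
          GROUP BY r.region_name;';

    EXEC sp_executesql @sql, N'@from_date DATE', @from_date = @from_date;

    SELECT region_name, total_amount
    FROM #report_snapshot
    ORDER BY total_amount DESC;
END;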
Extracted data from a Teradata database. Designed and developed methodologies to migrate multiple development/production databases from Sybase to Oracle 11g.
• Worked on data migration and data conversion using PL/SQL, SQL and Python to build custom ETL tasks.
• Worked on performance tuning of ETL code and SQL queries to improve performance, availability and throughput.
• Involved in ingesting data into the data warehouse using various data loading techniques (see the incremental-load sketch after this list).
• Involved in data mapping specifications used to create and execute detailed system test plans.
• Extensively used joins and subqueries to simplify complex queries involving multiple tables.
• Tuned existing ETL SQL scripts by making necessary design changes to improve the performance of fact tables.
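As referenced above, here is a minimal sketch of one such loading technique: an incremental, watermark-based load merged into a warehouse table. The table and column names, and the etl_watermark helper table, are hypothetical.

-- Hypothetical sketch: merge only rows changed since the last successful run.
-- Assumes etl_watermark is seeded; a NULL watermark would need COALESCE handling.
DECLARE @last_load DATETIME =
    (SELECT MAX(loaded_through)
     FROM dbo.etl_watermark
     WHERE table_name = 'customer_dim');

MERGE dbo.customer_dim AS tgt
USING (
    SELECT customer_id, customer_name, email, updated_at
    FROM staging.customer
    WHERE updated_at > @last_load
) AS src
ON tgt.customer_id = src.customer_id
WHEN MATCHED THEN
    UPDATE SET tgt.customer_name = src.customer_name,
               tgt.email         = src.email,
               tgt.updated_at    = src.updated_at
WHEN NOT MATCHED BY TARGET THEN
    INSERT (customer_id, customer_name, email, updated_at)
    VALUES (src.customer_id, src.customer_name, src.email, src.updated_at);

-- Record how far this run got so the next run picks up from here.
UPDATE dbo.etl_watermark
SET loaded_through = GETDATE()
WHERE table_name = 'customer_dim';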
Thanks & Regards
Rachita Upadhyay
USTECH Solutions, Inc.
10 Exchange Place; Suite 1820
Jersey City NJ 07302