- What are the "must-have" skills for this role? Python, AWS, PySpark, SQL.
- Years of experience required for each skill? 8-10 years.
- 8 years of data engineering experience with a proven track record of designing and implementing scalable ingestion frameworks.
- 6 years of hands-on experience on the AWS platform using Redshift, S3, DynamoDB, EMR Serverless, Lambda, Step Functions, etc. (a minimal Lambda/Step Functions sketch follows this list).
- Proficiency in ETL development and design using Python, PySpark, and big data frameworks (Hadoop/Spark); see the PySpark sketch after this list.
- Strong SQL knowledge and RDBMS experience with any recognized database (Postgres, Redshift, MySQL, etc.).
- Deep understanding of data modeling, ETL processes and data warehousing concepts.
- Excellent written and verbal communication skills with the ability to explain complex technical concepts to non-technical stakeholders.
- Experience working with Agile methodology and Scrum processes.
- Ability to collaborate with colleagues on development best practices and data application issues.
- Excellent problem-solving skills and the ability to work independently as well as lead junior developers. Completes other duties as assigned.
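To ground the PySpark/SQL expectations above, here is a minimal sketch of the kind of ETL work this role involves. It is illustrative only: the bucket paths, view name, and schema (`order_id`, `order_ts`, `amount`) are hypothetical assumptions, not an actual pipeline.

```python
# A minimal PySpark ingestion sketch; all paths and names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-ingest").getOrCreate()

# Extract: read raw JSON events from a (hypothetical) S3 landing zone.
raw = spark.read.json("s3://example-landing-zone/orders/")

# Transform: deduplicate, type, and filter -- routine ETL cleansing steps.
orders = (
    raw.dropDuplicates(["order_id"])
       .withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("amount", F.col("amount").cast("decimal(12,2)"))
       .filter(F.col("amount") > 0)
)

# Spark SQL works on the same data: register a view and aggregate.
orders.createOrReplaceTempView("orders")
daily = spark.sql("""
    SELECT CAST(date_trunc('day', order_ts) AS DATE) AS order_date,
           COUNT(*)                                  AS order_count,
           SUM(amount)                               AS revenue
    FROM orders
    GROUP BY 1
""")

# Load: write partitioned Parquet to a (hypothetical) curated zone.
(daily.write.mode("overwrite")
      .partitionBy("order_date")
      .parquet("s3://example-curated-zone/daily_orders/"))
```

Writing partitioned Parquet to a curated zone is a common pattern for downstream Redshift Spectrum or warehouse loads.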
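Likewise, a hedged sketch of the AWS orchestration side: a Lambda handler that starts a Step Functions execution for each new S3 object. The state machine ARN, environment variable name, and event wiring are assumptions for illustration, not a known deployment.

```python
# A minimal AWS Lambda handler sketch; the state machine ARN and env var
# name are hypothetical, and the S3-to-Lambda trigger is assumed to exist.
import json
import os

import boto3

sfn = boto3.client("stepfunctions")

# Hypothetical env var; in practice set via IaC (CloudFormation/Terraform).
STATE_MACHINE_ARN = os.environ["INGEST_STATE_MACHINE_ARN"]


def handler(event, context):
    """Fan each S3 ObjectCreated record out to a Step Functions execution."""
    records = event.get("Records", [])
    for record in records:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        sfn.start_execution(
            stateMachineArn=STATE_MACHINE_ARN,
            input=json.dumps({"bucket": bucket, "key": key}),
        )
    return {"statusCode": 200, "launched": len(records)}
```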