Columbus, OH, USA

Description

We are looking for a skilled and innovative AWS Data Engineer to join our team. In this role, you will design, build, and maintain scalable data pipelines and infrastructure on AWS to support data-driven decision-making across the business. You will collaborate closely with data analysts, data scientists, and other engineers to ensure efficient data delivery and a seamless workflow.

Requirements

  • Bachelor’s degree in Computer Science, Engineering, or a related field.
  • 3+ years of experience as a Data Engineer, with expertise in AWS services (e.g., S3, Glue, Redshift, Lambda, EMR, Kinesis).
  • Strong proficiency in SQL and programming languages like Python or Scala.
  • Hands-on experience with ETL tools and frameworks.
  • Solid understanding of data modeling, data warehousing, and big data processing.
  • Experience with infrastructure-as-code tools (e.g., Terraform, CloudFormation) is a plus.
  • Familiarity with DevOps practices and CI/CD pipelines.
  • Excellent problem-solving and analytical skills.
  • Strong communication and collaboration abilities.

Bonuses

  • Performance Bonuses: Quarterly and annual bonuses tied to individual and team achievements.
  • Referral Bonuses: Attractive incentives for referring qualified candidates who join the company.
  • Spot Bonuses: Rewards for outstanding contributions or exceptional performance on projects.
  • Sign-On Bonus: Offered to highly qualified candidates upon joining.

Benefits

  • Competitive Salary: Based on experience and skills.
  • Bonuses: Quarterly and annual performance-based bonuses.
  • Health Benefits: Comprehensive health, dental, and vision coverage.
  • Retirement Plans: 401(k) with company match.
  • Paid Time Off: Generous vacation, sick leave, and holidays.
  • Professional Development: Access to training programs, certifications, and industry events.
  • Flexible Work Options: Remote work opportunities.
  • Employee Wellness: Wellness programs and fitness membership discounts.
  • Referral Bonus: Attractive incentives for successful referrals.

Responsibilities

  • Design and implement robust data pipelines and workflows on AWS.
  • Develop, test, and deploy ETL (Extract, Transform, Load) processes that handle large datasets efficiently.
  • Collaborate with cross-functional teams to gather and analyze requirements, ensuring the delivery of high-quality data solutions.
  • Optimize and maintain data infrastructure for performance, scalability, and cost-efficiency.
  • Build and manage data lakes, warehouses, and streaming data solutions on AWS.
  • Implement security best practices to protect sensitive data.
  • Monitor data systems and address any issues to ensure smooth operations.
  • Stay up-to-date with the latest AWS services and data engineering technologies to propose innovative solutions.