Amazon is actively conducting a recruitment drive for the role of Data Engineer I. This is a prime opportunity for B.E / B.Tech / B.Sc graduates with 1+ years of experience to join the International Seller Services (ISS) Central Analytics team in Bengaluru or Hyderabad. If you are passionate about Big Data, AWS, and ETL pipelines, check the details below and apply immediately.
Job Overview
- Role: Data Engineer I (International Seller Growth)
- Location: Bengaluru / Hyderabad
- Experience: 1+ Years
- Qualification: B.E / B.Tech / B.Sc
- Key Skills: SQL, ETL, AWS, Big Data (Spark/Hive), Python
- Job Type: Full-time
- Salary: Best in Industry
Job Description
For the Amazon Recruitment 2026 drive, the team is seeking a smart and highly motivated Data Engineer. You will provide technical leadership and build end-to-end analytical solutions that are highly available, scalable, stable, and secure.
You will work with huge datasets and architect advanced data ecosystems. The role involves implementing data ingestion routines that follow best practices in data modeling and ETL/ELT, leveraging AWS technologies.
Roles and Responsibilities
As a Data Engineer I at Amazon, your key responsibilities will include:
- Pipeline Development: Designing, implementing, and operating large-scale, high-volume data structures for analytics and data science.
- Data Ingestion: Implementing data ingestion routines using best practices in ETL/ELT.
- AWS Integration: Integrating data systems with AWS tools to support customer use cases.
- Optimization: Identifying opportunities for improvement in existing data solutions and adopting best practices in data integrity.
- Collaboration: Collaborating with engineers to translate business requirements into robust, scalable solutions.
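To give a flavour of the pipeline and ingestion work described above, here is a minimal extract-transform-load sketch in plain Python. It is purely illustrative: the seller data, table name, and sales threshold are invented for this example, and SQLite stands in for whatever warehouse engine the team actually uses.

```python
import csv
import io
import sqlite3

# Hypothetical raw feed, standing in for an extracted CSV export.
RAW_CSV = """seller_id,country,gross_sales
S1,IN,1200.50
S2,DE,980.00
S3,IN,430.25
"""

def extract(raw: str):
    """Extract: parse CSV text into a list of row dicts."""
    return list(csv.DictReader(io.StringIO(raw)))

def transform(rows):
    """Transform: cast types and keep only sellers above a threshold."""
    out = []
    for r in rows:
        sales = float(r["gross_sales"])
        if sales >= 500:  # illustrative business rule
            out.append((r["seller_id"], r["country"], sales))
    return out

def load(rows, conn):
    """Load: write the cleaned rows into a warehouse-style table."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS seller_sales "
        "(seller_id TEXT, country TEXT, gross_sales REAL)"
    )
    conn.executemany("INSERT INTO seller_sales VALUES (?, ?, ?)", rows)
    conn.commit()

conn = sqlite3.connect(":memory:")
load(transform(extract(RAW_CSV)), conn)
total = conn.execute("SELECT COUNT(*) FROM seller_sales").fetchone()[0]
```

In a production role the same extract/transform/load split would typically be expressed with tools like Spark, Glue, or EMR rather than hand-rolled functions, but the shape of the work is the same.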
Skills and Eligibility Criteria
To be eligible for Amazon Recruitment 2026, candidates must meet the following criteria:
- Educational Background: B.E / B.Tech / B.Sc in Computer Science or related fields.
- Experience: 1+ years of data engineering experience.
- Mandatory Technical Skills:
- Experience with Data Modeling, Warehousing, and building ETL pipelines.
- Proficiency in SQL and query optimization for large-scale datasets.
- Experience with scripting languages like Python or KornShell.
- Preferred Qualifications:
- Experience with Big Data technologies: Hadoop, Hive, Spark, EMR.
- Knowledge of ETL tools (Informatica, Glue, etc.).
- Familiarity with AWS ecosystem.
