Data Engineer - Boston

Oracle Cloud Infrastructure

Burlington, MA, USA
USD 25.48-60.63 / hour + Equity
Posted on Feb 26, 2026

Data Migration Engineer

About Oracle FSGIU - Finergy:

The Finergy division within Oracle FSGIU is dedicated to the Banking, Financial Services, and Insurance (BFSI) sector. We offer deep industry knowledge and expertise to address the complex financial needs of our clients. With proven methodologies that accelerate deployment and personalization tools that create loyal customers, Finergy has established itself as a leading provider of end-to-end banking solutions. Our single platform for a wide range of banking services enhances operational efficiency, and our expert consulting services ensure technology aligns with our clients' business goals.

Job Summary:

We are seeking a skilled Senior Data Migration Engineer with expertise in Python, PySpark, Databricks, SQL and AWS to lead and execute complex data migration projects. The ideal candidate will design, develop, and implement data migration solutions to move large volumes of data from legacy systems to modern cloud-based platforms, ensuring data integrity, accuracy, and minimal downtime.
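For illustration only (not part of the posting): "ensuring data integrity" in a migration typically means reconciling the source and target after the load. A minimal sketch of that idea, using only the Python standard library with SQLite standing in for the legacy and target stores (a real project would run this against the actual databases, likely via PySpark):

```python
import hashlib
import sqlite3


def table_checksum(conn: sqlite3.Connection, table: str) -> tuple[int, str]:
    """Return (row_count, order-independent digest) for a table.

    Each row is hashed individually and the digests are XOR-combined,
    so the result does not depend on the order rows come back in.
    """
    count = 0
    combined = 0
    for row in conn.execute(f"SELECT * FROM {table}"):
        digest = hashlib.sha256(repr(row).encode()).digest()
        combined ^= int.from_bytes(digest, "big")
        count += 1
    return count, f"{combined:064x}"


# Demo: a "legacy" and a "target" store holding the same rows
# (in a different physical order).
legacy = sqlite3.connect(":memory:")
target = sqlite3.connect(":memory:")
for db in (legacy, target):
    db.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance REAL)")
legacy.executemany("INSERT INTO accounts VALUES (?, ?)", [(1, 10.0), (2, 20.5)])
target.executemany("INSERT INTO accounts VALUES (?, ?)", [(2, 20.5), (1, 10.0)])
legacy.commit()
target.commit()

src = table_checksum(legacy, "accounts")
dst = table_checksum(target, "accounts")
print("counts match:", src[0] == dst[0])
print("checksums match:", src[1] == dst[1])
```

The table names and schema here are invented for the demo; the point is the shape of a count-plus-checksum reconciliation, which passes only when every migrated row survived intact.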

Location: On-site Boston, MA

No Visa sponsorship available for this role.


Only Oracle brings together the data, infrastructure, applications, and expertise to power everything from industry innovations to life-saving care. And with AI embedded across our products and services, we help customers turn that promise into a better future for all. Discover your potential at a company leading the way in AI and cloud solutions that impact billions of lives.

True innovation starts when everyone is empowered to contribute. That’s why we’re committed to growing a workforce that promotes opportunities for all with competitive benefits that support our people with flexible medical, life insurance, and retirement options. We also encourage employees to give back to their communities through our volunteer programs.

We’re committed to including people with disabilities at all stages of the employment process. If you require accessibility assistance or accommodation for a disability at any point, let us know by emailing accommodation-request_mb@oracle.com or by calling 1-888-404-2494 in the United States.

Oracle is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veterans’ status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.


Disclaimer:

Certain US customer or client-facing roles may be required to comply with applicable requirements, such as immunization and occupational health mandates.

Range and benefit information provided in this posting is specific to the stated locations only.

US: Hiring Range in USD from $25.48 to $60.63 per hour; from $53,000 to $126,100 per annum. May be eligible for bonus and equity.

Oracle maintains broad salary ranges for its roles in order to account for variations in knowledge, skills, experience, market conditions and locations, as well as reflect Oracle's differing products, industries and lines of business.
Candidates are typically placed into the range based on the preceding factors as well as internal peer equity.

Oracle US offers a comprehensive benefits package which includes the following:
1. Medical, dental, and vision insurance, including expert medical opinion
2. Short-term and long-term disability
3. Life insurance and AD&D
4. Supplemental life insurance (Employee/Spouse/Child)
5. Health care and dependent care Flexible Spending Accounts
6. Pre-tax commuter and parking benefits
7. 401(k) Savings and Investment Plan with company match
8. Paid time off: Flexible Vacation is provided to all eligible employees assigned to a salaried (non-overtime eligible) position. Accrued Vacation is provided to all other employees eligible for vacation benefits. For employees working at least 35 hours per week, the vacation accrual rate is 13 days annually for the first three years of employment and 18 days annually for subsequent years of employment. Vacation accrual is prorated for employees working between 20 and 34 hours per week. Employees working fewer than 20 hours per week are not eligible for vacation.
9. 11 paid holidays
10. Paid sick leave: 72 hours of paid sick leave upon date of hire. Refreshes each calendar year. Unused balance will carry over each year up to a maximum cap of 112 hours.
11. Paid parental leave
12. Adoption assistance
13. Employee Stock Purchase Plan
14. Financial planning and group legal
15. Voluntary benefits including auto, homeowner and pet insurance

Applications for this role will generally be accepted for at least three calendar days from the posting date, or as long as the job remains posted.

Career Level - IC2



Key Responsibilities:

  • Design, develop, and maintain scalable ETL/ELT pipelines utilizing Python, PySpark, and SQL.
  • Implement workflows and data processing using Databricks in a cloud environment (AWS).
  • Collaborate with data scientists, analysts, and business stakeholders to understand requirements and deliver data solutions.
  • Ensure optimal performance, reliability, and scalability of data pipelines and infrastructure.
  • Monitor, troubleshoot, and resolve issues with data workflows and data quality.
  • Leverage AWS services (e.g., S3, Lambda, Glue, Redshift) for efficient data storage, processing, and orchestration.
  • Document data workflows, standards, and best practices.
  • Maintain data security and compliance with company and regulatory policies.
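For illustration only (not part of the posting): the first responsibility above, an ETL pipeline in Python and SQL, follows an extract → transform → load shape. A minimal standard-library sketch, with an in-memory CSV standing in for an S3 extract and SQLite standing in for the warehouse (a production pipeline would use PySpark on Databricks with the AWS services listed above):

```python
import csv
import io
import sqlite3

# --- Extract: read raw records (an in-memory CSV stands in for S3). ---
raw = io.StringIO("id,amount,currency\n1,100.00,usd\n2,250.50,usd\n3,,usd\n")
rows = list(csv.DictReader(raw))

# --- Transform: drop incomplete records, normalize types and casing. ---
clean = [
    (int(r["id"]), float(r["amount"]), r["currency"].upper())
    for r in rows
    if r["amount"]  # simple data-quality rule: amount must be present
]

# --- Load: write the cleaned rows into the warehouse table. ---
warehouse = sqlite3.connect(":memory:")
warehouse.execute(
    "CREATE TABLE payments (id INTEGER PRIMARY KEY, amount REAL, currency TEXT)"
)
warehouse.executemany("INSERT INTO payments VALUES (?, ?, ?)", clean)
warehouse.commit()

total = warehouse.execute("SELECT COUNT(*), SUM(amount) FROM payments").fetchone()
print(total)  # 2 rows loaded, 350.50 total
```

The table and column names are invented for the demo; the transform step is where PySpark would replace the list comprehension once data volumes outgrow a single machine.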

Qualifications:

  • Education: Bachelor’s degree in Computer Science, Engineering, Data Science, or a related field. Advanced degrees are a plus.
  • 5+ years of relevant experience
  • Location: Boston, MA (US) only
  • Demonstrable experience programming in Python for data engineering tasks.
  • Proven expertise using PySpark for large-scale distributed data processing.
  • Proficient in writing complex SQL queries for data transformation and analysis.
  • Hands-on experience with Databricks for collaborative data engineering development.
  • Experience working with AWS data and analytics services (such as S3, Glue, Redshift, Lambda, EMR).
  • Strong understanding of data modeling, ETL principles, and modern data warehousing concepts.
  • Excellent problem-solving and communication skills.

Preferred Qualifications:

  • Experience with CI/CD tools and practices for data pipeline deployment.
  • Exposure to data governance, data cataloging, and data lineage tools.
  • Familiarity with DevOps practices in a cloud-native environment.
  • Industry certifications in AWS, Databricks, or relevant technologies.

Soft Skills:

  • Strong problem-solving and analytical skills.
  • Excellent communication and collaboration abilities.
  • Ability to work in a fast-paced, agile environment.