Principal Software Engineer
Omnitracs
JOB SUMMARY
We're looking for a pragmatic, hands-on, principal-level data engineer who gets things done. You'll spend significant time building data pipelines and infrastructure while helping elevate the technical skills of the broader team. This role is ideal for someone who thrives on modernizing legacy data systems, leverages AI-assisted development tools to accelerate delivery, and isn't afraid to roll up their sleeves to architect and implement scalable data solutions. You'll balance individual contribution with mentorship, helping less experienced engineers grow their craft through practical guidance and code review.
WHAT YOU'LL DO
Build and Ship
Design and implement scalable data pipelines for both batch and real-time processing
Modernize legacy on-premises data systems and migrate them to cloud-based PaaS solutions
Leverage AI-powered development tools (GitHub Copilot, ChatGPT, Claude, etc.) to accelerate feature development and data engineering workflows
Architect end-to-end data solutions from collection and storage through modeling and consumption
Guide multi-terabyte database migrations across different data-tier technologies
Build data-intensive applications with APIs and streaming data pipelines
Lead Through Example
Mentor data engineers through pairing sessions, code reviews, and practical guidance
Share best practices for AI-assisted development and modern data engineering tooling
Establish coding standards, technical documentation, and architectural patterns
Foster a culture of continuous learning and technical excellence
Technical Execution
Create conceptual and logical data models, including source-to-target mappings and data lineage
Design solutions to prepare and transform data for multi-platform consumption
Implement data quality controls and monitoring systems
Optimize data models and database designs for performance and reliability
Build and maintain cloud-based data infrastructure (AWS/Azure/GCP)
Ensure data security, compliance, and governance standards are met
Evaluate and recommend new data technologies that solve real problems
REQUIRED QUALIFICATIONS
Experience
15+ years of professional data engineering experience
5+ years in a technical leadership position
Proven track record of modernizing legacy data systems and cloud migrations
Strong experience with AI-assisted development tools and workflows
History of mentoring and developing junior engineers
Hands-on technical involvement (not purely managerial)
Core Technical Skills
Expert-level proficiency in Python and SQL
Deep experience with data warehousing solutions (Snowflake, Redshift, BigQuery, or Fabric)
Strong background in ETL/ELT design and optimization (Airflow, dbt, Glue, Informatica, or Talend)
Hands-on experience with big data technologies (Spark, Hadoop, Kafka)
Proficiency with cloud platforms: AWS (S3, RDS, Lambda, Glue), Azure (Data Factory, Synapse, Databricks), or GCP (BigQuery, Dataflow)
Experience with both relational (PostgreSQL, MySQL, SQL Server, Oracle) and NoSQL databases (MongoDB, Cassandra, DynamoDB)
Solid understanding of containerization (Docker) and orchestration (Kubernetes)
Data Architecture Expertise
Data modeling techniques (dimensional, relational, NoSQL, ER modeling, UML)
Database design and performance optimization
Batch and real-time pipeline architecture
Data lake and lakehouse implementation
Understanding of data enrichment, transformation, movement, security, and integrity
Experience with data mesh/data fabric architectures
Migration & Cloud Experience
Subject matter expertise in migrating on-prem applications to cloud PaaS
Familiarity with AWS migration services (DMS, SMS)
Experience creating migration patterns for SQL Server, MySQL, Oracle, and IBM DB2
Soft Skills
Bias toward action and delivering working solutions
Strong communication skills with both technical and non-technical stakeholders
Ability to translate business requirements into technical solutions
Ability to manage multiple priorities and deliver results independently
Collaborative mindset with genuine interest in helping others grow
NICE TO HAVE
Master's degree in Computer Science, Engineering, or related field
Experience with Java development
Familiarity with Infrastructure as Code (Terraform, CloudFormation)
Knowledge of CI/CD tools (Jenkins, GitHub Actions, GitLab CI)
Experience with monitoring and observability tools (Datadog, Grafana)
Understanding of microservices architecture
Time-series database experience (InfluxDB, TimescaleDB)
Knowledge of Apache Iceberg or Hive
EDUCATION
Bachelor's degree in Computer Science, Engineering, or related technical field (Master's preferred)
WHAT SUCCESS LOOKS LIKE
You're consistently shipping data pipelines and infrastructure improvements
The team's data quality and engineering velocity are improving
Legacy on-prem systems are being systematically migrated to the cloud
Engineers you mentor are leveling up their skills
You're known as the person who makes things happen
Data architecture decisions are sound, scalable, and aligned with business needs
EQUAL OPPORTUNITY EMPLOYER
SOLERA HOLDINGS, INC., AND ITS US SUBSIDIARIES (TOGETHER, SOLERA) IS AN EQUAL EMPLOYMENT OPPORTUNITY EMPLOYER. THE FIRM'S POLICY IS NOT TO DISCRIMINATE AGAINST ANY APPLICANT OR EMPLOYEE BASED ON RACE, COLOR, RELIGION, NATIONAL ORIGIN, GENDER, AGE, SEXUAL ORIENTATION, GENDER IDENTITY OR EXPRESSION, MARITAL STATUS, MENTAL OR PHYSICAL DISABILITY, GENETIC INFORMATION, OR ANY OTHER BASIS PROTECTED BY APPLICABLE LAW. THE FIRM ALSO PROHIBITS HARASSMENT OF APPLICANTS OR EMPLOYEES BASED ON ANY OF THESE PROTECTED CATEGORIES.