Data Engineer (Lakehouse)
At Fluor, we are proud to design and build projects and careers. We are committed to fostering a welcoming and collaborative work environment that encourages big-picture thinking, brings out the best in our employees, and helps us develop innovative solutions that contribute to building a better world together. If this sounds like a culture you would like to work in, you’re invited to apply for this role.
Job Description
Role Overview
The Data Engineer – Lakehouse will design, build, and optimize scalable data platforms supporting Fluor’s global EPC projects. This role focuses on implementing modern lakehouse architectures using Databricks and Microsoft Fabric, enabling analytics, reporting, and advanced use cases across engineering, project controls, supply chain, and operations.
Key Responsibilities
- Design, develop, and maintain Lakehouse architectures using Medallion (Bronze, Silver, Gold) patterns
- Build and optimize data pipelines using Databricks (Spark, Delta Lake) and Microsoft Fabric
- Ingest, transform, and curate structured and semi‑structured data from enterprise systems
- Implement performance tuning techniques for Spark jobs, Delta tables, and Fabric workloads
- Ensure data quality, reliability, and lineage across analytical layers
- Collaborate with analytics, reporting, and business teams to deliver trusted datasets
- Implement best practices for data partitioning, indexing, and storage optimization
- Support CI/CD, monitoring, and operationalization of data pipelines
- Adhere to Fluor’s Quality and Information Security standards
- Document technical designs, data models, and operational procedures
- Provide technical guidance and support to junior engineers as required
Basic Job Requirements
- Bachelor’s degree in Computer Science, Information Systems, Engineering, or related field
- 5+ years of experience in data engineering or data platform development in enterprise environments
- Strong hands‑on experience with Databricks (Apache Spark, Delta Lake)
- Experience with Microsoft Fabric (Lakehouse, Data Pipelines, OneLake)
- Proven expertise in Medallion Architecture implementation
- Strong SQL and data modeling skills
- Experience with performance tuning of large‑scale data pipelines
Other Job Requirements
Preferred Qualifications
- Experience supporting EPC, engineering, construction, or industrial projects
- Exposure to Azure data services (ADLS, Synapse, Data Factory)
- Knowledge of Power BI semantic models and analytics consumption patterns
- Experience with DevOps / CI‑CD for data platforms
- Familiarity with data governance, security, and compliance practices
- Azure or Databricks certifications
To Be Considered
Candidates must be authorized to work in the country where the position is located.
We are an equal opportunity employer. All qualified individuals will receive consideration for employment without regard to race, color, age, sex, sexual orientation, gender identity, religion, national origin, disability, veteran status, genetic information, or any other criteria protected by governing law.