Knowledge Graph & Ontology Engineer
At Fluor, we are proud to design and build projects and careers. We are committed to fostering a welcoming and collaborative work environment that encourages big-picture thinking, brings out the best in our employees, and helps us develop innovative solutions that contribute to building a better world together. If this sounds like a culture you would like to work in, you’re invited to apply for this role.
Job Description
Role Overview
Fluor’s AI Office is seeking a Knowledge Graph & Ontology Engineer to design and implement semantic data foundations that accelerate AI-enabled delivery across EPC projects and operations. In this role, you will build ontologies, knowledge graphs, and semantic integration pipelines that connect engineering, procurement, construction, and project controls data into trusted, reusable assets. You will partner with domain experts (engineering disciplines, supply chain, construction, HSE, quality) and data/platform teams to enable search, reasoning, lineage, and AI applications (e.g., copilots, recommender systems, document intelligence) at scale. The ideal candidate thrives in complex, multi-domain environments and can translate EPC concepts into practical, performant graph solutions.
Key Responsibilities
- Design and evolve EPC-domain ontologies (e.g., equipment, tags, P&IDs, materials, vendor docs, schedules, assets) using industry best practices and governance standards.
- Develop and maintain enterprise knowledge graphs using RDF/OWL and/or property graph paradigms to unify structured and unstructured data across Fluor business lines.
- Create semantic data models, taxonomies, and controlled vocabularies to improve metadata quality, discoverability, and interoperability across systems.
- Implement ingestion and semantic mapping pipelines (ETL/ELT) from engineering and enterprise sources (e.g., document repositories, BIM, EAM/ERP, procurement, project controls) into graph stores.
- Define entity resolution and relationship strategies (matching, deduplication, golden records) for critical EPC entities such as equipment, lines, instruments, materials, vendors, and assets.
- Enable semantic search and retrieval (SPARQL/Gremlin/Cypher, vector + graph hybrid patterns) to support AI copilots and knowledge-driven applications.
- Establish standards for data governance and stewardship including naming conventions, versioning, lineage, and ontology change management.
- Partner with engineering SMEs to validate models against real project use cases (FEED, EPC execution, commissioning/turnover, operations) and ensure fitness for purpose.
- Optimize graph performance and scalability (indexing, query tuning, partitioning, caching, incremental updates) for enterprise-grade workloads.
- Develop documentation and enablement (model specs, data dictionaries, example queries, best-practice guides) to accelerate adoption by teams globally.
- Support integration with analytics/AI platforms (feature generation, graph embeddings, RAG pipelines, MLOps interfaces) while ensuring responsible data usage.
- Contribute to roadmap planning for semantic capabilities, tooling selection, and reusable accelerators across Fluor’s EPC portfolio.
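The entity-resolution responsibility above (matching, deduplication, golden records for tags and equipment) can be sketched in a few lines of Python. This is an illustrative toy only: the tag formats, source-system names, and field-merge rule are hypothetical, and a production pipeline would use a proper MDM/matching engine.

```python
import re
from collections import defaultdict

def normalize_tag(tag: str) -> str:
    """Canonicalize an equipment tag: uppercase, drop separators and whitespace."""
    return re.sub(r"[\s\-_/]", "", tag.upper())

def build_golden_records(records: list[dict]) -> dict[str, dict]:
    """Group source records by normalized tag and merge each group into one
    'golden' record, taking the first non-empty value per descriptive field."""
    groups = defaultdict(list)
    for rec in records:
        groups[normalize_tag(rec["tag"])].append(rec)
    golden = {}
    for key, recs in groups.items():
        merged = {"tag": key, "sources": sorted(r["source"] for r in recs)}
        for field in ("description", "vendor"):
            merged[field] = next((r.get(field) for r in recs if r.get(field)), None)
        golden[key] = merged
    return golden

# Hypothetical records for the same pump arriving from two systems.
records = [
    {"tag": "P-101A", "source": "EAM", "description": "Feed pump A"},
    {"tag": "p 101a", "source": "P&ID", "vendor": "Acme"},
    {"tag": "E-201", "source": "EAM", "description": "Exchanger"},
]
golden = build_golden_records(records)
```

Real EPC matching would layer fuzzy string similarity and discipline-specific tag grammars on top of this exact-key grouping.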
Basic Job Requirements
- Bachelor’s degree in Computer Science, Data Engineering, Information Systems, or a related field (or equivalent practical experience).
- 3+ years building knowledge graphs, ontologies, or semantic data solutions in production environments.
- Hands-on experience with RDF/OWL, SPARQL, and ontology engineering practices (e.g., SHACL, reasoning, versioning).
- Experience with graph databases and query languages (e.g., Neo4j/Cypher, Amazon Neptune/Gremlin, RDF triplestores).
- Strong programming skills in Python (and/or Java/Scala) with proven data pipeline development experience.
- Understanding of data modeling (conceptual/logical/physical), metadata management, and data quality.
- Ability to translate business/domain concepts into formal semantic representations and implement them effectively.
- Strong communication skills—able to collaborate with both technical teams and EPC domain stakeholders.
- Education: Master’s degree in Computer Science, Data Science, AI, or Information Management preferred (Bachelor’s required, as above).
- Certifications: cloud (Azure/AWS), data management (e.g., DAMA), or graph/semantic web certifications are a plus.
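The RDF/SPARQL skills listed above reduce, at their core, to matching variable patterns against subject–predicate–object triples. A minimal sketch of that idea in plain Python (the triples, prefixes, and tag names are invented for illustration; real work would use a triplestore or a library such as rdflib):

```python
def match(triples, pattern):
    """Yield variable bindings for one (subject, predicate, object) pattern.
    Pattern terms starting with '?' are variables; others must match exactly."""
    for triple in triples:
        binding = {}
        ok = True
        for term, value in zip(pattern, triple):
            if term.startswith("?"):
                binding[term] = value
            elif term != value:
                ok = False
                break
        if ok:
            yield binding

# Hypothetical EPC facts: a pump, its class, and the line it sits on.
triples = [
    ("tag:P-101A", "rdf:type", "epc:Pump"),
    ("tag:P-101A", "epc:onLine", "line:L-4501"),
    ("tag:E-201", "rdf:type", "epc:Exchanger"),
]

# "Which tags are pumps?" — analogous to a SPARQL basic graph pattern.
pumps = [b["?t"] for b in match(triples, ("?t", "rdf:type", "epc:Pump"))]
```

SPARQL engines generalize this to joins across many patterns, plus inference over OWL axioms and SHACL validation of the shapes the data must satisfy.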
Other Job Requirements
Preferred Qualifications
- Experience in EPC, engineering, construction, or other asset-intensive industries (energy, chemicals, mining & metals, infrastructure, data centers, etc.).
- Familiarity with industry standards such as ISO 15926, CFiHOS, IEC/ISA naming conventions, or engineering tag conventions.
- Experience integrating data from systems commonly used in EPC environments (e.g., AVEVA/SmartPlant, Bentley, SAP, Maximo, Primavera P6, Aconex, EDMS).
- Knowledge of entity resolution/MDM, knowledge graph + vector search patterns, and RAG architectures.
- Exposure to Azure or AWS data/AI services and CI/CD for data products (e.g., Git, DevOps pipelines).
- Prior work with governance frameworks and stewardship operating models for enterprise data products.
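The "knowledge graph + vector search" and RAG patterns named in the preferred qualifications combine two retrieval steps: rank chunks by embedding similarity, then expand the hits along graph edges. A toy sketch, with invented 2-d "embeddings" and document names standing in for a real embedding model and ANN index:

```python
import math

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

# Hypothetical document chunks with toy embeddings.
chunks = {
    "spec-P-101A": [0.9, 0.1],
    "po-vendor-acme": [0.2, 0.8],
    "datasheet-E-201": [0.7, 0.3],
}
# Graph edges linking each chunk to related documents (e.g., pump spec -> its PO).
edges = {
    "spec-P-101A": ["po-vendor-acme"],
    "po-vendor-acme": [],
    "datasheet-E-201": [],
}

def hybrid_retrieve(query_vec, k=1):
    """Vector step: top-k chunks by cosine similarity.
    Graph step: expand each hit with its graph neighbours."""
    ranked = sorted(chunks, key=lambda c: cosine(query_vec, chunks[c]), reverse=True)
    expanded = ranked[:k]
    for hit in list(expanded):
        for neighbour in edges.get(hit, []):
            if neighbour not in expanded:
                expanded.append(neighbour)
    return expanded

result = hybrid_retrieve([1.0, 0.0], k=1)
```

The graph expansion is what lets a copilot answer with context a pure vector search would miss, such as the vendor PO attached to the pump spec the query actually matched.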
To Be Considered, Candidates:
Must be authorized to work in the country where the position is located.
We are an equal opportunity employer. All qualified individuals will receive consideration for employment without regard to race, color, age, sex, sexual orientation, gender identity, religion, national origin, disability, veteran status, genetic information, or any other criteria protected by governing law.