This Portal gives an overview of ISO 15926 and then points to places where detailed information about all (or most) aspects can be found.
The scope of ISO 15926 is "Integration of life-cycle data for process plants, including oil and gas production facilities".
Of course that is not a goal in itself, but it serves two main purposes:
- interoperability, defined as:
"a characteristic of a product or system, whose interfaces are completely understood, to work with other products or systems, present or future, in either implementation or access, without any restrictions" (definition by AFUL).
This allows applications and systems, used globally during the life of a facility, to share information.
- archiving, collecting the information required for:
- interoperability during day-to-day activities
- operations optimization
- the next revamp
- knowledge mining
- root cause analysis
The various aspects, shown on this diagram, will be detailed in order of appearance:
The "Part 2 Upper Ontology" refers to the data model defined in ISO 15926-2, which will, in the next version of Part 2, be extended with entity types that detail PhysicalObjects. This model is very generic and strongly typed. For example, it defines PhysicalObject but not Pump; that comes later, in the General concepts. All entity types defined here are specializations of Thing. And no matter how far we proceed in this Portal, that line of descent is maintained throughout.
A special feature of the Part 2 data model is that it has 129 generic Relationship classes that, together with 93 generic classes, form the basis of its Upper Ontology. See here for an overview of those Relationships.
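The "line of descent from Thing" can be sketched in a few lines. This is a minimal illustration, not the normative Part 2 lattice: the entity type names below are a small illustrative subset, and the parent assignments are simplified.

```python
# Minimal sketch of a Part 2-style entity type hierarchy: every entity
# type, including the Relationship types, ultimately specializes Thing.
# The subset and parent links below are illustrative, not normative.

SUPERTYPE = {                      # child -> parent (illustrative subset)
    "PossibleIndividual": "Thing",
    "PhysicalObject": "PossibleIndividual",
    "Relationship": "Thing",
    "Specialization": "Relationship",
    "Classification": "Relationship",
}

def line_of_descent(entity_type):
    """Walk the supertype chain upwards until the root, Thing."""
    chain = [entity_type]
    while chain[-1] != "Thing":
        chain.append(SUPERTYPE[chain[-1]])
    return chain

print(line_of_descent("PhysicalObject"))
# ['PhysicalObject', 'PossibleIndividual', 'Thing']
```

Whatever entity type you start from, the walk terminates at Thing, which is the point the text makes about the line of descent being maintained throughout.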
The above generic model is extended with, still generic, instances of the Part 2 entity types. For example, CENTRIFUGAL PUMP is an instance of ClassOfInanimatePhysicalObject. And it has many specializations that together form a class hierarchy or taxonomy. All this is done in the RDL - Reference Data Library. Try the first set of specializations of PUMP here and all specializations here (this and others can be found at http://15926.org/RDL2/views/).
Each of the 20,000 classes in the ISO 15926-4 RDL, plus 20,000 extensions for industry standards, has a URI, accessible via the internet, and is typed with a Part 2 entity type. Pumps and other items are generically defined, so supplier-, standardization-body-, or project-specific pumps (or other object classes) are not defined in the RDL itself, but in extensions thereof. See here for further details.
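The structure of such reference data can be sketched as follows. The URIs below are placeholders, not the actual RDL identifiers, and the entries are an illustrative fragment of the taxonomy:

```python
# Sketch of RDL-style reference data: each class has a URI and is
# typed with a Part 2 entity type, and specializations link upwards
# to their superclass. URIs here are placeholders, not real RDL URIs.

RDL = {
    "PUMP": {
        "uri": "http://example.org/rdl/PUMP",
        "type": "ClassOfInanimatePhysicalObject",
        "superclass": None,
    },
    "CENTRIFUGAL PUMP": {
        "uri": "http://example.org/rdl/CENTRIFUGAL_PUMP",
        "type": "ClassOfInanimatePhysicalObject",
        "superclass": "PUMP",
    },
}

def specializations(name):
    """All RDL classes that directly specialize the given class."""
    return [c for c, d in RDL.items() if d["superclass"] == name]

print(specializations("PUMP"))   # ['CENTRIFUGAL PUMP']
```

Querying the real RDL endpoint works on the same principle, only over URIs instead of in-memory names.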
Extensions of the RDL can be made by anybody who has private classes that are further specializations of one or more of the above generic RDL classes, thus extending the taxonomy.
- Extensions of standardization bodies - see here
- Extensions of manufacturer/suppliers
- Extensions of owner/operators or groups thereof (e.g. CFIHOS)
- Extensions of EPC contractors
- Extensions for projects
Such specialization can also be indirect, as shown in the diagram here
At the bottom of such a taxonomy, which stretches from Thing downwards, sits either:
- a Requirements Class, usually defined with a technical specification
- a Product Class, as defined by
- a product specification of a manufacturer/supplier, or
- a product class that has been further specialized by configuration and pricing in a quotation.
A specialization of a class inherits everything from all its superclasses, so keep that in mind when extending a taxonomy.
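The cumulative effect of that inheritance can be made concrete. In this sketch the class names, properties, and the API 610 extension are illustrative examples, not actual RDL content:

```python
# Sketch: a specialization inherits everything from ALL its
# superclasses, so a property attached high in the taxonomy applies
# to every class below it. Names and properties are illustrative.

SUPERCLASS = {
    "CENTRIFUGAL PUMP": "PUMP",
    "API 610 PUMP": "CENTRIFUGAL PUMP",   # e.g. a standardization-body extension
}
PROPERTIES = {
    "PUMP": {"has capacity"},
    "CENTRIFUGAL PUMP": {"has impeller diameter"},
    "API 610 PUMP": {"complies with API 610"},
}

def inherited_properties(cls):
    """Collect the properties of a class and all its superclasses."""
    props = set()
    while cls is not None:
        props |= PROPERTIES.get(cls, set())
        cls = SUPERCLASS.get(cls)
    return props

print(sorted(inherited_properties("API 610 PUMP")))
# ['complies with API 610', 'has capacity', 'has impeller diameter']
```

This is exactly why a misplaced extension is costly: everything attached to the chosen superclass comes along, wanted or not.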
Integration model subset
All above classes are possible ingredients for Templates, as defined in ISO 15926-7 and implemented in line with ISO 15926-8. At present, 200+ templates have been specified in Template Specifications, which can be found here. Where Part 2 is the Upper Ontology, these Templates are Elementary Ontologies. Information, as represented by data elements in applications, can be mapped to instances of these Template classes. The template axiom defines whether such a mapping is valid. See SHACL
The result of mapping is a set of template instances.
Since template signatures refer to objects, it is essential that these objects have been declared as instances of the applicable Part 2 entity type(s). See here for more details.
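The declare-then-instantiate discipline can be sketched as below. The template name and role names are hypothetical, chosen only to illustrate the mechanism:

```python
# Sketch: objects must be declared with their Part 2 entity type
# before a template instance may refer to them. The template and
# role names below are hypothetical, not a normative Part 7 template.

declared = {}   # object id -> Part 2 entity type

def declare(obj, part2_type):
    declared[obj] = part2_type

def instantiate_template(template, **roles):
    """Refuse the mapping when a role filler was never declared."""
    for role, obj in roles.items():
        if obj not in declared:
            raise ValueError(f"{obj!r} (role {role}) has not been declared")
    return {"template": template, **roles}

declare("P-101", "InanimatePhysicalObject")
declare("CENTRIFUGAL PUMP", "ClassOfInanimatePhysicalObject")

inst = instantiate_template(
    "ClassifiedIdentificationOfPhysicalObject",   # hypothetical template
    hasClassified="P-101",
    hasClassifier="CENTRIFUGAL PUMP",
)
print(inst)
```

A real implementation would additionally evaluate the template axiom against the declared types; this sketch only enforces that the declarations exist.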
The problem with mapping is that there is a shortage of data engineers. So the concept of TIPs has been introduced as a mapping technology. A TIP signature calls for strings and numbers as these appear in the mapping source database or spreadsheet, making ETL easier in most cases. A wizard is under development to assist the mapping person.
TIPs can be specialized to make the selection of the proper TIP easier.
Once the processing unit for a data element has been made and tested, the actual mapping fetches the data element value, for example with an SQL query, and the template instance is generated.
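That fetch-and-generate step can be sketched with SQLite standing in for the source database. The table, column, URI scheme, and TIP/template names below are illustrative assumptions:

```python
# Sketch of the TIP-style mapping step: fetch a data element value
# with an SQL query and emit a template instance as triples.
# Table, column, URI, and template names are illustrative.
import sqlite3

con = sqlite3.connect(":memory:")        # stand-in for the source database
con.execute("CREATE TABLE equipment (tag TEXT, capacity_m3h REAL)")
con.execute("INSERT INTO equipment VALUES ('P-101', 250.0)")

tag, capacity = con.execute(
    "SELECT tag, capacity_m3h FROM equipment WHERE tag = 'P-101'"
).fetchone()

# One generated template instance, flattened to triples for upload
inst = f"ex:inst-{tag}"                          # hypothetical URI scheme
triples = [
    (inst, "rdf:type", "tpl:CapacityOfPump"),    # hypothetical TIP/template
    (inst, "tpl:hasPump", f"ex:{tag}"),
    (inst, "tpl:hasCapacityValue", str(capacity)),
    (inst, "tpl:hasScale", "rdl:CUBIC_METRE_PER_HOUR"),
]
for t in triples:
    print(t)
```

The point of the TIP signature is visible here: the query delivers plain strings and numbers, and the mapping code wraps them into triples without the mapper needing to know the underlying Part 7 machinery.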
Application model and data
The application model can be anything, as long as it has a defined structure and, preferably, a data dictionary of some sort. In that data dictionary a definition of the semantics of each data element is expected. That definition forms the basis for the mapping. Right now this is still a human activity, but with the rise of machine learning and AI it may become possible to let software select the applicable template class. (As an aside: the CFIHOS data model has a good data dictionary.)
Adapter - export side
Each ISO 15926-compliant application shall have an ISO 15926 Adapter. That Adapter has an export function and an import function, both of which use the mapping software described above.
The exported ISO 15926-8 file shall be validated before being uploaded to a triple store. For this, software implementing the W3C Recommendation Shapes Constraint Language (SHACL) shall be used. See also here
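To give a feel for what such validation checks, here is a pure-Python sketch of a single SHACL-style "minCount 1" constraint. Real validation should of course use a SHACL processor; the class and property names are illustrative:

```python
# Pure-Python sketch of what a SHACL "minCount 1" constraint does:
# every instance of a target class must carry the required property.
# This only mimics one constraint type; SHACL itself is far richer.

triples = [
    ("ex:P-101", "rdf:type", "rdl:CENTRIFUGAL_PUMP"),
    ("ex:P-101", "ex:hasCapacity", "250"),
    ("ex:P-102", "rdf:type", "rdl:CENTRIFUGAL_PUMP"),   # capacity missing
]

def validate_min_count(triples, target_class, required_prop):
    """Return the nodes of the target class lacking the required property."""
    instances = {s for s, p, o in triples
                 if p == "rdf:type" and o == target_class}
    having = {s for s, p, o in triples if p == required_prop}
    return sorted(instances - having)      # the non-conforming nodes

print(validate_min_count(triples, "rdl:CENTRIFUGAL_PUMP", "ex:hasCapacity"))
# ['ex:P-102']
```

A SHACL processor reports such violations in a validation report graph; only a file with no violations should be uploaded.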
The ISO 15926-8 file is an RDF file in Turtle (preferred) or RDF/XML format. This file is converted to N-Triples and stored in a Triple Store. The query language for that is SPARQL. Most SPARQL implementations produce query results in JSON
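The query-to-JSON round trip can be sketched over an in-memory store. The result dictionary below follows the shape of the SPARQL 1.1 JSON results format in simplified form; store contents are illustrative:

```python
# Sketch of a triple pattern query over an in-memory store, with the
# result shaped like the SPARQL 1.1 JSON results format (simplified).
import json

store = [
    ("ex:P-101", "rdf:type", "rdl:CENTRIFUGAL_PUMP"),
    ("ex:P-102", "rdf:type", "rdl:CENTRIFUGAL_PUMP"),
]

def query(store, s=None, p=None, o=None):
    """Match a single triple pattern; None acts as a variable."""
    return [t for t in store
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

rows = query(store, p="rdf:type", o="rdl:CENTRIFUGAL_PUMP")
result = {"head": {"vars": ["s"]},
          "results": {"bindings": [
              {"s": {"type": "uri", "value": s}} for s, _, _ in rows]}}
print(json.dumps(result, indent=2))
```

A real SPARQL engine joins many such patterns, but each basic graph pattern reduces to exactly this kind of match, and the JSON envelope is what the importing adapter receives.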
Adapter - import side
One of the important goals of ISO 15926 is to achieve interoperability. This means not only exporting generated data, but also importing data produced by another application, recently or perhaps twenty years ago and now required for a revamp project. The adapter must map particular query results to the internal format and naming of the required input data.
This is about the API services that are required for the triple store. The design of ISO 15926-9 hasn't started yet, but the requirements have been formulated here
RDF Triple Store
A Triple Store (or Quad Store, as preferred by some) basically is a one-table database with three (or four) data fields. The larger commercially available triple stores can store and handle one trillion triples; see here
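The "one table, three fields" idea can be made literal with SQLite. A production deployment would use a dedicated RDF store with proper indexing; the data here is illustrative:

```python
# Sketch of the "one table, three fields" idea behind a triple store,
# using SQLite. Real triple stores add heavy indexing on s, p, and o;
# the rows inserted below are illustrative.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE triples (s TEXT, p TEXT, o TEXT)")
con.executemany("INSERT INTO triples VALUES (?, ?, ?)", [
    ("ex:P-101", "rdf:type", "rdl:CENTRIFUGAL_PUMP"),
    ("ex:P-101", "ex:hasCapacity", "250"),
])

# Every SPARQL triple pattern becomes a SELECT over this one table
rows = con.execute(
    "SELECT s FROM triples WHERE p = 'rdf:type'"
).fetchall()
print(rows)   # [('ex:P-101',)]
```

A quad store simply adds a fourth column, typically holding the named graph the triple belongs to.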
OWL File for reasoning
Storing the life-cycle information of a plant, its components, its streams, and its activities will, in the long run, become a treasure trove of hitherto unknown knowledge, for example in the context of energy optimization. Using SPARQL, the relevant information of a domain of discourse can be collected and mapped to OWL for reasoning purposes. Such information could also serve other forms of AI. But this lies in the future, because that information must first be uploaded over a period of years.
A possible use of the above is that, when you use a Workflow System, a dedicated SPARQL query can be designed for each task. At the moment the planning calls for a task to be executed, the query can be launched and the data imported. The resulting data can then be mapped and uploaded to the triple store.
A possible configuration for a project could look like this one (note the integration of an EDMS).