Hi,
Attached is a Word document in which I give my comments on Part 9, issued 19 November 2012, as it has been sent for peer review.
Besides wordsmithing, I have a number of fundamental issues, mainly in clause 1, Scope.
You are invited to give your opinion on Part 9 and on my issues.
Regards,
Hans
PART 9 - ISSUES
- Attachment: HT-comments-to-ISO-15926-9-of-19-Nov-2012.docx
Re: PART 9 - ISSUES
Warning: be careful; the docx contains tracked revisions, which cannot be handled in Open Office.
The main comment in the scope section is that methods to populate the ISO 15926 endpoints are not in scope of Part 9.
To this I would like to respond.
The old Part 7, the predecessor of Part 9, which had sections about façades, contained a section on populating an endpoint.
The reasons for wanting this include (among others):
- Data handover (move) between facades,
- Messaging between facades,
- The method to set an object address to be redirected to another endpoint,
- The method to signal parties, by e-mail, that façade data have been opened to them.
However, Part 9 is a standard, and in my opinion these functions need not be standardized. If they are used commercially in industry, this can be done without adhering to a standard.
Still, these functions are very important, and are candidates to be mentioned in a remark in Part 9 and to be handled in depth in an ISO 15926 Implementation Guide.
Re: PART 9 - ISSUES
A façade is supposed to behave like any other façade.
Other than coordinating access rights, no further coordination is necessary to enable façades to communicate with each other.
Part 9, as it is now, does not address that.
Perhaps I used the wrong words. What I meant is a language-independent API specification.
The implementers then translate this into software, using the same calls and parameters.
Forgive me, I'm a mere data modeller.
Re: PART 9 - ISSUES
About changes to the scope section: in my opinion this standard can only operate within the boundaries of more generic or more technical standards. Therefore all formats or methods can be defined only as concrete specializations of standard RDF representations, standard SPARQL or SPARQL Update, and similar standards (if more are really required).
Thus no ISO 15926-specific APIs are possible. There are no standardized ways to populate a triple store with file content, or to keep named graphs in triples, so there is no way to include these in our standard.
Re: PART 9 - ISSUES
These are some things that are not standard when you have an off-the-shelf triple store:
Population
Rather than blindly accepting the RDF/XML files that are offered for population, it makes a lot of sense to validate them and to act upon the results.
For example:
- check well-formedness
- check compliance with applicable OWL schemas
- check for duplicates and, if present, remove them
- check non-violation of cardinalities
- generate instance(s) of EndOfTemporalPart templates when information has changed
- invalidate instance(s) of class-of-temporal-part classes inside class templates when information has changed
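As a rough illustration only (nothing defined in Part 9), the first few population checks could look like this in Python; the predicate name and its cardinality limit are invented for the example:

```python
from collections import Counter

# Illustrative cardinality limits per predicate; not taken from Part 9.
MAX_CARDINALITY = {"hasIdentification": 1}

def validate_population(triples):
    """Validate an incoming batch of (subject, predicate, object) triples."""
    # Well-formedness: every triple must have exactly three non-empty parts.
    for t in triples:
        if len(t) != 3 or not all(t):
            raise ValueError(f"malformed triple: {t!r}")
    # Duplicates: keep the first occurrence of each triple, drop repeats.
    seen, unique = set(), []
    for t in triples:
        if t not in seen:
            seen.add(t)
            unique.append(t)
    # Cardinality: count distinct objects per (subject, predicate) pair.
    counts = Counter((s, p) for s, p, _ in unique)
    for (s, p), n in counts.items():
        if n > MAX_CARDINALITY.get(p, n):
            raise ValueError(f"cardinality violated for {s} {p}")
    return unique
```

The temporal-part bookkeeping in the last two items would hook in after these checks, once the validator knows which information has changed.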
Hand-over
- select data to be handed over and buffer them
- generate a transfer file
- invoke Population method in target façade
- send transfer file and upon successful transfer clear buffer
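A minimal sketch of that hand-over sequence, with the target façade's Population call mocked as a plain function; the `populate` callback, the buffer, and the JSON transfer format are all assumptions for illustration:

```python
import json

def hand_over(source_data, selector, populate):
    """Select data, generate a transfer file, invoke Population on the
    target facade, and clear the buffer only on successful transfer."""
    buffer = [item for item in source_data if selector(item)]  # select and buffer
    transfer_file = json.dumps(buffer)                         # generate a transfer file
    ok = populate(transfer_file)                               # invoke Population in target facade
    if ok:
        buffer.clear()                                         # clear buffer on success
    return buffer  # anything left here still awaits transfer
```

Keeping the buffer until the target acknowledges success is what makes the move safe against a failed transfer.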
Message sending
- render one or more fill-in-the-blanks documents
- populate the blanks with data resulting from SPARQL query
- send e-mail with those documents attached in PDF format and with time-limited SPARQL query credentials
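The message-sending steps could be sketched like this with the Python standard library; the subject line and the `X-SPARQL-Credentials` header are invented for the example, and a real façade would render the document to PDF and send via SMTP rather than just build the message:

```python
from string import Template
from email.message import EmailMessage

def build_message(template_text, query_results, credentials):
    """Render a fill-in-the-blanks document from SPARQL query results
    and wrap it in an e-mail carrying time-limited query credentials."""
    body = Template(template_text).substitute(query_results)  # populate the blanks
    msg = EmailMessage()
    msg["Subject"] = "Facade data notification"               # illustrative subject
    msg["X-SPARQL-Credentials"] = credentials                 # hypothetical header name
    msg.set_content(body)
    return msg
```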
Message parsing
- fire SPARQL query, using the credentials
- parse results
- if these contain references to IDs owned by the receiving façade custodian, generate instances of the IncomingReference template, to be stored in that receiving façade.
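The receiving side might look roughly like this; the shape of the IncomingReference record is purely illustrative, since the real template is defined elsewhere:

```python
def parse_results(rows, owned_ids):
    """Scan SPARQL result rows (dicts of variable -> value) and create an
    IncomingReference record for every reference to an ID owned by the
    receiving facade custodian."""
    incoming = []
    for row in rows:
        for value in row.values():
            if value in owned_ids:
                # Illustrative record shape, not the actual template definition.
                incoming.append({"template": "IncomingReference",
                                 "referenced": value,
                                 "source_row": row})
    return incoming
```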
In cases where vanilla triple stores can do all this, life is simple. Otherwise work has to be done.
So please don't just say "No", but tell me how vanilla triple stores can do all this.
PS The above is not necessarily complete or correct. That's what we have a forum for.
Re: PART 9 - ISSUES
Hans,
Returning to today's call: please see below your scope list divided into two subcategories.
The generic issues, relevant for any endpoint (RDL or payload), are only:
Population
- check well-formedness
- check for duplicates and, if present, remove them
The remaining part of your list contains specific issues, relevant mostly for payload (the façade proper). The items in this second part are either not applicable to a Reference Data endpoint, or are not in any particular way difficult for RD, though they are for payload:
Population
- check compliance with applicable OWL schemas
- check non-violation of cardinalities
- generate instance(s) of EndOfTemporalPart templates when information has changed
- invalidate instance(s) of class-of-temporal-part classes inside class templates when information has changed
Hand-over
- select data to be handed over and buffer them
- generate a transfer file
- invoke Population method in target façade
- send transfer file and upon successful transfer clear buffer
Message sending
- render one or more fill-in-the-blanks documents
- populate the blanks with data resulting from SPARQL query
- send e-mail with those documents attached in PDF format and with time-limited SPARQL query credentials
Message parsing
- fire SPARQL query, using the credentials
- parse results
- if these contain references to IDs owned by the receiving façade custodian, generate instances of the IncomingReference template, to be stored in that receiving façade.
Re: PART 9 - ISSUES
Hi Victor,
Where does the notion come from that an RDL store should not be a façade as well?
Reference data can change as well, because of errors. Do you have a clear view of how such changes are recorded, so that the user always has an audit trail?
And how are we going to build OIMs? Part 8 states that this shall be done with specialized templates.
Regards,
Hans
Re: PART 9 - ISSUES
Hans,
> Where does the notion come from that an RDL store should not be a façade as well?
We have to agree on terminology. As many people noted, a "façade" is a façade to something else. In the case of the RDL the system is rather a whole: a SPARQL interface to a triple store. If it is an engineering system, it is indeed something external, placed on top of something very different in nature. Therefore the separation of terms looks logical to me. But I'd prefer to discuss terms separately; in my opinion the functional requirements are different, whatever terms are used.
> Reference data can change as well, because of errors. Do you have a clear view of how such changes are recorded, so that the user always has an audit trail?
More or less clear. In particular, I don't think the Ur-Class idea is applicable. URIs of RD items should persist; therefore minor editing should be done directly, while major changes bring to life another entity with a different URI.
> And how are we going to build OIMs? Part 8 states that this shall be done with specialized templates.
Any particular reason for that? I think that generic OIMs (and the RDL keeps quite generic OIMs) are best preserved as Part 2 level models. OIMs should be used for payload verification independently of the template set chosen for the payload, and the fully expanded form is the best to compare against whatever set. If you keep an OIM as specialized templates and receive the payload as instances of a different set of templates (just different URIs will be enough), you'll be obliged to expand both to Part 2 level!
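The editing policy for reference data items (URIs persist; minor corrections happen in place, major changes mint a new entity) could be sketched like this; the registry class, the example.org namespace, and the use of UUIDs are all assumptions for illustration, not anything prescribed:

```python
import uuid

class Registry:
    """Toy RD registry: minor edits keep the URI; major changes mint a new one."""
    def __init__(self):
        self.items = {}

    def register(self, label):
        uri = f"http://example.org/rdl/{uuid.uuid4().hex}"  # hypothetical namespace
        self.items[uri] = label
        return uri

    def minor_edit(self, uri, label):
        self.items[uri] = label      # e.g. fixing a typo: the URI persists
        return uri

    def major_change(self, old_uri, label):
        # A genuinely different concept gets a different URI;
        # the old entity stays resolvable under its old URI.
        return self.register(label)
```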
Re: PART 9 - ISSUES
Hi Victor,
The RDL façade is a façade for a separate editing system called Brutus.
It depends on which scenario you start from. The JORD RDL is a triple store in which classes are directly interrelated, using Part 2 instances of subtypes of ClassOfRelationship. Tore may tell us how changes and an audit trail are being managed.
> I think that generic OIMs (and the RDL keeps quite generic OIMs) are best preserved as Part 2 level models.
The RDL does not have any OIMs at the moment. OIMs are sets of specialized templates that can be copied and used by anybody who does mapping. That is simpler than just doing the mapping at random and then finding out later that it was done incorrectly. These OIMs will be the heritage of generations of mappers.
> OIMs should be used for payload verification independently of the template set chosen for the payload, and the fully expanded form is the best to compare against whatever set. If you keep an OIM as specialized templates and receive the payload as instances of a different set of templates (just different URIs will be enough), you'll be obliged to expand both to Part 2 level!
I don't think verification requires lifting of the templates at all.
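To make the expansion argument concrete, here is a toy sketch: two differently named specialized templates carry the same information, which only becomes comparable after expanding both to base-level triples. The template names and their expansions are invented for the example, not taken from any ISO 15926 part:

```python
# Hypothetical expansions of two specialized templates into base-level triples.
EXPANSIONS = {
    "HasDesignPressure": lambda s, o: {(s, "hasProperty", o),
                                       (o, "classified", "Pressure")},
    "DesignPressureOf":  lambda o, s: {(s, "hasProperty", o),
                                       (o, "classified", "Pressure")},
}

def expand(instance):
    """Expand a (template_name, args) instance to its base-level triples."""
    name, args = instance
    return EXPANSIONS[name](*args)

def equivalent(a, b):
    # Comparing template instances by name would say "different";
    # comparing their expansions shows they carry the same information.
    return expand(a) == expand(b)
```

This is the situation Victor describes: with different template URIs on each side, only the expanded forms can be compared directly.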