JSON-LD harmonizes the representation of Linked Data in JSON by describing a common JSON representation format for expressing directed graphs, allowing both Linked Data and non-Linked Data to be mixed in a single document. The format has already been adopted by large companies such as Google in their Gmail product and is available to over 425 million customers around the world.
This is a 2nd Last Call publication for the JSON-LD 1.0 Algorithms and API specification. Changes since the previous publication include a shift to a Futures-based API design, better base URL processing, and better translation of data from RDF.
All substantive technical work on the specification is complete. Feedback on the specification is encouraged and should be sent to firstname.lastname@example.org. The 2nd Last Call period will end in 3 weeks, on 6 June 2013.
The Semantic Web Interest Group has published a new draft for the vCard-in-RDF Ontology, edited by Renato Iannella and James McKinney. The new draft updates the previous version by aligning it with the latest IETF vCard specification, i.e., RFC 6350.
This is a draft; if you wish to make comments regarding this document, please send them to email@example.com (subscribe, archives). The goal is to publish an Interest Group Note once there is consensus in the community.
The W3C Provenance Working Group was chartered to develop a framework for interchanging provenance on the Web. The Working Group has now published the PROV Family of Documents as W3C Recommendations, along with corresponding supporting notes. You can find a complete list of the documents in the PROV Overview Note. PROV enables one to represent and interchange provenance information using widely available formats such as RDF and XML. In addition, it provides definitions for accessing provenance information, validating it, and mapping to Dublin Core.
JSON has proven to be a highly useful object serialization and messaging format. JSON-LD is a JSON-based format that can be used to serialize Linked Data. The syntax is designed not to disturb already deployed systems running on JSON, but to provide a smooth upgrade path from JSON to JSON-LD. It is primarily intended to be a way to use Linked Data in Web-based programming environments, to build interoperable Web services, and to store Linked Data in JSON-based storage engines. JSON-LD is capable of serializing any RDF graph or dataset, and most, but not all, JSON-LD documents can be directly transformed to RDF.
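To make the upgrade path concrete, here is a minimal JSON-LD document (the data itself is invented for the example; the FOAF IRIs are real). The @context maps plain JSON keys to IRIs, so existing JSON consumers can keep reading the document as ordinary JSON while Linked Data tools interpret the keys as RDF properties:

```json
{
  "@context": {
    "name": "http://xmlns.com/foaf/0.1/name",
    "homepage": {
      "@id": "http://xmlns.com/foaf/0.1/homepage",
      "@type": "@id"
    }
  },
  "@id": "http://example.com/people/alice",
  "name": "Alice",
  "homepage": "http://example.com/alice/"
}
```

A JSON application that ignores the @context still sees the familiar "name" and "homepage" keys, which is what makes the format non-disruptive to deployed systems.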
This is a Last Call publication for both specifications. All substantive technical work on each specification is complete. Feedback on both specifications is encouraged and should be sent to firstname.lastname@example.org. The Last Call period will end in 4 weeks, on May 10th 2013.
You can learn more about JSON-LD in the video introduction to JSON-LD.
The W3C RDF Working Group has published two Working Drafts today:
- RDF 1.1 Semantics. This document describes a precise semantics for the Resource Description Framework 1.1 and RDF Schema. It defines a number of distinct entailment regimes and corresponding systems of inference rules. It is part of a suite of documents which comprise the full specification of RDF 1.1.
- TriG. This document defines a textual syntax for RDF called TriG that allows an RDF dataset to be completely written in a compact and natural text form, with abbreviations for common usage patterns and datatypes. TriG is an extension of the Turtle format.
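As a sketch of what the TriG extension adds over Turtle (graph and resource IRIs below are invented for the example), named graph blocks wrap ordinary Turtle-style triples in braces:

```trig
@prefix ex:   <http://example.org/> .
@prefix foaf: <http://xmlns.com/foaf/0.1/> .

# a named graph; Turtle abbreviations (";" predicate lists) work inside
ex:graph1 {
  ex:alice foaf:name "Alice" ;
           foaf:knows ex:bob .
}

# a second named graph in the same dataset
ex:graph2 {
  ex:bob foaf:name "Bob" .
}
```

Each block contributes its triples to the named graph whose IRI precedes it, which is how a complete RDF dataset fits in one text file.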
The RDF Working Group also published two Group Notes today:
On March 20th, 2013 members of the Provenance Working Group gave a tutorial on the PROV family of specifications at the EDBT conference in Genova, Italy. EDBT (“Extending Database Technology”) is widely regarded as one of the prime venues in Europe for dissemination of data management research.
The idea behind the tutorial was to provide a “database-centric” view of PROV, as a complement to the Semantic Web perspective offered by other tutorials, past and future, namely the ISWC’12 PROV tutorial (Boston, Nov. 2012) and the upcoming ESWC’13 PROV tutorial (Montpellier, France, May 26-30).
The 1.5-hour tutorial was attended by about 26 participants, mostly from academia. It was structured into three parts of approximately equal length. The first two parts introduced PROV as a relational data model with constraints and inference rules, supported by a (nearly) relational notation (PROV-N). The third part presented known extensions and applications of PROV, based on the extensive PROV implementation report and implementations known to the presenter at the time.
All the presentation material is available here.
We encourage you to take a look at this tutorial material or attend one of the future tutorials on PROV.
This week we’ll hear from Maori Ito (National Institute of Biomedical Innovation, Osaka) on schema.org extensions for biomedical databases, with an opportunity to discuss these in depth.
Abstract: Lack of unified annotation makes it difficult to find specific information across a set of life science databases. Here, we discuss proposed extensions to schema.org to semantically annotate biological databases and their entries using the microdata format. We have applied this to Japanese biomedical data and resources to provide additional fields in our search results. We hope to finalize this proposal and encourage databases to adopt the extension, thereby improving the quality of search results.
- W3C Wiki page: http://www.w3.org/wiki/WebSchemas/BioDatabases#BiologicalDatabaseEntry (properties and examples of markup)
- BioHackathon2012: https://github.com/dbcls/bh12/wiki/Schema.org-extension (rationale, discussions, and useful links)
- BH12.12 (Japanese): http://wiki.lifesciencedb.jp/mw/index.php/BH12.12/schema.org (concrete examples of markup and search results, discussions, and comments)
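To give a flavor of what such annotation looks like, the sketch below shows schema.org-style microdata for a database entry. The type and property names here are hypothetical stand-ins; the actual vocabulary of the proposal is defined on the wiki pages listed above:

```html
<!-- Hypothetical markup in the spirit of the proposed extension;
     the real type/property names are on the W3C wiki page above -->
<div itemscope itemtype="http://schema.org/BiologicalDatabaseEntry">
  <span itemprop="name">Example gene entry</span>
  <a itemprop="url" href="http://example.org/db/entry/123">entry 123</a>
</div>
```

Search engines that understand the extension could then surface the annotated fields directly in search results.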
Toll / Intl #: +1 (646) 583-7415
Attendee PIN #: 47479369
The W3C Provenance Working Group has released a set of documents to define a framework for interchanging and representing provenance on the Web. One of these documents is the Dublin Core to PROV mapping, which describes a partial mapping from Dublin Core Terms to the PROV-O OWL2 ontology.
But why do we need a mapping between Dublin Core and PROV? Dublin Core has typically been used to describe document metadata on the Web. Many of its terms are directly related to provenance, describing how a document has been modified, who participated in its creation, or when it was created, issued, or published. Dublin Core is widely used and has a strong community of users behind it, so alignment with a W3C specification for provenance is crucial for interoperability on the Web.
How can you use the mapping? If you are currently using Dublin Core terms and you want to derive direct PROV statements from your assertions, take a look at the direct mappings section. If you are interested in obtaining more refined qualified statements from your Dublin Core metadata, we suggest you explore the Complex Mappings section.
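As a flavor of what a direct mapping looks like (the document and agent IRIs below are invented; consult the PROV-DC document for the authoritative alignment), a Dublin Core creator statement can be read as a PROV attribution:

```turtle
@prefix dct:  <http://purl.org/dc/terms/> .
@prefix prov: <http://www.w3.org/ns/prov#> .
@prefix ex:   <http://example.org/> .

# Dublin Core metadata as typically published today
ex:report dct:creator ex:alice .

# The direct PROV statement derivable from it (illustrative)
ex:report prov:wasAttributedTo ex:alice .
```

The complex mappings go further, producing qualified PROV statements (e.g. attributions with associated activities) from the same Dublin Core input.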
How can you contribute? By sending us comments, suggestions, and feedback. We aim to do a final release of the document before the end of April, and your comments would be very appreciated.
– Post by Daniel Garijo
The W3C RDFa Working Group has added some new predefined prefixes to the RDFa Core Initial Context:
- “prov” is a predefined prefix for “http://www.w3.org/ns/prov#”, used by the “PROV Provenance Ontology”, currently a W3C Proposed Recommendation
- “rr” is a predefined prefix for “http://www.w3.org/ns/r2rml#”, used by the “R2RML: RDB to RDF Mapping Language” W3C Recommendation
- “sd” is a predefined prefix for “http://www.w3.org/ns/sparql-service-description#”, used by the “SPARQL 1.1 Service Description” W3C Recommendation
- “dc11” is a predefined prefix for “http://purl.org/dc/elements/1.1/”, used by the “Dublin Core Metadata Element Set, Version 1.1”.
The RDFa 1.1 predefined prefixes, part of the RDFa 1.1 Initial Contexts, are a convenience mechanism: RDFa 1.1 authors can use them as CURIE prefixes without defining them explicitly through the @prefix attribute. The RDFa Core Initial Context defines prefixes that are valid for all Host Languages for RDFa 1.1. Implementations may choose to access the list of these prefixes (also available in Turtle format) dynamically, or hard-code them in their distribution. Authors should check whether their particular tool has already been upgraded to include these new prefixes; implementers are encouraged to add them as soon as possible.
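With the new initial-context entries, an author can use the prov: prefix in RDFa 1.1 markup without declaring it. A minimal sketch (the document and agent IRIs are invented for the example):

```html
<!-- "prov" needs no @prefix declaration here: it is now part of
     the RDFa Core Initial Context (IRIs below are illustrative) -->
<div about="http://example.org/report">
  Produced by
  <a rel="prov:wasAttributedTo" href="http://example.org/alice">Alice</a>.
</div>
```

An RDFa 1.1 processor with the updated initial context extracts the triple stating that the report was attributed to Alice; older processors that lack the new prefixes would not, which is why the upgrade note above matters.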
The SPARQL Working Group has completed development of its full-featured system for querying and managing data using the flexible RDF data model. It has now published eleven Recommendations for SPARQL 1.1, detailed in SPARQL 1.1 Overview. SPARQL 1.1 extends the 2008 Recommendation for SPARQL 1.0 by adding features to the query language such as aggregates, subqueries, negation, property paths, and an expanded set of functions and operators. Beyond the query language, SPARQL 1.1 adds other features that were widely requested, including update, service description, a JSON results format, and support for entailment reasoning.
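For instance, property paths and aggregates, both new in SPARQL 1.1, allow queries that SPARQL 1.0 could not express, such as counting members across transitively nested groups. The vocabulary below is invented for the example:

```sparql
PREFIX ex: <http://example.org/>

# ex:memberOf+ (property path) and COUNT (aggregate)
# are both SPARQL 1.1 features
SELECT ?group (COUNT(?person) AS ?members)
WHERE {
  ?person ex:memberOf+ ?group .
}
GROUP BY ?group
```

In SPARQL 1.0 the transitive traversal would have required a fixed-depth union of patterns, and the counting would have had to happen in application code.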
Last week, the W3C Provenance Working Group released 13 documents simultaneously that together define a framework for interchanging provenance on the Web. We are really excited about this release as it is a complete and stable definition of PROV and includes 4 Proposed Recommendations.
While 13 documents is a lot, this is because we have broken PROV down into chunks designed for particular communities and usages. As a user of PROV you won’t have to focus on the entire framework, just the parts that you need. For an overview of this family of documents and the intended audience, check out the PROV Overview.
Here, I wanted to provide a bit of a guide to the PROV framework and the role of the various documents.
The Core: A Data Model
At the center of PROV is a data model, PROV-DM, that defines a vocabulary for describing provenance. These terms allow for the description of provenance from data, process, and agent perspectives. PROV-DM can be written down in multiple serialization technologies; PROV defines three serializations.
- PROV-O is a lightweight OWL2 ontology designed for Linked Data and Semantic Web applications.
- PROV-N is a compact syntax aimed at human consumption.
- PROV-XML is a native XML schema specifically designed for the XML community.
Using these serializations, applications can expose and interchange provenance. PROV-DM and its serializations have specifically been designed with extensibility in mind. We already have several extensions of PROV-O designed for specific communities.
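As a taste of the PROV-O serialization (the entity, activity, and agent IRIs below are invented for the example; the prov: terms are the real ontology), a simple derivation written in Turtle looks like this:

```turtle
@prefix prov: <http://www.w3.org/ns/prov#> .
@prefix ex:   <http://example.org/> .

ex:chart a prov:Entity ;
    prov:wasGeneratedBy  ex:analysis ;
    prov:wasDerivedFrom  ex:dataset ;
    prov:wasAttributedTo ex:alice .

ex:analysis a prov:Activity ;
    prov:wasAssociatedWith ex:alice .

ex:alice a prov:Agent .
```

The same statements can be expressed equivalently in PROV-N or PROV-XML, which is the point of defining one data model with multiple serializations.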
PROV-DM and the associated serializations were purposely designed to allow for flexibility in writing provenance. We wanted to make it as easy as possible to get started and to allow for adaptability as PROV is increasingly used. However, we also realized that some users want a guide to ensure that their provenance is consistent. Just as there are HTML validators, we wanted to provide PROV validators. PROV-Constraints defines a set of constraints that can be used to implement validators, and is backed by a formal semantics defined in PROV-Sem.
Data Model extensions
Two use cases for modeling provenance come up in multiple applications: aggregating information into collection/dictionary-type structures (e.g. a folder with files), and connecting multiple provenance traces together. PROV-Dictionary and PROV-Links define constructs to help model these cases.
Finally, once you’ve modeled your provenance, you want to be able to easily expose it. PROV-AQ defines how to use already existing web mechanisms, like link headers, to make provenance available. A key part of the design of PROV-AQ was to make it independent of any serialization format, so you can use whatever best fits your needs.
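For example, PROV-AQ lets a server point from a resource to its provenance using an ordinary HTTP Link header with the has_provenance relation. The URIs below are invented; only the link relation IRI comes from the PROV namespace:

```http
HTTP/1.1 200 OK
Content-Type: text/turtle
Link: <http://example.org/provenance/chart>;
      rel="http://www.w3.org/ns/prov#has_provenance"
```

A client that wants the provenance of the resource simply follows the linked URI, and the provenance document itself can be in any serialization the server chooses.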
Dublin Core is one of the most widely published vocabularies, and many of its terms are associated with provenance. Working with the DC community, we have defined a mapping between Dublin Core and PROV-O (PROV-DC). This means that applications that support PROV can easily consume provenance already exposed as Dublin Core.
PROV provides a framework for writing down, validating, and exchanging provenance information in an interoperable way. Already over 60 implementations support PROV, and we expect more in the future. If you have an implementation, there’s still time to register yours using one of our surveys. See the Call for Implementations page for more information. PROV contains both Recommendations and Notes; the classification was primarily based on the amount of prior work and implementation experience behind each specification.
What you can do
We are still looking for feedback on the documents: PROV-Primer, PROV-XML, PROV-DC, PROV-Dictionary, PROV-Links, PROV-Sem. You can also report your implementation. If you have questions or comments, please contact email@example.com
Finally, if you’re a W3C member and think that PROV should become a final Recommendation of the W3C, encourage your AC Representative to vote for the specification.
The W3C Government Linked Data Working Group has published two Last Call Working Drafts:
- The RDF Data Cube Vocabulary. This is an RDF vocabulary for publishing multidimensional data, particularly statistical data. It is compatible with the cube model that underlies SDMX (Statistical Data and Metadata eXchange), a widely used ISO standard. The Data Cube Vocabulary brings essential SDMX elements to RDF, providing a standard way for governments to publish statistical information as Linked Data. Comments are welcome through 08 April.
- Data Catalog Vocabulary (DCAT). This is an RDF vocabulary for expressing the contents of data catalogs, such as government data portals. DCAT is for catalogs of all kinds of data (not just RDF data), but uses RDF to support easy aggregation of catalogs and construction of services which can search across many unrelated catalogs. Comments are welcome through 08 April.
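To make the DCAT draft concrete, a minimal catalog entry might look like the sketch below. All IRIs and literal values are invented for the example; the dcat: and dct: terms are the real vocabularies:

```turtle
@prefix dcat: <http://www.w3.org/ns/dcat#> .
@prefix dct:  <http://purl.org/dc/terms/> .
@prefix ex:   <http://example.org/> .

ex:catalog a dcat:Catalog ;
    dcat:dataset ex:budget2012 .

ex:budget2012 a dcat:Dataset ;
    dct:title "2012 City Budget" ;
    dcat:distribution [
        a dcat:Distribution ;
        dcat:downloadURL <http://example.org/files/budget2012.csv> ;
        dct:format "text/csv"
    ] .
```

Because the catalog metadata is plain RDF, entries from many unrelated portals can be aggregated and queried together, which is the aggregation use case the draft describes.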
The W3C Provenance Working Group has published four Proposed Recommendation Documents along with corresponding supporting notes. You can find a complete list at the PROV Overview draft. These documents provide a framework for interchanging provenance on the Web. PROV enables one to represent and interchange provenance information using widely available formats such as RDF and XML. In addition, it provides definitions for accessing provenance information, validating it, and mapping to Dublin Core. Comments are welcome through 9 April 2013.
The W3C Linked Data Platform (LDP) Working Group has published a Working Draft of Linked Data Platform 1.0. This document specifies a set of best practices and a simple approach for a read-write Linked Data architecture, based on HTTP access to web resources that describe their state using the RDF data model.
Turtle, the Terse RDF Triple Language, was published by the RDF Working Group as a Candidate Recommendation. This document is intended to become a W3C Recommendation. Turtle is a concrete syntax for RDF. A Turtle document allows writing down an RDF graph in a compact textual form.
W3C publishes a Candidate Recommendation to indicate that the document is believed to be stable and to encourage implementation by the developer community. This Candidate Recommendation is expected to advance to Proposed Recommendation in the course of 2013.
The RDF Working Group specifically solicits implementations of Turtle and submission of implementation reports. Please send implementation reports and any other comments to firstname.lastname@example.org (subscribe, archives). The Candidate Recommendation period ends 26 March 2013. All feedback is welcome.
The W3C RDF Working Group has published a Candidate Recommendation of Turtle – A Terse RDF Triple Language. This document defines a textual syntax for RDF called Turtle that allows an RDF graph to be completely written in a compact and natural text form, with abbreviations for common usage patterns and datatypes. Turtle provides levels of compatibility with the existing N-Triples format as well as the triple pattern syntax of the SPARQL W3C Recommendation.
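A short example shows the abbreviations Turtle provides (prefix declarations, predicate lists with “;”, and object lists with “,”). The data is invented; the FOAF IRIs are real:

```turtle
@prefix foaf: <http://xmlns.com/foaf/0.1/> .
@prefix ex:   <http://example.org/> .

ex:alice a foaf:Person ;
    foaf:name  "Alice" ;
    foaf:knows ex:bob , ex:carol .
```

The same four triples written as N-Triples would repeat the full subject and predicate IRIs on every line, which is the verbosity Turtle’s abbreviations remove.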
The W3C RDFa Working Group has published a Last Call Working Draft of HTML+RDFa 1.1. This specification defines rules and guidelines for adapting the RDFa Core 1.1 and RDFa Lite 1.1 specifications for use in HTML5 and XHTML5. The rules defined in this specification not only apply to HTML5 documents in non-XML and XML mode, but also to HTML4 and XHTML documents interpreted through the HTML5 parsing rules. Comments are welcome through 28 February.
W3C published the Second Edition of the Rule Interchange Format (RIF). RIF was developed through a joint effort of members of the Business Rules, Semantic Web, and Logic Programming communities. It allows rule systems to be connected together so that highly structured knowledge can be accurately exchanged, as explained in RIF Use Cases and Requirements. The Second Edition includes editorial improvements and a number of small corrections to the original specification, along with a new RIF Primer.
The six new standards are:
- RIF Core Dialect (Second Edition), which provides a standard, base level of functionality for interchange,
- RIF Basic Logic Dialect (Second Edition) and RIF Production Rule Dialect (Second Edition), which provide extended functionality matching two common classes of rule engines,
- RIF Framework for Logic Dialects (Second Edition), which describes how to extend RIF for use with a large class of systems,
- RIF Datatypes and Built-Ins 1.0 (Second Edition), which borrows heavily from XQuery and XPath for a set of basic operations,
- and RIF RDF and OWL Compatibility (Second Edition), which specifies how RIF works with RDF data and OWL ontologies.
Along with these standards, the RIF Working Group published six related documents today: RIF Overview (Second Edition), RIF Use Cases and Requirements (Second Edition), RIF Test Cases (Second Edition), OWL 2 RL in RIF (Second Edition), RIF Combination with XML data (Second Edition), and RIF In RDF (Second Edition).