Semantic Web

Overview

The Semantic Web is a concept, first named by Tim Berners-Lee, for a "web of knowledge" in which data on the world wide web, whether in structured data stores or loosely-structured documents, would be annotated and classified so that machines can access and infer relationships based on the semantic information - that is, what the content means - rather than simply on the matching of text strings.[1] There is also a W3C standards effort[2] related to this concept.

In order to associate meaning with content, the Semantic Web uses structures for identifying, categorizing and linking data. While a web page about soccer might specify how pictures and text should be arranged, what colors and fonts to use, and other presentation details, a comparable Semantic Web document would convey the fact that the data pertain to the sport of soccer, perhaps along with a list of teams, scores of recent matches, and other data in categorization containers. This representation allows other consumers of the data (mainly programs) to parse and use it in meaningful ways. Whereas modern web crawlers must catalogue, index, and apply a certain amount of artificial intelligence to derive the meaning of documents on the web, the Semantic Web allows data to be parsed easily for meaning, ultimately resulting in a greater ability to share and discover information.

One interesting challenge facing the Semantic Web is the need not only to transmit data, but also to associate metadata with it. Metadata is descriptive information that conveys relationships between data types. In order to provide a flexible framework capable of transmitting many different types of data, together with their meaning and relationships, the Semantic Web integrates metadata into the data format itself. This allows dynamic and unpredictable data formats and types to be transmitted and consumed, because consumers can use the embedded metadata to parse the data and understand its inter-relationships.[3]

What differentiates the Semantic Web from existing data exchange formats is the use of URIs to uniquely identify things, and relationships between things. The sorts of problems that Semantic Web technologies try to solve are those involving multiple disparate sources of data - for instance, hooking together train timetables and class timetables so that a student can automatically plan a travel itinerary without having to match the data together manually.

The W3C has put forward a variety of standards built on top of the Resource Description Framework (RDF), a formal semantic model for representing things and the relationships between them.

Competing Visions

The "Semantic Web" concept has evolved under competing understandings and visions. Historically, efforts in Artificial Intelligence, notably Cyc and the Knowledge Interchange Format (KIF), sought to provide a technological backbone for a grand vision of a universal knowledge store enabling intelligent agents to apply human reasoning. Apple's Knowledge Navigator represented a vision of networked hypertext with intelligent agent mediators.[4] Very similarly, the Semantic Web was conceived with these goals in mind, but by embedding and extending existing technologies in the WWW stack while giving a formal and standardized structure to the relationships and means of data exchange of arbitrary data exposed on the web.[5]

These quite distinct provenances have confused a common understanding of the scope and goals of a "Semantic Web." On the one hand, the Semantic Web aims to create a machine-readable web through the coordinated linking of data and knowledge on a massive scale, such that intelligent agents could be devised to provide precise answers to and analysis of queries of arbitrary depth and nuance. On the other hand, it also seeks to improve human interaction and traditional web querying and information retrieval by giving a more formal structure to the web. It aims to do so by establishing connections in an incremental fashion among individual pieces of data both embedded in documents and realized as micro-transactions of activity that are conventionally stored in relational databases. In this way success is defined as the improvement of social welfare through a superior user experience of day-to-day web activities.[6]

Under the latter perspective, the Semantic Web was developed to address a specific deficiency in web-based communication and is often referred to as Web 3.0.[7] Although well defined in its specifications, HTML is designed for the exchange of information that is delimited and optimized for presentation. That is, HTML is designed to communicate the appearance of documents within web browsers. This is useful when attempting to create a document that will render in the same form across multiple platforms (or web browsers), but it is problematic for transmitting the meaning of data. A few HTML features (notably META tags and other document head elements[8]) convey meaning, but they are precious few.

In this respect the Semantic Web is closely tied to microformats, which are an alternative way to embed meaning in HTML documents. Microformats use standard HTML tags along with generally agreed-upon conventions for attribute values in order to delineate certain data within documents. For instance, microformats can be used to embed contact or calendar data in web pages for easy integration with other programs, allowing users of popular calendaring or contact-management software simply to click on elements within web pages and import events or contacts directly into their calendar or address book.[9]

A final perspective focuses less on the machine-readable component of the Semantic Web (linking data in terms of relationships) than on the universal metadata cataloguing and tagging of existing documents and data for human consumption. This perspective has received less attention, especially as advanced indexing and search tools - both with Google on the web as a whole and in individual curated collections - have largely addressed these needs.

Recent efforts have focused on enabling the mechanical inference of relationships between "particular" data in "islands" of tightly-coupled domains, where applications have been individually designed with an understanding of the problems to be solved.[10] This type of application has also been referred to as Cooperative Information Systems.[6] Because such efforts adopt domain-invariant standards, however, they arguably carry a tacit understanding of contributing to a more universal "web of knowledge".

Linked Data

"Linked Data" is a term coined by Tim Berners-Lee to describe the way in which a "Giant Global Graph" of semantic data serialized in triplestore serves as the core of Semantic Web. "Linked Data" connotes the dual concept of not only exposing data stores in a standard format (RDF) but also establishing individual links and semantic vocabularies among individual pieces of data. It narrows the focus of Semantic Web from an abstraction to actual data linked between arbitrary things, which are identified by URIs and described by RDF.

Because data are identified, modeled, described and linked in a formal standard, linked data by itself permits browsing, searching and combining different sources and domains of data. Machine crawlers and indexers can be applied to the graph data in a tractable way, and applications can solve sophisticated problems by utilizing the data and its relationships. Humans can interact with the data by browsing and structured querying via SPARQL and related interfaces (e.g. facets). All of this can, and perhaps must, evolve before the Semantic Web enables more advanced agent intelligence.
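
As an illustration of combining sources, the sketch below merges two small, hypothetical linked-data fragments (a train timetable and a place directory) into one graph and queries it with SPARQL. It uses the Python rdflib library as an assumed tool; the URIs and property names are invented for the example and are not part of any published vocabulary.

from rdflib import Graph

# Two hypothetical linked-data fragments; real linked data would be fetched
# from its publishers' servers rather than written inline.
timetable = """
@prefix ex: <http://example.org/schema/> .
<http://trains.example/run/42> ex:departsFrom <http://places.example/Springfield> ;
                               ex:departsAt "08:15" .
"""
places = """
@prefix ex: <http://example.org/schema/> .
<http://places.example/Springfield> ex:label "Springfield Central Station" .
"""

g = Graph()
g.parse(data=timetable, format="turtle")
g.parse(data=places, format="turtle")   # merging is simply parsing into the same graph

# Query across both sources: the shared URI for Springfield links them together.
query = """
PREFIX ex: <http://example.org/schema/>
SELECT ?label ?time WHERE {
  ?run ex:departsFrom ?station ;
       ex:departsAt ?time .
  ?station ex:label ?label .
}
"""
for row in g.query(query):
    print(row.label, row.time)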

Semantic Web Technologies

The stack of technologies comprising the Semantic Web infrastructure is largely standard and mature. HTTP URIs identify concepts and objects, the Resource Description Framework (RDF) provides the data model, OWL expresses ontological vocabularies, and SPARQL permits query operations on the resultant graph data.

Triplestore

The triple is the data convention used by the Semantic Web and RDF to relate objects and meaning; a triplestore is a database of such triples. The triple is a rather simple linguistic convention that makes it easy to classify data and make connections. It takes the form "Subject" - "Predicate" - "Object". For example:

Garden location Backyard
Firstrow location Garden
Firstrow plantedWith Beets
Firstrow plantedWith Carrots

Using this standard convention it is easy to catalogue data and to trace relationships between them. For instance, using the above example one can figure out what is planted in the first row of the garden in the backyard by tracing the relationships:

?Garden location Backyard -> finds the garden we are looking for
?Firstrow location Garden -> finds the row in the garden just retrieved
Firstrow plantedWith ?Veggie -> gets the vegetables planted in the first row

This rather simple model makes it possible to define (and query) complex relationships without first having a defined data model. This convention gives the Semantic Web the adaptability to handle evolving, dynamic data without constraining that data, and it means the model does not have to be redefined to deal with emerging data types.
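
A minimal sketch of the garden example, built and traversed with the Python rdflib library (an assumption; any RDF library would do) and a hypothetical http://example.org/garden/ vocabulary:

from rdflib import Graph, Namespace

EX = Namespace("http://example.org/garden/")   # hypothetical vocabulary for the example
g = Graph()

g.add((EX.Garden, EX.location, EX.Backyard))
g.add((EX.Firstrow, EX.location, EX.Garden))
g.add((EX.Firstrow, EX.plantedWith, EX.Beets))
g.add((EX.Firstrow, EX.plantedWith, EX.Carrots))

# Trace the relationships: find the thing located in the backyard,
# then the row located in it, then what that row is planted with.
garden = g.value(predicate=EX.location, object=EX.Backyard)
row = g.value(predicate=EX.location, object=garden)
for veggie in g.objects(row, EX.plantedWith):
    print(veggie)   # ...Beets and ...Carrots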

Triples can be used to create complex graphs of data, and such graphs can be serialized in several formats. N-Triples is a plain-text, line-based format often used for transmitting this data across the network; because it repeats full URIs on every line it contains considerable redundancy, so when moving data across the wire it is common to use the more compact Turtle (N3) notation, which removes much of the duplication through prefixes and abbreviated syntax. RDF/XML is an XML-based serialization of the same data model.
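
The difference between the two serializations can be seen with a short sketch, again assuming the Python rdflib library and the example.org URIs used above:

from rdflib import Graph, Namespace

EX = Namespace("http://example.org/garden/")
g = Graph()
g.bind("ex", EX)                      # register a prefix for the compact serialization
g.add((EX.Firstrow, EX.plantedWith, EX.Beets))
g.add((EX.Firstrow, EX.plantedWith, EX.Carrots))

print(g.serialize(format="nt"))       # N-Triples: full URIs repeated on every line
print(g.serialize(format="turtle"))   # Turtle/N3: prefixes and grouped predicates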

RDFa

Although RDF serializations are compact, they are not easily human-readable, and RDFa is a response to the disparity between XHTML presentation markup and RDF data. RDFa allows RDF data to be embedded in XHTML content: using standard XHTML tags such as the <span> tag, Semantic Web data can be mixed into XHTML presentation. For example:

<span xmlns:example="http://example.tld/example/0.a" about="http://foo.tld/bar.rd#ts" property="example:bar" content="some_data">Some XHTML for presentation</span>
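
Passed through an RDFa processor, the markup above yields a single RDF triple: the about attribute supplies the subject, property supplies the predicate (expanded against the declared example prefix), and content supplies the literal object. A sketch of the equivalent triple constructed directly with the Python rdflib library (assumed for illustration; the URIs are the hypothetical ones from the snippet):

from rdflib import Graph, Namespace, URIRef, Literal

# Prefix and URIs copied from the hypothetical XHTML snippet above
EXAMPLE = Namespace("http://example.tld/example/0.a")

g = Graph()
g.add((
    URIRef("http://foo.tld/bar.rd#ts"),   # about="..."            -> subject
    EXAMPLE.bar,                          # property="example:bar" -> predicate
    Literal("some_data"),                 # content="some_data"    -> object
))
print(g.serialize(format="nt"))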

RDF Schema

The RDF data model makes no assumptions on the vocabularies used to describe object properties. RDF Schema (RDFS) allows for the definition of these vocabularies. The W3C specification defines a vocabulary description language for RDF for this purpose and contributes some high-level RDF vocabularies that may be shared across domains.[11]

In RDFS, Classes and Properties distinguish types of objects from specific, individual objects. Classes form a hierarchy of types, and objects belonging to a class are referred to as its instances; membership is specified using the rdf:type predicate. The use of classes in RDFS imposes restrictions on what kinds of statements can be made about objects. For instance, the "year-born" predicate should expect a person as the subject and a date as the object. In this example, date is the range (rdfs:range) of the property and person is the domain (rdfs:domain).
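
The sketch below expresses that example with the Python rdflib library (an assumption; the example.org vocabulary and the choice of xsd:gYear for the range are likewise illustrative):

from rdflib import Graph, Namespace, Literal
from rdflib.namespace import RDF, RDFS, XSD

EX = Namespace("http://example.org/people/")
g = Graph()

# A small class hierarchy and a typed instance
g.add((EX.Person, RDF.type, RDFS.Class))
g.add((EX.Student, RDF.type, RDFS.Class))
g.add((EX.Student, RDFS.subClassOf, EX.Person))
g.add((EX.alice, RDF.type, EX.Student))

# The year-born property: subjects should be Persons, objects year values
g.add((EX.yearBorn, RDF.type, RDF.Property))
g.add((EX.yearBorn, RDFS.domain, EX.Person))
g.add((EX.yearBorn, RDFS.range, XSD.gYear))
g.add((EX.alice, EX.yearBorn, Literal("1990", datatype=XSD.gYear)))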

OWL

TODO

SPARQL

TODO


Programming with Semantic Web

Because RDF is an open format, libraries exist for almost every programming language to make it easy for programmers to produce and consume RDF data. Some examples include the RDF.rb[12] library for Ruby, JRDF[13] for Java, a PEAR[14] RDF package for PHP and many more.
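
As a sketch of what consuming RDF looks like in practice, the snippet below parses a small Turtle document and iterates over its triples using the Python rdflib library (not one of the libraries cited above, but analogous; the data is inline purely for illustration):

from rdflib import Graph

# RDF data that might be published by some site, shown inline for illustration
data = """
@prefix ex: <http://example.org/garden/> .
ex:Firstrow ex:plantedWith ex:Beets, ex:Carrots .
"""

g = Graph()
g.parse(data=data, format="turtle")

# Consume the data by iterating over every (subject, predicate, object) triple
for subject, predicate, obj in g:
    print(subject, predicate, obj)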

Domain-specific semantic models

Medicine

Semantic models appear to be the major trend in expert support for medicine. As an example of how semantic methodologies are used, consider several isolated concepts, which could be considered "nouns":

  • beta-adrenergic antagonists
  • hypertension
  • benign hand tremor
  • bradycardia
  • asthma

One of the notations for relationships is the Unified Medical Language System® (UMLS®). Informally, some of the "verb" semantic relationships among the above could be:

  • beta-adrenergic antagonists TREAT hypertension and benign hand tremor
  • beta-adrenergic antagonists CAUSE bradycardia
  • beta-adrenergic antagonists TRIGGER asthma

"Hypertension" would have a number of other TREATS relations, from drug classes such as thiazide diuretics, angiotensin-II converting enzyme antagonists, calcium channel blockers, angiotensin-II receptor blockers, etc.

UMLS is now being extended with formal ontologies.[15]

Semantic Web in CMS

Content management systems (CMS) can benefit greatly from RDF features. RDF is an expressive means by which a CMS can both publish and consume data, and because RDF makes data more easily machine-readable it is well suited to systems that integrate data, such as CMS.

Drupal

The Drupal content management system is making a big push to include RDF and the Semantic Web as part of the upcoming Drupal 7 release.[16] There is a Drupal group devoted to the Semantic Web, as well as a code sprint devoted to the topic. Drupal 7 will automatically include RDFa elements in page presentation, which means that new Drupal 7 sites will include RDFa data without any additional overhead, coding, or administration from site administrators. This feature will allow site users to leverage RDFa seamlessly. With a significant and growing share of the CMS market, Drupal's support of the Semantic Web will mean a vast increase in the implementation of RDF.[17]

WordPress

WordPress has several third-party plugins that implement RDF.[18]

MediaWiki

MediaWiki has the Semantic MediaWiki extension, which integrates the Semantic Web into a wiki setting.

Other Notable Uses

The BBC made heavy use of semantic web technologies for their internet coverage of the 2010 World Cup games.[19]

Facebook recently announced support for the Open Graph protocol, an RDFa-based application of Semantic Web technology.

Google has announced support for "Rich Snippets", which appear as summary data in search results (for things like customer reviews, map locations, etc.) and can be expressed using RDFa.[20]

DBpedia is a project designed to extract structured data from the popular Wikipedia site.

Issues and Criticism

For the Semantic Web to reach its potential, it must overcome a number of technical and social hurdles:

Consistency: Links between data must be consistent - that is, they must convey information that does not conflict and must use the same naming standards. This requires repetition of the same information by many parties. Databases like DBpedia that are sourced by a multitude of public contributors are certain to contain inconsistent information, although the extent varies widely by domain. Further, these sources are less likely to have the requisite formal structure to expose them fully to the broader web.[21]

Completeness: One-way assertions make inferring relationships more difficult and error-prone, and make two-way browsing impossible.[5] Completeness also requires multiple copies of the same data, making storage inefficient.

Privacy: Exposing personal data on an open semantic web, possibly without one's knowledge or consent, may reveal more information than the originator wished to share. The advent of popular applications and platforms adopting Semantic Web technologies compounds this already growing concern.

Quality: High standards of data quality must be maintained for the information conveyed to be useful and robust in the face of ambiguity and spoofing. Semantic Web technologies do not by themselves make it easy for users to discern, explicitly or implicitly, the quality of the data being used.

Precision: Because the Semantic Web purposefully aims to capture the broadest aspects of human knowledge, it is difficult to establish meaning from human-oriented abstractions.[10] For instance, when presenting an opinion on the semantic web as unvarnished data ("X is a good person"; "Y is a funny movie"), one runs into the problem of interpreting the meaning of those statements - what makes someone good or something funny? Much of the semantic knowledge embedded on the web is not apolitical, and deriving meaning from it automatically would be difficult. A more limited use of the data is still possible if the query seeks to extract objective information - the percentage of reviewers who thought movie Y was funny, for example. For more universal uses, however, widespread adoption of precise and well-defined ontologies is required.

Domain-Transferability: The same concept in two domains can represent different information that a machine would not differentiate.[10] For example, the concept of "cost" could represent either a budgetary amount or a moral abstraction, as in "the cost of war."

Ambiguity: Much of the knowledge on the web loses information when transformed from its wider setting into a simple subject-predicate-object triple representation. Building context-aware representations of data requires transferring implicit domain knowledge in a way that is not obvious from standard constructions.

Trust: Related to quality, inferring relationships within and across domains requires trust in the sources of those data. Robust systems need to be developed to circumvent the inevitable noise and spoofing should a generally useful Semantic Web materialize. There is also a distinction to be made between "material" and "intellectual" trust; for example, information that is naturally quantifiable, like prices, is more tractable than assertions or distillations of facts made by organizations and individuals.[10] Addressing this issue requires intelligent agents working in coordination with yet-to-be-standardized mechanisms within the Semantic Web stack of technologies.[22]

User Cognition: Adoption of Semantic Web concepts and technical constructs has been slow to develop, but has quickened over the last few years. Nevertheless, an additional burden is placed on individuals, both technical and non-technical, who wish to contribute meaningfully to the semantic information embedded in their web content.[23] Tools can ease the transition; however, merely explicating these relationships can be a complex and subtle task that requires learning, in detail, the relevant domain-specific representations. Well-meaning individuals may unwittingly attach false or ambiguous claims to their data.

Altruism: Semantic Web adoption itself relies in large part on the altruistic spirit of individuals and organizations. Early adopters are unlikely to see any immediate gains from open semantic publishing. Given the scope of the project and the difficulties mentioned, failure to attain a "critical mass" of responsible contributors may make it impossible to realize a vision that approaches the expectations of the Artificial Intelligence field.

References

  1. The Semantic Web, Scientific American Magazine, 2001
  2. W3C Semantic Web Frequently Asked Questions. W3C (2010). Retrieved on 2010-07-11.
  3. Segaran, Toby; Colin Evans, Jamie Taylor (2009). Programming the Semantic Web. O'Reilly. 
  4. Knowledge Navigator. The Wikimedia Foundation. Retrieved on 2010-08-10.
  5. Berners-Lee, Tim (1998). What the Semantic Web can represent.
  6. Antoniou, Grigoris; Frank van Harmelen (2008). A Semantic Web Primer, 2nd ed. MIT Press. 
  7. Entrepreneurs See a Web Guided by Common Sense. New York Times (2006).
  8. The global structure of an HTML document. W3C.
  9. Microformats hCal example. Microformats.org (2010).
  10. Marshall, C. C.; Shipman, F. M. (2003). "Which Semantic Web?". Proceedings of the Fourteenth ACM Conference on Hypertext and Hypermedia, ACM.
  11. RDF Vocabulary Description Language 1.0: RDF Schema.
  12. RDF library for the Ruby programming language.
  13. JRDF - An RDF Library in Java.
  14. RDF library for PHP from PEAR (PHP Extension and Application Repository).
  15. Burgun, Anita & Olivier Bodenreider, Mapping the UMLS Semantic Network into General Ontologies
  16. The RDFa initiative in Drupal 7, and how it will impact the Semantic Web.
  17. Drupal RDF Mapping API. Drupal.org (2009).
  18. Does Facebook Really Want a Semantic Web?. ReadWriteWeb (2010).
  19. BBC World Cup 2010 dynamic semantic publishing (2010).
  20. Google introduces rich snippets (2009).
  21. wiki.dbpedia.org: Use Cases.
  22. Hartig, Olaf (2009). "Querying Trust in RDF Data with tSPARQL". 6th Annual European Semantic Web Conference, 5-20.
  23. Shipman, F. M.; Marshall, C. C. (1999). "Formality Considered Harmful: Experiences, Emerging Themes, and Directions on the Use of Formal Representations in Interactive Systems". Computer Supported Cooperative Work (CSCW) 8 (4): 333-352.