Semantic Web
Overview
The Semantic Web is a concept, first named by Tim Berners-Lee, for a "web of knowledge" in which data on the world wide web, whether in structured data stores or loosely-structured documents, would be annotated and classified so that machines can infer relationships based on the semantic information - that is, what the content means - rather than simply on the matching of text strings.[1] There is also a W3C standards effort[2] related to this concept.
In order to associate meaning with content, the Semantic Web uses structures for identifying, categorizing and linking data. While a web page about soccer might specify how pictures and text should be arranged, what colors and fonts to use, and other presentation details, a corresponding Semantic Web document would convey the fact that the data pertain to the sport of soccer, perhaps listing teams, scores of recent matches, and other data in categorization containers. This structure allows other consumers of the data (mainly programs) to parse and use it in meaningful ways. Whereas modern web crawlers must catalogue, index, and apply a certain amount of artificial intelligence to derive the meaning of documents on the web, the Semantic Web allows data to be parsed for meaning directly, ultimately making it easier to share and discover information.
One interesting challenge facing the Semantic Web is the need not only to transmit data, but also to associate metadata with it. Metadata is descriptive information that conveys relationships between data types. In order to provide a flexible framework capable of transmitting many different types of data, along with the meaning of and relationships among those data, the Semantic Web integrates metadata into the data format itself. This allows dynamic and unpredictable data formats and types to be transmitted and consumed, since consumers can use the embedded metadata to parse and understand the data and their inter-relationships.[3]
What differentiates the Semantic Web from existing data exchange formats is the use of URIs to uniquely identify things and the relationships between them. The problems that Semantic Web technologies try to solve are those involving multiple disparate sources of data - for instance, hooking together train timetables and class timetables so that a student can automatically plan a travel itinerary without having to match the data together manually.
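As a minimal sketch of that scenario (all names, vocabularies and URIs here are hypothetical), two independently published data sets can be joined because they identify the same station with the same URI:

@prefix xsd:  <http://www.w3.org/2001/XMLSchema#> .
@prefix rail: <http://example.org/rail/vocab#> .
@prefix uni:  <http://example.org/university/vocab#> .

# from the rail operator's data set
<http://example.org/rail/train/morning-express>
    rail:callsAt  <http://example.org/places/CampusStation> ;
    rail:arrives  "08:42:00"^^xsd:time .

# from the university's data set
<http://example.org/university/class/algorithms-101>
    uni:heldNear  <http://example.org/places/CampusStation> ;
    uni:starts    "09:00:00"^^xsd:time .

Because both sources refer to the station by the same URI, an agent can join the two graphs and conclude that the morning express arrives in time for the class.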
The W3C has put forward a variety of standards built on top of the Resource Description Framework (RDF), a formal semantic model for representing things and the relationships between them.
Competing Visions
The "Semantic Web" concept has evolved under competing understandings and visions. Historically, efforts in Artificial Intelligence, notably Cyc and the Knowledge Interchange Format (KIF), sought to provide a technological backbone for a grand vision of a universal knowledge store enabling intelligent agents to apply human reasoning. Apple's Knowledge Navigator represented a vision of networked hypertext with intelligent agent mediators.[4] Very similarly, the Semantic Web was conceived with these goals in mind, but by embedding and extending existing technologies in the WWW stack while giving a formal and standardized structure to the relationships and means of data exchange of arbitrary data exposed on the web.[5]
These quite different sources of inspiration have complicated a common understanding of "Semantic Web." On the one hand, the Semantic Web aims to create a machine-readable web through the coordinated linking of data and knowledge on a massive scale, such that intelligent agents could be devised to provide precise answers to, and analysis of, queries of arbitrary depth and nuance. On the other hand, it also seeks to improve human interaction and traditional indexing and search by giving the web a more formal structure. It aims to do so by incrementally establishing connections among individual pieces of data, both those embedded in documents and those realized as "micro-transactions" of web activity conventionally stored in relational databases.
Under the latter perspective, the Semantic Web was developed to address a specific deficiency in web-based communication, and is often referred to as Web 3.0.[6] Although well defined in RFCs, HTML is designed for exchanging information that is delimited and optimized for presentation; that is, HTML communicates the appearance of documents within web browsers. This is useful when attempting to create a document that will render in the same form across multiple platforms (or web browsers), but is problematic for transmitting the meaning of data. A few HTML features (notably META tags and other document head elements[7]) convey meaning, but they are limited.
In this way the Semantic Web is closely tied to microformats, which are an alternative way to embed meaning into HTML documents. Microformats use standard HTML tags, along with generally agreed-upon conventions for attributes, to delineate certain data within documents. For instance, microformats can be used to embed contact or calendar data in web pages for easy integration with other programs. Users of popular calendaring or contact-management software can then simply click on elements within web pages and import calendar events, or contacts, directly into their software.[8]
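As a rough sketch (the event details are hypothetical, but the class names follow the published hCalendar convention), a calendar event might be marked up like this:

<!-- an hCalendar event embedded in ordinary HTML -->
<div class="vevent">
  <span class="summary">Semantic Web study group</span> meets on
  <abbr class="dtstart" title="2010-08-12T18:00">12 August, 6pm</abbr>
  at <span class="location">the university library</span>.
</div>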
There is a final perspective that focuses less on the machine-readable component of the Semantic Web (linking data in terms of relationships) than on the universal metadata cataloging and tagging of existing documents and data. This perspective has received less attention, especially as advanced indexing and search tools - both on the web as a whole, with Google, and within individual curated collections - have largely addressed these needs.
Recent efforts have focused on enabling the mechanical inference of relationships between "particular" data in tightly-coupled yet loosely-connected ontological domains.[9] Their application has been largely domain-oriented and individually designed with an understanding of the problems they will be used to solve. By adopting domain-invariant standards, however, such efforts perhaps carry a tacit understanding that they contribute to a more universal "web of knowledge".
Linked Data
"Linked Data" is a term coined by Tim Berners-Lee to describe the way in which a "Giant Global Graph" of semantic data expressed as triplestore can come into fruition. Its meaning is designed to connote the dual concept of not only exposing data stores in a standard format (RDF), but also establishing individual links and semantic vocabularies among individual pieces of data. The term narrows the focus of Semantic Web from an abstraction to actual data linked between arbitrary things, which are identified by URIs and described by RDF.
Because data are identified, modeled, described and linked according to a formal standard, linked data permit browsing, searching and combining different sources and domains. Machine crawlers and indexers can be applied to the graph data in a tractable way, and applications can solve sophisticated problems by making use of the data and their relationships. Humans can interact with the data by browsing and by structured querying via SPARQL and related interfaces (e.g., facets).
Semantic Web Technologies
The stack of technologies comprising the Semantic Web infrastructure is largely standard and mature. HTTP URIs identify concepts and objects, RDF (the Resource Description Framework) provides the data model, OWL expresses ontological vocabularies, and SPARQL permits queries and other operations on the resultant graph data.
Triples
The triple is the data convention used by the Semantic Web and RDF to relate objects and meaning; a database holding a collection of triples is called a triplestore. A triple is a simple linguistic convention that makes it easy to classify data and make connections, taking the form "Subject" - "Predicate" - "Object". For example:
Garden     location     Backyard
Firstrow   location     Garden
Firstrow   plantedWith  Beets
Firstrow   plantedWith  Carrots
Using this standard convention it is easy to catalogue data and to trace relationships among them. For instance, using the above example one can figure out what is planted in the first row of the garden in the backyard by tracing the relationships:
?Garden    location     Backyard   -> finds the garden in question
?Firstrow  location     Garden     -> finds the row in the garden just retrieved
Firstrow   plantedWith  ?Veggie    -> gets the vegetables planted in the first row
This rather simple model makes it possible to define (and query) complex relationships without first having a fixed data model. It gives the Semantic Web the adaptability to handle evolving, dynamic data without constraining those data, and it means the model does not have to be redefined to deal with emerging data types.
Triples can be combined to create complex graphs of data. For exchange across the network, a graph can be serialized in several notations: RDF/XML, or the simpler plain-text N-Triples format, which writes out one complete statement per line. N-Triples contains considerable redundancy, however, so when moving data across the wire it is common to use the more compact Notation3 (N3) or Turtle syntax, which removes duplication through prefixes and other abbreviations.
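For illustration (the namespace is hypothetical), part of the garden data above might be written first in N-Triples and then in the equivalent, more compact Turtle/N3 form:

# N-Triples: one full statement per line
<http://example.org/garden#Firstrow> <http://example.org/garden#plantedWith> <http://example.org/garden#Beets> .
<http://example.org/garden#Firstrow> <http://example.org/garden#plantedWith> <http://example.org/garden#Carrots> .

# The same statements in Turtle/N3
@prefix ex: <http://example.org/garden#> .
ex:Firstrow ex:plantedWith ex:Beets, ex:Carrots .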
RDFa
Although RDF is compact, it is not easily human-readable, and it lives apart from the pages people actually browse. RDFa is a response to this disparity between XHTML presentation and RDF data: it allows RDF statements to be embedded in XHTML content. Using standard XHTML tags such as <span>, Semantic Web data can be mixed into XHTML presentation. For example:
<span xmlns:example="http://example.tld/example/0.a"
      about="http://foo.tld/bar.rd#ts"
      property="example:bar"
      content="some_data">Some XHTML for presentation</span>
OWL
OWL, the Web Ontology Language, is the W3C standard for defining ontologies - formal vocabularies of classes, properties, and the relationships and constraints among them - on top of RDF.
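As a minimal sketch (using Turtle syntax and a hypothetical namespace), an OWL vocabulary for the garden example above might declare:

@prefix owl:  <http://www.w3.org/2002/07/owl#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix ex:   <http://example.org/garden#> .

# classes: every beet is a vegetable
ex:Vegetable a owl:Class .
ex:Beet      a owl:Class ; rdfs:subClassOf ex:Vegetable .

# a property restricted to relate garden rows to vegetables
ex:plantedWith a owl:ObjectProperty ;
    rdfs:domain ex:GardenRow ;
    rdfs:range  ex:Vegetable .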
SPARQL
SPARQL is the W3C-standardized query language for RDF, allowing graph patterns of triples to be matched against one or more data sources.
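A minimal sketch, again using a hypothetical namespace for the garden example, shows how the informal query traced above can be written as a SPARQL query:

PREFIX ex: <http://example.org/garden#>

# what is planted in the first row of the garden in the backyard?
SELECT ?veggie
WHERE {
  ?garden   ex:location    ex:Backyard .
  ?firstrow ex:location    ?garden .
  ?firstrow ex:plantedWith ?veggie .
}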
Programming with Semantic Web
Because RDF is an open format, libraries exist for almost every programming language to make it easy for programmers to produce and consume RDF data. Some examples include the RDF.rb[10] library for Ruby, JRDF[11] for Java, a PEAR[12] RDF package for PHP and many more.
Domain-specific semantic models
Medicine
Semantic models appear to be the major trend in expert-system support for medicine. As an example of how semantic methodologies are used, consider several isolated concepts, which could be considered "nouns":
- beta-adrenergic antagonists (i.e., beta blockers)
- bradycardia (i.e., slow pulse)
- asthma
- hypertension
- benign hand tremor
One of the notations for relationships is the Unified Medical Language System® (UMLS®). Informally, some of the "verb" semantic relationships among the above could be:
- beta-adrenergic antagonists TREAT hypertension and benign hand tremor
- beta-adrenergic antagonists CAUSE bradycardia
- beta-adrenergic antagonists TRIGGER asthma
"Hypertension" would have a number of other TREATS relations, from drug classes such as thiazide diuretics, angiotensin-II converting enzyme antagonists, calcium channel blockers, angiotensin-II receptor blockers, etc.
UMLS is now being extended with formal ontologies.[13]
Semantic Web in CMS
Content management systems (CMS) can benefit greatly from RDF features. RDF is an expressive means by which a CMS can both publish and consume data, and because RDF makes data more easily machine-readable it is well suited to systems that integrate data, such as CMS.
Drupal
The Drupal content management system is making a big push to include RDF and the Semantic Web as part of the upcoming Drupal 7 release.[14] There is a Drupal group devoted to the Semantic Web, as well as a code sprint devoted to the topic. Drupal 7 will automatically include RDFa elements in page presentation. This will mean that new Drupal 7 sites include RDFa data without any additional overhead, coding, or administration from site administrators, allowing site users to leverage RDFa seamlessly. With a significant and growing share of the CMS market, Drupal's support of the Semantic Web will mean a substantial increase in the implementation of RDF.[15]
Wordpress
Wordpress has several third party plugins that implement RDF.[16]
MediaWiki
MediaWiki has the Semantic MediaWiki extension to integrate the Semantic Web in a wiki setting.
Other Notable Uses
The BBC made heavy use of semantic web technologies for their internet coverage of the 2010 World Cup games.[17]
Facebook recently announced support for the Open Graph Protocol, an RDFa-based application of Semantic Web ideas.
Google has announced support for "Rich Snippets", which appear as summary data in search results (for things like customer reviews, map locations, etc.), utilizing RDFa.[18]
DBpedia is a project designed to extract structured data from the popular Wikipedia site.
Issues and Criticism
For the Semantic Web to reach its potential, it must overcome a number of technical and social hurdles:
Consistency: Links between data must be consistent - that is, they must convey information that does not conflict, and they must use the same naming standards. This requires repetition of the same information by many parties. Databases like DBpedia that are sourced by a multitude of public contributors are certain to contain inconsistent information, although the extent varies widely by domain. Further, these sources are less likely to have the requisite formal structure to fully expose them to the broader web.[19]
Completeness: One-way assertions make inferring relationships more difficult and error-prone, and make two-way browsing impossible.[5] Completeness also requires multiple copies of the same data, making efficient use of the data more difficult.
Privacy: Exposing personal data on an open semantic web, possibly unwittingly, may reveal more information than the originator wished to share. The advent of popular applications and platforms adopting Semantic Web technologies compounds this already growing concern.
Quality: High standards of data quality must be maintained for the information conveyed to be useful and robust in the face of ambiguity and spoofing. Semantic Web technologies do not by themselves make it easy for users to discern the quality of the data being used, either explicitly or implicitly.
Precision: Because the Semantic Web purposefully aims to capture the broadest aspects of human knowledge, it is difficult to establish meaning from human-oriented abstractions.[9] For instance, in presenting an opinion on the semantic web as unvarnished data ("X is a good person"; "Y is a funny movie"), one runs into the problem of interpreting the meaning of those statements - what makes someone good or something funny? Much of the semantic knowledge embedded on the web is not apolitical, and deriving meaning from it would be difficult to automate. More limited use of the data is still possible, however, if the query seeks to extract objective information - the percentage of reviewers who thought movie Y was funny, for example.
Domain-Transferability: The same concept in two domains can represent different information that a machine would not differentiate.[9] For example, the concept of "cost" could represent either a budgetary amount or a moral abstraction, as in "the cost of war."
Ambiguity: Much knowledge on the web loses information when transformed from its wider setting into a simple subject-predicate-object triple representation. Building context-aware representations of data requires transferring implicit domain knowledge in a way that is not obvious from standard constructions.
Trust: Related to quality, inferring relationships within and across domains requires trust in the sources of those data. Robust systems need to be developed to circumvent the inevitable noise and spoofing should a generally useful Semantic Web materialize. There is also a distinction to be made between "material" and "intellectual" trust; for example, information that is naturally quantifiable, like prices, is more tractable than assertions or distillations of facts made by organizations and individuals.[9] Addressing this issue requires intelligent agents working in coordination with yet-to-be-standardized mechanisms within the Semantic Web stack of technologies.
User Cognition: Adoption of Semantic Web concepts and technical constructs has been slow to develop, but has quickened over the last few years. Nevertheless, an additional burden is placed on individuals, both technical and non-technical, who wish to contribute meaningful semantic information to their web content.[20] Tools can ease the transition; however, merely explicating these relationships can be a complex and subtle task and requires learning standard representations of the relevant domain. Well-meaning individuals may unwittingly make false or ambiguous claims about their data.
References
- ↑ Berners-Lee, Tim; James Hendler; Ora Lassila (2001). "The Semantic Web". Scientific American Magazine.
- ↑ W3C Semantic Web Frequently Asked Questions. W3C (2010). Retrieved on 2010-07-11.
- ↑ Segaran, Toby; Colin Evans; Jamie Taylor (2009). Programming the Semantic Web. O'Reilly.
- ↑ Knowledge Navigator. Wikipedia (The Wikimedia Foundation). Retrieved on 2010-08-10.
- ↑ Berners-Lee, Tim (1998). What the Semantic Web can represent.
- ↑ Entrepreneurs See a Web Guided by Common Sense. New York Times (2006).
- ↑ The global structure of an HTML document. W3C.
- ↑ Microformats hCal example. Microformats.org (2010).
- ↑ Marshall, Catherine C.; Frank M. Shipman (2003). "Which Semantic Web?". Proceedings of the Fourteenth ACM Conference on Hypertext and Hypermedia, ACM.
- ↑ RDF library for the Ruby programming language.
- ↑ JRDF - An RDF Library in Java.
- ↑ RDF library for PHP from PEAR (PHP Extension and Application Repository).
- ↑ Burgun, Anita & Olivier Bodenreider, Mapping the UMLS Semantic Network into General Ontologies
- ↑ The RDFa initiative in Drupal 7, and how it will impact the Semantic Web.
- ↑ Drupal RDF Mapping API. Drupal.org (2009).
- ↑ Does Facebook Really Want a Semantic Web?. ReadWriteWeb (2010).
- ↑ BBC World Cup 2010 dynamic semantic publishing (2010).
- ↑ Google introduces rich snippets (2009).
- ↑ wiki.dbpedia.org: Use Cases.
- ↑ (1999) "Formality Considered Harmful: Experiences, Emerging Themes, and Directions on the Use of Formal Representations in Interactive Systems". Computer Supported Cooperative Work (CSCW) 8 (4): 333-352.