Tim Berners-Lee

Date: September 1998. Last modified: $Date: 1998/10/14 20:17:13 $

Status: An attempt to give a high-level plan of the architecture of the Semantic WWW. Editing status: Draft. Comments welcome

Up to Design Issues



A road map for the future, an architectural plan untested by anything except thought experiments.

This was written as part of a requested road map for future Web design, from a level of 20,000ft. It was spun off from an Architectural overview for an area which required more elaboration than that overview could afford.

Necessarily, from 20,000 feet, large things seem to get a small mention. It is architecture, then, in the sense of how things hopefully will fit together. So we should recognize that while it might be slowly changing, this is also a living document.

This document is a plan for achieving a set of connected applications for data on the Web in such a way as to form a consistent logical web of data (semantic web).

Introduction

The Web was designed as an information space, with the goal that it should be useful not only for human-human communication, but also that machines would be able to participate and help. One of the major obstacles to this has been the fact that most information on the Web is designed for human consumption, and even if it was derived from a database with well defined meanings (in at least some terms) for its columns, the structure of the data is not evident to a robot browsing the web. Leaving aside the artificial intelligence problem of training machines to behave like people, the Semantic Web approach instead develops languages for expressing information in a machine processable form.

This document gives a road map - a sequence for the incremental introduction of technology to take us, step by step, from the Web of today to a Web in which machine reasoning will be ubiquitous and devastatingly powerful.

It follows the note on the architecture of the Web, which defines existing design decisions and principles for what has been accomplished to date.

Machine-Understandable information: Semantic Web

The Semantic Web is a web of data, in some ways like a global database. The rationale for creating such an infrastructure is given elsewhere [Web future talks &c]; here I only outline the architecture as I see it.

The basic assertion model

When looking at a possible formulation of a universal Web of semantic assertions, the principle of minimalist design requires that it be based on a common model of great generality. Only when the common model is general can any prospective application be mapped onto the model. The general model is the Resource Description Framework.

See the RDF Model and Syntax Specification

Being general, this is very simple. Being simple there is nothing much you can do with the model itself without layering many things on top. The basic model contains just the concept of an assertion, and the concept of quotation - making assertions about assertions. This is introduced because (a) it will be needed later anyway and (b) most of the initial RDF applications are for data about data ("metadata") in which assertions about assertions are basic, even before logic. (Because for the target applications of RDF, assertions are part of a description of some resource, that resource is often an implicit parameter and the assertion is known as a property of a resource).
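The two concepts the basic model provides can be illustrated with a small sketch (this is an illustration in plain tuples, not the RDF syntax itself; the names `doc1`, `author` and `catalogue` are invented for the example):

```python
def assertion(subject, predicate, obj):
    """An assertion is just a (subject, predicate, object) triple."""
    return (subject, predicate, obj)

# A plain assertion: a property of a resource.
fact = assertion("doc1", "author", "Alice")

# Quotation: an assertion whose object is itself an assertion,
# as in metadata ("this catalogue states that...").
meta = assertion("catalogue", "states", fact)

print(fact)             # ('doc1', 'author', 'Alice')
print(meta[2] == fact)  # True: the quoted assertion is the object
```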

As far as mathematics goes, the language at this point has no negation or implication, and is therefore very limited. Given a set of facts, it is easy to say whether a proof exists or not for any given question, because neither the facts nor the questions can have enough power to make the problem intractable.

Applications at this level are very numerous. Most of the applications for the representation of metadata can be handled by RDF at this level. Examples include card index information (the Dublin Core), Privacy information (P3P), associations of style sheets with documents, intellectual property rights labeling and PICS labels. We are talking about the representation of data here, which is typically simple: not languages for expressing queries or inference rules.

RDF documents at this level do not have great power, and sometimes it is less than evident why one should bother to map an application in RDF. The answer is that we expect this data, while limited and simple within an application, to be combined, later, with data from other applications into a Web. Applications which run over the whole web must be able to use a common framework for combining information from all these applications. For example, access control logic may use a combination of privacy and group membership and data type information to actually allow or deny access. Queries may later allow powerful logical expressions referring to data from domains in which, individually, the data representation language is not very expressive. The purpose of this document is partly to show the plan by which this might happen.

The Schema layer

The basic model of the RDF allows us to do a lot on the blackboard, but does not give us many tools. It gives us a model of assertions and quotations on which we can map the data in any new format.

We next need a schema layer to declare the existence of a new property. We need at the same time to say a little more about it. We want to be able to constrain the way it is used. Typically we want to constrain the types of object it can apply to. These meta-assertions make it possible to do rudimentary checks on a document. Much as in SGML the "DTD" allows one to check whether elements have been used in appropriate positions, so in RDF a schema will allow us to check that, for example, a driver's license has the name of a person, and not a model of car, as its "name".

It is not clear to me exactly what primitives have to be introduced, and whether much useful language can be defined at this level without also defining the next level. There is currently an RDF Schema working group in this area. The schema language typically makes simple assertions about permitted combinations. If the SGML DTD is used as a model, the schema can be in a language of very limited power. The constraints expressed in the schema language are easily expanded into expressions in a more powerful logical layer (the next layer), but one chooses at this point, in order to limit the power, not to do that. For example: one can say in a schema that a property foo is unique. Expanded, that means that for any x, if y is the foo of x, and z is the foo of x, then y equals z. This uses logical expressions which are not available at this level, but that is OK so long as the schema language is, for the moment, going to be handled by specialized schema engines only, not by a general reasoning engine.
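A hypothetical sketch of such a specialized schema engine: the schema declares, for each property, the type of subject it applies to and whether it is unique, and the checker validates a set of assertions against it. The property and type names here are invented for the illustration.

```python
# Toy schema: "name" applies only to a Person, and is unique.
schema = {"name": {"applies_to": "Person", "unique": True}}

# Known types of some resources (in reality this too would be RDF).
types = {"fred": "Person", "car9": "CarModel"}

def check(assertions):
    """Validate (subject, property, object) triples against the schema."""
    errors, seen = [], {}
    for subj, prop, obj in assertions:
        rule = schema.get(prop)
        if rule is None:
            continue
        if types.get(subj) != rule["applies_to"]:
            errors.append(f"{prop} may not apply to {subj}")
        if rule["unique"]:
            # Expanded logically: if y is the name of x and z is the
            # name of x, then y equals z.
            prev = seen.setdefault((subj, prop), obj)
            if prev != obj:
                errors.append(f"{prop} of {subj} is not unique")
    return errors

print(check([("fred", "name", "Fred"),
             ("car9", "name", "Roadster"),   # wrong type of subject
             ("fred", "name", "Freddy")]))   # violates uniqueness
```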

When we do this sort of thing with a language - and I think it will be very common - we must be careful that the language is still well defined logically. Later on, we may want to make inferences which can only be made by understanding the semantics of the schema language in logical terms, and combining it with other logical information.

Conversion language

A requirement of the namespaces work for evolvability is that one must, with knowledge of common RDF at some level, be able to follow rules for converting a document in one RDF schema into another one (which presumably one has an innate understanding of how to process).

By the principle of least power, this language can in fact be made to have implication (inference rules) without having negation. (This might seem a fine point to make, when in fact one can easily write a rule which defines inference from a statement A of another statement B which actually happens to be false, even though the language has no way of actually stating "False". However, formally the language still does not have the power needed to write a paradox, which comforts some people. In the following, though, as the language gets more expressive, we rely not on an inherent inability to make paradoxical statements, but on applications specifically limiting the expressive power of particular documents. Schemas provide a convenient place to describe those restrictions.)

A simple example of the application of this layer is when two databases, constructed independently and then put on the web, are linked by semantic links which allow queries on one to be converted into queries on the other. Here, someone noticed that "where" in the friends table and "zip" in a places table mean the same thing. Someone else documented that "zip" in the places table means the same thing as "zip" in the employees table, and so on. Given this information, a search for any employee called Fred with zip 02139 can be widened from employees to include friends. All that is needed is some RDF "equivalent" property.
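A sketch of that database-linking example: a single equivalence declaration lets a search over one table be widened to another. The field names follow the example in the text; the records themselves are made up.

```python
# "where" in the friends table means the same as "zip" in employees.
equivalent = {("friends", "where"): ("employees", "zip")}

employees = [{"name": "Fred", "zip": "02139"}]
friends   = [{"name": "Fred", "where": "02139"},
             {"name": "Joe",  "where": "94043"}]
tables = {"employees": employees, "friends": friends}

def find(table, field, value):
    """Search one table, then any table linked by an equivalence."""
    hits = [(table, row) for row in tables[table] if row.get(field) == value]
    for (t2, f2), (t1, f1) in equivalent.items():
        if (t1, f1) == (table, field):
            hits += [(t2, row) for row in tables[t2] if row.get(f2) == value]
    return hits

# A search for zip 02139 is widened from employees to include friends.
print(find("employees", "zip", "02139"))
```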

The logical layer

The next layer, then, is the logical layer. We need ways of writing logic into documents to allow such things as, for example, rules for the deduction of one type of document from a document of another type; the checking of a document against a set of rules of self-consistency; and the resolution of a query by conversion from terms unknown into terms known. Given that we have quotation in the language already, the next layer is predicate logic (not, and, etc) and the layer after that quantification (for all x, y(x)).

The applications of RDF at this level are basically limited only by the imagination. A simple example of the application of this layer is when two databases, constructed independently and then put on the web, are linked by semantic links which allow queries on one to be converted into queries on another. Many things which may have seemed to have needed a new language become suddenly simply a question of writing down the right RDF. Once you have a language which has the great power of predicate calculus with quotation, then when defining a new language for a specific application, two things are required, described in the sections which follow.

See also, if unconvinced:

The metro map below shows a key loop in the semantic web. The Web part, on the left, shows how a URI is, using HTTP, turned into a representation of a document as a string of bits with some MIME type. It is then parsed into XML and then into RDF, to produce an RDF graph or, at the logic level, a logical formula. The Semantic part, on the right hand side, shows how the RDF graph contains a reference to the URI. It is the trust from the key, combined with the meaning of the statements contained in the document, which may cause a Semantic Web engine to dereference another URI.


Proof Validation - a language for proof

The RDF model does not say anything about the form of reasoning engine, and it is obviously an open question, as there is no definitively perfect algorithm for answering questions - or, basically, finding proofs. At this stage in the development of the Semantic Web, though, we do not tackle that problem. In most applications, construction of a proof is done according to some fairly constrained rules, and all that the other party has to do is validate a general proof. This is trivial.

For example, when someone is granted access to a web site, they can be given a document which explains to the web server why they should have access. The proof will be a chain [well, DAG] of assertions and reasoning rules with pointers to all the supporting material.
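The access-control example can be sketched as a proof checker: the proof is a chain of steps, each of which either cites a supporting fact or applies a rule to conclusions of earlier steps. Validation is a single linear pass; the fact, rule, and key names here are all invented.

```python
# Supporting material the server already trusts.
facts = {"member(alice, staff)"}

# A rule: from its premises, conclude the right of access.
rules = {"access": (["member(alice, staff)"], "may_access(alice, site)")}

def validate(proof):
    """Check each step of a proof; finding the proof was the hard part."""
    proved = set()
    for step in proof:
        if step in facts:
            proved.add(step)
        elif step in rules:
            premises, conclusion = rules[step]
            if not all(p in proved for p in premises):
                return False          # cites something not yet established
            proved.add(conclusion)
        else:
            return False              # unknown step
    return "may_access(alice, site)" in proved

print(validate(["member(alice, staff)", "access"]))  # True
print(validate(["access"]))                          # False: premise missing
```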

The same will be true of transactions involving privacy, and most of electronic commerce. The documents sent across the net will be written in a complete language. However, they will be constrained so that, for queries, the results will be computable, and in most cases they will be proofs. The HTTP "GET" will contain a proof that the client has a right to the response. The response will be a proof that the response is indeed what was asked for.

Evolution rules Language

RDF at the logical level already has the power to express inference rules. For example, you should be able to say such things as "If the zipcode of the organization of x is y then the work-zipcode of x is y". As noted above, just scattering the Web with such remarks will in the end be very interesting, but in the short term won't produce repeatable results unless we restrict the expressiveness of documents to solve particular application problems.

Two fundamental functions we require RDF engines to be able to do are

  1. for a version n implementation to be able to read enough RDF schema to be able to deduce how to read a version n+1 document;
  2. for a type A application developed quite independently of a type B application which has the same or similar function to be able to read and process enough schema information to be able to process data from the type B application.

(See evolvability article)

The RDF logic level is sufficient to be usable as a language for making inference rules. Note it does not address the heuristics of any particular reasoning engine, which is an open field made all the more open and fruitful by the Semantic Web. In other words, RDF will allow you to write rules but won't tell anyone at this stage in which order to apply them.

Where for example a Library of Congress schema talks of an "author", and a British Library schema talks of a "creator", a small bit of RDF would be able to say that for any person x and any resource y, if x is the (LoC) author of y, then x is the (BL) creator of y. This is the sort of rule which solves the evolvability problems. Where would a processor find it? In the case of a program which finds a version 2 document and wants to find the rules to convert it into a version 1 document, the version 2 schema would naturally contain or point to the rules. In the case of retrospective documentation of the relationship between two independently invented schemas, pointers to the rules could of course be added to either schema, but if that is not (socially) practical, then we have another example of the annotation problem. This can be solved by third party indexes which can be searched for connections between two schemata. In practice of course search engines provide this function very effectively - you would just have to ask a search engine for all references to one schema and check the results for rules which link the two.
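That author/creator rule can be sketched as one pass of forward chaining over a set of triples (the prefixes `loc:` and `bl:` and the sample data are invented for the illustration):

```python
# Rule: for any x and y, if (x, loc:author, y) then (x, bl:creator, y).
rules = [("loc:author", "bl:creator")]

def translate(triples):
    """Apply each property-translation rule once, keeping the originals."""
    derived = set(triples)
    for x, p, y in triples:
        for p_from, p_to in rules:
            if p == p_from:
                derived.add((x, p_to, y))
    return derived

data = {("alice", "loc:author", "book42")}
print(("alice", "bl:creator", "book42") in translate(data))  # True
```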

Query languages

The first of these is a query language. A query can be thought of as an assertion about the result to be returned. Fundamentally, RDF at the logical level is sufficient to represent this in any case. However, in practice a query engine has specific algorithms and indexes available with which to work, and can therefore answer specific sorts of query.

It may of course be useful in practice to develop a vocabulary which helps in either of two ways:

  1. It allows common powerful query types to be expressed succinctly with fewer pages of mathematics, or
  2. It allows certain constrained queries to be expressed, which are interesting because they have certain computability properties.

SQL is an example of a language which does both.

It is clearly important that the query language be defined in terms of RDF logic. For example, to query a server for the author of a resource, one would ask for an assertion of the form "x is the author of p1" for some x. To ask for a definitive list of all authors, one would ask for a set of authors such that any author was in the set and everyone in the set was an author. And so on.

In practice, the diversity of algorithms in search engines on the web, and of proof-finding algorithms in pre-web logical systems, suggests that in a semantic web there will be many forms of agent able to provide answers to different forms of query.

One useful step is the specification of specific query engines for, for example, searches to a finite level of depth in a specified subset of the Web (such as a web site). Of course there could be several alternatives for different occasions.

Another metastep is the specification of a query engine description language - basically a specification, in a general way, of the sort of query the engine can answer. This would open the door to agents chaining together searches and inference across many intermediate engines.

Digital Signature

Public key cryptography is a remarkable technology which completely changes what is possible. While one can add a digital signature block as decoration on an existing document, attempts to add the logic of trust as icing on the cake of a reasoning system have to date been restricted to systems limited in their generality. For reasoning to be able to take trust into account, the common logical model requires extension to include the keys with which assertions have been signed.

Like all logic, the basis of this may not seem appealing at first, until one has seen what can be built on top. This basis is the introduction of keys as first class objects (where the URI can be the literal value of a public key), and the introduction of general reasoning about assertions attributable to keys.

In an implementation, this means that the reasoning engine will have to be tied to the signature verification system. Documents will be parsed not just into trees of assertions, but into trees of assertions about who has signed what assertions. Proof validation will, for inference rules, check the logic, but for assertions that a document has been signed, check the signature.
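With keys as first-class objects, each parsed statement becomes an assertion about who signed what, and a reasoner can accept only statements attributable to keys it trusts. A minimal sketch, with invented key names, and with actual signature verification assumed to have happened in an earlier step:

```python
# Keys the reasoner has some grounds to trust.
trusted_keys = {"key:alice-public"}

# Parsed input: (signing key, assertion) pairs, signatures already verified.
signed = [
    ("key:alice-public", ("doc1", "rating", "safe")),
    ("key:mallory-pub",  ("doc2", "rating", "safe")),
]

def accepted(signed_assertions):
    """Keep only assertions attributable to a trusted key."""
    return [a for key, a in signed_assertions if key in trusted_keys]

print(accepted(signed))  # [('doc1', 'rating', 'safe')]
```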

The result will be a system which can express and reason about relationships across the whole range of public-key based security and trust systems.

Digital signature becomes interesting when RDF is developed to the level that a proof language exists. However, it can be developed in parallel with RDF for the most part.

In the W3C, input to the digital signature work comes from many directions, including experience with DSig1.0 signed "pics" labels, and various submissions for digitally signed documents.

Indexes of terms

Given a worldwide semantic web of assertions, the search engine technology currently (1998) applied to HTML pages will presumably translate directly into indexes not of words, but of RDF objects. This itself will allow much more efficient searching of the Web as though it were one giant database, rather than one giant book.

The Version A to Version B translation requirement has now been met, and so when two databases exist as for example large arrays of (probably virtual) RDF files, then even though the initial schemas may not have been the same, a retrospective documentation of their equivalence would allow a search engine to satisfy queries by searching across both databases.
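An index of RDF objects rather than words is, at its simplest, an inverted index from each term to the triples mentioning it, which lets one query reach across independently built databases. A sketch, with invented sample data:

```python
# Triples drawn from two databases using different schemas.
triples = [
    ("p1", "loc:author", "alice"),
    ("p2", "bl:creator", "alice"),
]

# Inverted index: term -> every triple mentioning it.
index = {}
for triple in triples:
    for term in triple:
        index.setdefault(term, []).append(triple)

# Every assertion mentioning "alice", from either schema:
print(index["alice"])
```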

Engines of the Future

While search engines which index HTML pages find many answers to searches and cover a huge part of the Web, they return many inappropriate answers. There is no notion of "correctness" to such searches. By contrast, logical engines have typically been able to restrict their output to provably correct answers, but have suffered from the inability to rummage through the mass of intertwined data to construct valid answers. The combinatorial explosion of possibilities to be traced has been quite intractable.

However, the scale upon which search engines have been successful may force us to reexamine our assumptions here. If an engine of the future combines a reasoning engine with a search engine, it may be able to get the best of both worlds, and actually be able to construct proofs in a certain number of cases of very real impact. It will be able to reach out to indexes which contain very complete lists of all occurrences of a given term, and then use logic to weed out all but those which can be of use in solving the given problem.
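The combination might look like this in miniature: an index step fetches every occurrence of a term, and a logic step weeds out all but those the reasoner can use. The index contents and predicate names are invented for the sketch.

```python
# Index step: every triple mentioning a term (as a search engine would).
index = {"alice": [("p1", "loc:author", "alice"),
                   ("x9", "mentions", "alice")]}

# Logic step: predicates the reasoner's rules actually know how to use.
usable_predicates = {"loc:author", "bl:creator"}

def candidates(term):
    """Broad retrieval from the index, then logical weeding."""
    hits = index.get(term, [])
    return [t for t in hits if t[1] in usable_predicates]

print(candidates("alice"))  # [('p1', 'loc:author', 'alice')]
```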

So while nothing will make the combinatorial explosion go away, many real life problems can be solved using just a few (say two) steps of inference out on the wild web, the rest of the reasoning being in a realm in which proofs are given, or in which there are constraints and well understood computable algorithms. I also expect a strong commercial incentive to develop engines and algorithms which will efficiently tackle specific types of problem. This may involve making caches of intermediate results, analogous to the search engines' indexes of today.

Though there will still not be a machine which can guarantee to answer arbitrary questions, the power to answer real questions which are the stuff of our daily lives and especially of commerce may be quite remarkable.


In this series:

References

The CYC Representation Language

Knowledge Interchange Format (KIF)

@@

Acknowledgements

This plan is based in discussions with the W3C team, and various W3C member companies. Thanks also to David Karger and Daniel Jackson of MIT/LCS.

Up to Design Issues
