Normalization VS Denormalization [Repost]
Denormalization is the process of attempting to optimize the read performance of a database by adding redundant data or by grouping data. In some cases, denormalization helps cover up the inefficiencies inherent in relational database software. A normalized relational database imposes a heavy access load on the physical storage of data, even when it is well tuned for high performance.
A normalized design will often store different but related pieces of information in separate logical tables (called relations). If these relations are stored physically as separate disk files, completing a database query that draws information from several relations (a join operation) can be slow. If many relations are joined, it may be prohibitively slow. There are two strategies for dealing with this. The preferred method is to keep the logical design normalized, but allow the database management system (DBMS) to store additional redundant information on disk to optimize query response. In this case it is the DBMS software's responsibility to ensure that any redundant copies are kept consistent. This method is often implemented in SQL as indexed views (Microsoft SQL Server) or materialized views (Oracle). A view represents information in a format convenient for querying, and the index ensures that queries against the view are optimized.
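The same idea appears outside the relational world as precomputed collections. Here is a minimal sketch in the MongoDB shell (the database used in the example later in this post), assuming a hypothetical orders collection with userId and amount fields and an order_totals output collection; MongoDB calls this an on-demand materialized view, maintained with the $merge aggregation stage (MongoDB 4.2+):

    // Recompute per-user order totals into a precomputed collection.
    // Collection and field names here are assumptions for the sketch.
    db.orders.aggregate([
        { $group : { _id : "$userId", total : { $sum : "$amount" } } },
        { $merge : { into : "order_totals", on : "_id",
                     whenMatched : "replace", whenNotMatched : "insert" } }
    ]);

    // Reads then hit the precomputed view instead of re-aggregating.
    db.order_totals.findOne({ _id : someUserId });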
The more usual approach is to denormalize the logical data design. With care this can achieve a similar improvement in query response, but at a cost: it is now the database designer's responsibility to ensure that the denormalized database does not become inconsistent. This is done by creating rules in the database, called constraints, that specify how the redundant copies of information must be kept synchronized. It is the increase in logical complexity of the database design and the added complexity of the additional constraints that make this approach hazardous. Moreover, the denormalized design introduces a trade-off, speeding up reads (SELECT in SQL) while the constraints slow down writes (INSERT, UPDATE, and DELETE). This means a denormalized database under heavy write load may actually offer worse performance than its functionally equivalent normalized counterpart.
A denormalized data model is not the same as a data model that has not been normalized; denormalization should take place only after a satisfactory level of normalization has been achieved and after any required constraints and/or rules have been created to deal with the inherent anomalies in the design. For example, all the relations should be in third normal form, and any relations with join or multi-valued dependencies should be handled appropriately.
Examples of denormalization techniques include:

- storing the count of the "many" objects in a one-to-many relationship as an attribute of the "one" relation (illustrated in the sketch after this list)
- adding attributes to a relation from another relation with which it will be joined
- star schemas, also known as fact-dimension models, which have been extended to snowflake schemas
- prebuilt summarization or OLAP cubes
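As a small illustration of the first technique in the MongoDB shell, using hypothetical posts and comments collections and a comment_count field (none of these appear in the original post): the redundant counter makes the common read cheap, at the cost of one extra write per comment.

    // Write path: insert the comment, then bump the redundant counter.
    db.comments.insertOne({ post_id : postId, text : "Nice article!" });
    db.posts.updateOne({ _id : postId }, { $inc : { comment_count : 1 } });

    // Read path: the count is available without scanning `comments`.
    db.posts.findOne({ _id : postId }, { comment_count : 1 });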
Denormalization techniques are often used to improve the scalability of Web applications.
Example: a shopping cart order
Suppose that we are designing a schema for a shopping cart application. Our application
stores orders in MongoDB, but what information should an order contain?
Normalized schema
A product:

    {
        "_id" : productId,
        "name" : name,
        "price" : price,
        "description" : description
    }

An order:

    {
        "_id" : orderId,
        "user" : userInfo,
        "items" : [ productId1, productId2, productId3 ]
    }
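For contrast, a denormalized version of the same order would embed the product information directly, so a single read returns everything needed to display the order. This is a sketch with assumed field names, not the original post's exact schema; the embedded copies are precisely the redundant data that must be kept synchronized:

    // Denormalized order: product details are copied into each item.
    {
        "_id" : orderId,
        "user" : userInfo,
        "items" : [
            { "_id" : productId1, "name" : name1, "price" : price1 },
            { "_id" : productId2, "name" : name2, "price" : price2 }
        ]
    }

The write-side cost discussed above now falls on the application: if, say, a product is renamed, every order embedding it must be touched. In the MongoDB shell that update might look like this (again a sketch, using the hypothetical field names above):

    // Propagate a product rename into all orders that embed it
    // (arrayFilters requires MongoDB 3.6+).
    db.orders.updateMany(
        { "items._id" : productId },
        { $set : { "items.$[e].name" : newName } },
        { arrayFilters : [ { "e._id" : productId } ] }
    );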