In-database processing
In-database processing, sometimes referred to as in-database analytics, refers to the integration of data analytics into data warehousing functionality. In-database processing eliminates the overhead of moving large data sets from the enterprise data warehouse to a separate analytic software application, providing significant performance benefits.[1]
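The idea can be illustrated with a minimal sketch using SQLite's user-defined aggregate API: the statistic is computed inside the database engine as rows are scanned, so only the final result crosses the database boundary instead of the full data set. The table and column names here are hypothetical, and a production warehouse would use its own in-database analytics extensions rather than SQLite.

```python
import sqlite3

class Variance:
    """Streaming population variance via Welford's algorithm,
    executed inside the database engine row by row."""
    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0

    def step(self, value):
        # Called by the engine for each row; no rows are shipped
        # to the application.
        self.n += 1
        delta = value - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (value - self.mean)

    def finalize(self):
        return self.m2 / self.n if self.n else None

conn = sqlite3.connect(":memory:")
conn.create_aggregate("variance", 1, Variance)
conn.execute("CREATE TABLE transactions (amount REAL)")
conn.executemany("INSERT INTO transactions VALUES (?)",
                 [(10.0,), (20.0,), (30.0,)])

# Only the single aggregate value is returned to the application.
(var,) = conn.execute("SELECT variance(amount) FROM transactions").fetchone()
print(var)  # population variance of 10, 20, 30 -> 66.666...
```

The contrast with the "move the data out" approach is that a conventional analytic tool would first fetch every row of `transactions` over the network before computing anything, which is exactly the overhead in-database processing avoids.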
In-database processing accelerates data analysis, making it relevant for applications requiring high-throughput, real-time advanced analytics, including fraud detection, transaction processing, pricing and margin analysis, usage-based micro-segmentation, behavioral ad targeting and recommendation engines. In-database processing is offered and promoted as a feature by many of the major data warehousing vendors, including Teradata, Netezza, Greenplum and Aster Data Systems.[2]
In-database processing is one of several technologies focused on improving data warehousing performance, alongside parallel computing, shared-nothing architectures and massively parallel processing. Database-embedded calculations respond to growing demand for high-throughput, operational analytics for needs such as fraud detection, credit scoring, and risk management, and are an important step towards improving predictive analytics capabilities.[3]
References
- ^ "Adding Competitive Muscle with In-Database Analytics", *Database Trends & Applications*, May 10, 2010
- ^ "In-Database Analytics: A Passing Lane for Complex Analysis", *Intelligent Enterprise*, December 15, 2008
- ^ "Isn't In-database processing old news yet?", Tim Manns (Data Mining Blog), January 8, 2009