Getting Started with Impala (e-book) by Russell, John
Russell, John (author)


166,54 DKK (incl. VAT 208,18 DKK)
Authors Russell, John (author)
Published 25 September 2014
Length 152 pages
Genres Database software
Language English
Format EPUB
Protection LCP
ISBN 9781491905722
Learn how to write, tune, and port SQL queries and other statements for a Big Data environment, using Impala, the massively parallel processing SQL query engine for Apache Hadoop. The best practices in this practical guide help you design database schemas that not only interoperate with other Hadoop components and are convenient for administrators to manage and monitor, but also accommodate future expansion in data size and evolution of software capabilities.

Written by John Russell, documentation lead for the Cloudera Impala project, this book gets you working with the most recent Impala releases quickly. Ideal for database developers and business analysts, the latest revision covers analytic functions, complex types, incremental statistics, subqueries, and submission to the Apache incubator. Getting Started with Impala includes advice from Cloudera's development team, as well as insights from its consulting engagements with customers.

- Learn how Impala integrates with a wide range of Hadoop components
- Attain high performance and scalability for huge data sets on production clusters
- Explore common developer tasks, such as porting code to Impala and optimizing performance
- Use tutorials for working with billion-row tables, date- and time-based values, and other techniques
- Learn how to transition from rigid schemas to a flexible model that evolves as needs change
- Take a deep dive into joins and the roles of statistics