Parallel Database design, query processing, and case study

In this post, we are going to analyze the concepts of parallel database design, query processing, and a case study.

Parallel database design covers:
- Hardware architecture
- Data partitioning
- Query processing

Hardware architectures:
- Shared Memory
- Shared Disk
- Shared Nothing

Data partitioning: partitioning a relation involves distributing its tuples over several disks. There are three kinds:
- Range partitioning
- Round-robin partitioning
- Hash partitioning

Range partitioning:
- Ideal for point and range queries on the partitioning attribute

Hash partitioning:
- Ideal for point queries based on the partitioning attribute
- Ideal for sequential sca...
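The two partitioning schemes above can be sketched in a few lines of Python. This is a minimal illustration of the idea, not any real DBMS API; the function names and the use of the tuple's key as the partitioning attribute are assumptions for the example.

```python
# Sketch of range vs. hash partitioning of keys across disks/partitions.
# Names (range_partition, hash_partition) are illustrative, not a real API.

def range_partition(keys, boundaries):
    """Place each key in the first partition whose upper bound covers it;
    keys above the last boundary go to the final partition."""
    parts = [[] for _ in range(len(boundaries) + 1)]
    for k in keys:
        for i, b in enumerate(boundaries):
            if k <= b:
                parts[i].append(k)
                break
        else:
            parts[-1].append(k)
    return parts

def hash_partition(keys, n_disks):
    """Spread keys over n_disks by hashing the partitioning attribute."""
    parts = [[] for _ in range(n_disks)]
    for k in keys:
        parts[hash(k) % n_disks].append(k)
    return parts

keys = [3, 15, 42, 7, 99, 23]
# A range query like "10 <= k <= 50" touches only the middle partition:
print(range_partition(keys, boundaries=[10, 50]))  # -> [[3, 7], [15, 42, 23], [99]]
# A point query hashes straight to one disk, but a range query must scan all:
print(hash_partition(keys, n_disks=3))
```

This also shows why the fit differs: range partitioning keeps nearby keys together (good for range queries), while hashing scatters them evenly (good for point lookups and balanced sequential scans).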
PHP Topics | Latest Features | Lesser-known Facts | Important for Entrance Exams

In this post, I am going to add my other personal projects that I have worked on in various technologies.

Reflection in PHP 5:

$class = new \ReflectionClass('MyClass');
$method = new \ReflectionMethod('MyClass', 'method');
$args = array();
$method->invoke($method->getDeclaringClass()->newInstanceArgs($args));

Traits in PHP: A Trait is similar to a class, but is only intended to group functionality in a fine-grained and consistent way. It is not possible to instantiate a Trait on its own. It is an addition to traditional inheritance and enables horizontal composition of behavior; that is, the reuse of class members without requiring inheritance. Traits avoid the problems of multiple inheritance.

Is a parent constructor called implicitly in PHP? When a constructor is defined for a class in PHP, the parent construct...
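The same reflection idea as in the PHP snippet above — resolving a class and a method by name at runtime, instantiating the class, and invoking the method — can be sketched in Python for comparison. `MyClass` and `method` are the same hypothetical names the PHP example uses, not part of any library.

```python
# Reflection sketch (Python analogue of the PHP ReflectionClass/ReflectionMethod
# snippet above): look everything up by string name at runtime.

class MyClass:
    def method(self):
        return "invoked via reflection"

cls = globals()["MyClass"]        # resolve the class by its name
obj = cls()                       # like newInstanceArgs() with no args
bound = getattr(obj, "method")    # like ReflectionMethod + invoke target
print(bound())                    # -> invoked via reflection
```

The point in both languages is the same: nothing about the class or method is hard-coded at the call site, so the names could just as well come from configuration or user input.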
Starting with Apache Hadoop

In Hadoop, a single master manages many slaves. The master node consists of a JobTracker, TaskTracker, NameNode, and DataNode. A slave or worker node acts as both a DataNode and a TaskTracker, though it is possible to have data-only worker nodes and compute-only worker nodes. The NameNode holds the file system metadata; the files are broken up and spread over the DataNodes. The JobTracker schedules and manages jobs, and the TaskTracker executes the individual map and reduce tasks. If a machine fails, Hadoop continues to operate the cluster by shifting work to the remaining machines. The input file, which resides on a distributed file system throughout the cluster, is split into even-sized chunks that are replicated for fault tolerance. Hadoop divides each MapReduce job into a set of tasks. Each chunk of input is processed by a map task, which outputs a list of key-value pairs. In Hadoop, the shuffle phase o...
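The map → shuffle → reduce flow described above can be sketched in-process with a classic word count. This is a toy illustration of the dataflow only — the function names are made up for the example and are not the Hadoop API.

```python
# Toy MapReduce word count: map tasks emit (key, value) pairs, the shuffle
# phase groups values by key, and reduce tasks aggregate each group.
from collections import defaultdict

def map_task(chunk):
    # Each map task processes one input chunk and emits (word, 1) pairs.
    return [(word, 1) for word in chunk.split()]

def shuffle(pairs):
    # The shuffle phase collects all values belonging to the same key.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_task(key, values):
    # Each reduce task aggregates one key's values.
    return key, sum(values)

chunks = ["hadoop splits input", "hadoop replicates chunks"]   # input chunks
pairs = [p for chunk in chunks for p in map_task(chunk)]
result = dict(reduce_task(k, v) for k, v in shuffle(pairs).items())
print(result)  # -> {'hadoop': 2, 'splits': 1, 'input': 1, 'replicates': 1, 'chunks': 1}
```

In real Hadoop the map tasks run on the nodes holding the chunks, and the shuffle moves each key's pairs over the network to the reducer responsible for that key; the logical dataflow is the same.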