PUBLIC MARKS with tag cluster

April 2008

Google : un mystère fascinant et bien gardé

by holyver
The infrastructure used by search giant Google is a mystery that many would like to unravel, whether competitors or users astonished by the flawless responsiveness of its services despite a record number of users.

Google Architecture | High Scalability

by holyver & 1 other
Google is the King of scalability. Everyone knows Google for their large, sophisticated, and fast searching, but they don't just shine in search. Their platform approach to building scalable applications allows them to roll out internet-scale applications at an alarmingly high, competition-crushing rate. Their goal is always to build a higher-performing, higher-scaling infrastructure to support their products. How do they do that?

Vdoop - Manage your virtual cluster - Vdoop

by camel
Every day, search companies like Google download terabytes of data from the Internet, store it on clusters of thousands of machines, and process it so that it can be easily searched. To make this possible, these companies need sophisticated distributed file systems and parallel programming architectures. Have you ever heard of the Map/Reduce distributed parallel programming paradigm? If you are a computer scientist, you should have, because every time you submit a Google search, you are using Map/Reduce. Despite growing demand from companies like Google, Yahoo, and Microsoft, few computer science majors have even heard of Map/Reduce, let alone graduate well versed in its use. Unfortunately, several barriers stand in the way of integrating Map/Reduce into computer science curricula: obtaining a large cluster, configuring it, and installing complicated distributed file system and parallel programming software is difficult, time-consuming, and expensive. In the past, Google's solution to this problem has been to ship entire clusters pre-configured with Map/Reduce software to select universities. In essence, Vdoop does the same thing, with exactly the same software, except that our clusters are virtual, and hence free.
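
The Map/Reduce data flow the blurb alludes to is easy to demonstrate at toy scale. Below is a minimal, single-process Python sketch of the paradigm (word counting); real frameworks such as Hadoop, which tools like Vdoop build on, run the same three phases (map, shuffle, reduce) distributed across a whole cluster:

    from collections import defaultdict

    def map_phase(document):
        """Map: emit (key, value) pairs -- here (word, 1) for every word."""
        for word in document.split():
            yield word.lower(), 1

    def reduce_phase(key, values):
        """Reduce: combine all values emitted for one key."""
        return key, sum(values)

    def mapreduce(documents):
        groups = defaultdict(list)          # shuffle: group pairs by key
        for doc in documents:
            for key, value in map_phase(doc):
                groups[key].append(value)
        return dict(reduce_phase(k, vs) for k, vs in groups.items())

    docs = ["the quick brown fox", "the lazy dog", "the fox"]
    print(mapreduce(docs))                  # {'the': 3, 'quick': 1, ...}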

scalr - Google Code

by camel & 3 others
Scalr is a fully redundant, self-healing and self-scaling hosting environment built on Amazon's EC2. It allows you to create server farms through a web-based interface using prebuilt AMIs for load balancers (pound or nginx), app servers (Apache, others), databases (MySQL master-slave, others), and a generic AMI to build on top of. The health of the farm is continuously monitored and maintained: when the load average on a type of node goes above a configurable threshold, a new node is inserted into the farm to spread the load and the cluster is reconfigured; when a node crashes, a new machine of that type is inserted into the farm to replace it. Scalr also lets you customize each image further, bundle it, and use it for future nodes inserted into the farm: you can make changes to one machine and use that image for a specific type of node, and new machines of that type will be brought online to meet current levels while the old machines are terminated one by one. The project is still very young, but we're hoping that by open-sourcing it, the AWS development community can turn it into a robust hosting platform and give users an alternative to the current fee-based services available.
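
The two monitoring rules described above (scale out on load, replace on crash) fit in a few lines. The following Python sketch is a toy restatement of that loop, not Scalr's code; load_average, is_alive and launch are hypothetical stand-ins for the monitoring probes and EC2 instance-launch calls a real farm manager would make:

    import random

    LOAD_THRESHOLD = 1.0  # configurable per node type, as in Scalr

    def load_average(node):
        """Hypothetical probe; a real monitor queries the instance itself."""
        return random.uniform(0.0, 2.0)

    def is_alive(node):
        """Hypothetical health check standing in for the farm monitoring."""
        return random.random() > 0.1

    def launch(role):
        """Hypothetical stand-in for launching an instance from the role's AMI."""
        return f"{role}-{random.randrange(1000000)}"

    def monitor_pass(farm):
        """One pass of the monitoring loop over every node type in the farm."""
        for role, nodes in farm.items():
            # Replace crashed nodes one for one.
            for node in [n for n in nodes if not is_alive(n)]:
                nodes.remove(node)
                nodes.append(launch(role))
            # Scale out when the role's mean load crosses the threshold.
            if sum(load_average(n) for n in nodes) / len(nodes) > LOAD_THRESHOLD:
                nodes.append(launch(role))

    farm = {"app": ["app-1", "app-2"], "mysql": ["db-1"]}
    monitor_pass(farm)
    print(farm)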

SourceForge.net: Cluster SSH - Cluster Admin Via SSH

by holyver
ClusterSSH controls a number of xterm windows via a single graphical console window to allow commands to be interactively run on multiple servers over an ssh connection.

Loadbalanced High-Availability Apache Cluster Using Ultramonkey

by camel
This tutorial shows how to set up a two-node Apache web server cluster that provides high availability. In front of the Apache cluster we create a load balancer that splits incoming requests between the two Apache nodes. Because we do not want the load balancer to become another single point of failure, we must provide high availability for the load balancer too. Our load balancer therefore consists of two load-balancer nodes that monitor each other using heartbeat; if one load balancer fails, the other takes over silently.
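
The scheduling half of this setup is simple to picture: hand each incoming request to the next Apache node in turn, skipping nodes that fail a health check. A toy Python sketch of that logic follows; the backend addresses are hypothetical, and the actual setup does this at a lower level (LVS plus heartbeat) rather than in application code:

    import itertools
    import socket

    BACKENDS = [("10.0.0.101", 80), ("10.0.0.102", 80)]  # the two Apache nodes
    _ring = itertools.cycle(BACKENDS)

    def is_up(addr, timeout=1.0):
        """Health check: can we open a TCP connection to the backend?"""
        try:
            with socket.create_connection(addr, timeout=timeout):
                return True
        except OSError:
            return False

    def pick_backend():
        """Round-robin over the backends, skipping any that are down."""
        for _ in range(len(BACKENDS)):
            addr = next(_ring)
            if is_up(addr):
                return addr
        raise RuntimeError("no healthy backend")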

March 2008

Xen-AoE — xenaoe.org

by camel
Xen-AoE is a cluster server architecture that makes more efficient use of CPU, disk, and memory resources than traditional servers while providing zero single points of failure, ensuring high availability and greater server maintainability. It does this through server virtualization, SAN technologies, decoupling of disk from CPU, and commodity hardware such as x86 CPUs and readily available Gigabit Ethernet infrastructure. Xen-AoE can save your business money by better utilizing hardware resources, simplifying management, and decreasing losses due to downtime.

Le travail à la chaîne, c'est has been ! - Linux attitude

by camel
Here are a few tools for working efficiently on a group of machines.

February 2008

Heartbeat2 Xen cluster with drbd8 and OCFS2

by camel
The idea behind the whole setup is a high-availability two-node cluster with redundant data. Two identical servers are installed with the Xen hypervisor and nearly identical configurations as cluster nodes. The configuration and image files of the Xen virtual machines are stored on a DRBD device for redundancy. DRBD8 and OCFS2 allow simultaneous mounting on both nodes, which is required for live migration of Xen virtual machines. This article describes a Heartbeat2 Xen cluster using Ubuntu (7.10), DRBD8 and the OCFS2 (v1.39) file system. Although Ubuntu is used here, it can be done in almost the same way with Debian.

puppet - Trac

by camel & 1 other
Put simply, Puppet is a system for automating system administration tasks. To learn more, read our big picture overview of Puppet, or take a deeper look at what Puppet can do with the Puppet Introduction. There's also a Puppet Brochure which gives the highlights of Puppet's functionality.
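
Puppet manifests are written in Puppet's own declarative language, so the sketch below is only an analogy: a toy Python rendering of the model Puppet applies to every resource (declare the desired end state, converge idempotently, report whether anything changed). The path and content are arbitrary examples:

    import os

    def ensure_file(path, content, mode=0o644):
        """Converge one 'file' resource; do nothing if the state already holds."""
        changed = False
        try:
            with open(path) as f:
                current = f.read()
        except FileNotFoundError:
            current = None
        if current != content:
            with open(path, "w") as f:
                f.write(content)
            changed = True
        if os.stat(path).st_mode & 0o777 != mode:
            os.chmod(path, mode)
            changed = True
        return changed

    print(ensure_file("/tmp/motd", "managed centrally\n"))  # True: converged now
    print(ensure_file("/tmp/motd", "managed centrally\n"))  # False: nothing to do

Applying the same declaration twice changes the system at most once; that idempotence is what makes it safe to apply a configuration to a whole cluster repeatedly.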

ONLamp.com -- Using Xen for High Availability Clusters

by camel
The idea of using virtual machines to build highly available clusters is not new. Some software companies claim that virtualization is the answer to your HA problems; of course, that's not true. Yes, you can reduce downtime by migrating virtual machines to another physical machine for maintenance or when you think hardware is about to fail, but if an application crashes you still need to make sure another application instance takes over the service. And by the time your hardware fails, it's usually already too late to initiate the migration.

Xen Virtualization and Linux Clustering, Part 1

by camel
In this article, I briefly introduce the concepts of Xen virtualization and Linux clustering. From there, I show you how to set up multiple operating systems on a single computer using Xen and how to configure them for use with clustering. I should point out that a cluster implemented in this manner does not provide the computational power of multiple physical computers. It does, however, offer a way to prototype a cluster as well as provide a cost-effective development environment for cluster-based software. Even if you're not interested in clustering, this article gives you hands-on experience using Xen virtualization.

Cluster haute-disponibilité avec équilibrage de charge » UNIX Garden

by camel
Through a concrete example, we show you how to avoid the pitfalls of building an inexpensive high-availability cluster with load balancing, using only two machines!

January 2008

Heartbeat2 Xen cluster with drbd8 and OCFS2 -- Ubuntu Geek

by camel
This article describes a Heartbeat2 Xen cluster using Ubuntu (7.10), DRBD8 and the OCFS2 (v1.39) file system. Although Ubuntu is used here, it can be done in almost the same way with Debian. The idea behind the whole setup is a high-availability two-node cluster with redundant data. Two identical servers are installed with the Xen hypervisor and nearly identical configurations as cluster nodes. The configuration and image files of the Xen virtual machines are stored on a DRBD device for redundancy. DRBD8 and OCFS2 allow simultaneous mounting on both nodes, which is required for live migration of Xen virtual machines.

ClusterMonkey - Building A Virtual Cluster with Xen (Part One)

by camel
This guide is the first of a series giving detailed step-by-step instructions on how to build a virtual cluster with Xen. The cluster thus built might not be appropriate for your case, and reflects the author's preferences and/or needs, but if you are new to clusters or Xen, it will hopefully help you get started with both. The goal is to start simple and add more complexity as we progress, so in this first guide I show you how to do the basics:
* a Xen installation and the creation of 5 virtual machines (one to act as the master and four slaves),
* shared storage through NFS,
* the network configuration on which to build the virtual cluster.
The network structure of this first attempt is very simple: the master has two network cards, one to the outside world and the other connected through a switch to the slaves.
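
Since the five guests in this guide differ only in name, address and disk image, their domU configuration files can be generated rather than written by hand. A sketch under assumed values: the kernel path, image directory, bridge name and IPs below are placeholders to adapt, and real configs would live under /etc/xen/:

    import textwrap

    TEMPLATE = textwrap.dedent("""\
        name   = "{name}"
        memory = 256
        kernel = "/boot/vmlinuz-2.6-xen"
        disk   = ['file:/srv/xen/{name}.img,xvda,w']
        vif    = ['bridge=xenbr0, ip={ip}']
    """)

    nodes = [("master", "10.0.0.10")] + [
        (f"slave{i}", f"10.0.0.{10 + i}") for i in range(1, 5)
    ]
    for name, ip in nodes:
        # Written to the current directory; move to /etc/xen/ on the host.
        with open(f"{name}.cfg", "w") as f:
            f.write(TEMPLATE.format(name=name, ip=ip))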

December 2007

HA Xen Cluster with DRBD, LVM and heartbeat

by camel
We have implemented a 2-node HA Xen cluster, consisting of two physical machines (hosts), each running several virtual servers (guests) for our company's internal services (mail, web applications, development, etc.). When one host goes down unexpectedly, the other host physically kills it (STONITH: power down or reset) and then takes over all the guests the failed host was running. When we want to shut down a host machine for maintenance (to replace a fan, add disk or memory, etc.), we just type the usual shutdown command, and the guests are automatically live-migrated to the other host. Since the guest servers keep running throughout the migration, apart from a pause of less than a second, users never even notice the event.
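
The maintenance path described (drain a host by live-migrating its guests to the peer) can be sketched as a small wrapper around Xen's xm tool. This assumes xm is on the PATH and the peer host accepts migrations; the peer hostname is a placeholder, and in the setup above this logic is driven by heartbeat rather than run by hand:

    import subprocess

    PEER = "xenhost2"  # placeholder name of the surviving host

    def running_guests():
        """Parse `xm list`: first column is the name; skip header and Domain-0."""
        out = subprocess.check_output(["xm", "list"], text=True)
        names = [line.split()[0] for line in out.splitlines()[1:]]
        return [n for n in names if n != "Domain-0"]

    def evacuate():
        for guest in running_guests():
            # --live keeps the guest running during the move (sub-second pause).
            subprocess.check_call(["xm", "migrate", "--live", guest, PEER])

    evacuate()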

ganeti - Google Code

by camel & 1 other
Ganeti is a virtual server management tool built on top of the Xen virtual machine monitor and other open source software. Ganeti requires pre-installed virtualization software on your servers in order to function. Once installed, the tool takes over the management of the virtual instances (Xen DomU): disk creation and management, operating system installation for these instances (in cooperation with OS-specific install scripts), and startup, shutdown and failover between physical systems. It has been designed to facilitate cluster management of virtual servers and to provide fast and simple recovery after physical failures using commodity hardware.

Xen Cluster Management With Ganeti On Debian Etch | HowtoForge - Linux Howtos and Tutorials

by camel & 1 other
Ganeti is a cluster virtualization management system based on Xen. In this tutorial I will explain how to create one virtual Xen machine (called an instance) on a cluster of two physical nodes, and how to manage and fail over this instance between the two physical nodes.

Kerrighed

by rodo
Kerrighed is a Single System Image operating system for clusters. Kerrighed offers the view of a unique SMP machine on top of a cluster of standard PCs.

November 2007

ssh on multiple servers Using cluster ssh -- Debian Admin

by camel
Ever had to make the same change on more than one Linux/Unix server? Find it annoyingly painful to keep repeating the exact same commands again and again? This tool addresses exactly that problem. You run a utility (cssh), providing a number of server names as parameters, and xterms open up to each server along with an extra "console" window. Anything typed into the console is replicated into each server window (so, for example, you can edit the same file on N machines at the same time, or run the same commands with the same parameters across those servers). It is also possible to type into the server windows directly, or to temporarily disable replication to one or more of the servers through the "Hosts" menu.
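
For non-interactive jobs, the same effect -- one command replicated to N servers -- can be approximated with plain ssh. A minimal Python sketch follows; the hostnames are placeholders, it assumes key-based ssh logins, and unlike cssh it is not interactive:

    import subprocess
    from concurrent.futures import ThreadPoolExecutor

    HOSTS = ["web1", "web2", "db1"]  # placeholder hostnames

    def run_on(host, command):
        """Run `command` on `host` via ssh; return (host, exit code, output)."""
        proc = subprocess.run(["ssh", host, command],
                              capture_output=True, text=True, timeout=30)
        return host, proc.returncode, proc.stdout + proc.stderr

    def run_everywhere(command):
        with ThreadPoolExecutor(max_workers=len(HOSTS)) as pool:
            for host, code, output in pool.map(lambda h: run_on(h, command), HOSTS):
                print(f"=== {host} (exit {code}) ===\n{output}")

    run_everywhere("uptime")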