HA Xen
HA Xen stands for High Availability Xen. I am working on this project with my good friend Todd Deshane for 1 credit under the ITL. The goal is to build a desktop environment running Linux on top of Xen, and to use other currently existing server technology to make a more secure system that will detect and recover from outside attacks. Protection and recovery will be overseen by the rapid recovery system.
Xen Summit/IBM
From April 17th to the 20th, I was with Todd down in Poughkeepsie, visiting both the IBM research center in Yorktown and the center in Poughkeepsie. The summit was mostly for learning where Xen is headed and what is in development, as ideas both for Todd's research and for the Xen book currently being written. One of the key discussions was a private one with Todd's IBM mentor, Sean Dague, where we were given suggestions about the design of the environment. Really good ideas, in fact, that are making us rethink how the storage, and even the OS, will be implemented and run.
Currently Running
At this point in time Todd and I are using a dual-Opteron blade on the blade server, running Ubuntu Dapper Drake on Xen. On this system we are running the high availability service heartbeat and have six guests up (a sketch of one guest's configuration follows the list):
- 2x test nodes on heartbeat (first pair)
- 2x test nodes on heartbeat (second pair)
- Openfiler storage server
- Mercurial Version Control
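To give an idea of what one of these guests looks like, here is a minimal sketch of a Xen guest config for a heartbeat test node; the name, kernel version, and disk path are hypothetical, not our exact setup:

    # /etc/xen/ha-node1.cfg -- hypothetical config for one heartbeat test node
    kernel  = '/boot/vmlinuz-2.6.16-xen'    # Dapper-era Xen kernel
    ramdisk = '/boot/initrd.img-2.6.16-xen'
    memory  = 128                           # in MB; the test nodes stay small
    name    = 'ha-node1'
    vif     = ['bridge=xenbr0']             # attach to the host bridge
    disk    = ['phy:vg0/ha-node1,sda1,w']   # LVM volume as the root disk
    root    = '/dev/sda1 ro'

A guest like this would then be started from the base domain with xm create ha-node1.cfg.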
Rapid Recovery
The system we want to develop and implement builds on a project Todd was involved in before. It makes use of virtual networks between internally run virtual machines. The guests running applications interact with the storage server's filesystem using a contract-based read-write system that assigns what each application is allowed to access. On top of this, these guests can only reach the external internet through the base domain, so they are less likely to face attack. To further the security, the base domain of the system will run intrusion detection software that checks for any suspicious access to the system and notifies it to carry out its recovery behind the scenes, allowing the user to continue working without worry of the system crashing.
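As a rough sketch of the network side of this design (the bridge and interface names are made up for illustration), only the base domain touches the physical NIC, while the app guests and the storage guest share an internal-only bridge:

    # in the base domain: the external bridge holds the physical interface
    brctl addbr xenbr-ext
    brctl addif xenbr-ext eth0
    # internal-only bridge with no physical interface attached, so traffic
    # between the app guests and the storage guest never leaves the host
    brctl addbr xenbr-int
    ifconfig xenbr-int up
    # guest configs then attach to the internal bridge, e.g.
    #   vif = ['bridge=xenbr-int']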
The Test Nodes
When we first started testing heartbeat on Xen guests we began with the basics. We set up two guests, each running heartbeat configured to check on the other using its broadcast node name. This is the way heartbeat HA works: two nodes check each other's vitals in an active/passive setup. That is the reason for having two pairs of nodes running, though we hit a problem when starting up the second pair of heartbeat guests. By default, heartbeat nodes communicate with bcast, which only supports broadcasting between two nodes; so, in order to bring a new pair online, we changed the nodes' settings to use mcast, or multicast, allowing all four nodes to send their vitals to their assigned partner node. A sketch of the change is below.
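For illustration, the relevant part of /etc/ha.d/ha.cf on the first pair would look roughly like this; the node names, interface, and multicast group are hypothetical, not our exact values:

    # /etc/ha.d/ha.cf -- transport settings for one node of the first pair
    # old: bcast eth0   (both clusters ended up on the same broadcast channel)
    # new: mcast <dev> <group> <port> <ttl> <loop>
    mcast eth0 225.0.0.1 694 1 0
    keepalive 2          # seconds between heartbeats
    deadtime 30          # declare the partner dead after 30s of silence
    node ha-node1 ha-node2
    # the second pair uses its own group (e.g. 225.0.0.2) so the two
    # clusters no longer hear each other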
Openfiler
Part of the plan is to have all of the user's personal information stored on a separate guest, so it can stay secure from outside attack because of its private placement within the rapid recovery system. The idea is that the storage can only be accessed by other guest nodes on the same internal virtual network. Openfiler is the distro serving as the backend of this storage server, and it contains fairly advanced features, all accessible through a web interface.
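To make this concrete, an application guest on the internal network would mount a share exported by the storage guest, for example over NFS (the hostname and paths here are hypothetical):

    # on an application guest: mount the home export from the Openfiler guest
    mount -t nfs openfiler.internal:/mnt/vg0/home /home

Because the Openfiler guest sits only on the internal bridge, nothing outside the host can reach this export directly.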
Mercurial
Mercurial is our version control guest. As of right now we just have the basic service up and are not yet making use of it.