Tutorial – The LowEndCluster – Part 4

NuclearFusion



It's time for the fourth and final part in the LowEndCluster series, a series of tutorials aimed at effectively using LowEndSpirit or very budget/low-resource boxes to create a redundant cluster hosting a WordPress website for less than $50/year!


As I said last time, we're focusing on redundancy, not automated fail-over or scaling; that's for future tutorials. I'm using the easiest approach possible on all aspects of this, which gives us plenty of room to improve in the future while keeping it easy to understand right now. While I'm writing this as part of the LowEndCluster series, each tutorial has value on its own, and many of them can be applied to other situations as well. For example, the MariaDB master-slave tutorial and today's filesystem one can be used perfectly fine to keep a spare copy of, say, an existing Observium machine. The only limit is your imagination ;-)


This week we're going to install our filesystem, which will be SSHFS-based, and we'll be using rsync to move all the data to the second server. It's really quite simple to be honest, but very effective! Let's get cracking!



The web node – Part 1


First we're going to create a user account and SSH keys on one of the web nodes. We're then going to repeat the user creation and copy the SSH keys to the second web node.


Let's first create a user called 'cluster'. We'll use this user to log into the filesystem server:


sudo adduser cluster


You'll be asked for a password here. Please provide a strong one. You won't need it further down the road. After you have provided a password, you'll be asked a number of questions. Feel free to fill those out, but none of them is required.
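

If you would rather skip those questions entirely, Debian's adduser can be told to leave the GECOS fields empty; a small shortcut (you will still be prompted for the password):


sudo adduser --gecos "" cluster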


Now that you've got a new user, switch to that user:


sudo su cluster


And go to its home directory:


cd


From the home directory, run the following command to generate an SSH key pair:


ssh-keygen


You'll again be asked some questions, most importantly for a passphrase. Leave this empty and press ENTER through it. We don't want a passphrase, as it's going to give us issues while mounting the remote filesystem.
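

If you prefer a one-shot, non-interactive version, a sketch like the following should produce the same result (an RSA key pair without a passphrase):


mkdir -p ~/.ssh && chmod 700 ~/.ssh
ssh-keygen -t rsa -b 4096 -N "" -f ~/.ssh/id_rsa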


Finally, create a directory where we're going to mount the remote filesystem:


sudo mkdir /filesystem


And ensure the user 'cluster' owns that directory:


sudo chown cluster. /filesystem


So, the situation we have right now is as follows:



  • You have a 'cluster' user with a strong password

  • You have an SSH key pair for the cluster user without a passphrase

  • You have a /filesystem directory owned by the user 'cluster' that will function as a mount point in the future


Repeat the user creation on the second web node, but do not create an SSH key pair there. You should copy the SSH files from your first web node to your second web node:


scp -r .ssh/ node2.example.net:/home/cluster/.ssh


The above command should be run as the user 'cluster' from the first web node. What this does is copy the .ssh directory and its contents over to the second web node. Both web nodes now have the same SSH key pair, which will eventually give them access to the filesystem nodes.
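

SSH is picky about permissions, so if the copied files end up with modes that are too open, key logins will quietly fall back to password prompts. It may be worth tightening them on the second web node, as the user 'cluster' (just a cheap precaution):


chmod 700 ~/.ssh
chmod 600 ~/.ssh/id_rsa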


Before we can take that step, though, we'll head over to the filesystem node.


The filesystem node – Part 1


On the first filesystem node, repeat the user creation step from above:


sudo adduser cluster


Again, pick a strong password. It doesn't need to match the one on the web node. You do need to be able to enter it later, though.


Once this has been done, you should create a folder on the filesystem node where you want to put your files:


sudo mkdir /filesystem


This will create a directory called 'filesystem' in the root of your server. Feel free to put it somewhere else, but I'm using this location to keep things simple.


Now, ensure the user 'cluster' owns that directory:


sudo chown cluster. /filesystem


And you should be good!


On the filesystem node, you now have the following situation:



  • A user 'cluster' with a strong password

  • A directory (/filesystem) to host your files, owned by the user (and group) 'cluster'


You should now repeat the above steps on the second filesystem node before you continue.


A short note: initially, I anticipated the need for three filesystem nodes. This is no longer the case. Two will suffice (they don't have to be KVM either), which means you will have a total of 8 servers: 2 load balancers, 2 web nodes, 2 database nodes, and 2 filesystem nodes. I will elaborate on this in the final notes.


The web node – Part 2


Back on the web node, you can now copy the public key of the 'cluster' user's SSH key pair to the filesystem nodes. As the user 'cluster', from the home directory, run:


ssh-copy-id filesystemnode.example.net


Replace filesystemnode.example.net with the hostname of your first filesystem node. You will be asked for a password: use the password you've set for the user on the filesystem node! The public SSH key should then be copied. Test it by accessing the filesystem node via SSH:


ssh filesystemnode.example.net


You should now be logged in without it asking for your password. If that is the case, repeat the first step for the second filesystem node:


ssh-copy-id filesystemnode2.example.net


And that should now also have your public SSH key on it.


So, short recap. Right now, you have:



  • Two web nodes with a 'cluster' user sharing the same SSH key pair

  • Two filesystem nodes with a 'cluster' user that has the web nodes' public SSH key in its authorized_keys file

  • The ability to access either filesystem node from either web node using SSH without being prompted for a password


With that in mind, we can now mount the remote filesystem on the web nodes.


In order to be able to mount the remote filesystem, you need SSHFS installed on your server. For this to work, you need either an OpenVZ box with FUSE enabled, or a KVM machine. Most providers enable FUSE for you on demand; some have it built into their panel. SolusVM does not give users an option to enable/disable FUSE, so if your provider uses SolusVM, please contact them.
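

A quick way to check whether FUSE is actually available on your box before going any further is to look for the device node; if it is missing, contact your provider:


ls -l /dev/fuse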


Let's install SSHFS. On both web nodes, run:


sudo apt-get install sshfs


With SSHFS installed, you can actually mount the remote filesystem right away. We'll make it "stick" in a bit, as a mount from the CLI won't survive a reboot, but it's good to test it. From one of the web nodes, as the user 'cluster', run:


sshfs filesystemnode.example.net:/filesystem /filesystem


Replace filesystemnode.example.net with the hostname or IP address of your first filesystem node. We're going to work with the first filesystem node from now on, at least when accessing it from the web nodes.
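

Before going on, you can confirm the share really is mounted; it should show up as a fuse.sshfs entry:


mount | grep /filesystem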


If this doesn't give any errors (and it shouldn't), head over to the /filesystem directory on the web node:


cd /filesystem


And try to create a file there:


touch README.md


With that file created, let's go back to the filesystem node.


The filesystem node – Part 2


On the filesystem node, first check if the file you've just created from the web node is present:


ls -al /filesystem


In the output, you should see the file 'README.md' listed. Neat, right?


OK, so now that we can use the remote filesystem from the web node, we have a situation we can work from. Before we start moving WordPress to the remote filesystem, though, I'd like to set up redundancy: I want to make sure that if the first filesystem node goes offline, I can easily switch my web nodes over to the second filesystem node and have the same files there.


I'm going to use rsync for that. This tool should already be installed, but if it isn't, here's how you install it:


sudo apt-get install rsync


And that's all.


In order to be able to rsync from the first filesystem node to the second filesystem node, the 'cluster' user on the first filesystem node needs to be able to access the second filesystem node.


As the cluster user on the first filesystem node, from the home directory of the user, run:


ssh-keygen


Follow the same rule as before: no passphrase.


Now, copy this over to the second filesystem node:


ssh-copy-id filesystemnode2.example.net


And you should be able to access that node via SSH without any problem, no password asked.


Now, back to rsync. It's actually extremely easy to get this working. From the first filesystem node, run the following command:


rsync -a /filesystem/ filesystemnode2.example.net:/filesystem/


What this does is recursively synchronize all files from the first filesystem node to the second filesystem node. Note the trailing slash on the source path: it makes rsync copy the contents of /filesystem rather than creating a /filesystem/filesystem subdirectory on the target.


The '-a' flag does a lot of cool things that you want to happen:



  • Performs a recursive sync (all directories and files under /filesystem in this case)

  • Copies symlinks as actual symlinks

  • Preserves permissions

  • Preserves modification times

  • Preserves the owner and group

  • Preserves device files and special files


So, this will actually be a copy of the situation as it is on the first filesystem node rather than a half-assed backup.
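

If you would like to see what rsync is going to transfer before letting it loose, a dry run is a cheap sanity check (same hostname as above; nothing is actually copied):


rsync -anv /filesystem/ filesystemnode2.example.net:/filesystem/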


But running the command by hand isn't going to help you much. You want to have this run on a regular basis. To solve this, I'm going to add the command to cron. Depending on the size of the filesystem and the number of file modifications you make, I'd say you could run this every 15 minutes for a site with few modifications. The worst-case scenario is that you lose 15 minutes of changes to files (not the database). You can change this to fit your needs, but keep in mind that rsync needs enough time to complete before it runs again.


On the first filesystem node, as the user 'cluster', run:


crontab -e


This will open an editor, or ask you to pick one. If it asks you to pick one, either pick one or just press ENTER; the default is 'nano', which should be easiest for most people.


In the file that opens, add the following line:


*/15 * * * * rsync -a /filesystem/ filesystemnode2.example.net:/filesystem/


Now save the file. From this point on, rsync should back up the files from the first filesystem node to the second filesystem node every 15 minutes!
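

As noted above, rsync needs to finish before the next run starts. If you are worried about overlapping runs on a slow connection, a variation of the cron line using flock (part of util-linux on Debian/Ubuntu) simply skips a run while the previous one is still busy; a sketch:


*/15 * * * * flock -n /tmp/cluster-rsync.lock rsync -a /filesystem/ filesystemnode2.example.net:/filesystem/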


It's time to head for the final step when it comes to the filesystem: permanently mounting the remote filesystem on the web nodes. Before we start with that, however, let's do another recap. We now have:



  • Two web nodes with access to both filesystem nodes over SSH without needing a password

  • SSHFS installed on both web nodes

  • Two filesystem nodes with the first one having access to the second one over SSH without the need for a password

  • A cron job running an rsync command every 15 minutes to back up the files from the first filesystem node to the second filesystem node

  • A working situation for mounting the remote filesystem on the web nodes


The web node – Part 3


To mount the remote filesystem permanently, you need to add an entry to /etc/fstab, which lists the filesystems that should be mounted at boot. Since it is root that mounts the filesystems at boot, root needs to be able to log in to the filesystem node as the user 'cluster'. To make that possible, the private SSH key of the 'cluster' user needs to be copied to the .ssh directory of the 'root' user.


Make yourself root:


sudo su root


And head to your home directory:


cd


From there, run the following command:


mkdir -p .ssh
cp /home/cluster/.ssh/id_rsa .ssh/


This creates root's .ssh directory if it doesn't exist yet and copies the private key into it, giving root the ability to log in to the filesystem nodes as the user 'cluster'.


Now, open up /etc/fstab in your favorite editor (I'm using vim):


vim /etc/fstab


And add the following line:


cluster@filesystemnode.example.net:/filesystem    /filesystem     fuse.sshfs  defaults,_netdev,allow_other  0  0


The allow_other option is needed because it is root that performs the mount at boot; without it, FUSE would only let root access the mounted files, locking out the 'cluster' and 'www-data' users later on. Save the file. In order to test this, first unmount your test mount (if you still have one):


fusermount -u /filesystem


And then try the fstab file:


mount -a


If there are no errors, you should see the remote filesystem mounted under /filesystem with all your files there. Do this on both web nodes to enable quick switching in case of an issue.
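

As a side note: if you would rather not copy the private key into root's .ssh directory at all, sshfs also accepts an IdentityFile option, so you could point the fstab entry at the cluster user's key instead; an untested sketch of that variant:


cluster@filesystemnode.example.net:/filesystem    /filesystem     fuse.sshfs  defaults,_netdev,allow_other,IdentityFile=/home/cluster/.ssh/id_rsa  0  0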


Once this is done, it's time for the grand finale: moving your WordPress files to the remote filesystem!


The "big" migration


There's one last 'but' to this: the web server runs as 'www-data' and needs to be able to access the files owned by cluster. Since all files have user+group access, all you need to do is add the 'www-data' user on both web nodes to the 'cluster' group:


sudo usermod -a -G cluster www-data


This modifies the user 'www-data' and adds it to the group 'cluster'.
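

You can verify that the group change took effect; 'cluster' should show up in the list of groups:


id www-data


Keep in mind that running processes only pick up new group memberships after a restart, which the NGINX restart at the end of this section takes care of.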


Now you can safely migrate your files to the remote filesystem. Since we stored the files in a user's home directory, we're going to copy them from there to the /filesystem directory:


sudo cp -rp /home/username/public_html/. /filesystem


This copies the contents of public_html recursively to the new location, preserving ownership and timestamps (the trailing '/.' makes cp copy the directory's contents, including hidden files, rather than the directory itself, so the files end up directly under /filesystem). After having done that, make sure all files are owned by 'cluster' in order to prevent any issues. Since 'www-data' is in the group 'cluster', this should not be a problem:


sudo chown -R cluster. /filesystem


Finally, with all the files on the remote filesystem, you need to take one more step for this to work: switching the web server to a different document root. Open up the file /etc/nginx/sites-available/hostname.conf and look for this line:


root /home/username/public_html;


Change that to:


root /filesystem;


And restart NGINX:


sudo service nginx restart


That's it! It was quite some work, but you now have your (manual-intervention-required) fully redundant LowEndCluster!


Final notes


As this is quite an elaborate series, I may turn it into a lengthier guide in the future and/or expand on it. What I've done so far only touches the bare essentials of what is possible, and as technology develops, those possibilities will only increase.


I do need to note, though, that for a high-traffic website this may not be the best solution. Especially not with servers spread across the planet.


To get back to the required servers, here's the list of servers I've actually used:



  • 2x BudgetVZ 128MB – Load balancers with IPv4 and IPv6 – €4/year each – €8/year total

  • 2x MegaVZ 512MB – Database servers with NAT IPv4 and IPv6 – €5.50/year each – €11/year total

  • 2x LowEndSpirit 128MB – Web nodes with NAT IPv4 and IPv6 – €3/year each – €6/year total

  • 2x LowEndSpirit 128MB – Filesystem nodes with NAT IPv4 and IPv6 – €3/year each – €6/year total


It differs from my initial list in that KVM machines are no longer required for the filesystem. OpenVZ with FUSE works fine. This means you can actually create this 8-server cluster for less than $35/year (€31)! That's both LowEnd and fantastic!


I hope you've enjoyed reading this series and I look forward to getting back to it in the future. Thank you for reading!

