Using my HP DL380 G6 and HP N54L together

Posted by under Servers, on 23 October 2017 @ 10:58pm.

Having had time to set up and play with my new server, I have now given it proper tasks, running virtual machines rather than just sitting idle.

One of those tasks is running PFSense, a router operating system designed to handle the same jobs as your regular router but on a larger scale. It handles DHCP, routing, port forwarding, etc. This gives me more control over my internal network, including the ability to monitor it better. My original router is still in the loop since it is also the VDSL modem, but it is fully DMZ’d to the PFSense box so it acts like it’s not even there. PFSense does still get a local IP on its WAN port though.

Now that I am using the DL380 properly, I could retire the N54L from its original job as the sole server. Since the DL380 only has 2.5″ drive bays, I was unable to use my existing 3.5″ drives in it unless I found another solution. Instead I re-used the N54L as a NAS by installing FreeNAS, which turns the system into a fully-fledged NAS that I can use however I like.

I had been dabbling with iSCSI-attached drives, but found Windows’ target implementation poor as it required a virtual hard drive container file to store everything. I didn’t like this because corruption of that file means anything contained within it is lost. For a virtual OS this is not so bad, but for my regular data it was not an option. Thankfully, FreeNAS’s implementation lets you share a physical disk directly over iSCSI, so that’s what I have done. The disk is then attached directly to the appropriate VM on the DL380.
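
For anyone wanting to script the Windows side of that attach, here is a minimal sketch using Python to drive the built-in iSCSI initiator cmdlets; the portal address and IQN are placeholders for whatever your FreeNAS box actually exposes, not my real values.

import subprocess

def ps(command: str) -> None:
    """Run a PowerShell command and fail loudly if it errors."""
    subprocess.run(["powershell", "-NoProfile", "-Command", command], check=True)

# Register the FreeNAS box as an iSCSI target portal (address is a placeholder).
ps('New-IscsiTargetPortal -TargetPortalAddress "192.168.1.50"')

# Connect to the target and keep the session across reboots.
# The IQN is a placeholder -- copy the real one from the FreeNAS target config.
ps('Connect-IscsiTarget -NodeAddress "iqn.2005-10.org.freenas.ctl:vmdata" -IsPersistent $true')

Once connected, the disk shows up like any local drive on whichever system runs the initiator (the host or the VM itself).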

The original N54L’s roles have now moved to a VM on the DL380. This gives them more power to handle things like the CCTV, my web development setup, the Plex server that I am now able to run thanks to the additional power, and more should I ever need it.

At present the system has only 24GB of RAM, with the main server VM using 8GB of that (up to 12GB dynamic) and PFSense allocated 2GB. That gives me plenty of headroom to start up more machines if I need to. A friend has also passed me another 48GB of RAM to add when I get the chance, bringing the total to 72GB. That will be more than enough for the foreseeable future.
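
For reference, this kind of allocation is just a couple of Hyper-V PowerShell calls; a rough sketch below, where the VM names and the 4GB floor are placeholders rather than my exact configuration.

import subprocess

def ps(command: str) -> None:
    subprocess.run(["powershell", "-NoProfile", "-Command", command], check=True)

# Main server VM: dynamic memory, starting at 8GB and allowed to grow to 12GB.
# (The 4GB minimum here is an arbitrary floor for illustration.)
ps('Set-VMMemory -VMName "HomeServer" -DynamicMemoryEnabled $true '
   '-StartupBytes 8GB -MinimumBytes 4GB -MaximumBytes 12GB')

# PFSense VM: a fixed 2GB is plenty for routing duties.
ps('Set-VMMemory -VMName "PFSense" -DynamicMemoryEnabled $false -StartupBytes 2GB')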

Eventually I will run other things on the server, including a Steam cache (which I successfully trialled at the weekend, and it works great). I just need some additional storage caddies so I can put in some more local drives. Without local drives I will be limited by the network connection to the NAS (1Gbps at present). Future plans include trying to team two NICs together for a 2Gbps NAS connection.
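
When I get to the NIC teaming it should be a one-liner on the Windows side; a sketch below, with placeholder adapter names. Worth noting that a team spreads flows across links, so a single transfer still tops out at 1Gbps; it is parallel transfers that benefit.

import subprocess

# Team two physical NICs into one logical adapter.
# "Ethernet" and "Ethernet 2" are placeholders -- check Get-NetAdapter for the real names.
subprocess.run(
    ["powershell", "-NoProfile", "-Command",
     'New-NetLbfoTeam -Name "NASTeam" -TeamMembers "Ethernet","Ethernet 2" '
     '-TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic -Confirm:$false'],
    check=True,
)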

All in all it’s been a fun experience so far. One thing I did notice, though, is that my electricity bill has doubled thanks to the server drawing a fair bit of power, but that aside it’s working just as I intended.

This is my setup at the moment: one of the N54Ls as the NAS, with the DL380 on the bottom. At the top you can see my HP 1800-8G switch and BT Home Hub 5 router/modem. In future I plan to get a rack to put these in, but I just don’t have the space at the moment.


Server Virtualisation – A New Journey

Posted by under Servers, on 5 September 2017 @ 7:45pm.

Recently I managed to acquire an old HP DL380 G6 rack server, the kind you would find in a data center or business server rack. For just £125 it came with 2x Intel Xeon processors, 24GB of DDR3 ECC memory, a fully-fledged RAID controller and redundant power supplies. It’s the kind of upgrade I needed for my ageing HP N54L micro servers.

Because of the amount of processing power it has, I decided it was time to try out virtualisation. I went with Microsoft Hyper-V because I’m familiar with Windows operating systems. I haven’t really had time to play with this server too much, but I did get PFSense set up as my router as a virtualised operating system, and it’s working very well.

I gained a little experience from doing this, and more recently I decided to upgrade my web server (the one running this website) to a new machine that allows virtualisation. It has an Intel Core i7-7700, 32GB of RAM and 2x 500GB SSDs. I currently run BetaArchive, this website and several others for friends from that server.

To separate BetaArchive from the rest I decided to virtualise the whole machine and install two operating systems, giving each its own IP address. I had never done this before, so I had to learn how to assign IPs to each virtual machine as well as understand how the virtual hard disks work (fixed sizes, dynamic sizes, etc). It’s all very involved, and those are just the basics.
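
The two pieces I had to learn boil down to a handful of commands; here is a hedged sketch of the sort of thing involved, with placeholder paths, names and sizes rather than my real ones.

import subprocess

def ps(command: str) -> None:
    subprocess.run(["powershell", "-NoProfile", "-Command", command], check=True)

# An external virtual switch bridges the VMs onto the physical NIC, so each VM
# can be given its own IP address ("Ethernet" is a placeholder adapter name).
ps('New-VMSwitch -Name "External" -NetAdapterName "Ethernet" -AllowManagementOS $true')

# Fixed VHDX: all space allocated up front, more predictable performance.
ps('New-VHD -Path "D:\\VMs\\BetaArchive.vhdx" -SizeBytes 200GB -Fixed')

# Dynamic VHDX: grows as it fills, saves space at a small write cost.
ps('New-VHD -Path "D:\\VMs\\Websites.vhdx" -SizeBytes 200GB -Dynamic')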

One thing I did find when doing this on both my home server and web server is that SSDs are an absolute must. Without them the system runs extremely slowly; standard mechanical drives just aren’t quick enough. That quickly led me to an issue that had me spending all day thinking there was a hardware fault with the new web server. When I ran drive benchmarks I discovered that the SSDs were not performing the way I expected. Reads were mostly OK, but writes were slow. I couldn’t understand it. I thought there was a disk issue, so I asked for a replacement since the 2nd disk seemed to be OK.

The disk was replaced and I saw no change. I then theorised it might be a SATA cable issue so I asked for that to be changed. Again no difference. I then asked for the whole server to be swapped out except for the disks, and again, no difference! What?! That made no sense, so it had to be a software issue. For another 2 hours I was stumped, and then it clicked.

Kicking myself, I checked the drive cache settings. Write caching was disabled. D’oh! Turning it on instantly gave the speed boost I was expecting. Why it was off by default I don’t know. Is it always off by default? I’ve honestly never noticed this on other systems I have installed SSDs into. I feel sorry for the tech who had to do all of those swaps for me!
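
If you want a quick sanity check without a full benchmark suite, timing a large synced write is enough to show the difference; a rough Python sketch (point PATH at the drive you want to test).

import os
import time

PATH = "testfile.bin"            # put this on the drive under test
CHUNK = 4 * 1024 * 1024          # 4MB per write
TOTAL = 1024 * 1024 * 1024       # 1GB in total
buf = b"\0" * CHUNK

start = time.time()
with open(PATH, "wb") as f:
    written = 0
    while written < TOTAL:
        f.write(buf)
        written += CHUNK
    f.flush()
    os.fsync(f.fileno())         # make sure the data actually hit the disk
elapsed = time.time() - start

print(f"Sequential write: {TOTAL / elapsed / 1024**2:.1f} MB/s")
os.remove(PATH)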

Disk I/O contention isn’t caused only by virtualisation, but it is very noticeable when you run multiple machines at once and they fight for disk I/O. If one machine uses up all of the I/O, the other systems can lock up – not ideal! Thankfully you can restrict the maximum amount of I/O each machine is allowed to use, as well as guarantee each machine a minimum that it can fall back on when another machine is using a lot. This stops one machine from using it all and starving the others.
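
In Hyper-V this is set per virtual disk; a minimal sketch below, where the VM name, controller location and IOPS figures are placeholders (Hyper-V counts these limits in normalised 8KB operations).

import subprocess

# Guarantee the VM at least 100 IOPS and cap it at 500 so it can't starve the others.
subprocess.run(
    ["powershell", "-NoProfile", "-Command",
     'Set-VMHardDiskDrive -VMName "Web" -ControllerType SCSI '
     '-ControllerNumber 0 -ControllerLocation 0 '
     '-MinimumIOPS 100 -MaximumIOPS 500'],
    check=True,
)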

So now that this is set up, everything is running just great. There are other considerations to make such as RAM and CPU allocation, but you can set up minimums and maximums for those as well.
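
The CPU side is similar; a sketch with placeholder values, reserving a slice of the host for the VM and capping how far it can burst.

import subprocess

# Reserve 10% of the VM's assigned virtual processors for it at all times,
# cap it at 75%, and leave its scheduling weight at the default.
subprocess.run(
    ["powershell", "-NoProfile", "-Command",
     'Set-VMProcessor -VMName "Web" -Reserve 10 -Maximum 75 -RelativeWeight 100'],
    check=True,
)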

Hopefully this is something that I can get into more and work with at home when I find the time to do it. The biggest issue I have at home is that my disks are 3.5″ disks but the new server only takes 2.5″, so I will need to use the old one as a storage array. I’ll get around to it. Eventually…


Solution to slow FTP Server speeds (Filezilla and others)

Posted by under Servers, on 23 March 2016 @ 10:50pm.

I recently found that despite having a 70Mbps (8.75MB/s) internet connection and a 1Gbps (125MB/s) dedicated server to download from, I could only seem to download from the FTP server at about 16.8Mbps (2.1MB/s) on a single thread. However over HTTP I could easily manage about 65.6Mbps (8.2MB/s) on a single thread. This confused me, as there should be no reason why the speeds should differ so wildly. I’d expect a little difference, but not that much.

After some forum discussions on BetaArchive regarding this (I looked into things after a user complained about slow speeds), a user told me that he uses IIS FTP Server with no such speed issues. This instantly told me that something wasn’t configured or optimised properly in Filezilla Server and Gene6 FTP Server (the two servers I use). I started looking at the possibilities and quickly found the solution.

Solution!

The “internal transfer buffer size” and “socket buffer size” values were set quite small on the server, at just 32KB and 64KB. There is a notice warning that setting them too low or too high can affect transfer speeds. So I did what anyone would do… I bumped them up about 10 notches to 512KB each! Instantly my transfer speeds hit 65.6Mbps (8.2MB/s), the same as I was getting over HTTP. Perfect!

[Image: FileZilla Server’s socket and internal transfer buffer size options]

More tests

I did a few more tests to make sure that I didn’t set it too high or too low, but it seemed OK. Going from 64KB to 128KB made the speed hit about 46.4Mbps (5.8MB/s). Better, but not good enough. 256KB buffer allowed me to hit 65.6Mbps (8.2MB/s), which is the maximum I’m likely to get due to protocol overheads.

Assuming that doubling the buffer size also doubles the speed, a buffer of 512KB should allow up to about 192Mbps (24MB/s), extrapolating from the 128KB result since at 256KB I was already limited by my connection. That really is more than enough for the things I need it to do. Given my connection is much slower than this, and broadband in the UK doesn’t really hit those speeds either, it should be plenty for now.
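
The reason buffer size maps so directly onto speed is the bandwidth-delay product: a TCP connection can only keep about one buffer’s worth of data in flight per round trip, so the single-connection ceiling is roughly buffer size divided by round-trip time. A quick sketch; the 50ms RTT is an assumed figure for illustration, not a measurement of my link.

# Rough single-connection ceiling imposed by a send/receive buffer.
def max_rate_mbps(buffer_bytes: int, rtt_seconds: float) -> float:
    return buffer_bytes / rtt_seconds * 8 / 1_000_000

for kb in (64, 128, 256, 512):
    print(f"{kb:3d}KB buffer -> ~{max_rate_mbps(kb * 1024, 0.050):.0f} Mbps")
# 64KB -> ~10, 128KB -> ~21, 256KB -> ~42, 512KB -> ~84 Mbps at a 50ms RTT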

Filezilla only allows a maximum buffer size of 999,999 bytes (just shy of 1MB), so the most it should allow (again assuming that doubling the buffer doubles the speed) is about 384Mbps (48MB/s). Other software might allow higher values, so by all means use them if you need to.


Fiber broadband

Posted by under Servers, Technology, on 24 December 2013 @ 9:30pm.

I posted a few weeks ago about getting fiber broadband. Well, it’s finally arrived after I called up to get an earlier appointment.

I was quoted 79.9Mbps and that’s exactly what I got on the sync speed. Speed tests give me 75Mbps down and 17Mbps up, which is fantastic! The upload speed has already come in incredibly useful for sending files to friends quickly and uploading photos to my websites.

I’ve seen 9MB/s downloading files from Microsoft and 2.1MB/s uploading via FTP. I’m certainly not going to complain about the extra £4 it’s costing me. I would highly recommend it to anyone who has FTTC available in their area. So far I’ve seen no throttling either, which is brilliant. BT have a superior network for handling fiber, so it’s not surprising.

I’m in the process of utilising it more and hopefully I will have a few servers on it soon. I’ll also be using it to download the 7.5TB backup of my site (yes, TB not GB!). I just need to get my 2nd server set up with the backup drives.


Fiber broadband is finally here!

Posted by under Life, Servers, Technology, on 4 December 2013 @ 11:45pm.

For many, the fiber broadband revolution started years ago. Those who got Virgin Media, for example, have had quick speeds for a long time. Those of us stuck on ADSL weren’t so fortunate. Line length was always a factor and prevented most people from getting a fast connection; you would have to be no more than about 500m from the exchange to get anything over 15Mbps. Now that fiber is here, that’s a thing of the past.

The new fiber cabinets are no more than 200m away from most properties, and because the service uses VDSL technology instead of ADSL you get a bump in speed from that as well. It’s currently capable of 300Mbps, with trials of higher speeds in progress. It’s something we should have had many years ago.

Anyway, the point of this blog post is that it’s finally arrived for me! I’ve been checking weekly for almost 2 years, waiting to see the long-awaited ‘available now’ message, and this week it finally happened.

I initially signed up online and was given a date of early January. I was surprised by this as other people got theirs much faster, but I decided I’d be OK with it as I understood they were quite busy. However, after speaking to a friend who ordered at the same time as me, I found he was getting his next week! I asked how, and it was because he had phoned up rather than ordering online. I decided to try my luck and phone up to see if I could get an earlier date. Thankfully they could do it! I was expecting a 2 week wait, but they said they could do it next Monday! Brilliant!

I was expecting 65Mbps based on BT’s estimates, which isn’t too shabby at all. But when I signed up, the email I got said 79.9Mbps. I thought it might have been an error, but checking their estimate page again, it had been changed! So with any luck I’ll get the maximum that the up-to-80Mbps package offers (the fastest package you can buy right now).

I’m just waiting for the BT home hub to come in the post in the next few days and then I’m all set. The engineer is scheduled for Monday to get it swapped and all working. Wish me luck! I’ll probably make another post with my speed test results soon enough!


Solution to webcam issue on Windows Server 2008 R2

Posted by under Servers, on 29 June 2013 @ 6:01pm.
[Image: broken-broadcast]
Do not adjust your TV set…

After getting a new HP Micro Server I decided it was time to stop using a desktop OS (Windows 7) and move to a proper server OS (Windows Server 2008 R2). It’s a good move in many respects, but the most important is that it’s designed to be a server and run 24/7, whereas a desktop OS isn’t.

Everything went smoothly until I got to setting up my CCTV software. I use a regular cheapo webcam for my CCTV needs; it’s simple but it does the job. On Windows 7 this worked great, with the exception of the USB webcam causing the occasional crash. The problem I encountered was that the webcam simply didn’t work at all. The driver had been installed, and I even tried a webcam that was driverless (using built-in Windows drivers). That also didn’t work. All I was getting was a black screen.

What also struck me as odd is that one of my webcam applications wouldn’t run. Windows Server 2008 R2 is based on the same codebase as Windows 7, so the application should run without a problem unless an artificial restriction has been put in place. I knew it hadn’t been, as my friend wrote the software himself and confirmed it. He suggested it was just a “general incompatibility”.

Regardless, I tried another piece of his software that also dealt with webcams, but this had problems running too. In this case, however, I got a different error message about “wmvcore.dll”. I did a quick search for solutions by copying the error message, and it came up with something I didn’t know about: Windows Server 2008 R2 doesn’t come with the Desktop Experience package installed. This is installed on Windows 7 by default of course, but a server doesn’t require it because it’s not a desktop OS. Part of the Desktop Experience is Windows Media Player, and this is what the error was referring to. It was a long shot, but I installed the Desktop Experience package.
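
For anyone hitting the same thing, the feature can also be added from the command line rather than through Server Manager; a sketch for 2008 R2, run as administrator (it pulls in its dependencies and still wants a reboot afterwards).

import subprocess

# Install the Desktop Experience feature, which brings in Windows Media Player
# (and therefore wmvcore.dll) on Windows Server 2008 R2.
subprocess.run(
    ["powershell", "-NoProfile", "-Command",
     "Import-Module ServerManager; Add-WindowsFeature Desktop-Experience"],
    check=True,
)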

Lo and behold, after it installed (and the system rebooted twice) the webcams were working! Hurray! It was a simple fix, but had I not been able to try the 2nd application I would probably never have found the solution without its error message.

So now I’m happy that it’s working. What I’m not happy about is that compared to my old server, this one is a little less powerful so it’s using more of the CPU to run the CCTV software (75-85% rather than 50% or so) but that’s not a huge issue. I’ll monitor it and make sure it’s not affecting backups etc.

I hope this long-winded article helps someone else with the same issue I’ve been having.


UPS Battery Woes

Posted by under Servers, on 20 February 2013 @ 9:59pm.

[Image: UPS]
For those of you who don’t know what a UPS is, it’s an uninterruptible power supply, or “battery backup”, for your PC, servers and other electrical items, which takes over when the mains fails. I have a couple of them: one for my PC, one for my server and one for my network equipment.

Recently, the batteries in one of those UPSes decided they didn’t want to play anymore and suffered catastrophic failure. This meant that the UPS would no longer hold the load if the mains power dropped. Typically the batteries should last about 3-5 years, but these only lasted a tad over 2 years. I was disappointed, but then I began reading into why this varies between UPSes.

I have 2 UPSes on my desktop PC: one for the monitors and one for the PC itself. I have it this way because my PC is fairly high end and can reach a fairly high power draw when being used heavily, so a single UPS for all of it was unlikely to hold very long. These UPSes are about 5 years old and still had their original batteries. I had known for some time that they were getting poor, but it wasn’t until now that I decided to replace them with new ones. I have retired the UPS whose batteries died recently and put the monitors’ UPS on my server instead. The remaining UPS now holds one of my monitors and the PC only.

I began to wonder why those batteries lasted 5 years and still had life in them while the others lasted just 2. I discovered that the “float” voltage (the voltage the battery is held at once fully charged) was likely too high on the UPS that recently died: it tended to hold the battery at 13.8v, while a fully charged 12v battery sits at about 12.8-13.3v. The other UPSes seem to pulse power into the battery at around 13.3-13.6v rather than holding it at 13.8v. This is likely the explanation for their longer life, so it makes sense to stop using the other one and use these instead.

With this discovered, I decided to buy 2 new batteries, one for each UPS. I did my best to select the best brand I could because I had read that better brands tend to last longer as well. However, when the batteries arrived they were not the brand I thought I had ordered. Despite this I tried them in my UPSes anyway and immediately had a problem.

Firstly, I checked their weight against the old ones. They were 0.5kg lighter, which suggests less lead inside and thus a lower energy capacity as well. This was worrying because I had opted for the “best brand” available.

[Image: battery]
Although the batteries were not fully charged when they arrived, I would have expected them to put out enough power to at least keep the PCs online and/or boot them up. During testing they would switch over to battery just fine, but they would not start the PCs from cold (powered off). This isn’t how a UPS is supposed to work; it should work both ways. During testing these batteries managed 2 minutes of sustained run time before hitting a low battery condition and shutting off. This wasn’t acceptable. The original batteries would have held the load for at least 10-15 minutes! Despite that I carried on with testing.
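
As a sanity check on what run time to even expect, a back-of-the-envelope estimate is just stored energy divided by load, derated for inverter losses and usable capacity. The ratings, load and efficiency figures below are assumptions for illustration, not measurements of my units, and real lead-acid batteries deliver noticeably less than their rated capacity at high discharge rates, which is part of why measured times come in lower than the naive figure.

# Crude UPS run-time estimate from battery ratings and load.
def runtime_minutes(volts, amp_hours, batteries, load_watts,
                    inverter_eff=0.85, usable_fraction=0.8):
    energy_wh = volts * amp_hours * batteries * usable_fraction
    return energy_wh * inverter_eff / load_watts * 60

# Example: two 12V 7Ah batteries feeding a 150W load.
print(f"~{runtime_minutes(12, 7, 2, 150):.0f} minutes")   # ~46 minutes on paper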

Unhappy with this, I thought I’d let them charge for 24-48 hours and see how that went. I then tried the test again and managed around the same sustained run time. I contacted the eBay seller I bought them from and explained I wasn’t happy. He replied saying that they were likely being trickle charged and it could take up to 3-4 days to fully charge them. The specs say 8 hours to fully charge, but they could be wrong. He assured me they were one of the best brands and suggested I try them again in 3-4 days. He also said the weight difference should make little difference to their output.

3-4 days later I tried another test and managed 12 and 14 minutes between them. This was significantly better, but still not quite what I expected. I contacted the seller again, who said it might take a couple of charge/discharge cycles to get them to 100%. I agreed, as batteries do sometimes need this cycling, so I said I would try again in a few more days. It’s only been 2 days so I haven’t tested them yet, but I am hopeful that they will be slightly better again.

All in all it’s been a bit of a pain, but it has also been a learning experience involving a fair amount of research. This leads me on to my next blog post about battery knowledge, as I have recently learned a few things I didn’t know about how to care for batteries.


MB, MiB, GB, GiB, what the differences are and why they cause confusion

Posted by under Rants, Servers, on 5 February 2012 @ 7:50pm.

OK, so you’ve probably heard of MB and GB (MegaBytes and GigaBytes); they’re used on all sorts of devices from phones to computers. But what are they? Better still, what are MiB and GiB (MebiBytes and GibiBytes)? Both are units for measuring data size, but there is a difference: one is calculated using base 2 and the other using base 10.

For these examples we’ll stick to GigaBytes and GibiBytes for our sizes.

A GigaByte, the one we’re all so used to, is base 10 (1,000,000,000 bytes to 1 GigaByte).
A GibiByte, which you may have heard of, is base 2 (1,073,741,824 bytes to 1 GibiByte).

[Table: GiB vs GB unit sizes]
Table taken from http://www.pcguide.com/intro/fun/bindec.htm

So why does it cause confusion? Several reasons. Most sizes are referred to in MegaBytes or GigaBytes, so that’s what people are accustomed to. Magazines and advertisements selling electronic equipment (tablets, digital cameras, etc.) and computers running Windows all refer to sizes that way, but that’s where the confusion comes in, and a bit of a rant because of it.

The difference between 1GiB and 1GB is marginal, but when you scale this up to tens or thousands of gigabytes, it throws the totals way off. Not only that, Windows calculates sizes in base 2 but displays them with the base-10 prefix. So when you think it’s showing 1 GigaByte it’s actually showing 1 GibiByte, just with the wrong unit! I have no idea why Microsoft decided to do this, but it’s confusing as hell when you’re trying to work out differences in file size against a program that does show it properly. Gah!
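
A quick example of how far the two scales drift apart once the numbers get big; this is why a drive sold as 1TB shows up as roughly 931 “GB” in Windows.

size_bytes = 1_000_000_000_000        # what a "1TB" label means (base 10)

print(size_bytes / 10**9, "GB")       # 1000.0  -- decimal gigabytes, as advertised
print(size_bytes / 2**30, "GiB")      # ~931.3  -- binary gibibytes, what Windows labels "GB"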

The problem I had recently was on one of my sites, BetaArchive. I was trying to track down a discrepancy in the total archive size counter. It wasn’t showing the right size, yet we were adamant we had it right. In the end it turned out to be the units.

Now, from what I have read, Mac and Linux use GigaBytes correctly and can also be switched between units; you just have to find the option for it. So Windows should have no excuse for not getting it right. Many people have complained to Microsoft but they’ve never done anything about it, so this problem continues to plague developers and people like myself trying to work out these discrepancies.

Suffice to say I wasted 2 hours trying to figure out where the missing data was. As a result of this cock-up I’ve even had to put a message next to the size display on BetaArchive so people know the site is actually showing the right unit while Windows shows the wrong one!

I don’t doubt this will plague people for years to come as I doubt it will be fixed in Windows 8 either. I just find it hard to believe Microsoft have got away with it for this long.


Hard drive death and switch to new home server hardware at last

Posted by under Servers, on 3 January 2012 @ 9:57pm.

Happy new year to everyone, I hope you welcomed the year in style and with a stomach full of booze. I know I did!

This month has been an interesting one, not just with Christmas and New Year but also with hardware. A few weeks ago my server randomly reset during the night. I thought nothing of it and let it continue doing what it does; I never checked any logs. A few days later the same thing happened, so I started keeping an eye on it. Another few days passed and it reset again. This started to concern me, so I ran some chkdsk scans, and everything came back clean. As I had other things on my mind I left it for a few days.

Several days later I started to get backup failure e-mails, so I went investigating. Hold on, the backup drive is missing, it’s just gone. I reset the server but it didn’t come back, so I tried a hard reset instead (full power off and on again). It came back. I decided it would be a good idea to check the SMART stats, and to my dismay it was reporting “Raw Read Errors”. The drive was dying.

Several things can cause raw read errors, but the most common are either controller failure or physical media failure. No matter which it was, the drive needed replacing. Luckily for me I had a spare 1TB drive which I hadn’t used and was just sat there waiting. While making the change I decided that perhaps now would be a good time to switch to my new home server hardware, which had been gathering dust for the better part of 6 months.
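
Checking is easy enough if you have smartmontools installed; a small sketch that pulls out the attributes I care about (the device path is a placeholder and differs between systems).

import subprocess

# Dump the SMART attribute table and pick out the worrying counters.
output = subprocess.run(
    ["smartctl", "-A", "/dev/sda"],        # adjust the device path for your system
    capture_output=True, text=True, check=True,
).stdout

for line in output.splitlines():
    if "Raw_Read_Error_Rate" in line or "Reallocated_Sector_Ct" in line:
        print(line)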

The one part I hate about switching hardware is the OS re-installation and reconfiguration, but this time I decided to just switch the boot drive over and see what happened. To my delight the OS started and worked just as it did on the old hardware, with just a small wait for the new drivers to install. After that I put in the other drives and the server was fully operational without any reconfiguration! All I had to do was initialise the new backup drive, set up the new directory structure and off it went.

Needless to say I was very happy that I didn’t have to spend hours reconfiguring it all. So what are the new specs you might be asking? Nothing special.

AMD Phenom II 550 Dual Core CPU @ 3.10GHz
10GB DDR3 Corsair 1333MHz (2x4GB and 1x2GB)
ASUS M4A77T (6xSATA)
3×1.0TB Samsung
1×1.5TB Samsung
2×2.0TB Samsung

It does allow me to use hardware virtualisation if I need it though, which is what the extra memory was originally for.

Now, I’d better not get another disk failure, as I have no more spares and drives still aren’t cheap enough to just buy new ones!


Second Home Server – I Need More Storage Space!

Posted by under Servers, on 17 October 2011 @ 9:55pm.

I’ve recently realised I’m rapidly running out of space and processing power on my current home server. It’s been running well for the last 3 years and currently has a capacity of almost 8TB, but that space is filling up. I run multiple websites, including the huge BetaArchive, and the space is primarily filled with backups of those sites. As BetaArchive continues to expand, more space is going to be needed for the archive backups.

[Image: current server stats]
Statistics from my home server

Although it looks like I have plenty of space left, I prefer to keep certain tasks to one drive. For example, I don’t mix my main data with archive backups, or media with betas, etc. This keeps the layout clean and makes things easier to find (or lose!).

The BA archive backup currently occupies a little under 2.3TB spanned over a 2TB disk and a 1.5TB disk. The data is split between them, with everything except the PC beta operating systems on one drive and those on the other. Our FTP admin has recently let us know that another 1TB of data will be coming to the site in the near future, so I have to prepare the extra space now in order to keep a backup of it.

The problem with the current server is that not only is it underpowered, with an older CPU and a lack of RAM (2GB doesn’t go very far, and the motherboard has issues taking any more than that due to faulty hardware), but there are also no spare SATA ports left for more hard drives. I bought a replacement AMD Phenom II X2 and motherboard some months ago, but they have not yet been put into service. More recently I got myself a second 4U rack case to house them.

[Image: new server]
Bottom: Current Server. Top: New Server

This new board now has 10GB of DDR3 memory (2x4GB and 1x2GB DIMMs) and 6 SATA-II slots for additional hard drives. My original plan was to use this as an ESXi or Hyper-V server but I’m not so sure now. I’ll probably stick with the single OS solution as I’m not familiar enough with ESXi to risk using it on a production system. If anything I would use Hyper-V.

I have 2 hard drives ready to go in it: an older Samsung F1 1TB and a brand new, currently untouched Samsung F4 2TB. I can instantly bump my storage space up to 11TB when I bring this server online. I hope to replace all of the 1TB drives with 2TB drives as I go along, retiring the 1TB drives to other tasks.

So what will I use the new server for? To begin with I may just move all of the BetaArchive archive backups to it and use it as an offline server for the time being. Running two servers will mean higher electricity costs, so if possible I want to hold off on that until it’s necessary. It costs about £13 a month to run a single server at the moment (130 watts at 14.7p/kWh).
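
That figure checks out with a bit of quick arithmetic:

# Monthly cost of one box drawing a steady 130W at 14.7p per kWh.
watts = 130
pence_per_kwh = 14.7

kwh_per_month = watts / 1000 * 24 * 30
cost_pounds = kwh_per_month * pence_per_kwh / 100
print(f"{kwh_per_month:.1f} kWh/month, about £{cost_pounds:.2f}")   # 93.6 kWh, ~£13.76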

As I have some spare time later this week I will see what I can do to bring it online and get it running a copy of a server OS. I have a couple of beta keys I can use temporarily while I play around. I won’t keep the server online 24/7 yet though, as I’m still waiting for the fan extension cables; without them I can’t safely run it 24/7 without risking damage from the heat.

Right, that’s probably enough talk about servers for now. I’ll make another update during the week if I get around to having a play.