Thursday, 29 December 2016

Announcement : My Book On Kindle!

You can pre-order my Kindle e-book today for delivery the 1st of January 2017!



For all those times you've wondered quite how to approach programming, or if you've lost your confidence to overly technical tomes trying to teach you, try my personal, one-to-one approach!

The text takes the same format as the books I learned to program from in the 1980s: it helps you understand the basics without ever throwing you into fits of rage or despair, and I am right there with you to guide you.

This is the first book in what I hope to make a series; it takes you through basic variables, numbers, maths and text processing, into the basics of lists and then object-oriented programming, ending up with error handling and file processing.

I think Python, despite a few quirks, is an excellent starting point for any beginner today; it lends itself to introducing programming across multiple platforms oh so very quickly, and one can still find jobs out there specifically requiring Python skills!

Pre-Order today!  And I'll see you inside!



Saturday, 24 December 2016

Administrator : Blocking Spammers & Hackers (Basics)

To see who has been trying to connect to your Debian (or Ubuntu) server, use:

cat /var/log/auth.log | grep "Failed"

This will list out the failed attempts; then add the successful ones with:

cat /var/log/auth.log | grep "session opened" | grep "LOGIN"

You can note the IP addresses for the unwanted attempts and block them with iptables, like this:

sudo iptables -I INPUT -s AAA.BBB.CCC.DDD -p tcp --dport ZZZ -j REJECT

Where the IP to block is "AAA.BBB.CCC.DDD" and the port is "ZZZ" as a number, so to block 192.168.0.1 on port 7000 you would do:

sudo iptables -I INPUT -s 192.168.0.1 -p tcp --dport 7000 -j REJECT

Instead of REJECT you can use DROP, and in place of tcp you can use udp and icmp protocols.

To block a whole subnet range I do this:

sudo iptables -A INPUT -s AAA.BBB.CCC.0/24 -p tcp --dport ZZZ -j DROP

This makes all addresses in the range not respond. Note that iptables takes the range in CIDR notation, not as a pair of addresses: 192.168.0.0/24 covers 192.168.0.0 through 192.168.0.255 (blocking on the last octet of the address), while 192.168.0.0/16 covers 192.168.0.0 through 192.168.255.255 (blocking on the last two octets). You can also write the mask out longhand, as in 192.168.0.0/255.255.0.0.

You can view the iptables in use with:

sudo iptables -L
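If you later need to undo one of these blocks, it helps to list the rules with their positions and then delete by number; a couple of housekeeping commands (the rule number 3 here is purely illustrative, check your own listing first):

```shell
# Show the INPUT chain numerically, with rule positions
sudo iptables -L INPUT -n --line-numbers

# Remove the rule at position 3 of the INPUT chain
sudo iptables -D INPUT 3
```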



Why Does This Exist?
It's not often I have to actually turn a server towards the outside world; my personal servers usually sit on my LAN and never route to the internet, and likewise the items I provision in the office are for internal use...

Yesterday however, I had the pleasure of being told to make a service available to the outside world...

No big deal, it's Apache2 on an Ubuntu host, set up done... And I only opened port 80 then left it... All was fine...

It has run for six hours... six... On a newly acquired IP address; no-one but the recipient at the far end knows the server is there, it has no DNS entry, it has no other services, just port 80 and ssh open...

Yes, I have had hacking, probing, security breach attempts from China, Vietnam, the British Virgin Islands, Canada, the Netherlands and Russia...

The mind boggles at quite how much hacking and infiltration is going on out there...

I've been checking the (mainly ssh) breach attempts with the command:

cat /var/log/auth.log | grep "Failed"

I run this out to a file and then have a python script to log the IP addresses into a table for me, and I can then just block them individually or as a subnet range, through iptables.
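For the curious, the parsing half of such a script needs nothing fancy. This is a minimal sketch, not my actual script; the log-line shape shown in the comment is the usual Debian sshd one:

```python
import re
from collections import Counter

# Matches the source IP in auth.log lines such as:
# "Dec 24 09:15:01 myhost sshd[1234]: Failed password for root from 203.0.113.7 port 4242 ssh2"
FAILED_PATTERN = re.compile(r"Failed .*? from (\d{1,3}(?:\.\d{1,3}){3})")

def count_failed_ips(lines):
    """Tally the source IP address of every failed login attempt."""
    counts = Counter()
    for line in lines:
        match = FAILED_PATTERN.search(line)
        if match:
            counts[match.group(1)] += 1
    return counts

# Usage against the real log (needs read access to auth.log):
#   with open("/var/log/auth.log") as log:
#       for ip, hits in count_failed_ips(log).most_common():
#           print(ip, hits)
```

From the tallies you can decide whether a single address or a whole subnet deserves the iptables treatment.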

I also check for successful logins just in case with:

cat /var/log/auth.log | grep "session opened" | grep "LOGIN"

I wonder however whether a python script to manage all this for me might be in order... Hmmm, project time!

Friday, 23 December 2016

People: I am Xelous (Someone else has an Identity Crisis)

I bring this to the public attention of these pages as I had what could only be described as an ill-conceived lecture last night about my name. Now let us be clear: we all know "Xelous" is not my actual name; it is, however, a name I have used and evolved online since 1996.

If you google "xelous" you will find me... I admit also a young lady on Picorama... but you find me...

If you google my actual name you will find the Gillingham goalkeeper, and a well-known film franchise; you will not find me.

You will likewise NOT find me vaunting myself on LinkedIn; again, I have been challenged about this.  My first reason for not really embracing social media in this manner is that I get a lot of spam, an awful lot of spam; the second is that it's simply not mature enough for me yet.  Facebook is too much of a dictatorship and not professional, and LinkedIn is immature and insecure.

I noted how that LinkedIn hack disappeared from the radar pretty quickly in the Summer... I however remembered it.

I've had the comment that "Xelous" is a silly name.  It's a name... It is actually a name; I thank "Nacel Xelous Elal Ybab" of the Philippines for letting me make this point.

I have worked professionally in the Entertainment Software industry under the pseudonym "Lord Xelous" as well, and it is the name on my YouTube channel.

So to that nay-sayer: you were wrong, and sounded silly.  Indeed I looked you up online, with your buttoned-down-collar supposedly "correct" name; you were WWAAAAY down the google search listings, and even came second on the LinkedIn listing, and your surname is quite unique in and of itself!  FAIL!

Monday, 19 December 2016

Programming : Using Boost File System Directory Iterator to Find a File

PLEASE NOTE, THIS POST CONTAINS UPDATES TO PREVIOUS POSTS - BOOST 1.62.0 NO LONGER TAKES THE "MinGW" DIRECTIVE; TO USE MINGW YOU NOW NEED ONLY PASS THE "GCC" DIRECTIVE; see section 5.2.2 at this link...

Today I'm going to do some code.  This is a code example with Code::Blocks, using the MinGW compiler on Windows, to build the boost libraries and then use the boost.filesystem library to find a specific file from a given root: first just looking in that directory at all its files, and then finally making it recurse down the tree of files.

First things first, let's create a folder to work in.  I'm going to call it "FindaFile"; inside this I'm going to create "ext" for external items, which is where I'll place & build boost, and "src" for source, which is where our project file & source code will reside... Let's get started...

Next...


Extract the boost library; I happen to be using 1.62.0, which is the current version at the time of writing... I then need a command prompt which knows the location of the MinGW compiler (i.e. we've added it to the PATH environment variable)...


We're preparing to build boost with the mingw toolset...


And then performing the boost build, which on this machine is going to take a long time, as I only have 2GB of RAM... Hit that donate button to help me improve some of my machines!


So the path was set, the bootstrap is performed:

bootstrap gcc

and then I start the build

b2 toolset=gcc


Once the build is complete, the first of the two folders we will be using in our Code::Blocks project is the "boost" folder within the boost_1_62_0 folder; this contains all the header files for the boost library.  Some of the libraries are header-only, so just including this folder in your builds (on gcc with "-I /foldername/") is enough to use them; items like "lexical_cast" are perfect examples of this.


The other folder is the "/stage/lib" folder, which will contain all the built library binaries.  The filesystem library is not header-only; you must link against the "boost_filesystem" library file, and this is where it will reside!

You can leave the boost build running and open Code::Blocks now...


And create a new project, with a little bit of normal code, to check everything is working & we have set it to use C++11 (I would like to use a newer C++, but I only have this old compiler installed)....




Now, let's set the build options in the project...


I am going to set C++11, all warnings and stop on fatal errors...


Then I am going to set the compiler to look in the boost folder for the headers...


And now we tell the linker where to look for the libraries....


Our final step is to tell our program to use the filesystem library; when doing this you also need to use the system library... So let's switch to the linker settings and insert those libraries...

Now, the file name we are going to add is:

libboost_system-mgw47-mt-d-1_62.a

Let's break this down, from left to right: we are told that this is part of the boost library ("libboost"), that it is the "system" sub-library, that it was built with MinGW v4.7 ("mgw47"), that it is the multi-threaded variant ("mt"), that it is a debug build ("d") and that it came from boost v1.62... You need to know this, because in the previous steps we ONLY set our search directories and settings for the "Debug" version of the project... When you switch to the clean, smaller, faster "Release" build you will need to set everything again!

We then add this library into the linker settings, within the link library section... We need to add this file and the filesystem file...


The IDE might ask you to add the library as a relative path, directly pointing to the file; I personally do not advise this, as we've set up the "search directories" we need only enter the filename into the library list.

Now, once the boost build is completed in the background, we can use the boost file system library in our code, and check for the presence of a file....


Let's write some code to take a directory to search and a file to search for at the command line, and just output them, as a starting point....


We can go into "Project" on the menu to set the parameters for the program to some useful values... I have added a "Help" function already to point out when a mistake is made... I am going to check my code by running it without parameters, then with bad parameters, and finally with a valid folder name & filename... The targets I am using are "C:\Code", a folder I know exists, and "Program.cs", to look for all the C# program files I have in that folder...

Let's see our program output at this point...


Our first calls to the boost library are going to turn the search directory string into a path, which is easier to work with in the boost library.  You don't have to perform this step: almost all the boost filesystem library functions will automatically cast strings (or cstrings, or wstrings) you pass to them into boost::filesystem::path or wpath instances on the fly.  However, in the long run it's quicker for your code if you convert them into paths once, rather than have the library create and throw away instances of paths over and over as you make various calls.

We will also need to check that the search directory is indeed a directory to start off from...


Next we need to iterate through the directory and, for each file, check if its name is a match... I am going to put this into a function straight away, so that whenever we meet a sub-folder we can automatically queue it for searching as well....


And the code for this Search Function looks like this:

void SearchDirectory(const boost::filesystem::path& p_Directory, const std::string& p_Filename)
{
    std::cout << "Searching [" << p_Directory << "]...\r\n";
    std::cout.flush();

    auto l_Iterator = boost::filesystem::directory_iterator(p_Directory);

    // A default-constructed iterator is the "end" point
    auto l_End = boost::filesystem::directory_iterator();

    for ( ; l_Iterator != l_End; ++l_Iterator)
    {
        // This is the type "boost::filesystem::directory_entry"
        auto l_DirectoryEntry = (*l_Iterator);

        // Look for subdirectories, files or errors....
        if ( boost::filesystem::is_directory(l_DirectoryEntry) )
        {
            // Recurse down into the sub tree
            SearchDirectory(l_DirectoryEntry, p_Filename);
        }
        else if ( boost::filesystem::is_regular_file(l_DirectoryEntry) )
        {
            // Regular files are NOT symlinks, or short cuts etc...
            std::cout << "Found File... ["
                << l_DirectoryEntry.path().string() << "]\r\n";

            // So we look for a match!
            if ( l_DirectoryEntry.path().filename().string() == p_Filename )
            {
                // Notice here that we call to get a "path"
                // and then a "string" from that path!
                std::cout << "!!!! Match Found ["
                    << l_DirectoryEntry.path().string() << "] !!!\r\n";
            }
        }
        else
        {
            // Unknown directory entry type
            std::cout << "Error, unknown directory entry type\r\n";
        }
    }

    std::cout.flush();
}

You will notice we receive a "directory_entry" from the "directory_iterator", then we get the "path" out of that, and finally the "filename" from that path... We can output any of them along the way, but we only compare the filename with the search pattern.

If the "directory_entry" was found to be a directory itself we simply recurse into another search.

This code now works....


This completes this little tutorial.  If you found it of some use, please follow the blog; if you really, really liked it and want to help me develop more ideas, or suggest more ideas, the donate button and e-mail link are at the top right!

Good Luck!

Friday, 16 December 2016

Software Engineering : Is not Engineering

Right I'm guilty, and annoyed at myself, and making a change... Though I might still keep this as a tab on posts... I AM NOT GOING TO USE THE TERM "SOFTWARE ENGINEER" anymore....

This makes my degree certificate wrong, as it clearly states "Software Engineering", but even though that is indeed what I do every day, and what I read about every night, it is not what, nor who, I am... I am a programmer, a hacker (in the traditional sense), a tinkerer and a student of all things software.

Many other writers have called us programmers out on this, and finally I'm going to eat humble pie and agree: when one sits down to write code one is not doing what the great engineers did.  We are not forging railways, bridges, hulls of great ships, or physical, tangible results which must stand the test of time.

We are building a more ethereal, almost smoke-and-mirrors concept: results through the action of our instructions upon another.  That is programming; it is what I do.

Why do I want to make this distinction?  Well, as you may tell from some of the recent posts around here, I've been involved in merging parts of teams and companies, meeting both incoming and shifting personnel to fit them into the matrix that spells "results" for a company.

No code has yet been cut, but a new team, and new ideas might very well be needed.  In turn I have reached out there and been talking to others, to recruiters, to other companies, and indeed I've sat before other people.

My friends also call upon my expertise, as one of the few from our graduating class still working in Software or indeed technology, I am often called upon for a little technical guidance.

Results have been mixed, but the determinable difference between success and failure has relied, nearly exclusively, on the other party understanding the term "Software Engineer".  It does not mean we can program your VCR, set the clock on your microwave, or save your phone contacts to your SIM card.  It means we are able to employ structured methods, to define procedure, and to design, write, test and then document code as products for use or sale.

This does not include our being Electrical, Mechanical or Structural Engineers!

I am not trying, willing, or able to build the next Channel Tunnel, or Skylab, or HMS Bullshit.  I am able to cut code to make an existing system, or device, bend to the will of the requirements upon it; I am able to look at said device and decide whether it is fit for the purpose or not; I am not creating that device!

Creating said device is Mechanical or Electrical Engineering; I am Software.  The use of the "Engineer" moniker is causing some confusion, some blurring of lines, so to help delimit this boundary and stop this confusion, from now on I will self-identify as a Programmer, and cease to try to explain all that this entails.

I am a Programmer, a Lead Programmer, a Systems Programmer, a Device Programmer, a Prototype Programmer, a Senior Programmer, a Team Leading Programmer, a Development Provisioning Programmer, no longer am I an Engineer!

Wednesday, 14 December 2016

IT Sexism : What is Modern Development (for a Woman)?

Recently, I was challenged, almost to a fault, to enthuse about "modern development"... The quiz master expected me to extol the virtues of distributed code versus centralised, to effervesce about sprints, about round-tabling, about all these other buzzwords and cockeyed concepts.

I simply asked the quiz master quite what they meant by "modern development"...

So surprised were they that they broke through a little bit of their shell and just talked to me, it was a refreshing change, moving away from methods and procedure, we talked about the system itself, avoiding even the language they were working in.

It soon became apparent we both worked from the same hymn sheet, just perhaps mine is an older copy... For you see, to me a good development job doesn't necessarily sit with the work at hand; it sits with how you are allowed to approach that work.  If you are just bombarded with red tape, requirements and micromanagement, then the person micromanaging you may as well just do the job on their own; and I have worked as the poor soul doing the work in just such a situation.  And it soon became clear this was the problem with the team this person was controlling.

It appeared they had gone through personality, philosophy and finally technical clashes, to the point that the four developers on the team didn't do much more than talk to one another about computers and draw their cheques each month.

The manager was doing all the work, not because she wanted to, not because she was female and they pushed it onto her, but because she had set high standards and promised delivery, but was getting no input from those below her.

In a paradoxical, and pretty toxic, situation, she was enabled to hire one more head, but not able to fire the useless four... She had ideas I might be that fifth chair, not a thought I relished as the conversation expanded into the actual problems she faced.

Now, I've run into dysfunctional teams before, but I know in the end the work either gets done or you all get fired, sometimes both.  To sit and provocatively do nothing in the manner these folks did, though, was tantamount to embezzlement.  When I was introduced to them, they just sniffed and looked at me; I was egging myself on to ask them technical questions, but I was whisked into a private meeting room, where this sad talk unfolded.

After twenty minutes lamenting the problems I asked "What do you use as a way to leverage the work, what is that pedal you press to put on the pressure or get results?"...

She looked at me rather blankly, almost as though I were a magician showing a dog a card trick.  As her eyes blinked she started to reel off a bunch of methods... Waterfall this and sprint that, and engagement in progress... yadda yadda yadda....

I put my hand palm up and asked again "not what procedures, what do you physically do?"

She didn't do anything; she sat in her office away from them all day and did code.  She never got out of her chair and challenged them, she never spoke to them, she never pushed them in person... "Why not?"

And I was flabbergasted to hear an intelligent, clearly able, mid-twenty-something graduate and clearly decent programmer trying to manage a team say to me... "Because I'm a woman, they don't listen".

She was serious too!  Going on to explain that when she started she worked for a chap about my age, and they listened to him; she was one of them, though she took a lot of ribbing.  She moved into the supervisory role, still in the same room as them, without issue; then she was appointed to manage this new project when the other chap left... And that's where the problem started.

I wasn't sure whether it was green eyed envy at her progressing above them, or just because she had boobs, but whatever the reason I thought it extremely uncomfortable.  I saw her wanting, desperately to have a puppet in a more senior position to the team, to invoke her will upon them, and wanted myself (clearly 10 years older than them, and twenty something years her senior in age) just as a sort of enforcer and inquisition.

And I thought: her idea of "modern development" was lots of buzzwords and concepts; she had lost the "just talk to people" vibe.  Sadly, because she was being intimidated, perhaps even sexually harassed... That was the sad state I found modern development in... Sickening, and not a place I wanted to involve myself.

Monday, 12 December 2016

What TLA are you today?

TLA... TLA... A kingdom for your TLA... That's "three letter acronym", by the way... Well, strictly, three letters is not the limit, but I'm extremely annoyed this week at hearing all these seemingly random, strange acronyms being thrown around...

I've heard....

CCNA, ITIL, MCSE, CIT, BACCT and EMECH today alone...

Some I have heard of, some I've not, the last is so generic as to be equivalent to "I read a book, once".

So let's cover my opinion of a few of these.  The Cisco certification is clearly fine, and required to work with their kit; I've always found reading Cisco manuals to be a torture, but as a pfSense convert I no longer have many issues with Cisco kit.

ITIL: I actually started to read this course myself.  It was... It wasn't that useful, practicality-wise at least; it relied a lot on Microsoft-specific software.  Run to Linux and be set free!

MCSE: this was the classic "well, you did graduate a few years ago, but now you need this" qualification... However, now... Not so much, at all... Especially when you use a compiler other than Microsoft's more often than not!

The rest, well... I'll be honest, I'd never heard of them... However, I don't take an effective CV as being one containing a whole bunch of these acronyms.  I am not the kind of reader, or reviewer, to just think "shit, I don't know that acronym, the writer of this CV must be real smart!"... No, I actually look it up, and you know, if I google for your qualification and can find only adverts for places offering the course, I think the course is only good for those selling it; it's not actually very good for those of us trying to decipher your ability....

Let's draw a line under this discussion and just focus on something else...

I just talked about your ability; if you want to quantify your ability, go right ahead, but I'll personally take your worth: how much do you value the work, how do you communicate your wish and will to succeed?... That's more important than any paid-to-read, cookie-cutter course, in my opinion.

Followers....

I have resisted adding the followers item to the blog, for years, however I've just enabled it... So, join me now... Follow me into the light....

Monday, 5 December 2016

Parkinson's from the Gut?

Let me preface this post with, I AM NOT A DOCTOR, but I would love to hear from one about my thoughts...

I've just got through reading this article (it's very brief) on the BBC, which talks about a study of genetically identical mice being tested for Parkinson's, and how gut microbes played a huge part in making one set begin with the disease; indeed being one of the root causes, above and beyond the genetic predisposition to developing the disease.

This is a huge breakthrough; however, the article talks about this being "discovered for the first time a biological link between the gut microbiome and Parkinson's disease".  Which may very well be true, but it goes on to talk about the microbes in question "break[ing] down fibre into short-chain fatty acids"...

So, this is not the first time such a link has been biologically established; we've already seen this in studies of adrenoleukodystrophy (ALD).  Many people know this story from the Hollywood movie "Lorenzo's Oil", the story of the struggle of Augusto and Michaela Odone to find a cure, or therapy, for ALD for their son Lorenzo.

Augusto undertook a mammoth research task himself to prove that using said "oil" worked to block the breakdown of myelin (around nerve cells) in the body.

Surely, therefore, this new research is related; the biology of the body is holistic, after all, and no system within is truly unconnected from the rest.  So is there not a crossover, or a commonality, between the Myelin Project's proven research (that restricting the intake of long-chain fatty acids, perhaps so that the bacteria have nothing to work on, and flooding the system with non-harmful short-chain fatty acids, helps) and the fact that the bacteria now proven to lead to the onset of Parkinson's are creating short-chain fatty acids as well?!

One could leap to a conclusion, or a link, and I hope I'm not giving the impression of that, but I find it so curious that there's no mention of a crossover here, no seemingly holistic look at the action of bacteria or other digestive processes breaking down (very-long-chain) fatty acids and that effecting changes in the brain & nervous system.  There might be such research that I'm simply not privy to...

As I said, get in touch if you know... I am literally all ears!


Friday, 2 December 2016

Code History : Old Computers to Teach New Developers

A fair while ago, I posted about my Virtual CPU code.  That code (still not complete) just does addition (and, by association, subtraction); the point was to lead into code which emulated actual logic gates, and indeed I wrote (somewhere in these pages) a half-adder followed by a full-adder emulation in C++ code.

The purpose of this was actually related to my giving a talk to some school children; it was never intended for these pages.  The kids could not wrap their heads around not being able to "just multiply".

They had studied computer memory, and had written documents in Word bigger than the actual memory footprint of the machines they were talking about.

The teacher, a friend of mine, wanted to demonstrate this to them... I therefore hove into view with my 12Kbyte Commodore 16... And challenged the kids to write a program for it... They then had a massive struggle... One bright young chap found a C16 emulator online, and he wrote out a long program in Commodore BASIC which amounted to little more than a text editor...

It was very impressive work from the young chap, and I've kept his number for when he graduates (in 2021).  Unfortunately it only worked on the emulator, as there you could assign more memory to the machine... After typing the program into the actual machine... It ran out of memory!  The edit buffer was only 214 bytes...

He had only tested with "10 print 'hello' 20 goto 10"; typing any lengthier program in essentially started to overwrite the previous lines of code.

You might call this an oversight, but it was semi-intentional as after all the project was about memory.

So having learned how expensive and precious memory was, and is, in this world of near unlimited storage the kids moved onto assembly, learning about how the lowest level of a machine worked.

This is where my work came in to help, because they could not wrap their heads around "not multiplying".  In some chips a multiplication call might be many thousands of gates ("to multiply was 200,000 gates, over half the chip" - 3Dfx Oral History Panel - see video below).


Hence I wrote the code of a CPU that could only do addition; to multiply, one had to write assembly which did a loop of additions!  I left the kids to write the assembly for division, which is why you never see it here in my code.
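To give the flavour of that exercise in C++, rather than in my virtual CPU's assembly, multiplication on an add-only machine boils down to a loop like this (a sketch assuming a non-negative multiplier):

```cpp
// Multiplication as repeated addition, as on a CPU with no
// multiply instruction; p_Times is assumed to be non-negative.
int MultiplyByAddition(int p_Value, int p_Times)
{
    int l_Result = 0;
    for (int i = 0; i < p_Times; ++i)
    {
        l_Result += p_Value;
    }
    return l_Result;
}
```

So MultiplyByAddition(6, 7) performs seven additions to arrive at 42, which is exactly the shape of the loop the kids had to write.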

It worked, but I find so many commentators have missed this piece of computing history; they have missed that machines used to NOT come with every function, and that you had to emulate some functions in software with the bare set of commands present.

Some have confused this with the idea of RISC.  This isn't what RISC is about, but I distinctly remember being taught about RISC-based machines at school (Acorn machines) and that RISC meant there were "fewer instructions".  Sophie Wilson herself tells us that this isn't the point of RISC...


Having just spoken to my friend about the possibility of presenting these ideas again to a class of kids, I'm wondering whether I need to clean up all my code here, to actually make sense of all these disparate and separate sources and write one paper on the topic; histories of this which are readable by kids seem to be sadly lacking, or they start from the point of view of a child of my time, born in the late 1970s, who remembers a time when you had limits in computing.

Kids today see no such limits, and find it hard to relate to them.  My own niece and nephews, who are just turning 15, find it hard to fathom such limits; even when they can sit down with me in front of a 12K machine, or a 512K machine, they can't relate.  These pieces of history, these things which one previously had to work around, are alien to them.

They don't need to work around them, and this leaves me meeting modern graduates who lack some of the lowest-level debugging and problem solving skills.

Indeed, I see these great efforts to publish frameworks to make modern developers test and think about software, because new developers have never had to get things right the first time...

I did; I'm one of the seeming few who had to get it right, first time.  This is not bragging; it's actually quite sad, as how do you prove your code is good?  Pointing to a track record counts for very little, especially when the person you are trying to persuade has no interest in you, just your skills.

My most recent anathema, Test Driven Development, seems to be the biggest carbuncle in this form of "modern development"... Write me some code, they might ask; you can write the code and show it's correct, or you can write the tests which test the range, the type, the call, the library... then write the code... One is quick and efficient, but requires faith in the developer; one is slower, and aims to forge faith in the code out of the result... Both end up with the same result, but one requires a leap of faith and trust.

Unfortunately, bugs in code, over the history of development, have destroyed that faith in the developer.  There are precious few developers who are trusted to act on their own initiative any longer.  I know I work in a part of my employer's company where I am trusted to act on my own initiative, with the temperance that I have delivered very many years of good working products.

But I'm seeing, and hearing of, so many other parts of companies around us which do not trust their developers.  I would argue that if these developers had had to struggle with some of the historical problems my own generation of developers struggled with, then they would be trusted more, and be freer to act, rather than being confined and held back by needing to check, recheck and double-check their code.

Trust in one another, a peer review, and, where necessary, a sentence of text on the purpose of some function or other should really define good development; the code itself can tell you its purpose, not the tests, and certainly not just running the code and observing it.

I therefore propose I get off my pontificating bum, clean up all my "virtual CPU" stuff, document some of these issues, and we, as part of the development community, try to get new developers to challenge themselves against old computers... Computerphile already demonstrate this with their crash bug examples on the Atari ST...


Monday, 28 November 2016

The Xeon Hack is Dead, Long Live the Xeon Hack!

Whilst investigating a problem with the Socket 775 to 771 Xeon hack machine, I found it would no longer boot, then it would no longer POST, and finally it would no longer even accept power (indicated by the 3v rail LED)... This was a disaster; I've been sorting out the network at home for the last few weekends, and yesterday morning was meant to be a delve-in-and-fix session (at 8:30am on a Sunday, this is a feat).

Unfortunately it immediately escalated: the motherboard showed distinct signs of corrosion, which is really strange as it's been in a dry, airy room.  It looked (and tasted) like salt condensed on the board... I do wonder if this board had a life near the coast in former days (it was second hand), and the salt just slowly precipitated out of the fibreglass?

Whatever the reason, there was salt all over the board.  I cleaned it all with isopropyl alcohol, to no avail; it would not POST.

So I stripped it out and went to my junk pile, two other motherboards were already dead, the third... Well I know it works, after a fashion, it's an EVGA nForce 680i SLI board, my previous main workstation board actually... But I retired it for my Core i7, and it had been off with a friend of mine, it has at least one faulty RAM slot too...

Inserting my test Celeron D, it came on, and I could run a memory test until it overheated and thermal shutdown occurred... So I pulled the Xeon out of the hack board and put it into the nForce... Nothing... Dead... But a BIOS patch later (applied with a floppy disk!) and everything was working...

So the Xeon went into the EVGA nForce 680i no problem!  4GB of RAM installed in the two working slots, and with new thermal paste I left it soak testing... Everything seems fine...

And this board is equivalent to (if not better than) the previous one, because I know its history: it's got six SATA headers and dual gigabit LAN... It's actually the perfect little server board, except for the dead memory slots.

A new one of these boards is still around £50, so that was out of the question.  I did order a replacement from eBay, a Gigabyte-branded one, which can take up to 16GB of RAM but only has a single LAN connection; it will have to do.

Until then though, the server is getting re-installed on the EVGA nForce 680i, and I'm going to keep my eyes on eBay for another of these boards to replace the already-dead set from my junk pile.

On the topic of drives, I wanted to set up a series of physical mirrors with ZFS, however, I don't have matching drives, so I'm wondering what's the best order to set up the disks...

I feel a little confused as to the best way...


Tuesday, 15 November 2016

Administrator: ZFS Mirror Setup & NFS Share

I'm going to explain how to use some simple (VMware emulated) hardware to set up a ZFS mirror.  I'm picking a mirror so that the disks hold 100% duplicates of the data.

I've set up the VM with a 4 core processor and 4GB of RAM, because the potential host for this test setup is a Core 2 Quad (2.4GHz) with 4GB of DDR2 RAM, and it's perfectly able to run this system quite quickly.

The first storage disk I've added is a single 20GB drive, this is the drive we install Ubuntu Server 16.04 onto.



Then I've returned to add three new virtual disks, each of 20GB.  These are where our data will reside.  Let's boot into the system and install ZFS... Our username is "zfs-admin", and we just need to update the system:

sudo apt-get update
sudo apt-get install zfsutils-linux

Once complete, we can check the status of any pools with "sudo zpool status", and we should see nothing... "No pools available"


We can now check which disks we have as hardware in the system (I already know my system is installed on /dev/sda).

sudo lshw -C disk


I can see "/dev/sdb", "/dev/sdc" and "/dev/sdd", and I can confirm these are my 20GB disks (showing as 21GB in the screen shot).
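If lshw isn't to hand, lsblk gives a quicker (if terser) view of the same disks; a sketch, guarded in case the tool is missing, and the exact output will differ per machine.

```shell
# A quicker view of the disks (-d lists whole disks, skipping
# partitions); falls back to printing the command where lsblk is absent
list_cmd="lsblk -d -o NAME,SIZE"
if command -v lsblk >/dev/null 2>&1; then
    $list_cmd
else
    echo "would run: $list_cmd"
fi
```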

The file they have needs about 5GB of space, so our 20GB drives are overkill, but they've just had a data failure; as a consequence they're paranoid, so they now want to mirror their data to make sure they have solid copies of everything rather than waiting on a daily back up...

sudo zpool create -f Tank /dev/sdb

This creates the storage pool on the first disk... And we can see this mounted into the Linux system already!


sudo zpool status
df -h

Next we add our mirror disk, so we have a copy of the pool across two disks... Not as fast as raidz, but I'm going with it because if I say "raid" there are going to be "Raid-5", "Raid-6" kind of queries, and I'm not going to jump through hoops for these guys, unless they pay me of course (hint hint)!


That's:

sudo zpool attach -f Tank /dev/sdb /dev/sdc

which is going to mirror the one disk to the other... As the disks are empty, this resilvering of the data is nearly instant, so you don't have to worry about time...

Checking the status and the disks now...


We can see that the pool has not changed size, it's still only 20GB, but we can see /dev/sdb and /dev/sdc are mirrored in the zpool status!

Finally I add their third disk to the mirror, so they have three disks mirroring the pool; they can detach one and take it home tonight, leaving two at work... It's a messy solution, but I'm aiming to give them peace of mind.
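The command for that third disk is just another attach; a sketch assuming the new drive appeared as /dev/sdd (check lshw first), guarded so it degrades to a dry run on machines without ZFS installed.

```shell
# Attach a third disk to the existing mirror in pool "Tank";
# /dev/sdd is an assumed device name - confirm yours before running
attach_cmd="zpool attach -f Tank /dev/sdb /dev/sdd"
if command -v zpool >/dev/null 2>&1; then
    sudo $attach_cmd
else
    echo "would run: sudo $attach_cmd"
fi
```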


To detach a drive from the pool, they can do this:

sudo zpool detach Tank /dev/sdc

And take that disk out and home; in the morning they can attach it again and see all the current data get resilvered onto the drive.

So, physical stuff aside, they now need NFS to share the "/Tank" mount over the network...

sudo apt-get update
sudo apt-get install nfs-common nfs-kernel-server
sudo nano /etc/exports

And we add the line:

/Tank 150.0.8.0/24(rw,no_root_squash,async)


Where the address is your local network range in CIDR form; at home for me this would be 192.168.0.0/24.  Note there must be no space between the address and the bracketed options; with a space, exports treats them as separate entries and applies the options to the whole world.

Then you restart nfs with "sudo /etc/init.d/nfs-kernel-server restart", or reboot the machine...
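Rather than a full restart, exportfs can re-read /etc/exports in place; shown here as command strings only, since both need root and a configured NFS server.

```shell
# Two ways to apply a changed /etc/exports on Ubuntu 16.04:
reload_cmd="sudo exportfs -ra"                          # re-export everything in place
restart_cmd="sudo systemctl restart nfs-kernel-server"  # or bounce the service
echo "reload:  $reload_cmd"
echo "restart: $restart_cmd"
```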


From a remote machine you can now check and use the mount:
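The client-side check I'd expect looks something like the following; the server address is a placeholder, and the commands are printed rather than run since they need a live NFS server and root.

```shell
# Ask the server what it exports, then mount the share;
# 192.168.0.10 is a placeholder for your server's address
server="192.168.0.10"
echo "sudo showmount -e $server"
echo "sudo mount -t nfs $server:/Tank /mnt"
```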


Why does this exist?
Why does this exist?
I think I just won a bet.  About 10 years ago I helped a friend of mine (hello Marcus) set up a series of cron scripts to perform a dump of a series of folders, as a tar.gz file, from his main development server to a mounted share on a desktop class machine in his office.
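That arrangement was roughly this shape; the paths and schedule are invented for illustration, and this sketch uses temporary directories so it can run anywhere:

```shell
# Sketch of the nightly cron job: archive a folder tree as a dated
# tar.gz onto the mounted share. In the crontab it would look like:
#   0 2 * * * /usr/local/bin/nightly-dump.sh
src=$(mktemp -d)     # stand-in for the dev server's project folders
share=$(mktemp -d)   # stand-in for the mounted backup share
echo "hello" > "$src/example.txt"

archive="$share/dump-$(date +%Y%m%d).tar.gz"
tar -czf "$archive" -C "$src" .

# Restoring later (onto the ZFS pool, say) is the reverse:
#   tar -xzf "$archive" -C /Tank
ls -lh "$archive"
```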

He has just called me in a little bit of a flap, because that development server has gone down; their support for it had lapsed, and he can't seem to get any replacement hardware in for a fair while.

All his developers are sat with their hands on their hips asking for disk space, and he says "we have no suitable hardware for this"...

He of course does: the back-up machine running the cron jobs is a (for the time) fairly decent Core 2 Quad 6600 (2.4GHz) with 4GB of RAM.  It's running Ubuntu Server (16.04, as he's kept things up to date!)...

Anyway, he has a stack of old 80GB drives on his desk.  He doesn't 100% trust them, but the file they have is only going to expand to around 63GB... So he can expand it onto one of them; the problem is he wants to mirror it actively...

Convincing him this Core 2 Quad can do the job is hard, so with him on the phone I ask him to get three of these 80GB drives (they're already wiped) and go to the server... Open the case, and let me ssh into it.

I get connected, and the above post is the result, though I asked him to install just one drive (which came up as /dev/sdg) and then I set that up as the zpool, then I asked him to physically power off and insert the next disk, where I then connected again and added it as a mirror.

In the end he has five actual disks, of dubious quality, mirroring this data; he's able to expand the tar.gz backup onto the pool and it's all visible to his developers again.

This took about 15 minutes... It in fact took longer to write this blog post as I created the VM to show you all!