Friday, 2 December 2016

Code History : Old Computers to Teach New Developers

A fair while ago, I posted about my Virtual CPU code. That code (still not complete) just does addition (and, by association, subtraction); the point was to lead into code which emulated actual logic gates, and indeed I wrote (somewhere in these pages) a half-adder followed by a full-adder emulation in C++ code.
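If you missed those posts, the idea is simply to model each gate as a function and wire them together; a minimal sketch of the sort of thing (illustrative, not my original listing) looks like this:

#include <iostream>

// Each "gate" is just a function over booleans.
bool AND(bool a, bool b) { return a && b; }
bool OR(bool a, bool b)  { return a || b; }
bool XOR(bool a, bool b) { return a != b; }

// A half-adder: sums two bits, producing a sum bit and a carry bit.
void halfAdder(bool a, bool b, bool &sum, bool &carry)
{
    sum   = XOR(a, b);
    carry = AND(a, b);
}

// A full-adder: two half-adders plus an OR to merge the carries.
void fullAdder(bool a, bool b, bool carryIn, bool &sum, bool &carryOut)
{
    bool s1, c1, c2;
    halfAdder(a, b, s1, c1);
    halfAdder(s1, carryIn, sum, c2);
    carryOut = OR(c1, c2);
}

int main()
{
    bool sum, carry;
    fullAdder(true, true, true, sum, carry); // 1 + 1 + 1 = binary 11
    std::cout << carry << sum << std::endl;  // prints "11"
}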

The purpose of this was actually related to my giving a talk to some school children; it was never intended for these pages. The kids could not wrap their heads around not being able to "just multiply".

They had studied computer memory, and had written documents in Word bigger than the actual memory footprint of the machines they were talking about.

The teacher, a friend of mine, wanted to demonstrate this to them.... I therefore hove into view with my 12Kbyte Commodore 16... And challenged the kids to write a program for it... They then had the massive struggle... One bright young chap found a C16 emulator online, and he wrote out a long program in Commodore BASIC, which amounted to little more than a text-editor...

It was very impressive work from the young chap, and I've kept his number for when he graduates (in 2021). Unfortunately it worked fine only on the emulator, as there you could assign more memory to the machine... After typing the program into the actual machine... It ran out of memory!  The edit buffer was only 214 bytes...

He had only tested with "10 print 'hello' 20 goto 10"; once any lengthier program was typed in, it essentially started to overwrite the previous lines of code.

You might call this an oversight, but it was semi-intentional; after all, the project was about memory.

So, having learned how expensive and precious memory was, and is, in this world of near unlimited storage, the kids moved on to assembly, learning about how the lowest level of a machine worked.

This is where my work came in to help, because they could not wrap their heads around "not multiplying".  In some chips a multiplication call might be many thousands of gates ("to multiply was 200,000 gates, over half the chip" - 3Dfx Oral History Panel - see video below).


Hence I wrote the code of a CPU able only to do addition; to multiply, one had to write assembly which did a loop of additions!  I left the kids to write the assembly for division, which is why you never see it here in my code.
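The trick, expressed here in C++ rather than the virtual CPU's assembly (an illustrative sketch, not my original listing), is just this:

// Multiplication as a loop of additions, as the kids had to write it.
// Non-negative operands only, to keep the sketch short.
unsigned int multiply(unsigned int a, unsigned int b)
{
    unsigned int result = 0;
    for (unsigned int count = 0; count < b; ++count)
        result = result + a;    // addition is the only arithmetic on offer
    return result;
}

Division is the same trick in reverse, counting repeated subtractions, and that one stays with the kids.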

It worked, but I find so many commentators have missed this piece of computing history; they have missed that machines used to NOT come with every function, that you had to emulate some functions in software with the bare set of commands present.

Some have confused this with the idea of RISC; this isn't what RISC is about, but I distinctly remember being taught about the RISC-based machines at school (Acorn machines) and that RISC meant there were "less instructions".  Sophie Wilson herself tells us that this isn't the point of RISC...


Having just spoken to my friend about the possibility of presenting these ideas again to a class of kids, I'm wondering whether I need to clean up all my code here, to actually make sense of all these disparate and separate sources and write one paper on the topic; histories of this kind which are readable by kids seem to be sadly lacking, or they start from the point of view of a child of my time, born in the late 1970s, who remembers a time when you had limits in computing.

Kids today see no such limits, and find it hard to relate to them. My own niece and nephews, who are just turning 15, find it hard to fathom such limits; even when they can sit down with me in front of a 12K machine, or a 512K machine, they can't relate. These pieces of history, these things which previously one had to work around, are alien to them.

They don't need to work around them, and this leaves me meeting modern graduates who lack some of the lowest level debugging and problem solving skills.

Indeed, I see these great efforts to publish frameworks to make modern developers test and think about software, because new developers have never had to get things right the first time...

I did; I'm one of the seeming few who had to get it right, first time.  This is not bragging, it's actually quite sad, as how do you prove your code is good?  Pointing to a track record counts for very little, especially when the person you are trying to persuade has no interest in you, just your skills.

My most recent anathema, Test Driven Development, seems to be the biggest carbuncle in this form of "modern development"... "Write me some code", they might ask, and you can write the code and show it's correct, or you can write the tests which test the range, the type, the call, the library... Then write the code?... One is quick and efficient, but requires faith in the developer... One is slower, and aims to forge faith in the code out of the result... Both end up with the same result, but only one requires a leap of faith and trust.
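To make the comparison concrete, here's the sort of trivial case I mean, sketched in C++ (the function and its tests are invented purely for illustration): the direct route writes the function and reads it over; the TDD route writes the assertions first, watches them fail, then writes the function beneath them.

#include <cassert>

// The code itself: five minutes to write and check over.
int clamp(int value, int low, int high)
{
    if (value < low)  return low;
    if (value > high) return high;
    return value;
}

// The test-first route: these assertions get written before clamp exists,
// fail, and only then is the function above allowed to appear.
int main()
{
    assert(clamp(5, 0, 10) == 5);   // in range: unchanged
    assert(clamp(-3, 0, 10) == 0);  // below range: clamped up
    assert(clamp(42, 0, 10) == 10); // above range: clamped down
    return 0;
}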

Unfortunately, bugs in code, over the history of development, have destroyed that faith in the developer.  There are precious few developers who are trusted to act on their own initiative any longer.  I know I work in a part of my employer's company where I am trusted to act on my own initiative, tempered by the fact that I have delivered very many years of good working products.

But I'm seeing, and hearing of, so many other parts of companies around us which do not trust their developers, and I would argue that if these developers had had to struggle with some of the historical problems my own generation of developers struggled with, they would be trusted more, and be freer to act, rather than being confined and held back by needing to check, recheck and double check their code.

Trust in one another, a peer review, and where necessary a sentence of text on the purpose of some function or other, should really define good development; the code itself can tell you its purpose, not the tests, and certainly not just running the code and observing it.

I therefore propose I get off my pontificating bum, clean up all my "virtual CPU" stuff, document some of these issues, and we as part of the development community try to get new developers to challenge themselves against old computers... Computerphile already demonstrate this with their crash bug examples on the Atari ST...


Monday, 28 November 2016

The Xeon Hack is Dead, Long Live the Xeon Hack!

Whilst investigating a problem with the Socket 775 to 771 Xeon hack machine I found it would no longer boot, then would no longer POST, and finally would no longer even accept power (indicated on the 3v rail LED)... This was a disaster; I've been sorting out the network at home for the last few weekends, and yesterday morning was meant to be a delve-in and fix session (at 8:30am on a Sunday, this is a feat).

Unfortunately it immediately escalated; the motherboard showed distinct signs of corrosion, which is really strange as it's been in a dry, airy room. There was what looked (and tasted) like salt condensed on the board... I do wonder if this board had had a life near the coast in former days (it was second hand), and the salt just slowly precipitated out of the fibreglass?

Whatever the reason, there was salt all over the board. I cleaned it all with isopropyl alcohol to no avail; it would not POST.

So I stripped it out and went to my junk pile. Two other motherboards were already dead; the third... Well, I know it works, after a fashion. It's an EVGA nForce 680i SLI board, my previous main workstation board actually... But I retired it for my Core i7, and it had been off with a friend of mine; it has at least one faulty RAM slot too...

Inserting my test Celeron D, it came on, and I could run a memory test until it overheated and thermal shutdown occurred... So I pulled the Xeon out of the hack board and got it into the nForce... Nothing... Dead... But, a BIOS patch later (applied with a floppy disk!) and everything was working...

So the Xeon went into the EVGA nForce 680i no problem!  4GB of RAM installed in the two working slots, and with new thermal paste I left it soak testing... Everything seems fine...

And this is equivalent to (if not better than) the previous board, because I know its history; it's got six SATA headers and dual gigabit LAN... It's actually the perfect little server board, except for the lack of working memory slots.

A new one of these boards still goes for around £50, so that was out of the question. I did order a replacement from eBay, a Gigabyte-branded one, which can take up to 16GB of RAM but only has a single LAN connection; it will have to do.

Until then though, the server is getting re-installed on the EVGA nForce 680i, and I'm going to keep my eyes on eBay for another of these boards to replace the already dead set from my junk pile.

On the topic of drives, I wanted to set up a series of physical mirrors with ZFS; however, I don't have matching drives, so I'm wondering what's the best order in which to set up the disks...

I feel a little confused as to the best way...


Tuesday, 15 November 2016

Administrator: ZFS Mirror Setup & NFS Share

I'm going to explain how to use some simple (VMware emulated) hardware to set up a ZFS mirror.  I'm picking a mirror so that there are 100% duplicates of the data.

I've set up the VM with a four-core processor and 4GB of RAM, because the potential host for this test setup is a Core 2 Quad (2.4GHz) with 4GB of DDR2 RAM, and it's perfectly able to run this system quite quickly.

The first storage disk I've added is a single 20GB drive, this is the drive we install Ubuntu Server 16.04 onto.



Then I've returned to add three new virtual disks, each of 20GB.  These are where our data will reside. Let's boot into the system and install ZFS (on Ubuntu 16.04 the package is zfsutils-linux)... Our username is "zfs-admin", and we just need to update the system:

sudo apt-get update
sudo apt-get install zfsutils-linux

Once complete, we can check the status of any pools, and should see nothing... "No pools available"
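That check is simply:

sudo zpool status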


We can now check which disks we have as hardware in the system (I already know my system is installed on /dev/sda).

sudo lshw -C disk


I can see "/dev/sdb", "/dev/sdc" and "/dev/sdd", and I can confirm these are my 20GB disks (showing as 21GB in the screen shot).

The file they have needs about 5GB of space, so our 20GB drives are overkill, but they've just had a data failure; as a consequence they're paranoid, so they now want to mirror their data to make sure they have solid copies of everything rather than waiting on a daily back up...

sudo zpool create -f Tank /dev/sdb

This creates the storage pool on the first disk... And we can see this mounted into the Linux system already!


sudo zpool status
df -h

Next we add our mirror disk, so we have a copy of the pool across two disks... Not as fast as raidz, but I'm going with it because if I say "raid" there are going to be "RAID-5", "RAID-6" kind of queries, and I'm not going to jump through hoops for these guys, unless they pay me of course (hint hint)!


That's "sudo zpool attach -f Tank /dev/sdb /dev/sdc", which is going to mirror the one disk to the other... As the disks are empty this re-striping of the data is nearly instant, so you don't have to worry about time...

Checking the status and the disks now...


We can see that the pool has not changed size, it's still only 20GB, but we can see /dev/sdb and /dev/sdc are mirrored in the zpool status!

Finally I add their third disk to the mirror, so they have two disks mirroring the pool, which they can detach one from and go take home tonight, leaving two at work... It's a messy solution, but I'm aiming to give them peace of mind.
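Assuming the third disk came up as /dev/sdd, that's just another attach against the same pool:

sudo zpool attach -f Tank /dev/sdb /dev/sdd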


To detach a drive from the pool, they can do this:

sudo zpool detach Tank /dev/sdc

And take that disk out and home; in the morning they can add it again and see all the current data get put onto the drive.
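Re-adding it in the morning is the same attach as before (sketched here assuming the same device names), and ZFS resilvers the current data onto it automatically:

sudo zpool attach -f Tank /dev/sdb /dev/sdc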

So, physical stuff aside, they now need NFS to share the "/Tank" mount over the network...

sudo apt-get update
sudo apt-get install nfs-common nfs-kernel-server
sudo nano /etc/exports

And we add the line:

/Tank 150.0.8.0/24(rw,no_root_squash,async)


Where the address there covers your subnet; at home for me this would be 192.168.0.0/24.  Note there is no space between the address and the bracketed options: with a space, the options would apply to any host.

Then you restart NFS with "sudo /etc/init.d/nfs-kernel-server restart", or reboot the machine...


From a remote machine you can now check and use the mount:
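Something along these lines does it (assuming the server were at, say, 150.0.8.10; substitute your own address):

sudo apt-get install nfs-common
showmount -e 150.0.8.10
sudo mount -t nfs 150.0.8.10:/Tank /mnt
ls /mnt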


Why does this exist?
I think I just won a bet with a friend of mine (hello Marcus). About 10 years ago I helped him set up a series of cron scripts to perform a dump of a series of folders as a tar.gz file from his main development server to a mounted share on a desktop class machine in his office.

He has just called me in a little bit of a flap, because that development server has gone down; their support for it had lapsed and he can't seem to get any hardware in to replace the machine for a fair while.

All his developers are sat with their hands on their hips asking for disk space, and he says "we have no suitable hardware for this"...

He of course does; the back up machine running the cron jobs is a (for the time) fairly decent Core 2 Quad 6600 (2.4GHz) with 4GB of RAM.  It's running Ubuntu Server (16.04, as he's kept things up to date!)...

Anyway, he has a stack of old 80GB drives on his desk. He doesn't 100% trust them, but the file they have is only going to expand to around 63GB... So he can expand it onto one of them; the problem is he wants to mirror it actively...

Convincing him this Core 2 Quad can do the job is hard, so with him on the phone I ask him to get three of these 80GB drives (they're already wiped), go to the server, open the case, and let me ssh into it.

I get connected, and the above post is the result, though I asked him to install just one drive (which came up as /dev/sdg) and then I set that up as the zpool; then I asked him to physically power off and insert the next disk, whereupon I connected again and added it as a mirror.

In the end he has 5 actual disks, of dubious quality, mirroring this data; he's able to expand the tar.gz backup onto the pool and it's all visible to his developers again.

This took about 15 minutes... It in fact took longer to write this blog post as I created the VM to show you all!

Friday, 11 November 2016

Beagle 2 : It Sort of Worked?

It's so nice to see the BBC recap this great experiment we all took a little to heart... http://www.bbc.co.uk/news/science-environment-37940445

Not least as we've now lost Professor Pillinger; it's such a great shame to see just how close we, the UK, the nation, but especially those involved, came to achieving success.

Ultimately, one day, I hope someone can approach the lander in person and proverbially "turn it off and on again".

(c) De Montfort University, 2016

For other coverage of Beagle 2 on this very blog, check out my impromptu rant about David Southwood.

Thursday, 10 November 2016

My Home Network

My home network has been neglected; this is one of the problems of working in technology, when you get home you are not going to be doing very much technology... Or maybe I'm just too interested in other things... Whatever the reason, neglect has set in.

Lets take a look at my home setup:

The network is broadly speaking split into two, the parts downstairs and the parts upstairs.

Downstairs is pretty much the Cable Modem box, a home hub set to Modem Only mode, connected to a Linksys router with some network attached storage for dumping files or media.

Upstairs is where the trouble begins. The critical path is to the left two upstairs items, my Xen server and especially my main workstation; without these I can't work at all.  So that's two connections which I do not mess with. Moving down we see the dotted yellow line; this is a single cable which lies on my desk and does double duty for my powerful laptop or a Raspberry Pi, so that's a spare I generally always need around.

We've used up four of our eight ports on the switch.  The final four are each going to the DRACs on my Dell servers, which means the servers have no data connection.  If I want data to them (which is pretty much every time I use them) I either pull the wire from a neighbour's DRAC, or I pull the cable from the IP camera; it being the eighth and final port on the switch as it stands.

I have to ask what my options are... Well, I'm not able to change the cable trail from the lower to upper floor, so there's still only one Cat6e cable there, which means it's not yet worth my moving a box downstairs and dedicating it to pfSense & Squid.

I also don't have any rack space, so a racked switch with more ports is really a waste of money at the moment.

I took a look at another Netgear GS108 unmanaged switch, but I wondered about subnetting off some of the server stuff and thought for the few pounds difference I'd go for a managed switch.

The desk then gets a new dedicated managed switch, and the more server-oriented stuff all stays together on the unmanaged switch...


With the Dell servers immediately taking up six ports between their DRAC and data NICs, I have two left: one for the Xen server and one to cross-connect with the managed switch.  When I come to rack these machines in a better manner, I'll be able to co-locate the 8 port switch into whatever solution I have there without rewiring my main workstation & desk!

This leaves me two ports free at the desk, in a unit which is light and small enough to screw to the underside of the desk (no way I could do that with a larger unit).

The real beauty for my needs here is the switch interlink: I can unplug just one wire and take away all my servers for re-positioning or remounting.  I could even take them all out of my immediate workspace now and hang them off the WRT1900 downstairs (as it has two RJ45 ports spare).


Note to all the haters... On the topic of the Linksys WRT1900ACS, there is a lot of talk on the internet of it being flaky, unstable, crashing, resetting to factory defaults... Mine has been nothing but stable, like really stable; it's been reset once, due to my needing to clear the cumulative six months of network map details, and I think it locked up once with the WiFi not coming online.  This isn't to say go out there and buy one (though I did review it on Amazon, go take a look), but rather that the unit I have was very good, and remains very good.



P.S. Neither Netgear nor Linksys sponsored my usage of their equipment, but you know, if you want to... Get in touch at the link near the Tip Jar!

Monday, 7 November 2016

Gedling Colliery : Twenty Five Years On

Back in 2011 I posted about my childhood links to Gedling Colliery, visit that post here.

And today is the twenty-fifth anniversary of the pit's closure; you can read about it on the BBC here.

The point of interest being, the paragraph:

"Although production ended in 1991, Gedling has been identified as one of three mines in the early 1980s that could have had a 'long term' future".

They mention Calverton & Cotgrave along with Gedling as being in that field.  And I remember seeing men crying when it closed, and my own father telling me there were at least 90 to 100 years more coal down there, ripe for the picking; cleaner burning, locally produced coal of higher quality.

Because that is the difference: we import vast amounts of coal (the last time I looked, mostly from Ukraine) and everyone I've ever spoken to about it has described it as a dirty burning, sludge laden coal.  It was formed differently; coal here was formed from flora in pre-history and burned cleanly, making carbon collection from the burning easier, and toxins like sulphur a lot less prevalent in the fume content.

Coal is of course a fossil fuel, and needs temperance when using it, but British coal was, strictly in terms of environmental impact, the better option, and would have been easier to scrub the waste gases clean for than the junk being brought in, and still being brought in, cheaply from overseas.

All three of the mines mentioned are now gone, ALL the mines are in fact gone, but the coal is sitting there.

My only question for future generations being: if they still need fossil fuels in the future, will they open up and go back down for that sitting, high quality coal? Will the mines and the learning curve for all that industrial knowledge need to be covered again at huge, huge cost, when it was just there, ripe for the taking, when I was a boy?

Thursday, 3 November 2016

Software Engineering : Test Driven Development (TDD)

"Test-first fundamentalism is like abstinence-only sex ed:
An unrealistic, ineffective morality campaign for self-loathing and shaming."
 - David Heinemeier Hansson, 2014

I very recently had a case of TDD inflicted upon me... I didn't like it; like the bad taste of medicine it lingered on my mind, so that I had to do some reading to even wrap my mind around the concept, and in wrapping around the idea my mind is made up: to throttle it out of existence anywhere near me.

I come from the very much more formal days of development, from systems where you had very limited resources; there wasn't space to run a test suite on top of your editor next to your compiler. Indeed, on the systems I learned to program with, one had to choose to swap the editor, the compiler and even a separate linker in and out of memory to achieve your build.

When a build takes that amount of time, it makes you learn from your mistakes quickly.  Such that I find myself now a rounded, and I'd like to think good, programmer.  I'm not the best, I've met a few of the best, but I'm good.

So, combining a propensity to think about the problem with the drive to achieve a solution in code quickly, I'm used to thinking about testing after the fact; used to, at least in part, sticking to the old mantra of "Analyse -> Design -> Implement -> Test -> Document", with each step feeding the others.

For you see, development today is not serial any more; you don't have time to stop and think, though you perhaps should.  This was perhaps the only positive I see in employing TDD, and indeed it was one of the positives myself and the chap I was speaking to commonly came to: it makes you slow down and think about a problem.

However, if you're a traditional programmer, a goal-orientated programmer like myself, you've a plethora of patterns, design and experience to draw upon, which tell you your goal is achievable a certain way.

Design patterns, library implementation style, and simple rote learning of a path to a solution can drive development; peer review can share your ideas and impart new ideas on you. So why should you let the test (something which intrinsically feels like the end result, or the last step before accepting the solution) drive development of the solution?

I really can't think of a reason, and have now to take to the words of brighter stars than myself to express how it made me feel...

It felt like a litmus test, something being used to make me feel unprofessional. In a single trivial class, which should have taken five minutes to write and check over, an hour's struggle ensued, where hammer blow by hammer blow, by the grace of TDD, I was held up as a heretic.  "Thou dost not know TDD", the little voice in the back of my mind said, and I felt low.

But beyond how I felt, it seemed so inefficient to work in that manner. When one has programmed, or engineered, or done any task for long enough, we build a reflex memory of how to achieve that task again; walking, reading, driving, programming... All these life goals and skills we just pick up.

Training in Karate was very much like this: one had to teach people to do things with care... 10 Print Hello 20 goto 10... Then we had to show them how to do more complex packaging and reuse of code... procedure example(p_value : integer)... and eventually we could let them free fight... int main () { std::cout << "Hello World!" << std::endl; }... and during this building up we let them drop the earlier teachings into their muscle memory, into their subconscious. We exercised, but didn't re-teach the same things, didn't cover the same ground.

Yet that coverage of the same ground is exactly where I felt TDD was taking me, and it seems to be the consensus among other "non believers" as to where it takes you: to write a test exercising your code, then to iterate over changes. Sure, it drives efficient code in the end, it drives out tested code before a single functional line has been written, but was it the quickest way to the same solution?

For a seasoned, experienced programmer like myself? No, no it was not; it bypassed so much of my experience as to knock my self-confidence, to literally browbeat me into worry.

I therefore stand and say "I do not write software test-first", and this is based on having had to learn in an environment where one could not afford the processor cycles or memory to test, nor did one have bosses open to the concept of micro-design and iteratively poring over code; one had to get the job done, and get it right, or not have a role any longer.

Huib Schoots over at AgileRecord can be quoted as writing "TDD is a method of learning while writing clean, maintainable, high-quality code". I would have to paraphrase that statement as "a method of learning today, from scratch, whilst writing modern, clean, maintainable, high-quality code", for a programmer setting out to learn their craft today is never going to have the benefit of the environment I learned in; they're not going to catch up to my twenty-plus years of industry exposure and some thirty years of academic experience on the topic.  They need to learn, and if teaching test-first proves a good starting point, so be it; however, once the learning curve plateau is reached, even before, one has to question whether you need to take that next step back: step away from micromanaging the development process and evolving your code, to just delivering the final product in a working form.*



Others on the topic:


http://beust.com/weblog/2014/05/11/the-pitfalls-of-test-driven-development/

* Please note, with this statement I am not advocating stagnation, nor saying that old boys like myself are superior; however, TDD appears to me to dismiss any other way of working, to a fault, and in my opinion its ultimate flaw is itself: test your development, but don't drive your development with the test, or you will seriously only be putting the cart before the horse.