Saturday 30 April 2016

Virgin Media - Getting The Service You Require

I've had a bit of a battle with Virgin Media over the last month. I noticed I was suddenly, and for no reason, on a very slow connection (at least slow for my liking); it was showing up in tests as between 46 and 52Mbit... STRANGE! Since I thought I was on 100Mbit, and was trying at the time to get onto the new Vivid 200Mbit.

Anyway, after doing some calling around, and basically being given the run-around by sales, I got through to a chap in the right department, who was able to sell me things AND look at my account; they seem very able to sell stuff to you and take your money, but ask for something and you get stonewalled.

So, this chap checked my account: 50M, sir, you're on our 50M service... I thought I was on the Big Bundle thing, with the telly, 100Mbit etc... Oh, you were sir, but your contract ended in 2016...

So, did they continue charging me?.. Oh yes.. Full whack, but they slowly eroded both the channels on the TV package and the speed of the internet, I guess hoping they could provide less and charge the same, or even more!

I set about trying to rectify this, and got exactly nowhere, online chat, telephone calls, even twitter didn't budge them into action, they didn't give a hoot!

Therefore, I set about getting what I wanted the underhanded way... 

What I wanted was to jump from 50Mbit to 200Mbit, leaving the TV and landline telephone as they were. The problem?... No human operator on any channel, in any department, could give me this. I was even told you could not order this combination! That my kit didn't support it! That the wires were wrong!?

When I pointed out that the wires and kit are theirs to sort out, they basically hung up on me.

Online in my account, however, I could see an upgrade offer... For free I could upgrade to some new cables... DOCSIS or some such thing... So, I ordered that...

A week later, I checked again, the new offer was for a free upgrade from 50mbit to 70mbit.  Are you still with me here?... So I chose that.

A week yet further on and I had 70mbit, and the option to pay £1.50 more per month to upgrade to 100mbit!  So I ordered that.

And yet another week on, I finally have the option to order 200mbit for an additional £5.50 a month, with 6 months at £2.50...


I have this arriving, through the ether, as you read this...

So why the delay?  Why did the humans say they could not leave my TV and landline alone and just upgrade the speed?... Well, it seems they could, they could all along; they were lying, or their training didn't expose them to the workings of their own systems.  Whichever it was, I was very, very frustrated by the whole affair, and I've made it known.



Friday 29 April 2016

21 Degrees: The Office Nirvana

There's drama in the office... And to be honest, I'm on the losing end of it...

We've recently had lots of work done in our office.  We've had new double-glazed windows, which no longer allow in fresh air, so then they added a new air system, which supposedly brings in ten cubic meters of fresh filtered outside air per person; but it's never on.  You can really tell when it is, and it's great, but it's never on.  Then they added a very expensive set of air conditioners.

Now, I know, and you all probably know, an office is not meant to be warm and cozy; if you want a place to sit and read or code which is very warm and comforting, try the local library; if it's not already shut through budget cuts.

No, your office is meant to be slightly cool to the feel, around 21 degrees, I've always been led to believe.  High ceilings are best, and if not, air-condition to that temperature.

How do I know this?... Well, I used to be an office manager, and I managed the air conditioning, and I managed the people, all shapes, sizes, ages and sexes... The consensus was that too warm an office sees most people drop off late afternoon.

So, what's my problem here?  Well, I'm the latest-arriving person on flexi-time; I always arrive around 9:30-10:00, which means I'm always the latest to leave, around 18:30-19:00.  The problem?  Everyone else in this area wants the air conditioning warmer.  The reason is they're all cold in the morning: they all wander in around 7am, and disappear, leaving their phones ringing and desks empty, around 15:30... This is a problem.

It's a stand-off, because right now they're all moaning it's too cold; my desk thermometer (a hangover from being an office manager and worker for many years) says 21 (this is centigrade, before anyone complains), and it's perfect.

The other folks around me are complaining they're cold; they're wearing jumpers and cardigans, and one chap has the biggest fleece on... They're all a lot older than me (yes, at nearing forty I'm the young one), and I can't help but believe it's their age.

And don't get me wrong, I understand they're cold, but they might need to get up and get moving, or have a warm porridge breakfast; something other than disturb the cool-air situation.

Because when they've all gone home, and the sun is around this side of the building, through the non-opening double glazing it's boiling at my desk, absolutely boiling.  And in this ridiculous circus I end up needing a desk fan on, because the temperature goes way up to 26+.

21 degrees, people, please, and you just deal with it; I'm not the only one to know this.


Thursday 28 April 2016

Updated Tutorial - Installing SVN (Subversion) on Ubuntu 14.04 Server

Back in May 2012 I posted a pretty complete tutorial on how to create a virtual VMware machine image for Ubuntu, install Apache2 and SVN upon it, and configure it for access over your LAN.

Tonight, I've just migrated that very installation to a new Ubuntu Server 14.04 installation on a new XenServer.

And I noted I needed to add an extra line of configuration, therefore, I've added a little video note, which also shows the server working for me locally on my LAN:


The original tutorial can be found in its full glory here: http://megalomaniacbore.blogspot.co.uk/2012/05/virtualizing-installing-and-using.html

Like, Subscribe, Tip if this helps!

Tuesday 26 April 2016

GNU C/C++14 Installation & Codeblocks 16.01 from Source (Command Line)

In yesterday's post I explained a C++14 user was having instant issues with the vanilla install of gcc/g++ on Ubuntu 14.04 LTS.

Getting a C++14 Compiler
So, here today are my command-line steps to update the GNU toolchain to the v5.x series.

sudo add-apt-repository ppa:ubuntu-toolchain-r/test
sudo apt-get update
sudo apt-get install gcc-5 g++-5

If you already have compiler alternatives registered, you may first need to clear them:

sudo update-alternatives --remove-all gcc
sudo update-alternatives --remove-all g++

But everyone will need to register the new gcc and g++ paths as alternatives, to make them the defaults:

sudo update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-5 20
sudo update-alternatives --install /usr/bin/g++ g++ /usr/bin/g++-5 20
sudo update-alternatives --config gcc
sudo update-alternatives --config g++

These last two --config commands are only needed if you have multiple alternatives, so don't worry if it tells you it's failed because you only have the one.

Now, if you perform g++ --version, you should see it's a version 5.x series compiler.
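To sanity-check the new toolchain, a quick test (my own snippet, not part of the original steps) is to compile something that only C++14 accepts; generic lambdas and std::make_unique both arrived in C++14, so compile this with "g++ -std=c++14 cpp14_check.cpp":

#include <iostream>
#include <memory>

// Generic lambda: "auto" parameters are a C++14-only feature.
auto twice = [](auto x) { return x * 2; };

int main()
{
    // std::make_unique also only arrived in C++14.
    auto value = std::make_unique<int>(21);
    std::cout << twice(*value) << std::endl;  // prints 42
    return 0;
}

If this compiles cleanly and prints 42, your v5.x compiler is in place and C++14-ready.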

Codeblocks 16.01
Older versions of Codeblocks may start to error in the code completion with some of the C++14-specific commands.  So, we need to install some prerequisites, and then build the latest 16.01 version from source.

sudo apt-get install libgtk2.0-dev automake libtool libwxgtk2.8-dev libwxbase2.8-dev

wget http://sourceforge.net/projects/codeblocks/files/Sources/16.01/codeblocks_16.01.tar.gz

tar -xvf codeblocks_16.01.tar.gz

cd code*

The next step is interesting; we need to run, and iteratively re-run:

./bootstrap

Running this will show you anything wrong with your machine environment, any missing dependencies etc... However, once bootstrap runs cleanly, you can continue below.

./configure
make
sudo make install

The make step takes quite a time; the more cores & RAM you have the better.  On a 4-core (8-thread) machine with 8GB of RAM, I've found it takes about 10 minutes.  On a single-core machine, like the poor VM I had assigned, it took a lllooooottttt longer.

Once complete, we need to use the text editor of our choice, in sudo mode...

sudo nano /etc/ld.so.conf

And to this we need to add the line:

/usr/local/lib

Save the file, exit the editor and run:

sudo ldconfig

Once this is complete, we can run up Codeblocks, and see it's version 16.01.



And we can further see the nice new C++14 options in the build options:



Even the code highlighting recognises the make_shared operation:



Voila!  If this helped, do check out my other posts and code, and leave a tip with my new tip jar!  I'll be making a video of this one soon, as it's so useful.

Monday 25 April 2016

Embrace C++14 Please Mr Developer

Today I've had to spend some time setting up a XenServer to host some virtual machines; the moment I was done, of course, the first request I had was for a developer to have a Linux machine running a compiler.

I was quite excited, in a company strong in C# & embedded C, to hear "C++" as the reply to my query "what language are you going to use?"... However, my excitement quickly evaporated when I asked, "What version of C++ do you need?"... And he replied, "There are different versions?"

After I had explained, yes, yes indeed there are... I took a look at his code for him, and pointed out naked pointers:

char *something = new char[28];
memset (something, 0, 28);
delete something;    // not even delete[]!

I quickly explained, as kindly as I could, that this code is not only wrong, but very very old hat, and I introduced him to the standard library:

#include <string>

std::string something;
something.resize(28);
memset(&something[0], 0, something.size());

And his eyes opened a little... He asked what version of C++ this was in... C++98... and his crest fell again, realising it was very old tech, which he had no idea about.

So I pointed him to C++14 and explained smart pointers as something for him to try out:

#include <memory>
#include <iostream>

namespace Xelous
{
    class Test;
    using TestPtr = std::shared_ptr<Test>;

    class Test
    {
    public:
        void Hello()
        {
            std::cout << "Hello";
        }
    };
}

int main (int p_argc, char** p_argv)
{
    Xelous::TestPtr instance = std::make_shared<Xelous::Test>();
    instance->Hello();
    std::cout << " World";
}

So, once this was done, I left him reading a copy of A Tour of C++ by Bjarne Stroustrup, and told him to read all of Scott Meyers' books.

This is a sad state of affairs for a programming & technology environment, especially when I know the chap earns more money than me; as polite as I was, I did want to just ask him to get his coat, and I'd slip into his salary grade & comfy company car (a perk I don't get).

Not least because I think the chap who sent me off to speak to this fellow treats me a little like a trained monkey, and they've themselves no idea about virtualisation, servers or development, beyond say using Turbo C++ from the command line in DOS 6.22... And unfortunately, things have moved on a lot since then...

Anyway, this leads me to tomorrow's post, which I'm drafting.  I set this C++ developer on the path to C++14, and installed him an Ubuntu 14.04 virtual machine on my little server... He was amazed, until he wandered over to me and pointed out that he had to manually add -std=c++14 to the build options, and that sometimes the code-completion crapped out on him... Seems older Codeblocks instances fall on their face, and the default version of gcc/g++ on Ubuntu is 4.8.x, and we need 5.x for C++14.  The next post will cover setting this up from the command line.

Thursday 21 April 2016

C++: Boost Libraries, Code Documentation & Examples

I have a loving relationship with the Boost C++ libraries, but I have a real hate of their documentation; not just its style, but the mistakes and fragmentation built into it.

When I search for, say, "socket", I want it to take me to boost::asio::ip::tcp::socket.  But instead it takes me on a run-around, and when I finally get to boost::asio::ip::tcp::socket the examples and the namespaces being used are incomplete; it assumes a bunch of things, and you get told to look in ip::tcp::socket.

This is fine if your reader knows to assume boost::asio as well, but if you're new to the whole Boost project you don't know sockets live in asio; you might be looking for boost::net, or boost::networking, or boost::network, or just boost::tcp.  And you're stuck, lost; you have to leave the Boost documentation search and go to Google.

Having to leave the documentation on an actual project's own website and use a third party in this way is fundamentally telling you your documentation is flawed.

Then there are mistakes in the documentation.  This is annoying, but when the mistakes are in the examples given, it's unforgivable; you're basically giving your users the finger, because not only are you teasing them with an example, but when it doesn't work they are again cast out into the wilds of the internet.

Case in point, again the socket: its example code shows it as "soocket".  A typo, yes, but it tells me two things.  First, the code was not proof-read; and second, and perhaps most importantly, the examples are not from working code.  They've never been run, because it's not soocket, and the compiler would instantly tell anyone trying to run that code that soocket is invalid.  Examples SHOULD ALWAYS BE LIFTED FROM WORKING CODE!

Wednesday 20 April 2016

C++11 Examples: Random Unsigned Integer Sequence

This is just a quick note on using the C++11 random functions to generate a list of positive random integers.

#include <iostream>
#include <string>
#include <random>
#include <cmath>

int main()
{
    // Seed with a real random value, if available
    std::random_device r;

    // Choose a random mean between 1 and 6
    std::default_random_engine e1(r());
    std::uniform_int_distribution<int> uniform_dist(1, 6);
    int mean = uniform_dist(e1);
    std::cout << "Randomly-chosen mean: " << mean << '\n';

    // Generate a normal distribution around that mean
    std::seed_seq seed2{r(), r(), r(), r(), r(), r(), r(), r()};
    std::mt19937 e2(seed2);
    std::normal_distribution<> normal_dist(20, 1000);
     

    // Loop edited by Xelous
    for (int n = 0; n < 10000; ++n)
    {
        unsigned int l_x = std::abs(normal_dist(e2));       
        std::cout << l_x << std::endl;
    }   
}


If you alter the uniform_dist spread to a wider range, e.g. (1, 1000), then you increase the randomness of the mean used to generate the sequential seed.

This is taken straight from: http://en.cppreference.com/w/cpp/numeric/random

But it has been run on Coliru.
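As a sketch of that one change (the 1 to 1000 range is just my example figure, and the pick_mean wrapper is my own illustration, not from the cppreference listing), widening the uniform distribution looks like this in isolation:

#include <iostream>
#include <random>

// Pick the mean with the widened spread: uniform_dist(1, 1000)
// instead of the original uniform_dist(1, 6).
int pick_mean()
{
    static std::random_device r;
    static std::default_random_engine e1(r());
    std::uniform_int_distribution<int> uniform_dist(1, 1000);
    return uniform_dist(e1);
}

int main()
{
    std::cout << "Randomly-chosen mean: " << pick_mean() << std::endl;
    return 0;
}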

Friday 15 April 2016

Windoze Security Loophole

This is an example of why I hate Windows...

In a curious case of a security loophole: in the office we have a supposedly locked-down security situation; none of us are local administrators on our machines, and neither do we have access to any of the very useful parts of our machines.

This is a real pain, and one whereby we often have to call upon the IT administrators to come and, physically or remotely in a remote desktop session, enter their password for us.

I personally disagree that educated users such as myself should have to put up with this situation.  I agree totally with data privacy and integrity; however, I wholly disagree with locking people out of things on their machines, such as defragging, or emptying the temporary folders... Or, in the case of a programmer, not being able to empty Prefetch or write an ISO to an SD card.

Anyway, today I had to write an ISO to an SD card.  The result?... I called IT and asked them to run the program for me...


So, just to be clear, I'm logged in as myself:


I am unable to access parts of the system, like the Administrators desktop folders...


I get IT to launch the ISO image writer as their elevated user, and the loophole begins.  You see, the program has a standard Windows open dialog.  And this will work with any standard Windows open or save-as dialog, in any program... The program is running as Administrator at this point.

When I select to browse to the file to open, the default folder is the Administrator's folder, by name...


However, because these dialogs all use Explorer under the hood, and it's all integrated, they do far more than select a file for you; they let you create folders, browse things and even launch programs...

Yes, you can launch a program from a save-as or open browsing dialog!


Let's try to run a command prompt...


Oh, look, it's running as Administrator...


And now I can see the Administrator account directories, which were hidden from me in my own logged-on Explorer window...


And I can clear the Prefetch folder in Windows...


This is clearly wrong, but it's all caused by Windows, so what's going on?...

Well, instead of asking the current session (logged in as regular old me) to start the new application instance of Command Prompt, it's asking the application owning the browsing dialog, so Command Prompt is started under, and inherits, the user credential level of that program, not my whole session.

What should have happened?  Well, I believe Windows, when starting a new program from an elevated user like this, should have re-prompted for the user's password.  And indeed, trying to start certain files from the launched command prompt, it does go back to the session level to ask for the credentials to start the application with.  But not asking, and just starting the new application, is a problem.

Solutions I can think of include setting the administrator-level account to time out its password every minute, which reduces the amount of time a regular user has an unaccredited ability to launch programs; within a minute the administrator could have started anything the user wanted and left.

A better solution, however, might have been a user elevation level which could give access only to what the regular user wanted, permissions to use peripherals perhaps, rather than to start applications.  And the administrator should not have just started the application as themselves, but under themselves as that hardware-only user.

There are other solutions, and I'm sure many I'm not even going to think about, because I don't use Windows systems.  If I want security, I simply use Linux and set things up correctly.

Thursday 14 April 2016

Project - Socket 771 to Socket 775 - Xeon Conversion

I've been conducting an experimental project, one I've seen all over the net by others, but which had lots of different information... This is the fitting of a socket 771 CPU into a socket 775 motherboard.

First of all, why?... Why would we do this?... Well, the socket 775 is a consumer socket, sold to us mere mortal customers who buy one machine and one CPU at a time, and the motherboards and processors in that class were/are quite expensive.

We're talking about the Core 2 era, Celeron D, Core 2 Duo, Core 2 Quad.  I remember the Core 2 Quad machine I put together was really rather expensive at the time.  So, between six and eight years on, we're retiring those machines; yet they cost us a lot of money, and despite depreciation rates we users can still make use of these machines.  They can be useful as render farm nodes for 3D or movie work, they can be used as servers, to host basic information, or upload/download points, even as firewalls.  All roles they can fall into easily.

I personally am going to be using the machine I've got as a quiet webserver, retiring a venerable, long-serving Pentium 4 Prescott for this old Dell machine.

So, what is the base machine?

Well, it's a Frankenstein.  The in-laws have had me build them a new machine; they had an Intel D31PR motherboard holding a Celeron D 450, and one gigabyte of RAM, a totally unhelpfully slow machine.  Even they noticed it was extremely slow.

From my spares they had enough parts to basically rebuild their machine, which I did, and it left me with their old motherboard.  I wanted to upgrade the processor, expand the RAM and add a RAID array controller card, but my budget is extremely low, we're talking £20.

Well, on Amazon I could get a RAID controller card for £13... This left me £7... Hmm... Luckily, the IT department at work was able to donate some old DDR2 RAM, so I had the maximum 4GB the board can handle.

£7... Upgrade the processor?... A challenge... eBay... Core 2 Duos and Quads going for over £25 a pop, most of the Quads for £30+.  Way out of the budget.

But there were dual-core Xeons for around £4... And I'd seen this hack out there on the wires, so I set about working.

The first step is to strip everything down, clean it perfectly, and get a scalpel.  The first part of the modification is to remove the orientation tabs from the socket; these tell the user which way to orientate the CPU for insertion, and do nothing else... A consumer CPU is orientated horizontally, so there are tabs top and bottom to stop you inserting the wrong CPU.

And the Xeon has gaps left and right, meaning it'll bounce off these tabs on the socket 775.


Taking the scalpel, I started to cut the tabs.  Now, I DID THIS WRONG!  A much better approach is to leave the current socket 775 CPU in there, with the tabs engaged, and then cut between the CPU and the edge of the socket.  That way the CPU acts as a guide, and the delicate socket pins are protected out of sight below it.



And clean the cuts you make up.



Remove any debris...



Then you need to go back to eBay and buy a Xeon socket modification sticker.  This is a little sticker which covers two rows of connections on the bottom of the CPU; it allows most of them to pass straight through, but two pins are headed with a little connector, and behind the sticker these connectors actually swap the two pins over.

So, two pins, and the orientation, that's all that's different about a Socket 771 and a Socket 775 CPU.


The stickers are bar-shaped, so they indicate which pins to swap; lay the CPU down with the notches to the top, and from the bottom count 10 connectors from the bottom right, moving left... Voila, stick it down carefully.

Insert into the Motherboard socket now, so the notches are to the "top" of the socket, add the heat-sink assembly, and build it back up on your work bench.


Now, some videos and advice say you need to go to sites and download patches for your motherboard.  During my project here I've found most Intel-brand motherboards do not need any patching, only third-party boards.  It seems Intel includes all the microcode for all their processors (this is only a guess; I have no proof other than using five different Intel boards and two non-Intel boards, and always having to patch the non-Intel-branded ones, whilst the Intel ones just work).

Then powering on...


It worked; I've gone from the Celeron to a Xeon 5130.  They're similar-era processors, but the Xeon has dual cores and a much faster FSB.



My YouTube playlist, with my crappy videos covering this project, can be found here:

Wednesday 13 April 2016

RIP Dude - First Anniversary

I can't believe it's been a whole year since my beautiful best boy had to leave us...


I miss you everyday my big yellow beauty.


Sunday 10 April 2016

Arc Welding Arduino Code

From my YouTube Video:



int ledPin = 13;  // LED connected to digital pin 13

void setup()
{
    pinMode(ledPin, OUTPUT);
}

void loop()
{
    int i, count;
    count = random(10, 60);
    for (i = 0; i < count; i++)
    {
        digitalWrite(ledPin, HIGH);  // set the LED on
        delay(random(60));
        digitalWrite(ledPin, LOW);   // set the LED off
        delay(random(200));
    }
    delay(random(800, 2000));  // wait a random bit of time
}

Monday 4 April 2016

Junk PC - Celeron to Core 2 Quad

You may guess I've taken what was basically a junk PC from the in-laws and turned it into a fairly decent machine.  What could it do before?... Well, it was reported to be showing "video like an old slide-show", "internet pages took so long to load we could boil the kettle", and a myriad of other little things.

To hear those kinds of reports from regular users of 70+ years of age rang alarm bells; something needed to be fixed.

The machine was:

Intel Celeron 450 @ 2.2GHz
1GB DDR2 400MHz RAM
Intel HD 3000 Graphics (onboard)
320GB Western Digital Caviar Blue HDD

I've raided ebay, Amazon and my own spares, and the machine is now, re-cased, with new air-flow/cooling, and it's significantly improved:

Intel Core 2 Quad Q6600 @ 2.4GHz
4GB DDR2 800MHz RAM
Asus GeForce GT 610 (1GB DDR3)
120GB SanDisk Performance SSD

With a nice new box, thermal paste, a can of cleaning air and all the bits I needed, the upgrade cost just shy of £60.

And the new incarnation of the machine runs:

World of Warships - High Settings - 50FPS
World of Tanks - High Settings - 50-60FPS
Minecraft - Fullscreen - 100+FPS
Arma 3 - Medium Settings - 25FPS
Arma 2 - High Settings - 60FPS
H1Z1 - Medium Settings - 50FPS

This is impressive performance from a bog standard Intel G33 Motherboard and a bunch of spare parts.

Certainly £60 was a very fair price for all this kit, and I can't help but thank the moron on eBay who sent me a mail abusing me for listing "such old shit at a high price"... because through whatever machinations I kept the kit, and here we are just a month on, with it back in use and blowing through performance like no-one's business.


Saturday 2 April 2016

PC upgrades, and more coding ideas

As you can see, I'm in the middle of working on some PC kit.  Upgrading the in-laws old kit and retiring one of mine in the process.

I'm going to be working on the second part of my functional programming post later today, covering some alternate languages and better explaining maintainable code with relation to those parameters being passed.

Friday 1 April 2016

Coding Standards: Functional Programming (Code Maintainability)

Functional Programming

When I were a lad, and was being taught to program, I read a bunch of BASIC code and wrote great long lists of instructions; that's pretty much how many people in the 90's described programmers, coders or hackers: "someone who spends lots of time typing great long lists of instructions into their computer" - Robert X. Cringely.

However, long lists only get you so far, so my next step into the world of programming was to learn Pascal, and I first read a simple book as part of my A-Level, by P. M. Heathcote, which introduced very basic programs in Pascal, something like...

program HelloWorld (input, output);
begin
    writeln ('hello world');
end.

So, I suddenly had this idea of putting the long lists of instructions into sort of functions:

program HelloWorld2 (input, output);

   procedure foo;
   begin
       writeln ('foo');
   end;

   procedure bar;
   begin
       writeln ('bar');
   end;

begin
   write ('hello ');
   foo();
   write (' and hello ');
   bar();
end.

(Note: before you call me out and say you could "GOSUB" or "CALL" other functions in BASIC: not in the dialect I first learned.  All you could do was SUB to a line number, following the instructions until you used RET to go back to where you were... So they were technically functions, or procedures, but they were really just more lines of code within the huge list of lines of code, not identified in any way, except by comments, as being functions).

Interestingly, at this time I was taught there were two kinds of functions you could call: ones which returned a value, known as "functions" in Pascal, and ones which didn't return anything, known as "procedures".  The latter you'll be very aware of from other languages like C, where the return type is part of the function declaration, so "void foo();" obviously doesn't return anything, whilst "int bar();" returns an integer.

That "Procedure/Function" definition stuck with me a long while, I was a kid, I was taught something and got a certificate to say to the world "He knows what he's talking about", so it was with some trepidation, years later, that I had to admit it was rubbish and everything was a function.

And this revelation came with my being taught about "Functional Programming".  This was important for large projects written in languages like C, because you really wanted to learn how to keep functions doing one task.  So when you design, name, write and test a piece of code, you can break it down, and know each piece, or each function, is doing just one job.

"void Max(const int& p_Left, const int& p_Right, int& p_Result)"

Here I've just defined a function prototype; even without my explaining it, I'm pretty sure you could guess what it does... Yes, it takes the left and right integers given and decides which is the bigger, placing that value in the result.

We can design code, define the prototype, and hand the nitty-gritty of the actual code body off to someone else, to write the body of it, or just to test our implementation works.  And though this is a trivial example, think about a single function to, say, format a disk: it might contain calls to dozens, or hundreds, of other functions, but it itself does one task, and each sub-task is itself kept in a single function, so you break the whole job down.  Within your large projects, therefore, you maintain a level of ease in how to debug, maintain and update the code; if your "foo" function doesn't work, just fix that one function and re-test.  There shouldn't be multiple foos, and there shouldn't be more than one task performed within the code inside "foo".
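As a sketch of that hand-off, here's one possible body for the Max prototype above (the body and the little test harness are my own illustration; only the prototype appears in the text), still doing exactly one job:

#include <cassert>

// One job only: compare the two inputs and write the larger into p_Result.
void Max(const int& p_Left, const int& p_Right, int& p_Result)
{
    p_Result = (p_Left > p_Right) ? p_Left : p_Right;
}

int main()
{
    int result = 0;
    Max(3, 7, result);
    assert(result == 7);  // 7 is the larger of the pair
    return 0;
}

Whoever receives the prototype can write this body, and whoever receives the body can test it, without either needing to see the rest of the project.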

This was the first major meeting, and teaching, I had on "Functional Programming", and it was something I took to heart.  My code took on very much a "one function" style, where each function had a single purpose.

Twenty years on, and even in an Object Orientated world (of C++, C# and Java) I utilise the functional idea.

The main thing applying the single-function ideal to my code has led to is a vast improvement in maintainability; this has been important for my job and ongoing sanity with large projects.

However, there are other caveats, for there are other parts to functional thinking which have to be taken into account.  For example, are you going to allow functions in your code to have just one exit point, or multiple?  For example:

const int bar (const int& p_i)
{
    return p_i * 2;
}

const int foo ()
{
    // x assumed to be a global defined elsewhere
    if (x > 0)
    {
        return bar(x);
    }
    else
    {
        return bar(-x);
    }
}

This is perfectly reasonable code; foo does one job, deciding upon the value of x which value to return.  However, this is a trivial example; what if there are tens, hundreds or even thousands of different paths you could take as the result of foo?... A case statement over every possible character already throws hundreds of options our way, so it's not out of the ordinary.

You could be tracing through this code and ANY of those many exit points could fail, causing a crash, which you then have to dig through after the stack has all unwound.

Using many exit points is fine, if you can justify it, please don't get the impression I'm hating on the concept, if you have a low memory situation for example, I can see you won't have space spare for my suggested solution; and this is part of the many trade-offs you will learn about in a career of programming, when to, and when not to employ a technique.

But in the above example, I would change foo as follows:

const int foo ()
{
    int l_result = 0;
    if (x > 0)
    {
        l_result = bar(x);
    }
    else
    {
        l_result = bar(-x);
    }
    return l_result;
}

So, you see I have a local value copy; this uses more memory, and has to allocate that memory... and there are other options than allocating it as a local variable, but for maximum maintainability, keeping that value within the actual function is the key feature of this example.

And so how does this assist in maintainability?  Well, we can now debug the function a lot more easily: we can see the value of "l_result" at any moment in a watch within the debugger, we can wrap individual, failing, calls to "bar" in try-catch statements, and we can debug the returned value for the whole function at the single return at the bottom.

I've seen some horrors in the poor use of returning from within a function, where return points are hidden three, four or more nestings deep, and it's made for nothing but frustration; try to avoid multiple function exit points.

What else does functional programming offer us?... Well, it also offers us an easy way to make code self-documenting.  If we have a function like the "Max" function earlier, it explained itself in just its definition; take advantage of that fact.

But what about functions inside objects, I hear you cry?  Well, let's take a pure C++ example.  In C++ (or at least most C++ compilers) we're allowed to just define a function prototype and go to town, but that's really, technically, C, not C++.  To make things C++ we really should have the functions all inside a class definition... I stick to this idea, even if we just need a pseudo-class of static functions:

class Helpers
{
public:

    static void Max(const int& p_Left, const int& p_Right, int& p_Result);

    static const int Sum(const int& p_Left, const int& p_Right);

};

One can see what the functions mean, and the functions have meaningful names; the class itself can have a useful name, and indeed the class can then be in a namespace which itself has an even more useful name.  So we build up not complexity, as some assume, but an ease of division of functionality, breaking numbers from user interfaces from strings from file management; break it down, divide and conquer.  That was the original purpose of functions in computing, and so that original purpose is now emphasised in the languages and methods we employ today.
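To sketch the point (the bodies and the "Maths" namespace name are my own illustration; only the prototypes appear above), the Helpers class might be filled in and placed in a namespace like so, making the call site read almost as documentation:

#include <cassert>

namespace Maths
{
    // A pseudo-class of static one-job functions, as discussed above.
    class Helpers
    {
    public:
        static void Max(const int& p_Left, const int& p_Right, int& p_Result)
        {
            p_Result = (p_Left > p_Right) ? p_Left : p_Right;
        }

        static const int Sum(const int& p_Left, const int& p_Right)
        {
            return p_Left + p_Right;
        }
    };
}

int main()
{
    int biggest = 0;
    Maths::Helpers::Max(10, 4, biggest);   // reads as: maths helper, max of two
    assert(biggest == 10);
    assert(Maths::Helpers::Sum(2, 3) == 5);
    return 0;
}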

C++, C#, Java, Python: all can benefit from using the function idiom... Your projects certainly can.