
Matt Foster

Technology Freelancer






WHM things to be aware of

I've never been an enormous fan of WHM; in the long run it pays to know what you are doing. Still, it does have a very useful role to play, even if some of the things it does just seem plain strange. Yes, EasyApache does give enormous flexibility, but so do the vendor-provided packages.

Some days the only way to fix things is by SSH'ing into the server, and you have to be really careful to make sure that you don't change something at the command line that WHM has its claws into.

suPHP seems to be the default handler (I can kind of understand why for multi-tenant hosting setups, but perhaps you should have a real sysadmin hired in that scenario?). It has a charming habit of doing the unexpected; today's head-banging surprise came from wondering why php.ini settings were not getting applied.
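
When settings silently refuse to apply, it is worth asking PHP itself what it thinks is in effect before blaming the handler. A minimal diagnostic along these lines will do (drop it into any page you can reach; the setting at the end is just an example, so substitute whichever one is misbehaving):

<?php
// Which SAPI/handler served this request, which php.ini was loaded,
// and what value is actually in effect right now?
echo 'SAPI: ' . php_sapi_name() . "\n";
echo 'Loaded php.ini: ' . php_ini_loaded_file() . "\n";
echo 'Additional ini files: ' . (php_ini_scanned_files() ?: 'none') . "\n";
echo 'upload_max_filesize: ' . ini_get('upload_max_filesize') . "\n";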

After lots of grepping for ini_set statements, we eventually found an suPHP_config directive in .htaccess.

*sigh*

.htaccess has a lot to answer for, and if you are looking for real web performance you should _NEVER_ use .htaccess - put the configuration in the Apache configuration file where it belongs. The additional cycles Apache has to spend checking for the presence of .htaccess files, and parsing them when they are there, will hurt you in the long run.

Allowing your "webmasters" to specify their own php.ini through .htaccess is just plain wrong.

Rant ends.

Other service providers are also available

Anyone who has worked with me in the past couple of years will know that I have a very strong preference for recommending Amazon AWS as your IaaS provider of choice. It is mature, robust, performant, and has a whole raft of PaaS-type features to make things easy and lower the sysadmin burden/requirement.

It also represents really good value for money to my mind, and what better way to learn about it than with the free usage tier (if you stay within the fairly generous limits it truly is free). Since the introduction of the t2.micro node and general purpose SSD storage (replacing the t1.micro, which was rather memory-cramped, and our old friend spinning rust), it is a serious piece of virtual hardware for a rather special price.

There is, however, no such thing as a one-size-fits-all answer. Perhaps you need a UK IP address. Perhaps you want a better pricing plan on the TBs of data in and out of your VPS. Perhaps you don't need all the fancy infrastructure capabilities, but just want a few Linux boxen "in the cloud". If so, you could do a lot worse than to look at linode.com. I first had a shell on a Linode many, many years ago (it still works), and it seems to fit into the "it just works" bucket. Good price point (especially if data transfer is a worry for you), fast NIC speeds (getting over 100Mbps is challenging at this price level), the ability to deploy images, and a fabulous reporting/monitoring engine in Longview. And an API. Nobody should be touching anything that doesn't have an API through which you can do everything you need.

I do not work for, and have never worked for, either AWS or Linode, but they have both been wonderful providers to me and my clients time and time again.

Performant WordPress

A lot of the techniques I use to help you optimise your WordPress site get tested out here first. This certainly used to be known as "dog-fooding" in more than one big IT setup. Still, I'm quite happy with how this site runs, and hopefully you will be too.

Although this site is specific to WordPress, a lot of the technology used is applicable to web hosting in general. I know how to squeeze the most out of your VPS to run Magento or just about anything else you like.

This site is hosted on an Amazon AWS t2.micro instance, so no-one can accuse me of over-specifying the hardware. If you haven't used AWS before then you will qualify for the AWS free usage tier, and I'd be delighted to help you migrate over to it.

Enough talk; here are some measurements of this site's performance: GTMetrix Report.

 


Sanitising user input

Some days you need to get user input from a bit of an HTML form that wasn't really designed for it, in order to give a great UX.
This means that the input gets passed around through JS, AJAX, PHP and goodness only knows what else before it turns up in the right place.

How do we make sure it's safe to add to a SQL query?

Of course we can use PDO, but how about the general case?

// Collapse whitespace, strip tags and entities, escape for SQL, then swap any
// non-breaking spaces (0xA0) for ordinary ones before splitting into words.
$Words  = str_replace("\xA0", " ", mysqli_real_escape_string($link, html_entity_decode(strip_tags(preg_replace('!\s+!', ' ', trim($Words))))));
$pieces = explode(" ", strip_tags($Words));

Something just says this is plain wrong, but it's working for me.

In this particular use case I'm trying to break up a user provided "sentence" into a set of words, which I then do stuff with.
So &nbsp; (the non-breaking space) is particularly difficult to parse here when things get pasted.

I'm sure the above approach is wrong, would anyone like to tell me how to do it better?
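
For what it's worth, one cleaner route is to do the word splitting with preg_split and leave the quoting to PDO prepared statements. A minimal sketch only - the DSN, credentials and the "searches" table here are invented for illustration:

<?php
// Split on any run of whitespace, including pasted-in non-breaking spaces (U+00A0);
// the 'u' modifier treats the input as UTF-8.
$pieces = preg_split('/[\s\x{A0}]+/u', trim(strip_tags($Words)), -1, PREG_SPLIT_NO_EMPTY);

// One placeholder per word - nothing is ever concatenated into the SQL string.
$pdo  = new PDO('mysql:host=localhost;dbname=example;charset=utf8', 'user', 'password');
$stmt = $pdo->prepare('INSERT INTO searches (word) VALUES (?)');
foreach ($pieces as $word) {
    $stmt->execute(array($word));
}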

Thanks,

Matt

Mr Fantastic

Matthew Scott City House Media

Matt has fantastic technical ability. He can create things in such a short space of time to an excellent standard. Our team have been extremely impressed with his communication and advice throughout our projects. He is one of the best technical developers I've worked with; I've already recommended him to other companies, and I would 100% recommend him to anybody else. Also an absolute gent.

5/5 Stars.

Truly Professional

Matt Foster This website

Matt did an amazing job putting this website together in a single evening.

5/5 Stars.

Fast website

Here's a little page that shows the kind of all-important page load times that the search engines rank so highly.

 

Very much a work in progress, especially on the mobile side, but it was certainly time for some dog food...

Freelancer for hire

Are your websites running slowly?

Are you not appearing in the search engines?

Do you have a hosting package sorted, but aren't sure you are getting the best out of it?

Time to hire the expert...

Magento, WordPress, OpenCart, Linux, MySQL, PHP - I do them all.

Help With Amazon SES Set-Up

Jamie Cooper Frank Tailor

Matt was fantastic. I knew the outcome I wanted but had no clue how to go about getting there. Within 24 hours Matt had provided me with the answer I needed as well as set it all up for me. I am amazed by the great work he has done and will certainly be turning to Matt for all future work in this area.

Matt’s Work

Michael AUTIN Ltd.

I found Matt via People Per Hour and could not be happier. From the first job I had him do, I always had the feeling he was trying to solve the problem and help me understand the issue rather than just get it behind him. Also he is very responsive on Skype and only a message away when needed. I would hire him again at any time....

5/5 Stars.

Monitoring TalkTalk Router bandwidth

Having treated myself to a 4K TV recently, and given that there is _some_ 4K or UHD content available through Amazon Prime and Netflix, I wondered what the actual bandwidth requirements of streaming this kind of stuff are. No problem, I thought, I'll just sling up the excellent MRTG and find out.

Oh no, it's not that easy. I have an FTTC service provided by TalkTalk. The VDSL modem/router is a "Super Router", also known as the Huawei HG633. Running firmware v1.15t, it has neither SNMP nor telnet/SSH nor any other kind of CLI access. Bit of a dead end really. Still, not to worry: I only use the HG633 to terminate the VDSL; it has an Ethernet uplink to an Apple AirPort Extreme that provides Wi-Fi for the house and a couple of gigabit-connected wired devices (thanks, TalkTalk, for providing me with an 80/20 Mbps WAN product and only 100Mbps on the LAN side). Apple, however, have also removed SNMP capability from the AirPort range. *GRR*. Now the obvious solution is to get a proper modem/router/access point, but these things are sent to challenge us. The HG633 has a tolerable web admin interface, which does expose some statistics, so we can surely yank those out with a bit of patience.

Turns out it is all JavaScript-based in the HG633, but no worries: the excellent PhantomJS to the rescue. Lurking on the home LAN is a Raspberry Pi Model 3, which proves to be more than up to the task of driving this headless JavaScript engine. After a little bit of tinkering I was able to produce a PhantomJS script which would log in to the router, navigate to the appropriate page, and then dump the DOM out. Judicious text parsing gets the required information out of the admin GUI, at which point it's trivial to feed it to MRTG.

The results can be seen at http://mattfoster.noip.me/mrtg/

The code is ugly, doesn't really cope with error conditions all that well, and is heavily dependent on some of the DOM structure in the router's management page which will doubtless get screwed the next time TalkTalk pushes down a firmware update. Still perhaps the next firmware update will re-enable the CLI.

Where there is a will, there is a way, even if it is a slightly stupid one which certainly fails to deal with asynchronous requests properly, or even to work all the time.

I hesitate to even publish the code, but as it was an annoying enough problem to "solve", the PhantomJS script is available as router.js.txt, and the horrible bash script called by MRTG as mrtg-router.sh.txt.

UPDATE FOR 1.18t
Since the Huawei HG633 was updated to firmware 1.18t the scripts broke (no surprise really, given the lack of an API and the reliance on HTML scraping). The updated JS script is now available as router-1.18t.js.txt.

Impressive results with Varnish

Rob Wassell Director

Matt recently installed Varnish and optimised the server to achieve the best performance. Having taken benchmarks before and after we are very impressed with the results. Page load times are much improved with noticeable speed increase across all of the sites. Great job.


My Skills

I have been working with IT in one way or another since learning BASIC and 8086 assembler aged 8.

My interest really took off when I experienced SunOS 4.1.3, TCP/IP and the fledgling Internet in the very early 90s at Brunel University, where I studied Engineering.

Having worked a lot of my career in large multi-national enterprises, I have vast experience of the challenges of scale and globalisation. My work at smaller limited companies, along with my current freelancing, has taught me the closeness and personal touch that are so important to small businesses.

There are many areas of IT in which I am proficient; an attempted summary follows:

 

Expert, Leading Edge

  • Linux - installation, security, performance tuning.
  • LAMP stack - performance tuning, security.
  • Cryptography - VPN, PKI, SSL.
  • Web services - Apache, Nginx.
  • Proxy services - Varnish, Squid, Nginx, Apache.
  • Page speed optimisation - GTMetrix scores, Google PageSpeed scores, etc.
  • MySQL performance tuning, schema layout.
  • AWS hosting services, architecture, deployment, administration.
    • EC2, S3, SES, RDS, CloudFront, Elastic IP, Auto Scaling, Load Balancing, SNS, IAM .....
  • Identity Services
    • Authentication, OAuthV2, Active Directory, SSO, MFA, SAML, LDAP, Social Identity.
  • DNS.  Everything to do with DNS, including records related to anti-spam techniques.
  • eMail services, routing, configuration, security, anti-spam.
  • Conceptual architectures.
  • Network analysis - wire level debugging.
  • Raspberry Pi.
  • Security cleanup and analysis.
  • Firewalls: Check Point, iptables and others.

Expert

  • Coding in PHP, bash, JavaScript (including jQuery).
  • Windows Server administration and deployment.
  • WHM, cPanel, Plesk and other dashboards.
  • Hosting migration services.
  • Hosting frameworks: WordPress, Magento, Drupal.
  • Network design and debug, layers 2,3 and 7.
  • Proximity Marketing - Wi-Fi, Bluetooth.

Strong

  • Coding in VBA
  • CSS
  • OS X
  • iOS
  • Social media integration

Competent

  • Coding in Perl and C.
  • PostgreSQL.
  • Stripe.
  • Twitter API integration.

I'm sure I've left some things out, so do feel free to ask.

Some things that I don't do include

  • iOS / Android apps (yet).
  • Windows applications.
  • Heavy custom front-end web design work.  I know CSS well, and can style pages for you, but I am not a web designer.
  • SEO - yes I will solve your performance/pagespeed scores - but that's as far as I go with SEO.

 

WHM/PHP Site

Sam

Brilliant and efficient developer with solid grasp of software and database development. Matt's ability to grasp a problem, take time to assess the many different issues related to it, and then importantly, solve the issue is a great strength. Strongly recommended.

5/5 Stars.

Varnish 4.0

Well, I've taken the plunge and upgraded to Varnish 4.0 in front of this WP site. It's too early to draw any firm conclusions, but I shall be monitoring the impact closely. If anything it seems to have slightly reduced my page load times (yay!), without having to implement any of the "special" VCL logic that I have used in the past to serve up highly optimised Varnish-only content for when the bots come crawling.

No, this isn't a method that I use in general to "cheat" and increase my scores on GTMetrix and the like; rather, I use it for the search engine crawling bots to boost (in theory at least) my page ranking. Still, SEO isn't really my area of expertise, but surely no harm ever came from a quick and responsive piece of hosting infrastructure?

Of course the VCL syntax has changed, so we can't just take our old default.vcl and hang on to it. I've based mine mostly upon https://wordpress.org/support/topic/good-varnish-4-defaultvcl which seems to do the job fairly well out of the box.

Well, Pingdom seems to be impressed at any rate:

(Pingdom report screenshot)


The Importance of reading the regexp properly

I did mention here that my Varnish 4.0 configuration was pretty much out of the box.

Well, there are some things that can come back to bite you when you copy'n'paste stuff that you find with Google.

One client was very patient with me today whilst some serious head scratching went on as we tried to work out why we had broken the shopping cart on one vhost but not another on the same server, with identical versions of OpenCart running in the background. I was all ready to give up and put a "don't cache this site/vhost" entry into the VCL, when something caught my eye.

# Cache the following file extensions
if (req.url ~ "\.(css|js|png|gif|jp(e)?g|swf|ico)") { unset req.http.cookie; }

Read that regexp carefully. It doesn't do quite what you expect: without an end-of-string anchor it matches those extensions anywhere in the URL, so a dynamic request whose path or query string merely contains ".png" or ".js" somewhere also gets its cookie stripped and served from cache - which is exactly how you break a shopping cart.



if (req.url ~ "\.(css|js|png|gif|jp(e)?g|swf|ico)$") { unset req.http.cookie; }

 

Works much more consistently and, more to the point, as intended.
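
If you want to convince yourself, a quick PCRE check shows the difference (this one happens to be PHP purely for illustration, and the URL is made up; Varnish's ~ operator uses PCRE too, so the behaviour carries over):

<?php
$loose  = '/\.(css|js|png|gif|jp(e)?g|swf|ico)/';   // the copy'n'pasted version
$strict = '/\.(css|js|png|gif|jp(e)?g|swf|ico)$/';  // anchored to the end of the URL

$url = '/index.php?download=logo.png&size=large';   // a dynamic request that merely mentions ".png"

var_dump(preg_match($loose,  $url));  // int(1) - matches, so the cookie would be unset and the page cached
var_dump(preg_match($strict, $url));  // int(0) - no match, the request keeps its cookie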





Pay attention to the detail, and remember there is always a reason for strange behaviour: the code only follows the rules we give it.


Mr

Marios Web Developer

Matt has helped me a lot with a PHP challenge. He was fast and accurate. I will cooperate with him again in the future for sure. I absolutely recommend him for a freelance hire.

5/5 Stars.



Google Authenticator with PHP

Because you just should.

Gone are the days of SecurID OTP tokens costing an arm and a leg and being just for the Enterprise. My own WP site here is protected with Google Authenticator, and there is no excuse for not doing the same on yours. Just grab the awesome WP Google Authenticator plugin and you will be good to go.

My favourite iOS App for this is the awesome Authy but there are plenty out there.

But the world doesn't run on WordPress; suppose you want to do it yourself on a LAMP site...

Grab a copy of the PHPGangsta class

 

Creating users:

<?php
// Adjust the path to wherever you put the PHPGangsta class.
require_once 'PHPGangsta/GoogleAuthenticator.php';

$ga = new PHPGangsta_GoogleAuthenticator();
$secret = $ga->createSecret();
echo "Your OTP Secret is: " . $secret . "\n\nIt is probably a good idea to take a note of this";
echo "\nPlease scan in the QR code to set up your OTP ";
$qrCodeUrl = $ga->getQRCodeGoogleUrl('MyApp', $secret);
?>

<IMG SRC='<?php echo $qrCodeUrl; ?>'>
<BR>

<?php
// The code is generated and checked server-side here as a quick demo; in a real
// sign-up flow $oneCode would be the first code the user types in from their app.
$oneCode = $ga->getCode($secret);
$checkResult = $ga->verifyCode($secret, $oneCode, 2);    // 2 = 2*30sec clock tolerance
if ($checkResult) {
    echo 'OK';
    // $secret is the base32 output of createSecret(), though a prepared statement would still be tidier.
    $sql = "UPDATE localusers SET GASecret='" . $secret . "' WHERE id=" . $userRow['id'];
    mysqli_query($link, $sql);
} else {
    echo 'FAILED';
}

Authenticating users:

if (!isset($userRow['GASecret']) || !isset($_REQUEST['e'])) { // Impossible to authenticate
    header('HTTP/1.1 401 Authentication Impossible');
    header('Content-Type: application/json; charset=UTF-8');
    die(json_encode(array('message' => 'ERROR', 'code' => 1337)));
} else { // Try to authenticate
    $ga = new PHPGangsta_GoogleAuthenticator();
    $checkResult = $ga->verifyCode($userRow['GASecret'], $_REQUEST['e'], 2);    // 2 = 2*30sec clock tolerance
    if ($checkResult) {
        // Mark this session as having passed the OTP step.
        session_write_close();
        session_start();
        $_SESSION['OTP'] = 1;
        session_write_close();
        $result = "Authenticated";
        header('Content-Type: application/json');
        die(json_encode($result));
    } else {
        header('HTTP/1.1 401 Authentication Failed');
        header('Content-Type: application/json; charset=UTF-8');
        die(json_encode(array('message' => 'ERROR', 'code' => 1337)));
    }
}

 

Obviously these are just snippets, which will never actually run for you, but you get the general idea.

 

It is so easy, it is just rude not to.

 

 


Excellent

Sean Dreamr.uk / Lead Developer

Matt's communication is excellent. We've returned to Matt several times over the past year or so - and each time he has been incredibly efficient, knowledgeable and served as a very valuable partner to have. The tasks we've set him loose on have ranged from basic server configuration to fairly advanced troubleshooting - and each time he returned with an answer, and a smile.
Would thoroughly recommend using Matt if you're in need of some assistance - or even a "go-to" freelancer.

Backup to AWS S3 with s3cmd

Particularly since the introduction of Glacier, S3 from Amazon is quite attractive as an offsite backup offering (archive the backups to Glacier automatically after, say, a week with lifecycle management and your storage costs drop dramatically).

Of course we still have to keep an eye on our data transfer costs. There are two candidates for backing up a Linux server/VPS to S3 that I've seen and used in the past: s3cmd or s3fs.

S3FS certainly feels nice, and we can rsync to it in the normal way, but (and it is potentially a huge but - no pun intended) AWS S3 charges are not just for storage, but also for bandwidth transferred and, perhaps critically, for the number of requests made to the S3 API. I freely confess to having done zero measurement on the subject, but it just feels instinctive that a FUSE filesystem implementation is going to make far more API calls than s3cmd's Python scripts, which call the API directly.

So, using rsync-like logic, you might consider doing something like:

cd /var/www/
s3cmd sync -r vhosts --delete-removed s3://$BUCKET/current/vhosts/

There is a small snag to this approach, however. s3cmd keeps the directory structure in memory to help it with the rsync logic. This is fine if you are on real tin, with memory to spare. But on a VPS, especially an OpenVZ-based one where there is no such thing as swap, this can be a real show stopper for large directory structures, as the hundreds of MB of RAM required just are not available. Time for our old friend the OOM killer to rear its head :(

Recursion of some form would be the elegant answer here. However elegance is for those with time for it, and the following seems to work very effectively with minimal RAM consumption:

cd /var/www
# Sync each leaf directory on its own, so s3cmd never has to hold the whole tree in memory.
for i in `find . -type d -links 2 | sort | sed -e 's/\.\///g'`
do
    s3cmd sync -r $i/ --delete-removed s3://$BUCKET/current/vhosts/$i/
done

The find command looks for directories with a hard-link count of two, i.e. directories that contain no subdirectories: the leaf nodes of the directory tree. And then we back them up, one by one.

Simples.

Tags

#monitoring #raspberrypi #talktalk apache authy aws backup detail glacier googleauth htaccess iaas injection linode linux longview openvz performance php PPH s3 security varnish vps whm wordpress
© 2022 Matt Foster • Built with GeneratePress