Automated Broadband Monitoring on Linux

I’m trialling a 4G home broadband router at the minute to see if it can give me a decent upload speed, as opposed to the 1 Mbps I currently get, so I thought I’d look into automatically running speed tests.  Here’s how, and it turned out to be quite simple.  Caveat: this runs on Linux.

There is a really handy command line utility for running Speedtest.net tests, a Python script called speedtest-cli.  To install it run “sudo pip install speedtest-cli”.

If you don’t have pip or Python installed, run “sudo apt install python-pip” first.

This will install pip and its prerequisites, one of which is Python itself.  Next up, in your home directory, run “speedtest-cli --csv-header >> speedtest.csv”.  This will create a CSV containing only the header row.

Next up, and finally, run “crontab -e” and enter “*/10 * * * * /usr/local/bin/speedtest-cli --csv >> ~/speedtest.csv”.  Thanks to this thread for giving me the answer as to why it wasn’t working to start with.

That’s it!  Every 10 minutes a speed test will run and the results will be appended to the CSV file.  Load it up in a spreadsheet program and job done.
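One thing worth considering: results from different Speedtest servers aren’t always comparable, so you may want to pin the test to a single server.  The ID below is a placeholder; pick a nearby one from the output of “speedtest-cli --list”:

speedtest-cli --list
*/10 * * * * /usr/local/bin/speedtest-cli --csv --server 1234 >> ~/speedtest.csv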

It isn’t a long term solution, as that CSV will get unwieldy after a while, but this is a two week trial so not an issue.  I’ve an ageing Raspberry Pi B hooked up to the router and will check the results in a day or so to see what the connection is like without anything else on it.
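If you did want to run this long term, something along these lines (an untested sketch, with an arbitrary row count) could be run from a monthly cron job to keep the header plus only the most recent results:

head -n 1 ~/speedtest.csv > /tmp/speedtest-trimmed.csv
tail -n +2 ~/speedtest.csv | tail -n 10000 >> /tmp/speedtest-trimmed.csv
mv /tmp/speedtest-trimmed.csv ~/speedtest.csv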

Amazon Fire TV, Kodi and TVHeadend

I realised recently that, like most people, I am watching more and more content on demand, and that it’s actually a pain to do with Windows 10 and an HTPC.  There are apps for Netflix and various UK providers but there isn’t one for Amazon Video or Sky Go.  Controlling them typically requires a keyboard and mouse, and the use of various browsers too.  For whatever reason the user experience just isn’t as good either.

I’ve been using Kodi on a Windows machine with MediaPortal’s TV server and it does work, but Kodi also runs in a lot of other places, including the Amazon Fire TV…  I bought one last week and I’ve been damn impressed with it, so I thought today I’d try setting up Kodi for live TV, and it turned out to be a lot easier than last time.  Installing to the Fire was a breeze: I simply installed Kodi to my phone and used Apps2Fire to push it to the Fire TV over the network, after enabling remote debugging in the Fire’s Developer menu.  Amazon get much kudos from me for making sideloading so easy!
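If you’d rather not use Apps2Fire, that same remote debugging option lets you sideload with adb directly; the IP address and APK filename below are placeholders:

adb connect 192.168.1.50:5555
adb install kodi-armeabi-v7a.apk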

A few years back I tried to use Tvheadend on my Linux server.  I’ve had an HDHomeRun for years too, so connecting a tuner isn’t an issue as it’s network based, but either the software has got better or I’ve learned more, because it was a breeze today.

I installed Tvheadend on my server using this guide and the HDHomeRun tuners were automatically detected.  I had to change the tuner type to DVB-T as it defaults to cable; under Networks I added my local transmitter, then under Services clicked “Map all” and that was that.  It started to scan for the EPG in the background and found it pretty quickly too.


A few changes I’ve made are to point my recorded TV and timeshift folders at larger drives, and to enable timeshifting as it isn’t on by default.  These settings are under Configuration → Recording.

After that I enabled the Tvheadend PVR client in Kodi, pointed it at my server, and job done!  I’ve a Blackmagic DVB-T2 card to install at some point which will give me a few extra tuners and access to HD channels.  It means running a coax cable though and I can’t be arsed with that at the minute.

One content provider I mentioned above was Sky Go; there is allegedly a way of getting it working on the Fire but I’ve not managed it yet.  For now I’ll just plug my laptop into my AV receiver and have done with it.  I don’t use it often anyhow.

I can now access TV from any device connected to the network, so there’s plenty of scope to expand in the future; my upload rate is shocking though, so unfortunately I likely won’t be able to watch TV remotely.  All in all a fun bit of learning, and it frees up the motherboard from my HTPC too.  I’ve a few ideas for a winter project for that but for now it’s on the shelf waiting to be used again.

Recovering Greyhole After Ubuntu Upgrade

In my previous post I talked about some issues I hit when I upgraded from Ubuntu 14.04 to 16.04.  It wasn’t all plain sailing, and in this one I’ll cover the problems I had getting Greyhole back up and running.

At the end of the last post I had my “missing” disks mounted and I mentioned I was moving data around.  Thankfully the two disks that were mounting fine were two of my largest, 4TB and 2TB worth; the two that weren’t mounting are 2TB and 3TB.  After deleting a load of old files and reducing the redundancy level on the non-critical shares, it looked like I’d have just enough space to make things easier.

One at a time I ran the command to remove a disk from the pool and waited for Greyhole to finish balancing;
greyhole --going=/mnt/three/gh

You can see what the Greyhole service is doing by running “greyhole -L”; once it tells you it is sleeping you can crack on with the second disk.

This completed and I was able to see my files from a remote machine via Samba, huzzah!  The problem was that the install wasn’t tidy any more: I couldn’t control Greyhole using the service command, and the landing zones were on a disk I was intending to reformat.  I tried unsuccessfully to fix it but in the end decided to follow the steps to reinstall it.  From the perspective of the documentation this is the same as migrating to a new machine.

First off I ran “sudo apt remove --purge greyhole”, which removes the service with extreme prejudice, and I then followed the standard steps to install as per this page.  I restarted Samba and Greyhole after running the fsck command and lo and behold I got most of my shares back online!  Two were showing up fine, full of files; one was showing up but empty.  This was my backups share, which was a little worrying, but I’d already backed it up to another machine so it wouldn’t have been a big issue to rebuild.

It turns out that when I was configuring the smb.conf and greyhole.conf files I called the backup share “Backups” rather than “Backup”, and this meant that Greyhole couldn’t find the files to make them accessible again.  I fixed the typo, ran fsck again and they are now showing up.
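For clarity, the fsck mentioned here is Greyhole’s own rather than the filesystem one; as I understand it, it walks the shares and the storage pool and re-links anything it can find:

sudo greyhole --fsck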

Regarding the other two drives, it looks like I’d initialised them as zfs_members at some point, and with Ubuntu 16.04 they can’t be mounted in the same way.  It’s a vaguely educated guess so I’m happy to be corrected!  To get rid of them I used the wipefs tool, which strips the drive bare of partition signatures.  BE VERY CAREFUL WITH THIS!

I ran “wipefs --all /dev/sdc” and “wipefs --all /dev/sdd”, which seemed to do the trick.  After that I followed this guide to format the drives using parted.  I’ve no idea why, but blkid still doesn’t show the UUIDs for the partitions I’d created, so I took note of them from the output of the mkfs.ext4 command.  I put them into fstab, created folders to mount against alongside the other two drives, and ran “sudo mount /dev/sdc1” and the same for sdd1; they then showed up!
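For anyone following along, the rough shape of it was something like this; the device name, UUID and mount point are placeholders, so substitute your own:

sudo parted /dev/sdc mklabel gpt
sudo parted -a optimal /dev/sdc mkpart primary ext4 0% 100%
sudo mkfs.ext4 /dev/sdc1

Then the matching line in /etc/fstab:

UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx /mnt/three ext4 defaults 0 2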

Finally I added the two drives to the Greyhole storage pool by following this guide and ran “greyhole --balance”.
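In greyhole.conf that amounts to one storage_pool_drive line per disk; the mount points below are my best guess at matching the above, and the min_free values are just ones I’d consider sensible:

storage_pool_drive = /mnt/three/gh, min_free: 10gb
storage_pool_drive = /mnt/four/gh, min_free: 10gb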

A massive faff but a great learning experience!

Ubuntu 16.04 Upgrade, What Could Possibly Go Wrong?

So I logged in to my home server recently and found in the MOTD that an upgrade from 14.04 to 16.04 was available.  Being a bit cautious about these things I asked a colleague if he’d done the upgrade; he had, and the only issue he’d come across was with hardware I don’t use, so I thought I’d crack on.

That night I got home, ran do-release-upgrade, answered a few questions and left it to it.  I carried on tinkering with one of my programming projects on my desktop PC and several hours later, tired after a satisfying night’s hacking, I shut down my desktop.  Completely forgetting I had an SSH session open…

I promptly logged back on and checked my server; in htop there was a process at the top of the list that looked upgrade related, so I left it to it overnight.  It turns out that as I didn’t have screen installed there was no way to reconnect to that upgrade session, which was an arse to say the least!  I didn’t have a choice, that I know of, but to reboot.  I did so and it kernel panicked on boot, something to do with not being able to mount something.
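The lesson learned: run release upgrades inside screen (or tmux) so a dropped connection doesn’t orphan them.  Something like this, with the session name being whatever you fancy:

sudo apt install screen
screen -S upgrade
sudo do-release-upgrade

Detach with Ctrl-A then d, and reattach later with “screen -r upgrade”.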

[image: kernel panic on boot]
BORK BORK BORK BORK ::PANIC:: BORK BORK

::Expletive deleted::

I loaded into maintenance mode by selecting the advanced option on reboot and looked at what was or wasn’t mounted.  It turns out two of my four disks weren’t being mounted by fstab on boot; I ran blkid and they weren’t listed either.  I managed to find the following command on Ask Ubuntu, which showed that the disks were still being detected, which was a good sign.

sudo lsblk -o NAME,FSTYPE,SIZE,MOUNTPOINT,LABEL


I managed to manually mount the disks as ext4 and could access the data, so I figure this is a quirk of 16.04 I need to work out.  So far so good!  I commented the two drives out of fstab and attempted a reboot; I got a bit further but ended up in maintenance mode again.
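For reference, the manual mount was just the usual incantation, with the device name and mount point below as placeholders:

sudo mkdir -p /mnt/recovery
sudo mount -t ext4 /dev/sdc1 /mnt/recovery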

This time around I did some more digging and found the “sudo dpkg --configure -a” command, which finishes configuring all installed packages.  This is the recommended fix for interrupted installations and for me it worked a treat.  I could now boot normally!

As previously mentioned, I use Greyhole for file duplication across my disks; for long-time readers of my blog, or those familiar with it, it’s very similar in concept to Drive Extender on Windows Home Server.  Greyhole wasn’t happy.  First off it complained about PHP and MySQL errors, so one by one I searched for each error line and installed the missing packages.  After that I managed to get Greyhole running against the manually mounted disks and I’m now moving data around so I can reformat the two odd ones out, which are listed as zfs_members, to bring them in line with the others.  That’s in progress and I’ll cover it in another post as this one has rambled on long enough.
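I didn’t keep a full list of what was missing, but the fixes were of this general shape; the exact package names depend on the errors you see:

sudo apt install php-cli php-mysql
sudo service greyhole restart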

It has certainly been a learning experience, and I’ve got nerd points from my colleagues for actually managing to fix a borked upgrade; apparently most people would just reinstall, but I figured I’d have a stab at it.  For a certified Windows fanboy I’ve come a long way!

Headless Raspberry Pi Setup With PiBakery

I thought I’d write up the steps I’d take to set up the Raspberry Pi 3 I’m using on my Roomba, including wifi and the rest, then discovered PiBakery and frankly this post writes itself!

PiBakery is a tool for Windows and Mac which makes configuring a new Pi a block-based affair, and it keeps up to date with the latest version of Raspbian too.  Basically you select blocks from the left hand side, change the values and, once you are happy, write to an SD card by clicking “Write”.  As I’m running headless on the Roomba, being able to configure it without the faff of plugging in a keyboard and mouse is brilliant; it’s a little thing but they add up.

From the screenshot of my PiBakery setup you’ll see that on first boot I configured the wifi and SSH key, changed the hostname and set the Pi to boot to console to save resources.  I was dubious, but I plugged in the SD card, gave it power and sure enough it appeared on the network a few minutes later.  The only step I took afterwards was to install XRDP, which is handy for debugging and for deploying new code to the Arduino directly from the Pi.  You can install packages as part of the setup process too and I’ll certainly be doing that next time as I know what I want.
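For reference, installing that by hand after first boot is a one-liner:

sudo apt-get install xrdp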

I’ve also used the same method with the Pi Zero to turn it into a USB gadget, which worked a treat.

Reverse Tunnels and RaspBMC

I’m once more dabbling in Linux at work, so I figured I’d give RaspBMC another go armed with my new Linux troubleshooting knowledge…

First of all I thought I’d have a play with SSH, so I set up a reverse tunnel to access my Pi remotely.  I followed the steps in this article, to no avail;
http://www.tunnelsup.com/raspberry-pi-phoning-home-using-a-reverse-remote-ssh-tunnel/
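The gist of a reverse tunnel, with the host name and port below as placeholders: the Pi dials out to a machine you can reach, and you ride back in over that connection;

ssh -N -R 2222:localhost:22 user@myserver.example.com

Then, from myserver.example.com, “ssh -p 2222 pi@localhost” drops you onto the Pi.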

If I ran the script manually I could create the tunnel no problem; it just wasn’t being created automatically, which was a pain.  It turns out that RaspBMC disables cron out of the box, so it needed to be re-enabled;
http://www.wexoo.net/20130406/running-cron-jobs-on-raspberry-pi-in-raspbmc

Last of all I found that, as before, my Pi keeps randomly hanging, requiring a reboot.  A friend of mine introduced me to the hardware watchdog on the Pi, which can be used to reboot the device if it goes unresponsive, and so far so good;
http://raspberrypi.stackexchange.com/questions/1401/how-do-i-hard-reset-a-raspberry-pi
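From memory, so treat it as a sketch: load the kernel module, install the watchdog daemon and point it at the device;

sudo modprobe bcm2708_wdog
echo "bcm2708_wdog" | sudo tee -a /etc/modules
sudo apt-get install watchdog

Then in /etc/watchdog.conf uncomment the “watchdog-device = /dev/watchdog” line and start the daemon with “sudo service watchdog start”.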


That’s all for now; the new wiring loom is going in the Mini soon though, so expect more automotive posts!

SSHFS From Windows to Ubuntu

SSHFS is a file system that works over SSH.  It allows for a secure connection to remote file systems and, in my case, will let me use the Windows based tools I’m familiar with against files on a remote Linux machine.  I’m planning on getting to grips with Linux, but it’s daft not to use the tools you know, and this will likely prove very useful with the Raspberry Pi too.

The best method of using SSHFS from Windows I’ve found is outlined here.  I’ve not tried it with anything but a password yet, but I’ve now mounted the home directory of a user on a virtual Ubuntu server running on my laptop.  It seems to work a treat, though I’ve not tried it over the network yet; with an upcoming move from Home Server to Ubuntu at home, that will likely be heavily tested.
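For comparison, the Linux-to-Linux equivalent is two commands, with the user, host and paths below as placeholders; the first mounts the remote directory, the second unmounts it:

sshfs user@server:/home/user /mnt/remote
fusermount -u /mnt/remote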

Installing Atlassian Stash on a Headless Ubuntu Server

Today’s post relates to my experience installing Atlassian Stash on a headless Ubuntu box. My first attempt ended with me trying all sorts of nonsense which left the server messy as hell, including an accidental installation of Unity…

The trickiest part for me was the installation of Java, as the version available via the package manager, OpenJDK, didn’t seem to play ball with Stash.  No idea why, but one of the helpful fellows from Atlassian, Stefan Saasen, pointed me in the right direction with this article, which I’ve used as the basis for the steps below; the browser plugin steps were omitted for obvious reasons.

First of all you need to grab the tar file for Java from here and get it onto your server; the easiest way I found was to use an FTP client in SFTP mode.  Then connect to the terminal and run through these commands;

  1. Decompress the tar file, noting that the file name will reflect the version you download and may differ from mine;
    tar -xvf jdk-7u17-linux-x64.tar.gz
  2. Move the JDK directory to /usr/lib;
    sudo mkdir -p /usr/lib/jvm
    sudo mv ./jdk1.7.0_17 /usr/lib/jvm/jdk1.7.0
  3. Now run these lines; as far as I can tell they register the new JDK binaries with Ubuntu’s alternatives system so they can be selected as the defaults (corrections in the comments please!);
    sudo update-alternatives --install "/usr/bin/java" "java" "/usr/lib/jvm/jdk1.7.0/bin/java" 1
    sudo update-alternatives --install "/usr/bin/javac" "javac" "/usr/lib/jvm/jdk1.7.0/bin/javac" 1
    sudo update-alternatives --install "/usr/bin/javaws" "javaws" "/usr/lib/jvm/jdk1.7.0/bin/javaws" 1
  4. Correct the permissions for the executables;
    sudo chmod a+x /usr/bin/java
    sudo chmod a+x /usr/bin/javac
    sudo chmod a+x /usr/bin/javaws
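You can sanity check the install before going any further; the version string should match the JDK you unpacked:

java -version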

If you are installing on a fresh server, that should be all you need for Java to work.  If you’ve a few JVMs present on your machine, see the original article for steps to configure the default.  For Stash, you can now follow the instructions here.  One thing I found quite useful was the wget command a colleague showed me; it makes downloading via the terminal easy.  Getting the Stash installer, for example, can be done like so;
wget http://www.atlassian.com/software/stash/downloads/binary/atlassian-stash-2.3.1.tar.gz

Note that the version I’ve listed there may not be the latest, so you should grab the current one from here;
http://www.atlassian.com/software/stash/download

For editing the setenv.sh file, and text files in general, I’ve found nano really useful.  I’ve not ventured into the murky depths of the text editor holy war yet, but it does the trick for now.  I had to run a similar chmod against the Stash home directory I created, but after that I was good to go.  Open a browser to the path listed in the console and use the wizard to continue setup.
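To tie the loose ends together, the remaining steps looked roughly like this; the paths below are placeholders and the version will depend on what you downloaded:

tar -xzf atlassian-stash-2.3.1.tar.gz
mkdir -p /home/stash/stash-home

Then set STASH_HOME in the unpacked bin/setenv.sh to that directory and run bin/start-stash.sh.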

Installing BOINC on a Headless Ubuntu Server

I’ve always been a fan of grid computing, and at the start of the year I donated my gaming rig to the World Community Grid project, running 24×7 for January, which was a bit of fun and a nice stress test.  I’ve since been given permission to install on some spare machines at work and found a bit of a gap in the documentation for a headless installation, which I thought I’d bridge.  Installing it headless is easy; managing it headless proved a little trickier, but only due to my rookie status with Linux.  These steps are all you need for a quick installation and remote management;

sudo apt-get install boinc-client
sudo nano /var/lib/boinc-client/remote_hosts.cfg

Add your client machine’s IP to that file, then connect remotely using BOINC Manager.  Job done.
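For completeness, remote_hosts.cfg is just one hostname or IP per line (the address below is a placeholder), and BOINC Manager will prompt for the password stored in gui_rpc_auth.cfg in the same directory.  Restart the client after editing;

echo "192.168.1.10" | sudo tee -a /var/lib/boinc-client/remote_hosts.cfg
sudo service boinc-client restart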

For anyone interested in joining the fun, I’ve set up a jedbowler.com team;