I’ve switched to using the StereoPi image for MacFeegle Prime as it offers the lowest latency for streaming video. The downside is that I can’t get ROS to build on it, so I’m using Docker, which needs to interface with the hardware…
I’m using the RedBoard+ by Red Robotics, which uses the pigpio library; pigpio can work over a socket or pipe. I’m trying to figure out how to have the pigpio daemon running on the Pi host and access it from a Docker container that’s running ROS.
Security Note Of Doom!
The following changes have security implications; only do this if you’re running on a private, trusted network. I don’t really know what I’m doing, so assume the worst and double check everything I suggest!
Things To Tweak
After some digging, this wasn’t as evil as I thought it would be. For pigpiod you need to change the service definition to remove the “-l” from the start command:
sudo nano /lib/systemd/system/pigpiod.service
Remove the “-l” from the ExecStart line.
This is needed because, by default, pigpiod runs in localhost mode, which means it will only accept connections from the local machine. A connection from inside a Docker container is treated as coming from another host.
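The ExecStart line will look something like the below (the exact path may differ on your image).

Before:
ExecStart=/usr/bin/pigpiod -l

After:
ExecStart=/usr/bin/pigpiod

Then reload systemd and restart the daemon so the change takes effect:

sudo systemctl daemon-reload
sudo systemctl restart pigpiod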
In order for the Docker container to access the hardware it’ll need to have privileged status; add this flag to the command when you create your container, e.g.:
docker run --privileged -d ros
From inside the Docker container, in the same shell instance as the code that calls the RedBoard library, you need to run this:
export PIGPIO_ADDR=[PI IP ADDRESS]
This points the pigpio library (which RedBoard uses under the hood) at the host machine rather than localhost.
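As a quick sanity check from inside the container, a minimal sketch like this (assuming the pigpio Python module is installed in the container and PIGPIO_ADDR is set as above) should confirm the daemon is reachable:

import os
import pigpio

# pigpio.pi() honours PIGPIO_ADDR, but passing the host explicitly makes it obvious
pi = pigpio.pi(os.environ.get("PIGPIO_ADDR", "localhost"))
if not pi.connected:
    raise SystemExit("Couldn't connect to pigpiod on the host")
print("Connected, pigpio version:", pi.get_pigpio_version())
pi.stop()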
Conclusion
After doing the above I was able to use the RedBoard keyboard_control.py script to control the robot from inside a docker container. I’ve not tried anything further yet but that proved the concept enough for today!
As per my last post I’m using Docker on my robot with ROS. The last task is to get Docker running from a dedicated USB drive, to split resources between that and the SD card the OS is running from. A good guide to mounting a USB drive can be found here.
Note: rather than using umask=000 when mounting, you need to mount, then change the permissions of the “host” directory to 777. For example, mount to /media/usb as per the article, then chmod 777 /media/usb **WHILE MOUNTED**. This should allow you to mount, then set it to automount on boot.
If you are running headless and there is a problem with the fstab file it can get annoying, so to test in advance of a reboot run “sudo mount -a” to mount all volumes as per that file. If it succeeds, you can reboot.
I was having a problem mounting with fstab: I could manually mount the USB folder every time, but not using “mount -a”. The penny dropped when I did “df -h” to see how much space was free and noticed /media was itself a mount point. I created a new folder in the root called “docker” and it worked a treat.
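For reference, the fstab entry ends up being something along these lines; the UUID is a placeholder for your own drive’s (sudo blkid will list it) and nofail stops a missing drive from hanging the boot:

UUID=1234-ABCD  /docker  ext4  defaults,nofail  0  2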
Following this answer I moved the location of the Docker containers and images to /docker.
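For reference, the usual way to do this is to point Docker’s data directory at the new location by setting “data-root” in /etc/docker/daemon.json (copying any existing data across first), then restarting the service:

{
    "data-root": "/docker"
}

sudo systemctl restart docker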
I’ve run “docker pull ros” and it’s now happily using the USB drive instead of the SD card. 🙂
I’ve been using the Ubiquity Robotics Raspberry Pi ROS image to run both the robot and controller; it seemed the easiest way to get ROS running, but now that I’m trying to get low-latency streaming working from the cameras, it’s proving tricky.
New plan: use the StereoPi image with Docker to host the ROS images/containers.
Some Mods Needed…
In order to lower the latency as much as possible, the StereoPi image is a heavily modified version of Raspbian; this includes custom partitions and the file system being set to read-only. Here are the steps I followed to get it to a state where I could restart development.
1. Get and Modify the Image
Head here and follow the steps to download and install the image, then follow the steps to get it on the WiFi; under the Advanced section you’ll see details on how to SSH to the Pi afterwards. Once logged in you’ll need to temporarily set the file system to read/write, then edit the fstab file.
Under the “SLP Structure Details” section you’ll find this command:
sudo mount -o rw,remount /
This will set the file system to read/write, at which point you can open fstab and make the change permanent by changing ro to rw on the root entry:
nano /etc/fstab
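The root entry will be along these lines (the device name and options may well differ on the SLP image); the only change needed is ro to rw:

/dev/mmcblk0p2  /  ext4  defaults,ro,noatime  0  1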
2. Resize the Partitions
I was trying to figure out how to do this on Windows, then realised I could just install gparted on the controller and remote to it… I put the micro-SD card in a USB card reader and followed these instructions. The 1.8GB root partition was expanded to around 25GB and the recording partition slimmed down accordingly.
screenshot of gparted with the final sizes of the partitions
3. Install Docker
This next bit is trivial: run this command, as taken from the Raspberry Pi blog:
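The command in question should be the Docker project’s one-line convenience script (worth double checking against the blog post itself before piping anything into a shell):

curl -sSL https://get.docker.com | sh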
One of the reasons the StereoPi image is very quick is that it is tailored not to use the filesystem on the SD card; to that end it may be worth moving all of the Docker containers and images over to a USB drive. There’s more information on how to do that here, but I’ve not tried it yet.
The next thing I need to do is convert my code to work in a Docker container. This shouldn’t be too tricky, but as the RedBoard library will need to talk to the hardware there will likely be complications.
In an earlier post I talked about The Plan, that post has become a living document as I’m updating it as bits get done. I’m trying to think a bit more about the order in which I’m doing things as some tasks have prerequisites and thought I’d share my thoughts on how to manage it.
For example, I can’t do the remote controlled motion until the remote is sending messages, and for something to receive them I need the new head printed so the StereoPi board can go back in place, and so on. To get the remote working I also need to rewire it slightly to use a hardware serial port on the Teensy, as USB serial doesn’t work as I’d expect with a Pi. There’s a lot to do, but what’s the priority and how do I decide?
One thing that is time consuming, more for the build time than the design time, is anything that requires 3D printing. Depending on the size of the model it can take an hour or overnight to print parts, so this is an easy win for planning. If I have something that takes a while to print, I prioritise that so I can get on with something else while my printer does my bidding! <evil laugh>
After that it’s a matter of thinking tasks through, mentally going through the process of doing the work. I like to make notes about each task in bullet lists so that I can go back, edit, and move them around in the list until I get my head around it.
For the example above, here’s a brain dump:
Motion control needs the StereoPi in the new head design, so finish the design off and get it printing. For remote control to work we need to get messages from Control to Robot; this will need the test code expanding to send useful data. The controller also needs wiring up properly: though the Teensy is putting out the state of the joysticks and switches over serial, the Pi can’t actually open USB serial to the Teensy, and using hardware serial on the Pi apparently mitigates this.
That brain dump can then be turned into a list:
Finish head redesign
Print new head
Rewire Teensy in controller to hardware serial
Update ROS test code to relay message from Teensy
Control side
Robot side
Add RedBoard library to the receiver code, hook up to motor control
This process doesn’t take long and it helps me a great deal; this project is incredibly complex and has a lot of moving parts that depend on one another. Once I get the above tasks done I can think about the next steps. After the head it’s controlling the arms, which will require a board to be soldered, new designs to be done and printed, and more messages between controller and robot, so the tasks will likely be similar to the above. Running through the same process will help concentrate my effort and hopefully reduce the amount of stress involved.
Hope this helped someone. If nothing else, if ever I’m struggling again and someone could point me towards this as a reminder, I’d appreciate it!
I’ve decided to use ROS (Robot Operating System) for my PiWars project as it’s an industry standard and this is an excellent excuse to learn it. For some reason I thought that ROS was a realtime operating system; it turns out it is a bunch of libraries and services that run on top of existing operating systems, though that’s selling it short. It’s been around since 2007 and there are *loads* of libraries available for it; I’m hoping to use these to simplify navigation and control of the arms. There are loads of kinematics libraries available, so I’m hoping to stand on the shoulders of giants here.
I’ve been playing around with the tutorials and have messages going from one Raspberry Pi to another so thought I’d share how I got here.
Left, the Raspberry Pi console on the controller. Right, the Raspberry Pi in the robot.
Setup/Prerequisites
I’m using the image provided by Ubiquity Robotics; it works and already supports stereo imagery using the StereoPi, so it seemed daft not to use it. Once you have two Raspberry Pis running with this image, get them both on your network. If you’re running Windows you may also want to install Bonjour Print Services; this includes the same service that the Raspberry Pis use to advertise to each other on the network and means you can find them more easily by hostname.
Tutorials
The ROS tutorials can be found here. If you’re wanting to do ROS development on a Windows machine this may be of use: it’s instructions for installing ROS in the Windows Subsystem for Linux, Docker or a VM.
The specific combo of tutorials I used was the Python pub/sub tutorial and the “running on multiple machines” tutorial. I ran the former on each Pi first to make sure they were working, then followed the steps in the latter to set the robot as the master node, running the listener on the robot and the talker on the controller. You can do it either way around, I just like the idea of sending messages from the controller to the robot. 🙂
If you’re following along at home you will need to go back a few steps, as to run the pub/sub tutorial you need to build a package, and to do that you need to create and set up your workspace. The prerequisites for each tutorial are listed at the top of each article, so it’s easy to backtrack.
Learnings
So, I can send “Hello, World!” from one machine to another. Woo, I hear you say! It doesn’t sound like much, but from here I can use these concepts to send and receive messages between controller and robot. For example, one node would publish sensor data, and I would then have one or many listeners that use that data. Another would listen for motor control signals, telemetry data, etc…
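To give a flavour, the tutorial code boils down to something like this minimal rospy sketch; the topic and node names are just the tutorial defaults, and when running across two machines ROS_MASTER_URI on both needs to point at the master:

#!/usr/bin/env python
import rospy
from std_msgs.msg import String

def talker():
    # Publish a string message on the "chatter" topic ten times a second
    pub = rospy.Publisher('chatter', String, queue_size=10)
    rospy.init_node('talker', anonymous=True)
    rate = rospy.Rate(10)
    while not rospy.is_shutdown():
        pub.publish("Hello, World! %s" % rospy.get_time())
        rate.sleep()

def callback(msg):
    rospy.loginfo("I heard: %s", msg.data)

def listener():
    # The matching subscriber, run on the other machine
    rospy.init_node('listener', anonymous=True)
    rospy.Subscriber('chatter', String, callback)
    rospy.spin()

if __name__ == '__main__':
    try:
        talker()  # or listener(), depending on which Pi this is running on
    except rospy.ROSInterruptException:
        pass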
Next up: use the RedRobotics samples on the robot to enable remote control and basic telemetry back to the controller. This will just be the battery level to start with, but that’s a very important thing to know, as I trust you’ll agree.
With a heavy heart, PiWars has been postponed, and a Virtual PiWars has been announced for April 25th prior to a full competition being run. This is very much the right thing to do and I totally support it. One thing this does mean is more time to overcomplicate, sorry, expand functionality! Here are a few things I’ve been thinking about that I’m going to investigate further.
I’ll start running through these in order, based on the MoSCoW priority system. It’s a simple way of prioritising tasks.
This page is going to be updated each time a task is done with relevant links added to blog posts or videos.
MacFeegle Prime, Minimum Viable Product, To Be Decided…
Time’s pressing, and though I started with lofty goals I need to set a minimum that I’ll be happy with and that is achievable; in software engineering (and probably other fields) we refer to this as the minimum viable product.
The Challenges
There are seven challenges in PiWars: one is autonomous only, a few are optionally autonomous for extra points, and some are remote control suggested but you can do them autonomously for bragging rights. The challenges are as follows:
Autonomous Only
Lava Palaver – Line Following
Remote Control/Autonomous Optional
Eco Disaster – Barrel Sorting
Escape Route – a blind maze
Minesweeper – find and disarm red squares
Remote Control
Pi Noon – Robot jousting!
Zombie Apocalypse – shooting gallery
Temple of Doom – Obstacle course!
Required Sensors
This robot will be powered by a StereoPi so it will have the capability for computer vision; whether I’ll be in a position to learn how to do that is a different matter. So what are the simplest sensors I can use to solve these problems?
Line Following
The simplest way to do this is an array of light sensors pointing down along the front bumper. The line will be brighter than the surrounding surface, so you can sense how far from centre you are and change your steering accordingly. I’ve a load of IR distance sensors from the ill-fated version one of the shooting gallery that I can press into service.
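The steering logic is little more than a weighted average. Here’s a rough sketch; read_line_sensors() is a stand-in for whatever the real sensor array ends up being:

def steering_correction(read_line_sensors, gain=0.5):
    """Return a steering value in the range -1..1 based on where the line sits."""
    readings = read_line_sensors()  # e.g. five brightness values, left to right
    # Weight each sensor by its position, from -1 (far left) to +1 (far right)
    positions = [(2.0 * i / (len(readings) - 1)) - 1.0 for i in range(len(readings))]
    total = sum(readings)
    if total == 0:
        return 0.0  # lost the line; keep straight (or start a search pattern)
    error = sum(w * r for w, r in zip(positions, readings)) / total
    return max(-1.0, min(1.0, gain * error))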
Blind Maze
For this I’ll need a bunch of distance sensors arrayed around the robot. I have used ultrasonic sensors in the past, but they’re physically quite large and the other competitors mentioned a better option. The VL53L0X is a LIDAR sensor that runs over i2c; it can run in continuous mode and you can request a reading on the fly. These are physically smaller, so it will be easier to have more of them arrayed around the robot, but they do have a few downsides.
First off, all of these have the same i2c address by default, so you have to either change the address on boot up, which requires a wire per sensor, or use an i2c multiplexor, which requires a few wires per sensor. I heard from one of my fellow competitors that the former was ropey at best when they’d tried it in the past, so multiplexor it is!
The other downside is that the performance of these sensors depends a great deal on the surface they’re reflecting from; white is the best for reflectance and black the worst. Guess which colour the walls are at PiWars?
In finding links for this post I just spotted these ultrasonic rangefinders which are much smaller; pricey, but they’d certainly do the job.
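For reference, reading a few VL53L0Xs through a TCA9548A-style multiplexor is only a handful of lines if you assume the Adafruit CircuitPython libraries (adafruit_vl53l0x and adafruit_tca9548a); something along these lines:

import board
import busio
import adafruit_tca9548a
import adafruit_vl53l0x

i2c = busio.I2C(board.SCL, board.SDA)
mux = adafruit_tca9548a.TCA9548A(i2c)

# One sensor per multiplexor channel; four here as an example
sensors = [adafruit_vl53l0x.VL53L0X(mux[channel]) for channel in range(4)]

while True:
    distances_mm = [sensor.range for sensor in sensors]
    print(distances_mm)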
Mine Sweeper
The way this challenge works is that the robot is placed on a 4×4 grid that is lit up from underneath. One of the squares will be lit up red, and if the robot moves to and stands on it, it’ll be defused. For a pure brute force method of doing this you can use a colour sensor facing down on the bumper. You’d have the robot bimble around at random, much like the early Roombas, and when it detects red it can stop until the colour changes.
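The brute force loop really is that simple; here’s a sketch, where read_colour(), drive_randomly() and stop() are placeholders for the real colour sensor and motor code:

import time

def is_red(r, g, b):
    # Very crude: red channel dominant by a comfortable margin
    return r > 150 and r > 2 * g and r > 2 * b

def minesweeper_loop(read_colour, drive_randomly, stop):
    while True:
        r, g, b = read_colour()
        if is_red(r, g, b):
            stop()            # sit on the square until it changes colour
        else:
            drive_randomly()  # bimble about until we find a red square
        time.sleep(0.05)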
It’s not efficient, but it could work. I’m not sure if the extra points for doing it autonomously would be more fruitful than getting more mines by driving manually. I’ve seen someone post a proof of concept for doing this using computer vision, so for this one I’ll go with manual, with computer vision being the stretch goal.
Remote Control
The bulk of the challenges will be done manually, so we’re going to need a suitable controller. Ideally I’d have a full waldo controller and VR headset as per my aspiration, but I need to be more realistic. As a very basic method of control I have an Xbox controller rigged up to a Raspberry Pi with a display. It’ll connect via a WiFi hotspot, likely my phone, and issue commands over TCP. With the analogue sticks of the Xbox controller I’ll be able to control the movement with one stick and the head (cameras) with the other, much like in a first person shooter. If arm control using the sticks proves too tricky, I can just preprogram a few positions, so pressing one button puts the hand in front of the bot, another closes the hand, another raises it slightly… It’d be all I need for the barrel challenge but wouldn’t be using the arms to the fullest.
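As an illustration of the controller side (not how the final ROS-based version will look), reading the sticks with pygame and squirting the values over TCP is only a few lines; the address, port, message format and axis numbers here are all made up for the example:

import json
import socket
import pygame

pygame.init()
pygame.joystick.init()
stick = pygame.joystick.Joystick(0)
stick.init()

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.connect(("192.168.0.10", 5005))  # placeholder robot address/port

clock = pygame.time.Clock()
while True:
    pygame.event.pump()
    msg = {
        # Axis numbers vary by controller and driver
        "drive_x": stick.get_axis(0), "drive_y": stick.get_axis(1),  # left stick: movement
        "head_x": stick.get_axis(3), "head_y": stick.get_axis(4),    # right stick: head/cameras
    }
    sock.sendall((json.dumps(msg) + "\n").encode())
    clock.tick(20)  # 20 updates a second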
Conclusion
We have a plan, or at least the idea of a plan… Having a smaller set of more constrained targets is a good focus; now I just need to get over this damn lurgy and get some energy back!
I’ve gained a lot of experience over the last few months with regards to Fusion 360, 3D printing, electronics and more besides. I thought I’d share some of those lessons.
As Complex As You Make It
The most important lesson, as with any project, is to have an idea of what you’re building from the start and how long you have to build it. If it’s a relatively simple design, there will still be a lot of issues you’ll come across that will take added time to figure out, doubly so if you’re learning as you go. My robot concept was complex to start with, more so than I expected, and I had a lot more to learn than I realised too. However long you think you need, add more and if possible simplify your design.
In retrospect, more of a plan than a quick sketch wouldn’t have gone amiss…
I had a bunch of early wins: I used existing parts from an RC car to make early proofs of concept, which sped things up, and this gave me a little too much confidence. I was designing elements in Fusion 360 in isolation, assuming they’d work, and that burnt me a lot. I went through a number of different chassis designs as prototypes in the early stages, and it wasn’t until I realised I needed a more complete design done in CAD, to see how everything fitted together, that I started to save an awful lot of time. I’m still not great at this but certainly getting better.
Longer term I need to learn how to do joints in Fusion 360 so that I can actually see how things fit together and what constraints there are.
A few of the prototypes, with the almost final form
I wasted a lot of time in what amounted to designing seven different robots. I couldn’t have got to where I am without doing it, though, so it’s a difficult balance to strike.
Seriously, Make A List. Then Check it Again…
I had the vague idea that I’d have the StereoPi up top in the head for stereo vision; this would give a lot of opportunities for computer vision too. Around the chassis would be a ring of sensors. Ultrasonics were what I had in mind to start with, and though they’re simple to work with they’re quite large; I didn’t really know better, so that’s what I went with. Later on I learned of the VL53L0X, which is a really cheap lidar sensor and a lot smaller too. They have the quirk of having the same i2c address by default, so you need to use i2c multiplexors or have them connected in such a way as to reset their addresses on first boot… More complexity!
Again, we’ve all got PhDs in hindsight, but having a more solid plan and spending more time on research and planning in the early stages would’ve paid off in the long run.
Burnout
Look. After. Yourself.
As I mentioned earlier on, I had lots of early successes which gave me an awful lot of false confidence; as soon as the easy wins came and went and the real struggle began, the build got a lot more difficult, both technically and mentally. Those who know me or have been reading the blog for a while will know I suffer from Anxiety and Depression; they’re a bugger individually, but when they join forces they’re truly evil. A few weeks before I applied to enter PiWars my beloved cat, Willow, passed away. To say this was hard on me is an understatement; coupled with the year tailing off, getting darker and colder, and things going from win after win to struggle after struggle, things got rough.
I tried to push through it, and that was a big mistake. I then made the best decision for the project, which was to take a breath and start again. With a lot of support from my girlfriend, the rest of the PiWars community, friends, family, and colleagues alike, I slowly got out of the funk while making slow but consistent progress. The Epic Rebuild Began.
The evolution of the rebuild
Conclusions and Next Steps
I’ve learned a lot and come an awful long way in many regards, and though I’ve still a lot to do I’m in a better place, and so is the robot. The next steps are to get the controller up and running and the robot drivable again.
In the next blog post I’ll talk about the plans for the challenges. As it stands I’ve almost finished one arm and only need to finish the hand, add a bunch of sensors, and sort out remote control. I have a minimum spec in sight and will at least be able to compete.
It’s been a long while since the last post (more on that in an upcoming post titled “How Not To Build A Robot”), but I thought I’d give an update on the general architecture that is manifesting for MacFeegle Prime.
The Robot
The robot will have at its core a Raspberry Pi; in this case it’ll be a Raspberry Pi 3 Compute Module hosted on a StereoPi board. This board is designed to take advantage of the Compute Module’s two camera ports and allows for GPU-boosted stereo vision.
Latest render of MacFeegle Prime
For motor control, and for some of the servos, I’ll be using a RedBoard+ by RedRobotics. This has everything you’ll need for most robots, including a pair of 6A motor controllers, 12 servo headers, support for NeoPixels and, most importantly, great documentation and support from the creator, Neil Lambeth. This HAT also includes a power regulator, so it powers the StereoPi too, which is incredibly handy.
Connected to the Pi will be a Teensy 4 board; this will handle and collate data from the various sensors around the robot, along with an i2c servo board to control the arms, and potentially an NRF24 RF transceiver too.
The Controller
The controller will also be running on a Raspberry Pi, in this case a standard 3 Model B, connected to a 7″ touchscreen display. This will also have a Teensy 3.6 board, which will be used to interface with various buttons and potentiometers, and possibly another NRF24; it depends on whether control via a WiFi access point will be stable enough.
The sort of thing I have in mind is similar to these controllers for cranes and diggers.
I just love the industrial design of them, and with the complexity of all the arms and similar it seemed a valid excuse to build one… I have a pair of 4-axis joysticks; these have X and Y as you’d expect but can also rotate. The 4th axis is a button on the top, which I can use as a modifier or to toggle modes.
One thing I’d love to do is a waldo controller, similar to the one James Bruton developed for his performance robot but I’d prefer it to be smaller and I think that’s out of scope for the competition.
Better yet would be one similar to the controller Naomi Wu showed in her video about the Ganker robot. It attaches around her waist and allows her to control not only the arms but the motion of the robot too, as the “shoulders” of the controller are essentially mounted on a joystick.
This controller is incredibly intuitive, coupled with stereo vision via Stereo Pi and an Android phone in a Google Cardboard headset I think it’d be an exceptional combo. Definitely one for future development!
Software
The software for this will be written in Python and make use of the Robot Operating System. This isn’t an operating system but a collection of libraries and frameworks that allow components of a robot to work together, even if spread across multiple machines. I’ll be running this in Docker as I’ve had pain trying to get it installed, and there’s an image available already.
This will run on both robot and controller and the intention is that it’ll allow for control over WiFi as well as telemetry to the controller. If a WiFi access point, likely a phone in hotspot mode, isn’t stable enough for control I’ll fall back to the NRF24 transceiver option. Handily there is an Arduino library that allows for sending and receiving messages in a format suitable for ROS to parse so hopefully that’ll be fairly easy to swap out.
Summary
There is a lot of work to do. The hardware is mostly done and needs mounting; just the end effectors (hands) need designing, along with a few tweaks to the head and the mount for the Nerf gun.
I’m a professional software engineer by trade so I’m hoping that writing the code shouldn’t be too bad a job (DOOM! FORESHADOWING! ETC!) and I have the week before the competition off too to allow for last minute hacking…
As I’m about to start development of MacFeegle Prime in earnest I’ve started looking at how best to do this. I’ve long been a fan of Visual Studio Code and figured it would probably have a solution to my problem. Turns out it did!
I’ve used JetBrains WebStorm in the distant past and one of the really handy features was remote development. You could modify HTML, CSS and JavaScript locally and it would automagically deploy that code to your server, remote debugging included! It turns out VS Code has something similar.
This is made possible using the Remote Development – SSH extension, follow the steps in the link and you’ll get set up in no time.
One issue I faced is that I couldn’t get the ssh-agent service to run in Windows; I solved this using this solution as a base. In the end I opened services.msc and set the ssh-agent service to start automatically.