Order of Operations (Time Management)

In an earlier post I talked about The Plan; that post has become a living document, as I’m updating it as bits get done. I’ve been thinking a bit more about the order in which I’m doing things, as some tasks have prerequisites, and thought I’d share my thoughts on how to manage it.

For example, I can’t do the remote controlled motion until the remote is sending messages; for something to receive them I need the new head printed so the StereoPi board can go back in place, and so on. To get the remote working I also need to rewire it slightly to use a hardware serial port on the Teensy, as USB serial doesn’t work as I’d expect with a Pi. There’s a lot to do, but what’s the priority and how do I decide?

One thing that is time consuming, more in build time than design time, is anything that requires 3D printing. Depending on the size of the model it can take anywhere from an hour to overnight to print parts, so this is an easy win for planning: if something takes a while to print, prioritise it so I can get on with something else while my printer does my bidding! <evil laugh>

After that it’s a matter of thinking tasks through, mentally walking through the process of doing the work. I like to make notes about each task in bullet lists so that I can go back, edit them, and move them around in the list until I get my head around it.

For the example above, here’s a brain dump:

Motion control needs the StereoPi in the new head design, so finish the design off and get it printing. For remote control to work we need to get messages from Control to Robot, which will need the test code expanding to send useful data. The controller also needs wiring up properly: although the Teensy is putting out the state of the joysticks and switches over serial, the Pi can’t actually open a serial connection to the Teensy over USB; using hardware serial on the Pi apparently gets around this.

That brain dump can then be turned into a list:

  1. Finish head redesign
  2. Print new head
  3. Rewire Teensy in controller to hardware serial
  4. Update ROS test code to relay message from Teensy
    1. Control side
    2. Robot side
  5. Add RedBoard library to the receiver code, hook up to motor control

This process doesn’t take long and it helps me a great deal; this project is incredibly complex and has a lot of moving parts that depend on one another. Once I get the above tasks done I can think about the next steps. After the head, the next job is controlling the arms, which will require a board to be soldered, new designs doing and printing, and more messages between controller and robot. The tasks will likely be similar to the above, but running through the same process will help concentrate my effort and hopefully reduce the amount of stress involved.

Hope this helped someone. If nothing else, if I’m ever struggling again and someone could point me back towards this as a reminder, I’d appreciate it!

Pi to Pi Comms Using ROS

I’ve decided to use ROS (Robot Operating System) for my PiWars project as it’s industry standard and this is an excellent excuse to learn it. For some reason I thought that ROS was a real-time operating system; it turns out it’s a bunch of libraries and services that run on top of existing operating systems, though that’s selling it short. It’s been around since 2007 and there are *loads* of libraries available for it; I’m hoping to use these to simplify navigation and control of the arms. There are plenty of kinematics libraries available, so I’m hoping to stand on the shoulders of giants here.

I’ve been playing around with the tutorials and have messages going from one Raspberry Pi to another, so I thought I’d share how I got here.

Left, the Raspberry Pi console on the controller. Right, the Raspberry Pi in the robot.

Setup/Prerequisites

I’m using the image provided by Ubiquity Robotics; it works and already supports stereo imagery using the StereoPi, so it seemed daft not to use it. Once you have two Raspberry Pis running with this image, get them both on your network. If you’re running Windows you may also want to install Bonjour Print Services; this includes the same service that the Raspberry Pis use to advertise themselves on the network and means you can find them more easily by hostname.

Tutorials

The ROS tutorials can be found here. If you want to do ROS development on a Windows machine this may be of use: it covers installing ROS in the Windows Subsystem for Linux, Docker or a VM.

The specific combination of tutorials I used is the Python pub/sub and “running on multiple machines” tutorials. I ran the former on each Pi first to make sure they were working, then followed the steps in the latter to set the robot as the master node, run the listener on the robot and the talker on the controller. You can do it either way around; I just like the idea of sending messages from the controller to the robot. 🙂

Python Publisher/Subscriber Tutorial
Running on Multiple Machines

If you follow along at home you will need to go back a few steps: to run the pub/sub tutorial you need to build a package, and to do that you need to create and set up your workspace. The prerequisites for each tutorial are listed at the top of each article, so it’s easy to backtrack.
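For reference, this is roughly what the talker/listener pair from those tutorials boils down to (my paraphrase rather than the exact tutorial listing), plus the environment variable the multi-machine tutorial has you set so the controller knows where the master is; the hostname is just an example.

#!/usr/bin/env python
# Rough paraphrase of the rospy pub/sub tutorial code, not the exact listing.
# On the controller, point at the robot's ROS master first, e.g.:
#   export ROS_MASTER_URI=http://<robot-hostname>:11311
import rospy
from std_msgs.msg import String

# --- talker, run on the controller Pi ---
def talker():
    pub = rospy.Publisher('chatter', String, queue_size=10)
    rospy.init_node('talker', anonymous=True)
    rate = rospy.Rate(1)  # once a second is plenty for "Hello, World!"
    while not rospy.is_shutdown():
        pub.publish("Hello, World! %s" % rospy.get_time())
        rate.sleep()

# --- listener, run on the robot Pi (the master in my setup) ---
def callback(msg):
    rospy.loginfo("I heard: %s", msg.data)

def listener():
    rospy.init_node('listener', anonymous=True)
    rospy.Subscriber('chatter', String, callback)
    rospy.spin()  # block until the node is shut down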

Learnings

So, I can send “Hello, World!” from one machine to another. Woo, I hear you say! It doesn’t sound like much, but from here I can use these concepts to send and receive messages between controller and robot. For example, one node would publish sensor data, and I would then have one or many listeners that use that data. Another would listen for motor control signals, telemetry data, etc…

Next up: use the RedRobotics samples on the robot to enable remote control and basic telemetry back to the controller. This will just be the battery level to start with, but that’s a very important thing to know, as I trust you’ll agree.
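To give an idea of the shape of that, here’s a rough sketch of the battery telemetry node I have in mind; read_battery_voltage() is a stand-in for whatever the RedBoard library actually exposes, as I’ve not dug into that side yet.

#!/usr/bin/env python
# Sketch of a battery telemetry publisher. read_battery_voltage() is a
# placeholder; swap it for the real RedBoard call once that's hooked up.
import rospy
from std_msgs.msg import Float32

def read_battery_voltage():
    return 11.7  # placeholder value in volts

def battery_monitor():
    pub = rospy.Publisher('battery_voltage', Float32, queue_size=1)
    rospy.init_node('battery_monitor')
    rate = rospy.Rate(0.5)  # a reading every couple of seconds is plenty
    while not rospy.is_shutdown():
        pub.publish(read_battery_voltage())
        rate.sleep()

if __name__ == '__main__':
    battery_monitor()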

PiWars Postponed, New Plan…

It’s with a heavy heart that I report PiWars has been postponed, and a Virtual PiWars has been announced for April 25th ahead of the full competition being run. This is very much the right thing to do and I totally support it. One thing this does mean is more time to overcomplicate… sorry, expand functionality! Here are a few things I’ve been thinking about that I’m going to investigate further.

I’ll start running through these in order, based on the MoSCoW priority system. It’s a simple way of prioritising tasks.

This page is going to be updated each time a task is done with relevant links added to blog posts or videos.

To MoSCoW!

Must have, Should have, Could have, Won’t have. A logical follow-up to my previous post on the MVP of MFP.

Must Haves

The robot must:

  • Be drivable by remote control
    • Motion control – Done, 29/3/20
    • Arm control
      • Manual control
      • Preset positions for grab/lift/drop
    • Head control – Done, 31/3/20
    • Weapon control
  • Sensors
    • Perimeter distance sensors
    • Line follow/cliff sensors on bumpers
    • Video feed over WiFi
  • Blinkenlights!
  • On robot battery charging (BMS) – Charge port added (14/3/20)
  • Controller internal batteries/charging (BMS) – Done, 29/3/20

Should Haves

The robot should:

  • Provide a stereo feed to a Google Cardboard enabled phone
  • Provide sensor feedback over WiFi to the controller
  • Arms upgraded with compliant joints
    • I might be able to do this by modifying the servos to give me an analogue output from the potentiometer, similar to these.
    • Proof of concept video here! (14/3/2020)
  • A voicebox!
  • Power monitoring on the battery, displayed on controller
    • Voltage
    • Current
    • Charging state (LED on robot too)

Could Haves

  • Full motion and arm control using puppeteering rig
  • Computer vision for
    • Line following
    • Barrel sorting
  • Automation
    • SLAM/Similar
    • Navigation for the blind maze
    • Navigation for barrel sorting
    • Target detection for Zombie Defense

Must Not

  • Be on fire
  • Break down, unless incredibly funny
  • Gain self awareness and take over the world (unless it has a good plan for public services)

MFP MVP TBD…

MacFeegle Prime, Minimal Viable Product, To Be Decided…

Time’s pressing, and though I started with lofty goals I need to set a minimum that I’ll be happy with and that’s achievable. In software engineering (and probably other fields) we refer to this as the minimum viable product.

The Challenges

There are seven challenges in PiWars: one is autonomous only, a few are optionally autonomous for extra points, and some are suggested as remote control but can be done autonomously for bragging rights. The challenges are as follows:

  • Autonomous Only
    • Lava Palaver – Line Following
  • Remote Control/Autonomous Optional
    • Eco Disaster – Barrel Sorting
    • Escape Route – a blind maze
    • Minesweeper – find and disarm red squares
  • Remote Control
    • Pi Noon – Robot jousting!
    • Zombie Apocalypse – shooting gallery
    • Temple of Doom – Obstacle course!

Required Sensors

This robot will be powered by a StereoPi so it will have the capability for computer vision; whether I’ll be in a position to learn how to do that is a different matter. So, what are the simplest sensors I can use to solve these problems?

Line Following

The simplest approach is an array of light sensors pointing down along the front bumper. The line will be brighter than the surrounding surface, so you can sense how far from centre you are and change your steering accordingly. I’ve a load of IR distance sensors from the ill-fated version one of the shooting gallery that I can press into service.
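If it helps to picture it, the maths is just a weighted average of the sensor readings. A quick sketch, with the sensor count and gain being made-up numbers to tune on the real robot:

def line_error(readings):
    """Estimate how far the line is from centre.

    readings: brightness values, one per sensor, ordered left to right.
    Returns -1.0 (line fully left) through +1.0 (line fully right).
    """
    n = len(readings)
    # Sensor positions spread evenly from -1 (leftmost) to +1 (rightmost).
    positions = [(2.0 * i / (n - 1)) - 1.0 for i in range(n)]
    total = sum(readings)
    if total == 0:
        return 0.0  # no line in view; carry straight on (or go looking)
    return sum(p * r for p, r in zip(positions, readings)) / total

def steering_correction(readings, gain=0.5):
    # Simple proportional steering: turn towards the brighter side.
    return gain * line_error(readings)

# e.g. five sensors with the line under the second from the left:
# steering_correction([10, 200, 40, 10, 10]) gives roughly -0.18 (steer left)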

Blind Maze

For this I’ll need a bunch of distance sensors arrayed around the robot. I have used ultrasonic sensors in the past, but they’re physically quite large and the other competitors mentioned a better option. The VL53L0X is a LIDAR sensor that runs over i2c; it can run in continuous mode and you can request a reading on the fly. These are physically smaller, so it will be easier to have more of them arrayed around the robot, but they do have a few downsides.

First off, all of these have the same i2c address by default, so you have to either change the address on boot up, which requires a wire per sensor, or use an i2c multiplexor, which requires a few wires per sensor. I heard from one of my fellow competitors that the former was ropey at best when they’d tried it in the past, so multiplexor it is!

The other downside is that the performance of these sensors depends a great deal on the surface they’re reflecting from: white is the best for reflectance and black is the worst. Guess which colour the walls are at PiWars?
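For my own notes, this is roughly how I expect reading a handful of them through a multiplexor to look, using the Adafruit CircuitPython libraries; it’s untested, the TCA9548A is just the multiplexor I have my eye on, and the channel count is a guess.

# Untested sketch: four VL53L0X sensors behind a TCA9548A i2c multiplexor,
# read via the Adafruit CircuitPython libraries.
import time
import board
import busio
import adafruit_tca9548a
import adafruit_vl53l0x

i2c = busio.I2C(board.SCL, board.SDA)
mux = adafruit_tca9548a.TCA9548A(i2c)

# One sensor per mux channel; each mux[n] behaves like its own i2c bus,
# so the identical default addresses never clash.
sensors = [adafruit_vl53l0x.VL53L0X(mux[channel]) for channel in range(4)]

while True:
    distances = [sensor.range for sensor in sensors]  # readings in mm
    print(distances)
    time.sleep(0.1)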

While finding links for this post I spotted these ultrasonic rangefinders, which are much smaller; pricey, but they’d certainly do the job.

Mine Sweeper

The way this challenge works is that the robot is placed on a 4×4 grid that is lit up from underneath. One of the squares will be lit up red, and if the robot moves to it and stands on it, the mine is defused. For a pure brute force method you can use a colour sensor facing down on the bumper. You’d have the robot bimble around at random, much like the early Roombas, and when it detects red it can stop until the colour changes.

It’s not efficient, but it could work. I’m not sure whether the extra points for doing it autonomously would be more fruitful than getting more mines by driving manually. I’ve seen someone post a proof of concept for doing this using computer vision, so for this one I’ll go with manual control, with computer vision as the stretch goal.
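As a rough sketch of the brute-force version, it’d be something like the below; read_rgb(), drive() and stop() are stubs standing in for the real colour sensor and motor code.

import random
import time

# Brute-force minesweeper sketch. The three functions below are stubs
# standing in for the real colour sensor and motor control code.

def read_rgb():
    return (0, 0, 0)  # stub: replace with a real colour sensor reading

def drive(speed, turn):
    pass  # stub: replace with real motor commands

def stop():
    pass  # stub

def is_red(r, g, b):
    # Crude check: the red channel dominates the other two.
    return r > 2 * g and r > 2 * b

def minesweep():
    while True:
        if is_red(*read_rgb()):
            stop()  # sit on the square until it stops being red
            while is_red(*read_rgb()):
                time.sleep(0.1)
        else:
            # Bimble about: a random speed and turn for a short burst.
            drive(speed=random.uniform(0.3, 0.6),
                  turn=random.uniform(-0.5, 0.5))
            time.sleep(0.5)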

Remote Control

The bulk of the challenges will be done manually, so we’re going to need a suitable controller. Ideally I’d have a full waldo controller and VR headset as per my aspiration, but I need to be more realistic. As a very basic method of control I have an Xbox controller rigged up to a Raspberry Pi with a display. It’ll connect via a WiFi hotspot, likely my phone, and issue commands over TCP. With the analogue sticks of the Xbox controller I’ll be able to control the movement with one stick and the head (cameras) with the other, much like in a first person shooter. If arm control using the sticks proves too tricky, I can just preprogram a few positions: press one button to put the hand in front of the bot, another to close the hand, another to raise it slightly… It’d be all I need for the barrel challenge but wouldn’t be using the arms to the fullest.
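The stick-to-track mixing I have in mind is the standard arcade-drive mix; a quick sketch, assuming the joystick axes arrive in the range -1.0 to +1.0:

def arcade_mix(throttle, turn):
    """Mix one analogue stick into left/right track speeds.

    throttle, turn: stick axes in the range -1.0 to +1.0.
    Returns (left, right) speeds clamped to the same range.
    """
    def clamp(value):
        return max(-1.0, min(1.0, value))

    # Forward on the stick drives both tracks; pushing sideways speeds one
    # track up and slows the other down, turning the robot.
    return clamp(throttle + turn), clamp(throttle - turn)

# e.g. stick forward and slightly right:
# arcade_mix(0.8, 0.3) returns (1.0, 0.5)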

Conclusion

We have a plan, or at least the idea of a plan… Having a smaller set of more constrained targets is a good focus; now I just need to get over this damn lurgy and get some energy back!

How Not To Build A Robot

I’ve gained a lot of experience over the last few months with regards to Fusion 360, 3d printing, electronics and more besides. I thought I’d share some of those lessons.

As Complex As You Make It

The most important lesson, as with any project, is to have an idea of what you’re building from the start and how long you have to build it. Even with a relatively simple design there will be a lot of issues you’ll come across that take added time to figure out, doubly so if you’re learning as you go. My robot concept was complex to start with, more so than I expected, and I had a lot more to learn than I realised too. However long you think you need, add more, and if possible simplify your design.

In retrospect, more of a plan than a quick sketch wouldn’t have gone amiss…

I had a bunch of early wins. I used existing parts from an RC car to make early proofs of concept, which sped things up, and this gave me a little too much confidence. I was designing elements in Fusion 360 in isolation, assuming they’d work, and that burnt me a lot. I went through a number of different chassis prototypes in the early stages, and it wasn’t until later that I realised having more of a complete design done in CAD, to see how everything fitted together, would have saved an awful lot of time. I’m still not great at this but certainly getting better.

Longer term I need to learn how to do joints in Fusion 360 so that I can actually see how things fit together and what constraints there are.

I wasted a lot of time in what was effectively designing seven different robots; I couldn’t have got to where I am without doing it, though, so it’s a difficult balance to strike.

Seriously, Make A List. Then Check it Again…

I had the vague idea that I’d have the StereoPi up top in the head for stereo vision, which would give a lot of opportunities for computer vision too. Around the chassis would be a ring of sensors; ultrasonics were what I had in mind to start with, but though they’re simple to work with they’re quite large. I didn’t really know better, so that’s what I went with. Later on I learned of the VL53L0X, which is a really cheap lidar sensor and a lot smaller too. They have the quirk of sharing the same i2c address by default, so you need to use i2c multiplexors or have them connected in such a way that their addresses can be reset on first boot… More complexity!

Again, we’ve all got PhDs in hindsight, but having a more solid plan and spending more time on research and planning in the early stages would’ve paid off in the long run.

Burnout

Look. After. Yourself.

As I mentioned earlier, I had lots of early successes which gave me an awful lot of false confidence; as soon as the easy wins came and went and the real struggle began, the build got a lot more difficult, both technically and mentally. Those who know me or have been reading the blog for a while will know I suffer from Anxiety and Depression; they’re a bugger individually, but when they join forces they’re truly evil. A few weeks before I applied to enter PiWars my beloved cat, Willow, passed away. To say this was hard on me is an understatement, and coupled with the year tailing off, getting darker and colder, and things going from win after win to struggle after struggle, things got rough.

I tried to push through it, which was a big mistake, and then I made the best decision for the project: to take a breath and start again. With a lot of support from my girlfriend, the rest of the PiWars community, friends, family, and colleagues alike, I slowly got out of the funk while making slow but consistent progress. The Epic Rebuild Began.

Conclusions and Next Steps

I’ve learned a lot and come an awful long way in many regards, and though I’ve still a lot to do I’m in a better place and so is the robot. The next steps are to get the controller up and running and the robot drivable again.

In the next blog post I’ll talk about the plans for the challenges. As it stands I’ve almost got one arm done and only need to finish the hand, add a bunch of sensors, and add remote control. I have a minimum spec in sight and will at least be able to compete.

MacFeegle Prime Architecture Overview

It’s been a long while since the last post, more on that in an upcoming post titled “How Not To Build A Robot”, but I thought I’d give an update on the general architecture that is manifesting for MacFeegle Prime.

The Robot

The robot will have at its core a Raspberry Pi, in this case a Raspberry Pi 3 Compute Module hosted on a StereoPi board. This board is designed to take advantage of the CM’s (Compute Module’s) two camera ports and allows for GPU-boosted stereo vision.

Latest render of MacFeegle Prime: a robot with tank-style treads, a head and one arm.

For motor control, and for some of the servos, I’ll be using a RedBoard+ by RedRobotics. This has everything you’ll need for most robots, including a pair of 6A motor controllers, 12 servo headers, support for NeoPixels and, most importantly, great documentation and support from its creator, Neil Lambeth. This HAT also includes a power regulator, so it powers the StereoPi too, which is incredibly handy.

Connected to the Pi will be a Teensy 4 board, which will handle and collate data from the various sensors around the robot, along with an i2c servo board to control the arms, and potentially an NRF24 RF transceiver too.

The Controller

The controller will also be running on a Raspberry Pi, in this case a standard 3 Model B, connected to a 7″ touchscreen display. This will also have a Teensy 3.6 board which will be used to interface with various buttons and potentiometers, and possibly another NRF24, depending on whether control via a WiFi access point proves stable enough.

The sort of thing I have in mind is similar to these controllers for cranes and diggers.

PLUS+1® remote controls
https://www.danfoss.com/en/products/electronic-controls/dps/plus1-remote-controls/

I just love the industrial design of them, and with the complexity of all the arms and similar it seemed a valid excuse to build one… I have a pair of 4-axis joysticks; these have X and Y as you’d expect but can also rotate. The 4th axis is a button on the top, which I can use as a modifier or to toggle modes.

One thing I’d love to do is a waldo controller, similar to the one James Bruton developed for his performance robot but I’d prefer it to be smaller and I think that’s out of scope for the competition.

James Bruton’s puppeteering rig from his video

Better yet would be one similar to the controller Naomi Wu showed in her video about the Ganker robot. It attaches around her waist and allows her to control not only the arms but the motion of the robot too, as the “shoulders” of the controller are essentially mounted on a joystick.

Still taken from Naomi’s video

This controller is incredibly intuitive, coupled with stereo vision via Stereo Pi and an Android phone in a Google Cardboard headset I think it’d be an exceptional combo. Definitely one for future development!

Software

The software for this will be written in Python but will make use of the Robot Operating System (ROS). This isn’t an operating system but a collection of libraries and frameworks that allow components of a robot to work together, even if spread across multiple machines. I’ll be running this in Docker, as I’ve had pain trying to get it installed and there’s an image available already.

This will run on both robot and controller and the intention is that it’ll allow for control over WiFi as well as telemetry to the controller. If a WiFi access point, likely a phone in hotspot mode, isn’t stable enough for control I’ll fall back to the NRF24 transceiver option. Handily there is an Arduino library that allows for sending and receiving messages in a format suitable for ROS to parse so hopefully that’ll be fairly easy to swap out.

Summary

There is a lot of work to do. The hardware is mostly done and just needs mounting; only the end effectors (hands) need designing, along with a few tweaks to the head and the mount for the nerf gun.

I’m a professional software engineer by trade so I’m hoping that writing the code shouldn’t be too bad a job (DOOM! FORESHADOWING! ETC!) and I have the week before the competition off too to allow for last minute hacking…

Remote Python Development in Visual Studio Code

As I’m about to start development of MacFeegle Prime in earnest I’ve started looking at how best to do this. I’ve long been a fan of Visual Studio Code and figured it would probably have a solution to my problem. Turns out it did!

I’ve used JetBrains WebStorm in the distant past and one of its really handy features was remote development. You could modify HTML, CSS and JavaScript locally and it would automagically deploy that code to your server, remote debugging included! It turns out VS Code has something similar.

This is made possible using the Remote Development – SSH extension; follow the steps in the link and you’ll be set up in no time.

One issue I faced was that I couldn’t get the ssh-agent service to run in Windows; I solved this using this solution as a base. In the end I opened services.msc and set the ssh-agent service to start automatically.

To put an SSH key on the Pi, use this as a guide:
Add Public SSH Key to Remote Server in a Single Command (howtogeek.com)

MacFeegle Prime – Overview

This is definitely going to change with time but this is the current vague, but slowly solidifying, plan for MacFeegle Prime, my competitor for PiWars 2020!

Concept

This robot is heavily inspired by Johnny Five from Short Circuit. To that end, it’ll be tracked and have a pair of arms, a head, and a shoulder-mounted nerf cannon. There will have to be lasers and blinkenlights in there somehow too! The demeanour and style of the robot will be very heavily influenced by the Wee Free Men from Terry Pratchett’s Discworld series… No, I’m not sure what that’s going to look like either, but I’m looking forward to finding out!

Cover of Wee Free Men, pretty sure this is fine under fair use…
Original Designer, Paul Kidby
By Rik Morgan (Rik1138, http://www.handheldmuseum.com) – http://props.handheldmuseum.com/AuctionPics/Johnny5_03.jpg  CC BY-SA 1.0, https://commons.wikimedia.org/w/index.php?curid=2693174

Hardware

Unsurprisingly this will primarily run on a Raspberry Pi, which will take data from the controller, sensors, and cameras, then send appropriate control signals to the motors, servos and lights.

As he has a head, I was planning on using a pair of cameras. The “simple” option is just to stream these over WiFi to a phone and use a Google Cardboard headset to get a 3D live stream from the robot’s perspective. Longer term I’ll be using OpenCV or similar to generate a depth map to allow for autonomous navigation and object detection. I was thinking of using two Pi Zeros with cameras attached; they could be dedicated to rescaling and streaming at different qualities, a higher quality stream to an HMD (head mounted display) or a lower quality one to another Raspberry Pi that could run OpenCV on it. In the end I went with the StereoPi, as it’s designed for this very task! To that end, I’ve the Deluxe Starter Kit on order, which includes a Raspberry Pi 3 Compute Module and a pair of cameras with wide angle lenses.
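As a taste of the depth map idea, OpenCV’s block matcher will do a basic job. A rough sketch; the camera indices and matcher parameters are guesses to be tuned, and the real frames would come from the StereoPi’s camera pipeline rather than VideoCapture:

# Rough sketch of generating a disparity (depth-ish) map with OpenCV.
# Camera indices and matcher parameters are placeholders to be tuned.
import cv2

left_cam = cv2.VideoCapture(0)
right_cam = cv2.VideoCapture(1)

stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)

ok_left, left = left_cam.read()
ok_right, right = right_cam.read()
if ok_left and ok_right:
    gray_left = cv2.cvtColor(left, cv2.COLOR_BGR2GRAY)
    gray_right = cv2.cvtColor(right, cv2.COLOR_BGR2GRAY)
    # Closer objects produce larger disparity values.
    disparity = stereo.compute(gray_left, gray_right)
    normalised = cv2.normalize(disparity, None, 0, 255, cv2.NORM_MINMAX)
    cv2.imwrite('disparity.png', normalised.astype('uint8'))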

Motors are controlled via an Arduino with a motor shield, in this case an Adafruit Feather and motor wing, connected to a pair of PiBorg MonsterBorg motors, and these are beasts! I started off with motors from an RC car and they rapidly hit the limit of what they could move. The robot already weighs 1.6 kg…

For the arms I’ll be using servos, a whole bunch of them! I’m aiming for 7DOF arms, which is the same as humans, with the shoulder servos being more powerful than the others as they’ve more weight to move around. The head and nerf cannon will also have a pair of servos for moving them around. The torso will need to be actuated too, but I’m probably going to use a motor for that as it’ll lift the whole weight of the robot from the waist up. To control all of these I’ve an Adafruit 16-channel servo HAT; I may need another…
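Driving that HAT from Python looks pleasantly simple with Adafruit’s ServoKit library. A quick sketch; the channel numbers and angles are placeholders until the wiring is finalised:

# Sketch of driving a few of the arm/head servos through the Adafruit
# 16-channel servo HAT. Channels and angles are placeholders for now.
from adafruit_servokit import ServoKit

kit = ServoKit(channels=16)

kit.servo[0].angle = 90   # shoulder pitch to its midpoint
kit.servo[1].angle = 45   # shoulder roll
kit.servo[8].angle = 120  # head pan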

For sensors I’ve a load of ultrasonic sensors, inertial measurement units and optical sensors. The ultrasonic sensors will be good mounted around the robot to get a decent set of returns to create a map from, the IMUs will be good for checking whether the robot is level and where it’s moving, and the optical sensors should be handy for line following. These will almost certainly be fused together via an Arduino. This means that the real-time bits can be done on dedicated hardware and we don’t have to worry about timing issues on the Pi, as it isn’t real time.

Software

I’m expecting to use Python for the lion’s share of the code for this; the Pi, StereoPi, and the servo HAT I’m using have excellent support for it and I’ve a bit of experience with it already. The Arduinos run C, and if I need to write something for the phone to enable a head-up display I’m going to use the Unity game engine, which uses C#. It’s what I use at work and I know both Unity and C# very well.

Construction

I’m using an aluminium plate for the base of the chassis and 3D printing custom components to mount to it, currently all hot glued in place in the name of rapid prototyping/laziness…

Timescale

I’m close to having the drive part of this done, at least the first iteration, and I’m expecting that to be sorted over the next few weeks. There will no doubt be improvements over time so it’ll not be a one shot thing. For the torso, arms and head I’m aiming to get the first iteration of those sorted by the end of November. This will give me plenty of time to improve things and get the software written for next March too.

That’s it for now, it’s mostly a brain dump while I work on things in more detail.