Disclaimer: this is not specifically for a commercial product, but various things I design sometimes get commercialized. I mention this so that you may decide whether you want to weigh in. If it’s commercialized, I will probably make very little money but a bunch of university students may get a neat STEM program in the countryside :D
That out of the way: I’ve designed some boards for a Wi-Fi controlled robot with mecanum wheels – 4 independent motor drivers, one per wheel, allow omnidirectional motion. It’s built around a Pi Pico W, 4 SOIC-8 9110S motor drivers, and some buck/boost converters to give the system a 5V and a 12V line. It’s very basic, mostly made to be cheap. Here’s a photo:
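For anyone curious how the four independent drivers turn into omnidirectional motion, here’s a minimal sketch of standard mecanum wheel mixing. This is the textbook sign convention (x forward, y strafe left, positive rotation counter-clockwise), not code from my firmware, and the function name is just mine:

```python
def mecanum_mix(vx, vy, wz):
    """Map body velocities (forward vx, strafe-left vy, CCW rotation wz)
    to the four wheel speeds: front-left, front-right, rear-left,
    rear-right. Standard mecanum mixing with rollers in an X pattern."""
    fl = vx - vy - wz
    fr = vx + vy + wz
    rl = vx + vy - wz
    rr = vx - vy + wz
    # Scale down uniformly so no wheel command exceeds full scale (1.0).
    m = max(1.0, abs(fl), abs(fr), abs(rl), abs(rr))
    return fl / m, fr / m, rl / m, rr / m
```

E.g. `mecanum_mix(0, 1, 0)` spins the front-left and rear-right wheels backward and the other two forward, which is the classic strafe pattern.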
Right now it just receives UDP commands (from a little app written in Godot) and activates the motors in different combinations – very “hello world”. I’m planning to add some autonomy to move around pre-generated maps, solve mazes, and so on.
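For a sense of how small the command traffic can be, here’s a hypothetical fixed-size packet layout and parser – this is not my actual Godot protocol, just an illustration of the kind of 6-byte command the Pico could unpack:

```python
import struct

# Hypothetical 6-byte packet: command ID (uint8), flags (uint8),
# then two signed 16-bit little-endian parameters, e.g. speed
# and duration in milliseconds for a "drive" command.
CMD_DRIVE = 0x01

def parse_packet(data: bytes):
    """Unpack one command packet; raises on malformed input."""
    if len(data) != 6:
        raise ValueError("bad packet length")
    cmd, flags, a, b = struct.unpack("<BBhh", data)
    return cmd, flags, a, b
```

On the Pico side this would sit behind a UDP socket’s `recvfrom`; the point is just that one datagram per command keeps parsing trivial.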
I have foolishly used 2-pin JST connectors for the motors, so using motors with rotary encoders would be a pain without ordering new boards. I’ll probably fix that in a later board revision or just hack it in. Also the routing is sloppy and there’s no ground plane. It works well enough for development and testing though :D
What I’m thinking about right now is how to let the robot position itself in a room effectively and cheaply. I was thinking of adding either a full LiDAR or building a limited LiDAR out of a servo motor and two cheap laser ToF sensors – e.g. one pointed forward, the other backward, so a 90-degree sweep of the servo covers both arcs. Since the LiDAR does not need to be fast or sweep continuously, I am leaning toward the latter approach.
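Converting such a sweep into points is just polar-to-Cartesian math. A minimal sketch, assuming the two sensors point in exactly opposite directions along the servo axis and distances come back in millimetres (the function and data shapes are mine, not from the build):

```python
import math

def sweep_to_points(readings):
    """Convert a servo sweep into 2D points in the robot frame
    (x forward, y left, millimetres). `readings` is a list of
    (servo_deg, front_mm, back_mm) tuples; the back sensor points
    opposite the front one, so each servo position yields up to
    two points. Failed readings (None) are dropped."""
    pts = []
    for deg, front, back in readings:
        a = math.radians(deg)
        if front is not None:
            pts.append((front * math.cos(a), front * math.sin(a)))
        if back is not None:
            pts.append((-back * math.cos(a), -back * math.sin(a)))
    return pts
```

A ~45-point sweep at 2-degree steps would give a 90-point cloud, which is tiny to ship over UDP.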
Then the processing is handled remotely: a server requests that the robot do a LiDAR sweep, the robot sends a minimal point cloud back, and the server estimates the robot’s current location and replies with instructions to move in a direction for some distance. This is probably where the lack of rotary encoders is going to hurt, but for now I’m planning on just pointing the forward laser ToF sensor towards a target and giving the instruction “turn or move forward at static speed X until the sensor reads Y”, which should be pretty easy for the MCU to handle.
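That “drive until the sensor reads Y” instruction is just a poll loop on the MCU. A minimal sketch with the sensor read and motor command injected as callables so the logic is testable off-hardware (on the Pico these would be the actual ToF read and PWM update):

```python
def drive_until(read_mm, target_mm, step):
    """Drive forward at a fixed speed until the forward ToF sensor
    reads <= target_mm. `read_mm` returns the current distance in mm;
    `step` holds the motors at the fixed speed for one control tick.
    Returns the number of ticks taken; the caller stops the motors."""
    ticks = 0
    while read_mm() > target_mm:
        step()
        ticks += 1
    return ticks
```

A real version would also want a timeout and a sanity check for dropped sensor readings, but the control idea is this simple.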
I’m planning to control multiple robots from the same server. The robots don’t need to be super fast.
What I’m currently wondering is whether my approach really needs rotary encoders in practice. I’ve heard that mecanum wheels have enough mechanical slippage that drive-wheel odometry ends up inaccurate, and designers often add a separate set of unpowered wheels just for position tracking. I don’t want to add more wheels in this way, though.
On the other hand, it would probably be easier to tell the MCU “move forward X rotary encoder pulses at a velocity defined by Y pulses per second, then check position and correct at a lower speed” than to use a pure LiDAR approach – even if rotary encoders don’t give me accurate absolute position, on small time scales they give me good feedback for controlling speed. I could possibly even send a fairly complex series of instructions in one go, making the communications efficient enough to eliminate the local server and control a ton of robots from a cloud VPS or whatever.
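That two-phase move (fast approach, slow creep to the target count) can be sketched in a few lines. This is an illustration with hypothetical hooks, not real firmware: `read_count` stands in for the encoder counter, `set_speed` for the wheel command, and the 90% handoff point is an arbitrary assumption:

```python
def move_pulses(target, fast, slow, read_count, set_speed):
    """Two-phase move: run at `fast` pulses/s until ~90% of the
    target pulse count is reached, then creep at `slow` for the
    remainder, then stop. Hooks are injected so this runs anywhere:
    `read_count` returns the cumulative encoder count, `set_speed`
    commands the wheel speed (0 stops)."""
    handoff = int(target * 0.9)  # switch point, arbitrary choice
    set_speed(fast)
    while read_count() < handoff:
        pass                     # real code would yield/sleep here
    set_speed(slow)
    while read_count() < target:
        pass
    set_speed(0)
```

The nice part is that a whole path could be queued as a list of `(target, fast, slow)` tuples in one UDP message, which is what would make the cloud-server idea plausible.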
Anyone have some experience with encoders + mecanum wheels who can offer a few tips? At this stage the project doesn’t have clear engineering goals – it’s mostly an academic exercise. I’ve read that using a rigid chassis and minimizing the need for lateral motion can reduce slippage, but reading through a few papers didn’t get me any numerical indication of what to expect.
Ok yeah – I’m leaning toward relying more on the laser ToF than the rotary encoders.
An algorithm of “pick a LiDAR point and drive toward it” does sound like the simplest option. Thanks for weighing in!
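For the record, the point-picking step could be as dumb as “head for the most open direction”. A minimal sketch (my own toy heuristic, not a real planner) that takes the point cloud in the robot frame and returns a bearing and distance to feed the drive-until-sensor-reads-Y loop:

```python
import math

def pick_target(points):
    """Pick the farthest point in the cloud (the most open direction)
    and return (bearing_deg, distance_mm). Points are (x, y) in the
    robot frame with x forward; bearing is CCW from straight ahead."""
    x, y = max(points, key=lambda p: math.hypot(p[0], p[1]))
    return math.degrees(math.atan2(y, x)), math.hypot(x, y)
```

A real version would want to avoid targets behind thin gaps, but as a first “hello world” of autonomy it matches the turn-then-drive primitive the MCU already has.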