Robot Programming Made Simple (Update)

The robot buggy was tested at Rix Inclusive Research to see how well it worked in principle. Although the controller (designed by Levi) communicated well with the robot, and commands were successfully sent to the buggy via Bluetooth, the buggy was reluctant to move in a straight line.

No two DC motors perform quite the same, and one was clearly stronger than the other, causing the buggy to veer right when it should have moved straight.

To counter the effect of the dissimilar motors, a pair of trimmers (10K pots) was added to apply a small amount of ‘trim’ to each motor, speeding it up or slowing it down as required. Although partially successful, it was still quite difficult to get consistent straight-line performance.
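The trim adjustment can be pictured as a small offset applied to each motor’s PWM duty. The sketch below is only an illustration of the idea, not the actual firmware: the names and the ±10% trim range are assumptions.

```python
def trimmed_speeds(base_pwm, left_trim, right_trim, max_pwm=255):
    """Apply per-motor trim (e.g. from the 10K pots) to a base PWM value.

    left_trim / right_trim are normalised pot readings in -1.0 .. +1.0;
    the +/-10% adjustment span is an illustrative assumption.
    """
    span = 0.10 * max_pwm                      # maximum trim offset per motor
    left = base_pwm + left_trim * span
    right = base_pwm + right_trim * span
    clamp = lambda v: max(0, min(max_pwm, int(round(v))))  # keep within PWM range
    return clamp(left), clamp(right)
```

With zero trim both motors get the base duty; a positive trim on one side nudges that motor faster, and values are clamped so trim can never push the duty outside the valid PWM range.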

To persuade the buggy to move in a straight line, an LSM9DS1 accelerometer/gyro module was added to help constrain the movement. It calculates yaw and corrects any drift away from a straight line. This is looking promising, although some final tuning is required.
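The yaw correction amounts to a simple proportional controller: speed one motor up and slow the other down in proportion to the yaw error. A minimal sketch of that idea follows; the gain, units (degrees) and sign convention are assumptions that would depend on the actual motor wiring.

```python
def yaw_correction(target_yaw, current_yaw, base_pwm, kp=2.0, max_pwm=255):
    """Proportional straight-line correction from a gyro yaw reading.

    Returns (left_pwm, right_pwm). The sign convention (which motor
    speeds up for a given drift) is an assumption for illustration.
    """
    error = target_yaw - current_yaw   # how far we have drifted, in degrees
    adjust = kp * error                # proportional correction term
    clamp = lambda v: max(0, min(max_pwm, int(v)))
    return clamp(base_pwm - adjust), clamp(base_pwm + adjust)
```

On course, both motors run at the base speed; any drift produces an opposing differential that steers the buggy back, with the correction strength set by `kp` (the tuning still to be done).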

Another upgrade in the pipeline is the addition of wheel encoders. Due to the space limitations of the existing wheels and motors, the encoders needed to be reflective. To enable this, two disks of acrylic were cut and engraved so that an IR LED would reflect from the shiny surface (painted with a white marker) and not reflect where the acrylic was engraved (in between the white marks):

The encoders themselves were 3D printed from PLA, and each contains an IR LED and phototransistor set at 45 degrees:

These have yet to be tested, but the idea is to get better control of the forward motion. The buggy needs to move a fixed distance each time a forward command is encountered. Currently, the forward movement is controlled using a simple timer. It’s not very accurate!
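Once the encoders work, the timer could be replaced by counting encoder ticks until a target distance is reached. A rough sketch of the conversion, using illustrative wheel dimensions and mark counts rather than the buggy’s real ones:

```python
import math

def ticks_for_distance(distance_mm, wheel_diameter_mm, marks_per_rev):
    """Convert a target travel distance into an encoder tick count.

    marks_per_rev is the number of reflective marks on the acrylic
    disc per wheel revolution -- an assumed figure for illustration.
    """
    circumference = math.pi * wheel_diameter_mm   # distance per revolution
    mm_per_tick = circumference / marks_per_rev   # distance per encoder tick
    return round(distance_mm / mm_per_tick)
```

The motors would then run until the tick count is reached instead of until a timer expires, making the fixed-distance moves repeatable regardless of battery level or motor strength.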

Posted in Rix Inclusive Research, Robot Programming Made Simple | Leave a comment

Robot Programming made Simple

We are developing a simple, programmable robot buggy designed to make robotics accessible to a much wider audience. The core idea is to replace traditional text-based coding with tangible, tile-like objects that represent commands and actions. By arranging these tiles in sequence, users can create programs without needing prior programming knowledge. This approach lowers barriers to entry, supports inclusive learning, and empowers individuals, especially those with learning disabilities or limited technical experience, to engage creatively with robotics. Our goal is to transform programming into an intuitive, hands-on activity that fosters participation, exploration, and confidence.

The buggy is a modified Cybot (which stems from around 2001) and adapted to use an Arduino Uno R4 Wifi microcontroller. The big grey box underneath the Uno is the rechargeable battery:

Modified Cybot

This project aims to democratize robotics education by creating a simple, programmable robot buggy that can be controlled using tangible, tile-like objects instead of traditional text-based coding. The core concept is to design a physical programming system where each tile represents a specific command or action, such as ‘move forward’, ‘turn left’ or ‘pause’. Users can arrange these tiles in sequence to form a program, which the robot interprets and executes. This approach removes the cognitive and technical barriers often associated with conventional programming, making robotics accessible to individuals with diverse abilities, including those with learning disabilities or limited digital literacy.

The system will integrate a robust yet intuitive interface that translates tile arrangements into executable instructions for the robot. By leveraging inclusive design principles, the project seeks to foster creativity, problem-solving, and confidence among users who might otherwise be excluded from STEM activities. The tangible programming paradigm encourages hands-on engagement, collaboration, and experimentation, transforming programming into an approachable and playful experience.

Ultimately, this project envisions a platform that can be used in educational settings, community workshops, and maker spaces to promote equitable participation in technology. By bridging the gap between physical interaction and computational thinking, the robot buggy serves as a stepping stone toward a more inclusive future in robotics and programming.

The body of the Robot Buggy is based loosely upon the robot ‘tortoises’ by Grey Walter (with a little modification to the body shape and the addition of a speaker):

First Steps

The first step in this process is a robot that receives simple forward, left, right and stop commands via a hand-held Bluetooth controller. Pressing a button on the controller moves the robot a fixed distance. This approach has some advantages, in that it provides:

Immediate Feedback: Pressing a button and seeing the robot move a fixed distance creates a clear cause-and-effect relationship, which is fundamental to understanding programming logic.

Discrete Actions: Each button corresponds to a single command (forward, left, right, stop), which mirrors the idea of individual instructions in a program.

Sequencing: Even though the user is pressing buttons manually, they are effectively creating a sequence of actions. For example, forward, then turn, then stop. This helps build the mental model of how instructions combine to form a program.

Scaffolding: Starting with direct control via Bluetooth provides a low-barrier entry point. This can then be followed by providing tiles that represent these same commands, reinforcing the concept of sequencing without overwhelming the learner.

This approach is very much aligned with constructivist learning principles – learners first experience the concept physically, then abstract it into symbolic representation (tiles), and eventually into digital or text-based programming.

Bluetooth Control

Not the easiest thing to program, especially when using two distinctly different microcontrollers with two different programming languages and different Bluetooth libraries. The first problem was figuring out which device, in Bluetooth terminology, takes the role of ‘central’ and which should be ‘peripheral’. A little research on the web gave some answers; this is what I found out.

In Bluetooth Low Energy (BLE), the roles of central and peripheral are about who initiates the connection and who advertises availability:

Peripheral: This device advertises its presence and waits for a central to connect. Typically, peripherals are simpler, low-power devices like sensors.
Central: This device scans for peripherals and initiates the connection. Centrals usually have more processing power and act as the ‘controller’.

So the robot itself will act as the ‘central’ device, whilst the controller will be designated as a peripheral.

The controller is a simple four-button circuit board connected to a Raspberry Pi Pico 2 W and powered by a tiny LiPo battery with a USB charger:

It communicates with the Robot Buggy by sending simple integer values as instruction codes: 1: Turn right, 2: Go forward, 3: Stop, 4: Turn left. We’ll use a minimalistic protocol where each command is a single character sent over Bluetooth:

2 → Move forward a fixed distance (e.g., 10 cm)

4 → Turn left by a fixed angle (e.g., 90°)

1 → Turn right by a fixed angle (e.g., 90°)

3 → Stop immediately

This keeps communication lightweight and easy to parse on both devices.
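On the robot side, parsing this protocol needs little more than a lookup table. The sketch below shows the idea; the function and table names are hypothetical, not the actual robot code.

```python
# Hypothetical dispatch table mirroring the single-character protocol.
COMMANDS = {
    "2": "forward",   # move a fixed distance
    "4": "left",      # turn left by a fixed angle
    "1": "right",     # turn right by a fixed angle
    "3": "stop",      # stop immediately
}

def handle(byte_received):
    """Map one received byte onto a robot action name.

    Unknown bytes are ignored (returned as None) so line noise on the
    Bluetooth link cannot trigger movement.
    """
    return COMMANDS.get(byte_received.decode(errors="ignore"))
```

Because each command is a single character, there is no framing to worry about: every byte that arrives is either a complete instruction or safely discarded.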

The simplified robot interface will be trialled by non-programmers, who will construct simple sequences of instructions to move the robot around an obstacle course from a starting location to a given target location.

Posted in Robot Programming Made Simple | Leave a comment

RixTalk Update

The new version (technically the next iteration, as it is far from ‘finished’) of the talking ruler is almost ready. New parts have been printed, the stripboard has been soldered together and connectors added to the various parts.

As there needs to be a wire for each of the touch points on the ruler (one every 10mm), 16 different ports were required on the Pico, and therefore 16 single-core wires soldered onto the board:

I actually removed the two connectors at the end of the board and replaced them with ones which pointed upwards, as there was not enough space to insert both plugs. I also added connectors for the speakers. It was quite a challenge adding the plugs, as the speaker wires are only about as thick as a few human hairs:

I photographed the set of components that need to go inside the housing, which you can see will be a challenge to fit – an exercise in shoe-horning:

The new battery and charger unit have been added to the housing, as well as the on-off switch:

The on-off switch:

Once the new housing has finished printing, it’s time for assembly and to see if it all fits together. Note that I chose multi colour filament to make the numbers stand out:

The assembly of the talking ruler was a little tricky, given the number of wires and components that had to be crammed inside:

The working version is shown below:

You might have noticed that the playback sometimes repeats when holding the pencil onto the contact. This is a limitation of the software – the next version (if there is one) will use a different MP3 player module and be more controllable.
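One possible software fix for the repeat-on-hold behaviour is edge detection: trigger playback only on the transition from ‘not touched’ to ‘touched’, rather than whenever contact is present. This is a sketch of that idea, not the current firmware:

```python
class EdgeTrigger:
    """Fire once per touch: play audio on the released->touched edge only.

    update() is called each time the contact is sampled; it returns True
    only on the first sample of a new touch, so holding the pencil on a
    contact cannot retrigger playback.
    """
    def __init__(self):
        self.was_touched = False

    def update(self, touched_now):
        fire = touched_now and not self.was_touched
        self.was_touched = touched_now
        return fire
```

Whether this is workable depends on how much control the current MP3 player module allows, which is why the next version plans to switch modules instead.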

The next thing to do is find a collection of suitable candidates to try the ruler out and see how well it works, get feedback on possible improvements and write up. If you’d like to participate, or know of someone with a sight impairment that would like to participate, then please do contact me. The user tests will be held in East London (at UEL) and also in Norfolk UK.

Posted in Hardware | Leave a comment

Wheelchair Users Experience VR

We ran our first VR workshop specifically for wheelchair users this week, giving two of our co-researchers a taste of virtual reality. You can see one of our researchers below using the Meta Quest Pro to explore one of the spaces:

The space that she looked at first was a simple introductory space in Meta Quest Horizon Worlds:

https://youtube.com/shorts/sERSt8JaApU

The VR workshops are guided by multiple, overlapping goals: exploring virtual spaces and environments that wheelchair users could not easily reach in the physical world, examining some of the issues with the standard controllers supplied with VR headsets, and finding ways of making VR more inclusive.

Exploring virtual Japan

Wearing the headsets did not cause too much concern (although both are fairly heavy), as they are adjustable and reasonably padded. However, the controllers appeared difficult to hold, and the buttons and joysticks hard to manipulate, particularly for users with limited manual dexterity. In the video below, you can see that the controls on the VR controller need some improvement, something that we are currently working on.

After we had looked at a few of the virtual worlds (mostly from the set of Horizon Worlds installed on the headset), we asked our researchers: “If you could do anything in a virtual space, what would you choose to do?” The responses were interesting. One said she’d like to be ‘Michael Jackson’s hat for a day’ and experience what he sees on stage; the other was a little more down to Earth, wanting to experience being in Kenya and possibly a VR cooking experience, which she talks about in the video below:

Posted in Uncategorised | Leave a comment

New RixTalk Talking Ruler

After creating the initial prototype of the talking ruler and considering some of the early feedback given to me by people who have seen the device work, the new version is now on its way. This is now a self-contained unit that includes speakers, audio player, microcontroller and battery.

The first version of the talking ruler worked quite effectively and produced a good clear sound, but suffered from having external wires, an external microcontroller and audio player:

Old version of talking ruler.

As the image above shows, the first version of RixTalk used an Arduino Uno R3 together with the brilliant Adafruit MP3 player shield (super audio quality). A standard HB pencil was wired to the Uno via a simple 3D printed cap, and the contact points on the ruler (those touched by the pencil) were wired to the analog ports. It worked well, but was rather cumbersome with all the external wires and the external microcontroller.

The new version of RixTalk (let’s call it RixTalk2) is self-contained, with the speakers integrated into the ruler itself, and all electronics contained within a housing that is screwed to the ruler. The new version uses digital ports on the Pico, as there were not enough analog ports available. All of the components are 3D printed using a Prusa Mk4S.

Note that the first notch on the ruler is a locater for 0 mm. The remaining notches are either large (every 10mm) or small (between each large notch). The large notches represent lengths at 10mm, 20mm, 30mm, etc., and play audio sounds for each one. The small notches are for lengths 5mm, 15mm, 25mm, etc., but do not play sounds.
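The large/small notch rule can be expressed as a small mapping from notch position to audio track. The sketch below assumes notches are indexed from the 0 mm locater at 5mm spacing and tracks are numbered 1, 2, 3, …; the real track numbering on the device may differ.

```python
def notch_audio_track(notch_index):
    """Map a notch (counted from the 0 mm locater, 5 mm apart) to a track.

    Large notches sit at 10 mm multiples and map to tracks 1, 2, 3, ...;
    small notches (5, 15, 25 mm ...) and the 0 mm locater are silent and
    return None. Track numbering is an assumption for illustration.
    """
    length_mm = notch_index * 5
    if notch_index > 0 and length_mm % 10 == 0:
        return length_mm // 10   # e.g. 30 mm -> track 3
    return None
```

This keeps the firmware logic trivial: when a contact is touched, look up its track; if the lookup returns None, nothing is played.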

Plastic components of talking ruler together with speakers.

Internally, the microcontroller is a standard Raspberry Pi Pico Mk2 (i.e. not wireless), and MP3 decoding and playback are taken care of by a Fermion MP3 module, controlled using simple AT (ATtention) commands via one of the UARTs on the Pico:

Fermion MP3 player module.
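Driving the Fermion then comes down to writing ASCII AT command strings to the UART. The helper below sketches the pattern; the exact command names used in the comments (e.g. PLAYNUM, VOL) are assumptions that should be checked against the module’s datasheet.

```python
def at_command(name, value=None):
    """Build one ASCII AT command string for an MP3 module's UART.

    Follows the common 'AT+NAME=VALUE' pattern terminated by CR-LF.
    The specific command names supported by the Fermion are assumptions
    here -- verify them against the module's documentation.
    """
    body = f"AT+{name}" if value is None else f"AT+{name}={value}"
    return (body + "\r\n").encode("ascii")

# On the Pico (MicroPython) this might be used roughly as:
#   uart.write(at_command("PLAYNUM", 3))   # play track 3 (assumed command)
#   uart.write(at_command("VOL", 20))      # set volume    (assumed command)
```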

The Fermion is connected to a pair of 2W speakers that are located at either end of the ruler, and is controlled directly by the Pico. Both Pico and MP3 player are powered using a single 3.7V 150 mAh LIPO battery that is connected to an Adafruit lipo charger module:

Electronics inside talking ruler.

The Pico and Fermion sit side-by-side on a piece of stripboard (not the nice little Pi Hut board above, which was too small), which is used to connect everything together.

The audio was generated using an AI text-to-speech engine with an English-sounding female voice. Each number was generated individually and stored as an MP3 file on the Fermion player.

The image below shows the RixTalk talking ruler assembled, together with a 3B pencil. A 3B is being used as it gives better conductivity.

RixTalk talking ruler with pencil attached.

The talking ruler has yet to be tested out with end users, but that should not be too far away.

Posted in Hardware | Leave a comment

Hackathon25 at Rix Inclusive Research

The next Hackathon (Hackathon25) at Rix Inclusive Research will take place on June 18th 2025, and already the planning is well underway with ideas floating around with respect to how it will run, how it will be enabled and the types of project that we are interested in considering this year. The hackathon will focus on generating ideas and solutions for projects within the very broad area of Healthy Living.

A suggestion made by one member of the Rix team (Craig) was to use ‘Makedo’ kits (https://www.make.do/) to help construct prototypes out of cardboard. These are kits aimed at children (or adults!) that provide you with tools and accessories for encouraging creativity and imaginative play, making it easy to build with cardboard. We have been lucky to have Makedo as one of our sponsors this year; they have very kindly donated the kit to use at the Hackathon.

Makedo

The kits, which consist of tools and resources for building, cutting, screwing together, etc., cardboard models, could work really well for the Hackathon, giving people the opportunity to construct something physical from simple materials, and what could be simpler than cardboard?

The website also has a number of additional resources, such as a 3D-printable hinge, which I immediately decided to print. The image below shows the hinge, 3D printed on a Prusa MK4S. It works straight from the printer and requires no assembly.

This worked really well, so I decided to print a few more to add to the collection. There will be more on the way for the actual Hackathon, but I can only print 10 at a time.

One of the great things about these 3D printed parts (like many 3D printed items) is that they work straight off the printer, without assembly:

Posted in Rix Inclusive Research | Leave a comment

What are NFC Tags?

NFC (Near Field Communication) Tags are tiny stickers, cards, and keychains that can store and send information to a phone when tapped. They don’t need to be charged because they get power from the phone when they come close together.

NFC tags are used for many things:

  • Quick Actions: Tap your phone on a Tag to turn on WiFi, open an app, or play music
  • Contactless Payments: Used in credit cards and Apple Pay/Google Pay to make purchases
  • Smart Business Cards: Tap a Tag to instantly share your contact information
  • Access Control: Used in keycards for doors and event passes to enter restricted areas
  • Home Automation: Used to turn on/off lights, adjust thermostat, control fans, etc.

How to add data to an NFC Tag:

  • Step 1: Purchase an NFC sticker, card, or keychain online
  • Step 2: Download an NFC app (e.g. NFC Tools)
  • Step 3: Choose what you want the Tag to do (share contact details, turn on WiFi, send message, etc.)
  • Step 4: Write the data to the Tag – Hold your phone close to the Tag, press “Write” and the phone will save the info onto the Tag
  • Step 5: Test it – Tap your phone on the Tag to see if it works
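Under the hood, Step 4 writes an NDEF record to the tag. For the curious, the sketch below shows how a short NDEF Text record (the kind an app like NFC Tools writes) is laid out in bytes, following the NDEF short-record format; it assumes the text and language code fit in a single short record.

```python
def ndef_text_record(text, lang="en"):
    """Encode one short NDEF Text record as raw bytes.

    Assumes the combined language code + text payload fits in 255 bytes
    (the short-record limit). Layout follows the NDEF Text Record spec.
    """
    payload = bytes([len(lang)]) + lang.encode("ascii") + text.encode("utf-8")
    header = bytes([
        0xD1,          # flags: MB, ME, SR set; TNF = 0x01 (well-known type)
        0x01,          # type length: 1 byte
        len(payload),  # payload length (short record, single byte)
    ])
    return header + b"T" + payload   # type 'T' marks a Text record
```

The app hides all of this; tapping “Write” encodes your chosen action into records like this one and stores them in the Tag’s memory.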

Posted in Uncategorized | Leave a comment

Speech Recognition Engine for Controlling RixBot

Finally, I managed to find some spare time today to connect up the audio system to a Raspberry Pi 4, which includes a simple speaker output and MAX9814 condenser mic input module. The purpose of this setup is to allow the voice recognition software (VOSK) to be installed on the Raspberry Pi, and respond to simple prompts. It took a huge amount of tweaking (and a lot of head scratching, cups of tea and chocolate) to finally resolve all the conflicts, but it seems to work, and the great thing about it is that it works offline.

The screenshot below shows a simple test that illustrates the software responding to voice prompts after I spoke several simple commands into the microphone:

I was impressed that the mic module would pick up on commands from 2 metres away, and did not make any mistakes.
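VOSK delivers each final result as a small JSON object with a "text" field; turning that into a robot command can be a simple keyword lookup. The sketch below illustrates the idea; the action table and names are hypothetical, not the actual RixBot code.

```python
import json

# Hypothetical phrase -> action table for the RixBot voice commands.
ACTIONS = {"forward": "FWD", "back": "REV", "left": "LEFT",
           "right": "RIGHT", "stop": "STOP"}

def command_from_result(vosk_json):
    """Extract the first known command word from a VOSK final result.

    VOSK results arrive as JSON strings like {"text": "go forward"};
    unknown phrases return None so stray speech cannot move the robot.
    """
    text = json.loads(vosk_json).get("text", "")
    for word in text.split():
        if word in ACTIONS:
            return ACTIONS[word]
    return None
```

The offline model does the hard part; this last step just maps the recognised phrase onto whatever the motor controller expects.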

The next stage of this work will be to attach the motor controller to the Raspberry Pi and install the whole module into the RixBot body. The body of the RixBot is based upon a RealRobots ‘Cybot’ body, which was originally sold as a kit through a weekly publication (see below). This is an excellent basis for a robot, as it already has geared motors, battery compartments and places for light and ultrasonic sensors built into it.

Once the Raspberry Pi has been mounted inside the robot body, and the motor controller installed, then it is ready for testing at Rix with our co-researchers.

Posted in Hardware | Leave a comment

Is Artificial Intelligence (AI) Inclusive? Workshop at Rix (12/02/2025)

Kate introducing the Generative AI workshop.

This is the second Generative AI workshop that we have hosted aimed at finding out more about the accessibility and usability of Generative AI software. The workshop developed ideas that were explored during our first workshop, presented as part of ‘Creative inclusive research explorations of intelligent technologies/Artificial Intelligence (AI)’ at the University of Ravensbourne MeCCSA Symposium.

At the Symposium, delegates worked in diverse inclusive teams and were given the chance to try out different AI software engines such as Stable Diffusion, in this instance hosted by NightCafé. So that participants could experience the AI software for themselves, we gave no specific goal and asked people to experiment and enjoy using their imagination to see what could be created. For this second workshop, our co-researchers were given the task of generating images for a specific theme: health and wellbeing.

Researcher and co-researchers entering textual prompts to an AI engine.

Our co-researchers were all very keen and excited to learn and work with AI and see what they could create for themselves. We worked in teams of people with and without a learning difference and disability (LDD). Along with gaining a better understanding of the inclusivity of AI software, a related focus was a future where AI has a prominent role, and the importance of our diverse team being included in the development of that AI. Our workshop discussions started to showcase the potential advantages for people with LDD, which is particularly relevant in light of the conversations regarding the ethics of AI and access to the services that it provides.

Co-researcher entering a text prompt.

The participants at the workshop session were given the theme of Health and Wellbeing as we age, with the specific focus on healthy food, exercise and maintaining good mental health. They were asked to try and capture these three threads by generating images (including cartoon sequences) using the AI software and by providing the prompts, either through speech, text or by uploading a sketch they had drawn.

One of the images representing health and wellbeing generated by Nightcafe AI engine

Being able to talk to the AI seemed to work well for some who found it difficult to type, although some found that the dictation software (built into the MacBook) misunderstood what they wanted to say, possibly because the speech was not clear enough. Some of this was possibly due to the unique formation of the sentences they spoke, which could perhaps be improved upon with an AI transcriber that learns the idiosyncrasies of the speaker and transcribes their intentions more accurately. We haven’t yet found such software, so there is potential for a project that addresses this issue.

We discussed how AI could aid inclusion in many ways, including translating ideas, editing first thoughts, and creating imagery to help share ideas and emotions. As an augmentation tool, generative AI engines are relevant in many areas, particularly ideation, despite the AI misunderstanding prompts, creating odd versions of reality, or simply lacking originality. No doubt, as the software (and hardware) improves, so too will its ability to pre-empt what we really want to do and offer its own solutions.

The almost conversational approach with the AI was highlighted by one co-researcher who, on seeing the images the AI software produced to show healthy food, commented that the portions were huge, as illustrated below.

Gigantic portions generated by AI engine.
Image produced using the prompt "Create images for cartoon strip of somebody having a healthy meal". Note large portion sizes.

Another member of the team suggested we need to provide clear prompts for the AI software, for example instructing it to create an image with smaller portion sizes. This was a common theme with the software: whichever method is used to generate the images (voice, text or image upload), you need to be quite clear and detailed about what you require. Misunderstood words or keywords out of context could result in strange images being generated.

Everyone seemed delighted with the image results produced from interacting with the AI, despite some of the unusual and often unintentional consequences of the AI software, such as missing or additional limbs, or odd shaped physical objects (e.g. the dumbbells below).

Image generated using the prompt "Eating chicken curry with weights"

Keeping prompts simple and clear seemed to create straightforward, if a little ordinary, images. Even an obscure prompt such as “Generates an image of not eating pizza, but eating, healthy and small portions with lots of vegetables” produced something realistic, though not particularly out of the ordinary.

Image generated using the prompt "Generates an image of not eating pizza, but eating, healthy and small portions with lots of vegetables".

Some of the participants attempted to generate cartoon strips to illustrate health and wellbeing, but this required quite a lot of effort, and unless very clear and detailed instructions were provided, the results were rather simplistic, as illustrated below. As with many of the other images generated, the people depicted tended to be white, physically fit and young. To be really inclusive and include images of people with disabilities, you would need to ask for this specifically.

Image generated using the prompt "Create a cartoon strip with someone doing arm exercises". Note the missing legs and additional limbs.

It was later suggested in a discussion following the workshop that visual prompts, such as physical or digital flashcards, could assist in giving the participants the cues that they were looking for in generating the images that they wanted to produce. To a small extent, some iconic prompts (image thumbnails) were provided by the Nightcafe interface for selecting the styles of image that could be generated.

Posted in Rix Inclusive Research | Leave a comment

Talking Ruler (AKA RixTalk)

At a recent visit to the Google Accessibility Discovery Centre with colleagues from Rix Inclusive Research, amongst the many accessible items on display we saw a ruler that is available for people with sight impairments (they called it a ‘Braille Ruler’). The RNIB also sell an identical ruler to the one we saw at Google, but they call it a ‘tactile ruler’. Both have cutaways every 5mm to help you make measurements, and raised numbers and lines at intervals of 5mm. Here is the one from the Google Accessibility Centre:

Here is the same ruler from RNIB (looks 3D printed to me):

Not being content with a static ruler, I thought it might be interesting to enhance this a little and add some speech to the ruler. So I created an alternative version. The prototype below is the work in progress, which combines the tactile ruler with speech output using (for the prototype at least) an Arduino Uno, with an Adafruit MP3 player for the speech output:

The RixTalk Ruler has similar notches every 5mm, but larger notches every 10mm to help distinguish between them. The ruler is also only 15cm long, as it was easier to print the prototypes this way (having to produce many versions before I got the dimensions correct). Inside each ‘V’ slot is a contact wire, each of which connects to an analog port on an Arduino Uno. The contact wires are visible in the side view below:

The pencil (shown above) has a 3D printed cap that makes contact with the graphite core of the pencil, and is connected to the 5V port of the Arduino Uno. When a slot in the ruler is touched using the pencil, it makes contact with the Uno and speaks the measurement through the MP3 player and speaker. The prototype uses wires to the Arduino (just five of the thirty slots for testing purposes), which is a bit awkward, but necessary to test the idea out. However, it could be made into a single unit, battery operated with an integral speaker and bluetooth connection to the pencil. That will be the next development if we decide to continue with it.

Video of the prototype is below:

The next stage of development is to embed the electronics inside the ruler itself, with an integrated battery and speaker, and to remove the need for wires by using a Bluetooth connection to the pencil. We have also looked at the prospect of adding a thin base to the ruler so that, in addition to having the slots at 5mm intervals, you can also rule straight lines. I’ll add an image of the new version as soon as it’s printed.

Posted in Uncategorized | Leave a comment