ECE5160 Fast Robots

Daoyuan Jin · ECE Grad Student · dj368@cornell.edu

Hi there. Welcome to my webpage for ECE 5160 Fast Robots. Most of the links here are still under construction...


about me

I'm a graduate student in the CAIR lab at Cornell AgriTech. In my free time I like to go outdoors.

To be continued.


LAB1: The Artemis board and Bluetooth

PART1: The Artemis board

Lab Objective

This part serves as an introduction to the Artemis board and the Arduino IDE. By testing examples from the library, we gradually become familiar with the process of using Arduino boards.

Prelab

Setting up the Arduino environment is relatively simple. We first install the Arduino IDE and the SparkFun Apollo3 board package according to the setup instructions. Then we hook the Artemis board up to the computer and select the corresponding board. Now we are ready to upload examples.

Example: Blink it Up

As the first code we tested, the "Blink it up" example turns the LED on and off periodically.

Example: Serial

In this example we test serial output and the serial monitor. When we send an input through the interface, the Artemis board echoes it back to the serial monitor, as the video shows.

Example: analogRead

In the third example we test the temperature sensor and the board's ability to read analog values. As you can see in the video, the original "temp" reading is around 33.9 °C. After I hold the board for a while, the reading rises slightly to 34.0 °C.

Example: MicrophoneOutput

This example tests the function of microphone unit. It has the ability to print out the loudest frequency the board received.

Please turn down your volume a little before clicking the video. Thanks for your understanding!

5000-level: Electronic Tuner

On top of the MicrophoneOutput example, I wrote code to make the board act like an electronic tuner. As "Ode to Joy" plays, the board identifies the frequency of each note and prints a "C", "E", or "F" whenever it encounters one. (Unfortunately there are no "A"s in "Ode to Joy".) It also blinks for 0.5 seconds whenever it hears a "C", as can be seen in the video.

The logic behind it is pretty simple. The frequencies of musical notes are easy to look up; for example, C5 is 523.25 Hz. In my case, however, it turned out to be around 526 Hz. Due to deviations in pitch accuracy and the overtones produced by the instrument, it's hard to fully encode all the musical notes, but that could be an interesting task for future practice.

PART2: Bluetooth

Lab Objective

The objective of this part is to establish communication between the PC and the Artemis board over a Bluetooth connection. On the PC side we use Python 3 in a Jupyter notebook, while on the Artemis side we use the Arduino programming language, a simplified version of C/C++. We will get familiar with the process of sending data via Bluetooth, which will be useful in future labs.

Prelab

1. Setup

PC side: Install Python, Create and Activate Virtual Environment, Install Python Packages, Using Jupyter Server.

Artemis side: Install ArduinoBLE, Using ble_arduino.ino.

For the Spring 2024 class, I'm using Windows 11, and I went through all the setups and tasks without using WSL, although I still spent some time trying to install it. It turns out that all the errors that popped up can be fixed inside Windows 11.

2. Codebase

The Python and Artemis packages are provided to establish basic communication between the PC and the Artemis board through BLE. We will build on top of this codebase.

Workflow

To start, let me briefly introduce the general workflow of writing a new Bluetooth function.

On the Artemis side there are two steps. First, add an entry to CommandTypes so that the board can handle the command accordingly. Second, add a case to the switch statement and define the specific behavior inside that case. Typical output calls are Serial.print() and tx_characteristic_string.writeValue(), which print to the serial monitor and send a message via Bluetooth, respectively.

On the PC side: add the command to CMD(Enum) in cmd_types.py, mirroring CommandTypes.

class CMD(Enum):
    PING = 0
    SEND_TWO_INTS = 1
    SEND_THREE_FLOATS = 2
    ECHO = 3

Use the ble.send_command() and ble.receive_string() pair to send and receive messages, or use ble.start_notify() to register a notification handler, which we will cover below.

Establish Bluetooth Connection

First, upload ble_arduino.ino to the board. Once finished, the MAC address will be printed in the serial monitor. Replace the first line in the connection.yaml file with this address.

Second, to differentiate this board from other students' boards, generate a new Universally Unique Identifier (UUID) using the lines below.

Replace the second line in connection.yaml file with this UUID. Replace the BLEService UUID in ble_arduino.ino with this UUID too.

Last, run the demo.ipynb notebook to make sure Bluetooth connection is established.

Task1: ECHO

First we write an "ECHO" command that makes the Artemis send back an augmented string, to test the send and receive functions via Bluetooth. On the PC side the code and result look like this:

On Artemis side the code looks like this:

Task2: GET TIME MILLIS

On top of string transmission, we use the millis() function on the Artemis to send back time information. On the PC side the code and result look like this:

On Artemis side the code looks like this:

Task3: Notification Handler

When we use ble.receive_string to receive information from the Artemis, the entire Python program enters a waiting state and cannot handle other tasks, which is wasteful in practical use. To address this, we use a notification handler. Once activated, a callback function processes each message as it arrives over Bluetooth, and it does not occupy the thread when there is no new information. The following video shows the implementation and result:

Task4: Data Transfer Rate

Now we write a loop on the Artemis (shown in the picture above) that gets the current time in milliseconds and sends it to the PC, where it is received and processed by the notification handler. We collect for around 5 s and calculate the effective data transfer rate. As the video shows, over 5 s the PC received around 200 timestamps, indicating a data transfer rate of about 40 messages per second.

Task5: SEND TIME DATA

There are also other ways to transfer data. Instead of sending each message immediately, we can store the data in an array first; when the PC requests the timestamps, the board sends the whole array.

To do this, we first add the following lines to the loop (besides the default write_data() and read_data()) to store timestamps. Once the array is full, we overwrite old data with new data while preserving the write index, so we know from which position to start reading when sending the data.

timestamps[myindex] = (int)millis();
myindex += 1;
if (myindex >= MAXSIZE){ // >= so we never write past the end of the array
  myindex = 0;
}
write_data(); // Send data
read_data(); // Read data

After that we define the SEND_TIME_DATA case on the Artemis and the SEND TIME DATA command in Python, as shown in the video. In this example I define the array length to be 255; each time we request timestamps from the Artemis, it sends us the array in sequence.

Task6: GET TEMP READINGS

To get synchronized time and temperature data, we simply add a temperature array alongside the timestamp array from Task 5. The video shows that both the Artemis and the PC fulfill the requirement.

Task7: Discussion

One sample at a time: this method barely uses RAM. If we are dealing with large individual samples and small RAM, this is the better choice.

Sending data after storage: this method can record denser data points compared with the former, and it also allows the robot to temporarily disconnect from the PC (although in our case that doesn't apply).

The Artemis board has 384 kB of RAM. If we use all of the RAM to store our data, and one sample is stored as a 32-byte UTF-8 string, we can hold about 12,000 values.

5000-level: Effective Data Rate And Overhead

To test the relationship between data length and data rate, I modified the ECHO function into an exact send-back mode. Then I ran an experiment to calculate the time it takes to transmit one byte of data.

The resulting graph shows that as the data length increases, the effective data rate gradually increases as well.

5000-level: Reliability

By comparing the data stored on the Artemis with what the PC recorded, I found that the computer read all the data published by the Artemis, even when the Artemis sends at a higher rate.


LAB2: IMU

Lab Objective

In the second lab we add an IMU to our robot (the Artemis board). We record and process the accelerometer and gyroscope data, and implement a low-pass filter to improve data quality. We also test the RC car to get a sense of future work.

Task1: Set up the IMU

We use the SparkFun breakout board as the IMU and install the SparkFun 9DOF IMU Breakout - ICM 20948 - Arduino Library through the Arduino library manager.

Connect the IMU to the Artemis board using the QWIIC cable.

Run the basic example to read the sensor data. As you can see in this video, the accelerometer and gyroscope both output three-dimensional (X, Y, Z) data. As I rotate, flip, and accelerate the board, the data changes accordingly. Next we will use these data to compute the attitude of the board.

The definition of AD0_VAL is confusing. In my case I had to set it to 0 to get sensor data, even though the ADDR jumper on my board is soldered, which suggests 1 should be the default setting. Anyway, 0 works.

Furthermore, as suggested in the instructions, I added a blink sequence to the setup of the Artemis board. You can see the blue light blink three times on start-up.

Task2: Accelerometer

First we use the following equations to convert accelerometer data into pitch and roll.

In Artemis it goes like this:

pitch_a = atan2(myICM.accY(),myICM.accZ())*180/M_PI;
roll_a = atan2(myICM.accX(),myICM.accZ())*180/M_PI;

To demonstrate the accuracy of Acc data, we show the output at {-90, 0, 90} degrees for pitch and roll respectively.

Picture1 pitch & roll @ 0 degree

Picture2 pitch @ -90 degree

Picture3 pitch @ 90 degree

Picture4 roll @ -90 degree

Picture5 roll @ 90 degree

We can see that the data is noisy but generally accurate.

Upon applying linear regression to the three-point dataset, we observe a minor offset and a slope coefficient below 1.

By manipulating the equations in the graph, we can derive a conversion factor. However, this data exhibits significant randomness and is heavily influenced by the desk not being perfectly horizontal/vertical during the experiment.

Low Pass Filter

The accelerometer is noisy. To address this with a low-pass filter, we first analyze the noise in the frequency spectrum.

Here is a helpful reference on Fourier Transform.

Taking pitch data as an example, I collected nearly a thousand data points while the board was stationary. The collected data and its Fourier transform are shown in the graph.

From the results, the intensity of the noise appears much lower than the intensity of the signal, so perhaps there is no need to add another low-pass filter.

In fact, by checking the datasheet, we found that the IMU has a built-in low-pass filter. However, upon closer examination, I believe that this function is not enabled in the mode we are using.

Anyway, we can still add an additional low-pass filter to see its effect.

Taking pitch as an example, the implementation of a low-pass filter looks like this:

const float alpha = 0.2;
pitch_a_LPF[1] = alpha*pitch_a + (1-alpha)*pitch_a_LPF[0];
pitch_a_LPF[0] = pitch_a_LPF[1];

We keep a two-element array, pitch_a_LPF[], to record the previous and current filtered pitch, and tune the filter by adjusting the parameter alpha.

Mathematically, the cutoff frequency is defined as the frequency at which the output power of the filter is reduced to half of its maximum value. In terms of the transfer function of the filter, the cutoff frequency is the frequency at which the magnitude response is 0.707 times the maximum magnitude. Here we set alpha=0.2.

This is the result of pitch data after applying the low pass filter:

In this experiment I recorded around 1000 data points. Starting from point 500, I added some vibration to the Artemis board. In my opinion, the frequency-spectrum figure looks pretty decent.

By contrast, the roll data was recorded simultaneously but without a low-pass filter, shown below.

Task3: Gyroscope

The gyroscope reads angular velocity about the X, Y, and Z axes. By integrating each axis over time, we obtain the corresponding angles. The Artemis implementation is as follows:

pitch_g = pitch_g + myICM.gyrX()*dt;
roll_g = roll_g + myICM.gyrY()*dt;
yaw_g = yaw_g + myICM.gyrZ()*dt;

This video shows the pitch, roll, and yaw data we get from Gyroscope.

You can see from the video that the gyroscope data is pretty smooth.

Now we compare the pitch data from the accelerometer and the gyroscope. We can get pitch output from both simultaneously, and the results are as follows:

We can see that the pitch obtained from the accelerometer is very noisy, while the pitch obtained from the gyroscope is very smooth. However, the gyroscope data accumulates error: when the board is stationary, the pitch calculated from the gyroscope still drifts.

Additionally, we change the sampling frequency to see how it affects the accuracy of gyroscope data.

After lowering the frequency, the accumulated error in gyroscope data becomes more pronounced, while the accelerometer data remains unaffected. This indicates that the sampling frequency is crucial for the accuracy of gyroscope data.

Complementary Filter

To address the issue of high noise in accelerometer readings without cumulative errors, and low noise but cumulative errors in gyro readings, we merge the readings of both sensors using the following filter:

Artemis implementation:

pitch_g_delta = myICM.gyrX()*dt;
roll_g_delta = myICM.gyrY()*dt;
pitch_a = atan2(myICM.accY(),myICM.accZ())*180/M_PI;
roll_a = atan2(myICM.accX(),myICM.accZ())*180/M_PI;
pitch = (pitch + pitch_g_delta)*(1-alpha) + pitch_a * alpha;
roll = (roll + roll_g_delta)*(1-alpha) + roll_a * alpha;
yaw = yaw + myICM.gyrZ()*dt;

Here alpha is set to 0.1 based on practical experience. Now the pitch and roll we get are both accurate and stable.

Task4: Sample Data

Speed Up

Just like the GET TEMP READINGS task in Lab 1, we store the timestamps and IMU data in arrays and send them to the PC once the Artemis board receives a command.

The data we received on PC looks like this.

We can see that the sample period is around 3.15 ms, i.e. around 317 Hz. I wrote the code so that if the IMU data is not ready, a 0 is written into the array. Since we didn't receive any 0s, the main loop on the Artemis runs slower than the IMU produces new values.

Data Storage

Firstly, I think it makes more sense to store timestamp, accelerometer, gyroscope, and ToF data separately. It is easier to locate data by index in separate arrays than in a single combined one, and separate arrays are more convenient when dealing with different data types.

Secondly, data types. I would use integers (2 or 4 bytes) for timestamps, and floats (4 bytes) for the accelerometer (X, Y, Z) and gyroscope (X, Y, Z) data to save storage.

Lastly, the sample rate mentioned above would be too high to sustain. If we instead record at around 50 Hz, we use 1400 bytes per second. With 384 kB of RAM we can store around 274 s of data.

5S Time-stamped IMU Data

Here is a video demonstrating the ability to record 5 seconds of data.

One can tell from the timestamps that the time span is longer than 5 s, and we recorded 255 sample points.

Task5: Record a Stunt

To get a sense of how RC car moves, we use the remote control to drive the car.

The RC car moves forward, backward, and turns at very high speeds. Under remote control, each step is large and fine adjustments cannot be made. The car's speed seems to have only one gear, leaving a lot of room for optimization.


LAB3: Time of Flight Sensors

Lab Objective

Adding Time of Flight (ToF) sensors to a robot can greatly enhance its ability to navigate and avoid obstacles effectively. ToF sensors work by emitting infrared light pulses and measuring the time it takes for the light to bounce back, allowing the robot to determine the distance to objects in its path.

Prelab

1. I2C sensor address

From the datasheet, the addresses of the two ToF sensors are both 0x52. In the following tasks we will find that it shows up differently, and we will also need to address the issue of communicating with two identical sensors that start with the same address.

2. Using 2 ToF Sensors

To use two ToF sensors, the idea is to shut down one of them using its XSHUT pin, change the address of the one that is still online, then power the other one back up.

3. Wiring diagram

For QWIIC wires, Yellow-SCL, Blue-SDA, Red-VIN, Black-GND.

Here is the wiring design. We use two long cables to connect the ToF sensors and short ones to connect the breakout board and the IMU, because the IMU will be placed inside the robot while the ToF sensors sit outside.

4. Placement of sensors in future labs

I will put one sensor at the front of the robot to detect incoming obstacles, and the other on one side (probably the right side, since cars drive on the right :)). It's worth mentioning that placing both sensors at the front may cause interference, because each would receive signals emitted by the other.

Task1: QWIIC breakout board connection

First we use a JST connector and a battery to power the Artemis board, and install the SparkFun VL53L1X 4m laser distance sensor library.

Then we solder QWIIC cables to our ToF sensors and connect them to the QWIIC breakout board.

The additional white wire will be discussed in following tasks.

Task2: Read I2C address

From the screenshot we can see that the address is 0x29, which differs from the datasheet value (0x52).

Looking closer, 0x29 (00101001) is 0x52 (01010010) shifted right by one bit. That's because the rightmost bit of 0x52 indicates read/write status; the Arduino side drops it and reports the 7-bit address.

Task3: Sensor data in different modes

The ToF sensors we use have two ranging modes, setDistanceModeShort() and setDistanceModeLong(), ranging to 1.3 m and 4 m respectively by default.

To select one of these two modes, I ran an experiment to test their accuracy and repeatability.

The video below shows my experiment setting:

I chose eight sample points at 150, 300, 450, ..., 1200 mm and took the average reading from both modes.

You can see that both modes give pretty accurate readings. Given that our robot can move at high speed (as shown in Lab 2), choosing the long-range mode might be more useful.

I also used matplotlib to draw the plot shown below.

Task4: 2 ToF sensors

In the prelab we mentioned the issue of the two ToF sensors sharing one default address. By checking the header file of the ToF sensor library, we can find many functions that come in handy.

We will use the setI2CAddress() function to resolve the conflict.

After changing the address of one of the sensors, we can call them separately, just like in the examples. One can see in the serial output that the two sensors work in parallel.

To illustrate, the white wire was used to shutdown sensor1.

Task5: ToF sensor speed

Execution speed is critical in future labs, so we measure the time it takes for the ToF sensors to return ranging data.

In this task the board prints timing information whether or not it receives new ToF data.

Results show that it takes around 3 ms for the board to run one loop, 60 ms for the ToF sensors to produce a new reading, and around 8 ms to process the ToF data. The limiting factor here is probably the sensor's ranging time.

Task6: Time v Distance

Combining previous labs, we managed to collect ToF data over Bluetooth.

After collecting, I used matplotlib to draw the graph.

5000-level: Discussion on infrared transmission based sensors

Two common sensors based on infrared transmission are Infrared (IR) Proximity Sensors and Infrared Distance Sensors:

1. Infrared Proximity Sensors

Functionality: IR proximity sensors emit infrared light and measure the reflection of this light to detect nearby objects. They work based on the principle that objects reflect infrared light differently depending on their surface properties and distance from the sensor.

Pros:
Simple and cost-effective.
Suitable for detecting the presence or absence of objects within a limited range.
Fast response time, making them suitable for applications requiring quick detection.

Cons:
Limited range and accuracy compared to other distance sensors.
Susceptible to interference from ambient light sources, which can affect their reliability.
May have difficulty distinguishing between objects of similar reflectivity.

2. Infrared Distance Sensors

Functionality: Infrared distance sensors also emit infrared light, but they measure the time it takes for the emitted light to bounce back (Time of Flight principle) to calculate the distance to an object. They typically use methods like triangulation or phase-shift measurement to determine distance accurately.

Pros:
Offer greater accuracy and range compared to IR proximity sensors.
Can provide precise distance measurements over longer distances.
Less affected by ambient light interference due to sophisticated modulation and signal processing techniques.

Cons:
Generally more complex and expensive than IR proximity sensors.
May require calibration and adjustment for optimal performance.
Sensitive to environmental factors like temperature and humidity, which can affect their accuracy.

5000-level: Sensitivity of sensors to colors and textures

Time-of-Flight sensors are generally less sensitive to variations in color and texture. As the picture shows, I tested several materials but didn't find much difference. However, transparent or translucent objects may absorb or scatter infrared light differently, impacting the accuracy of ToF distance measurements.


LAB4: Motors and Open Loop Control

Lab Objective

In this lab we replace the control chip inside our racing car with the Artemis board, soldering the motor drivers to the board and to the two motors separately. By the end of the lab, we'll be able to drive the car via pre-programmed code or a Bluetooth connection.

Prelab

1. Wiring Diagram

As shown in the following diagram, to get enough current we parallel both channels on one driver chip to actuate one motor. Note that channels on different chips should not be used to drive one motor, because they would interfere with each other.

The two GND pins on a driver chip are essentially the same net, so we only have to hook up one GND to the Artemis.

For the analogWrite pins, I chose A14 and A15 for one driver and A2 and A3 for the other. We only have to make sure the pins we use support analog (PWM) output. The order is not that important, because we can easily swap the forward/backward pins in the Arduino code.

2. Battery Discussion

We will use separate batteries for the Artemis board and the motor drivers for several reasons.

(1) Motors can generate electrical noise and voltage spikes, especially during sudden changes in speed or direction. This noise can interfere with the operation of the Artemis microcontroller, causing malfunctions or erratic behavior. Separate batteries isolate the power supplies for the Artemis and the motors, reducing the likelihood of interference.

(2) Motors often require higher voltages and currents than microcontrollers. Separate batteries let us choose a battery with appropriate voltage and current ratings for the motors without worrying about compatibility with the Artemis.

(3) Separating the power supplies also improves overall efficiency and performance. Motors drawing large currents won't cause voltage droops in the microcontroller's supply, ensuring consistent and reliable operation of both the Artemis and the motors.

Task1: Test analogWrite

Before connecting the driver to the real motors, we first test the analogWrite output of the Artemis-plus-driver combination.

Test Setup

As you can see in the picture, I soldered one of the motor drivers to the Artemis board and used a power supply to power the driver.

Power Supply Discussion

I set the power supply to 3.7 V to match the battery voltage.

analogWrite code

Before setup, we define the output pins for motor control.

#define MOTOR_L_FORWARD 15
#define MOTOR_L_BACKWARD 14

In setup, we set the resolution of the analogWrite output. By default it's 8 bits, so we can skip this line; we only need it explicitly if we want a resolution other than 8 (1-16 bits are supported).

analogWriteResolution(8);

In the loop, we write a PWM value to one pin and 0 to the other.

analogWrite(MOTOR_L_FORWARD, 127); // 50% duty cycle w/ 8-bit resolution
analogWrite(MOTOR_L_BACKWARD, 0);
delay(1);

The output looks like this.

I also changed this value to 63 (25%) and 191 (75%).

Task2: Take apart!

Now it's time to take our RC cars apart. After removing the shell, we see two motors and the control PCB that comes with them. Carefully cut the wires and remove the PCB and LEDs. Now we have two motors and one battery connector, all with their original wires.

Note that each motor controls the two wheels on one side, which makes it a differential drive robot.

Task3: Spinning Test

After removing the PCB and LEDs, we hook up the driver and the motor for a physical test of analogWrite.

Single side test

As you can see in this video, the motor first spins forward for 4 s, rests for 1 s, and then spins backward for another 4 s.

All-wheel-drive with Battery Supply

Repeat the above process for the other motor and driver, and hook up both drivers to the battery connector. Then the car can run by itself.

It's worth mentioning that I forgot to pass the battery connector through the hole in the battery box before soldering it, which meant starting over. Thanks to TA Julian for providing another battery connector, allowing me to cut the cable at an easily accessible point and avoid re-soldering the part connected to the driver.

Now you can see both wheels spinning, and the car running on the ground.

Notice that due to the different spin rates of the two motors, the car doesn't run in a very straight line. We'll do a calibration to fix this problem in the following part.

Here is a picture of all the components secured.

Task4: Static Friction

Due to friction inside the motor and gearbox, the motor cannot be actuated at PWM values just above 0, so it is important to measure this lower limit.

To make the experiment easier, I added a GO_STRAIGHT command on top of the Bluetooth control from previous labs. It reads an integer from Jupyter lab and uses it as the analogWrite value.

By increasing from 0, I found that the lower limit is around 36-38 (out of 0-255).

For on-axis turns, the threshold is much higher at around 175.

Task5: Calibration

At the end of Task 3, we noticed the different spin rates of the two motors. By applying a calibration factor to one of the drivers, we can address this problem.

By trial and error, I found that the calibration factor is actually around 1. Other factors, such as the alignment of the vehicle's front end with the line and the mounting of the tires, have a greater impact on straight-line driving.

Here is a video demonstrating that my car can drive in a fairly straight line for more than 2 m.

Actually, I believe slight deviations to the left or right during straight-line driving have limited impact on actual autonomous driving; these errors can be promptly corrected under closed-loop control using the distance sensors.

Task6: Open Loop Demonstration

Lastly, we wrap up with an open-loop control demo, including straight lines, backing up, and turns.

5000-level: AnalogWrite Frequency

The analogWrite function in Arduino generates a PWM signal with a frequency typically around 490 Hz for most Arduino boards. This frequency is generally suitable for driving many types of motors, including DC motors, servos, and some types of stepper motors.

However, for certain applications or specific motor types, a faster PWM frequency may offer benefits:

1. Reduced Audible Noise: Motors driven at higher PWM frequencies tend to produce less audible noise, which can be advantageous in applications where noise is a concern, such as in audio equipment or robotics used in quiet environments.

2. Higher Control Resolution: Faster PWM signals allow for finer control of motor speed and position, as the shorter pulse widths provide more discrete steps in the motor's operation. This can be beneficial for applications requiring precise control, such as CNC machines.

5000-level: Dynamic Friction

Once the car is in motion, the lower limit of the PWM value should be lower than what was found in Task 4, where the car starts from standstill.

To measure the dynamic lower limit, I modified previous code as below.

The car is given an initial speed higher than the lower limit found in Task 4; then we change the speed to the value we want to test.

I found that the dynamic lower limit is around 32 (out of 0-255) in the test below.

One can tell whether the motors are running by listening: if you can hear the motors but the car has stopped moving, the current value is below the lower limit.

When the car is moving at its slowest speed, it takes nearly no time for it to stop.


LAB5: Linear PID control and Linear interpolation

Lab Objective

This lab is part of a series of labs (5-8) on PID control, sensor fusion, and stunts. This week we do position control: drive the robot toward a wall, then stop 304 mm from the wall using feedback from the time-of-flight sensor.

Prelab

Data Transfer

Before implementing the PID controller, it's important to set up a data transfer system over Bluetooth.

To begin with, I modified the notification handler from previous labs to receive the data.

def notification_handler(uuid, byte_array):
    global times, tof1, pwm
    time, tof11, pwm1 = ble.bytearray_to_string(byte_array).split()
    times.append(int(time))
    tof1.append(int(tof11))
    pwm.append(int(pwm1))

ble.start_notify(ble.uuid['RX_STRING'], notification_handler)

To make debugging and tuning easier, I wrote two more commands to communicate with the Artemis.

The first is a PID_SWITCH command. By sending

ble.send_command(CMD.PID_SWITCH, "1")

I can start the PID control, whereas "0" stops it. Initially it's set to "0" so that I can place the car easily.

Next is a SET_PID_GAIN command. By sending

ble.send_command(CMD.SET_PID_GAIN, "0.15|0.01|0")

I can set Kp = 0.15, Ki = 0.01, Kd = 0, which makes tuning much easier.

On the Arduino side it looks like this:

Task1: PID Controller

Now we can dive into the controller itself. We have discussed the PID controller in class; the basic equation is as follows:

Proportional (P) Control: This term produces an output that is proportional to the current error signal, which is the difference between the desired setpoint and the actual value of the system being controlled.

Integral (I) Control: This term integrates the error signal over time, which helps to eliminate steady-state errors by continuously adjusting the output based on the accumulated error. The integral term helps to reduce any long-term deviations from the setpoint.

Derivative (D) Control: This term considers the rate of change of the error signal. It acts to dampen the system's response by anticipating future trends in the error signal.
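Putting the three terms together, the controller output takes the standard form:

```latex
u(t) = K_p\,e(t) + K_i \int_0^{t} e(\tau)\,d\tau + K_d \frac{d e(t)}{dt}
```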

PI Controller

In my experiment, I chose to use a PI controller for several reasons.

1. Steady-State Accuracy: The integral term in a PI controller continuously adjusts the control signal to eliminate steady-state errors. This means that the system output eventually reaches and maintains the desired setpoint accurately, even in the presence of disturbances or uncertainties.

2. Less Sensitivity to Noise: In some cases, the derivative term in a PID controller can amplify noise in the system, leading to undesirable control action. By excluding the derivative term, PI controllers are less sensitive to noise, which can be beneficial in noisy environments or systems with high sensor noise.

3. Reduced Complexity: Compared to PID controllers, PI controllers have simpler dynamics and fewer tuning parameters. This reduced complexity can lead to easier implementation, maintenance, and troubleshooting in control system applications.

Task2: Range/Sampling time discussion

The sampling frequency of the ToF sensor is relatively low; a new reading arrives roughly every 40ms in my case, and I'm already using short mode. It would be even lower in long mode.

So I kept using this mode and came up with two strategies, which we will discuss further: controlling with the previous reading, and controlling with extrapolated data.

Task3: implementation

The implementation of PID control is relatively simple. On the Artemis it looks like this:

// Wait for a fresh ToF reading
while (!distanceSensor1.checkForDataReady()) { delay(1); }

time_last = time_current;
time_current = (int)millis();
int dt = time_current - time_last;

int distance1 = distanceSensor1.getDistance();
distanceSensor1.stopRanging();
distanceSensor1.clearInterrupt();
distanceSensor1.startRanging();

// Compute the three PID terms
error_current = distance1 - goal;
error_sum = error_sum + int(error_current * dt / 1000);
int p = kp * error_current;
int i = ki * error_sum;
float d = kd * (error_current - error_previous) / dt;
int speed = p + i + int(d);

In each loop we wait for the ToF data to be ready, then get the time information and the new ToF reading. After that we compute the three terms separately and add them together.

The output "speed" goes to my actuate function, which translates it into a PWM value. In the actuate function I wrote a mapping between the demanded speed and the PWM value, taking into account the dead band tested in the last lab.

Here is a video demonstrating the basic feedback control.

One thing I noticed during debugging is that I have to reset my integral term to 0 when I stop the PID control. Otherwise the accumulated error carries over into the next run.

Reaching Task Goal

I tuned the PID gains according to the strategy we discussed in class and arrived at the following combination and result.

ble.send_command(CMD.SET_PID_GAIN, "0.1|0.01|0")

You can see that the car bumps into the carton no matter how I tune the gains. This is probably caused by the low control frequency, which leaves room for improvement via extrapolation.

The sensor readings and control values over time are recorded below.

Task4: Extrapolation

Although the update frequency of the ToF sensor is relatively low, we can increase our control rate by predicting position information. By linearly extrapolating from the previous two readings, we get a more accurate estimate than simply reusing the last reading.

The pseudocode for extrapolation is as follows. We have to remember the two previous data points (timestamp + reading) to do linear extrapolation.

if newdata:
    update previous data points
    control according to new data
else:
    compute linear extrapolation
    control according to extrapolated data
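A minimal Python sketch of the extrapolation itself (variable names are mine, not from my Artemis code):

```python
def extrapolate(t, t1, d1, t2, d2):
    """Linearly extrapolate the distance at time t from the two most
    recent raw ToF samples (t1, d1) and (t2, d2), with (t2, d2) newer."""
    slope = (d2 - d1) / (t2 - t1)
    return d2 + slope * (t - t2)
```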

One thing I noticed during testing is that we should avoid extrapolating from extrapolated data. I tried recording both raw and extrapolated data and extrapolating from the previous two points regardless of whether they were raw or extrapolated, but the data quality got even worse. Because the extrapolation rate is higher than the raw sampling rate, tiny sensor errors accumulate across successive extrapolations. So it's important to restrict extrapolation to raw data only.

(5000)Task5: Wind-up and discussion

Windup happens when the system's output saturates or reaches its limits, but the integral action continues to accumulate error. This can lead to overshooting, instability, or prolonged settling time when the output eventually returns within the allowable range.

To address this problem, I simply clamp the integral term to a certain range, for example below 100. This fixes the issue while keeping the ability to reach steady state.
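The clamp can be sketched in Python like this (the limit 100 is the value I used here; tune it for your robot):

```python
I_MAX = 100  # clamp limit for the accumulated error

def clamp_integral(error_sum):
    # Saturate the accumulated error so the I term cannot wind up
    # while the motor output is already at its limit.
    return max(-I_MAX, min(I_MAX, error_sum))
```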

After implementing extrapolation and addressing wind-up, we get a better control outcome.

The sensor readings and control values over time are recorded below.


LAB6: Orientation Control

Lab Objective

This lab is part of a series of labs (5-8) on PID control, sensor fusion, and stunts. Quite parallel to the last lab, in this one we will control the yaw of our robots using the IMU.

Prelab

Data Transfer

Before implementing the PID controller, it's very important for us to set up a data transfer system via Bluetooth.

I made slight changes to last lab's code to transfer data.

To make my debug and tuning easier, I wrote another two commands to communicate with the Artemis.

First is a PID_TURN command. By sending this

ble.send_command(CMD.PID_TURN, "1|90")

The first item "1" starts the PID control, whereas "0" stops it. Initially it's set to "0" so that I can place the car easily.

The second item "90" sets the turning angle in degrees, following the right-hand rule.

Next is a SET_PID_GAIN command similar to last lab. By sending this

ble.send_command(CMD.SET_PID_GAIN, "0.15|0.01|0.1")

I can set Kp=0.15, Ki=0.01, Kd=0.1, which makes my tuning much easier.

On the Arduino side it looks like this:

Furthermore, the command to send back turning data is also modified accordingly on the Artemis.

Task1: PID Controller

We can now dive into the controller itself. We have talked about the PID controller in class. The basic equation is as follows:

Proportional (P) Control: This term produces an output that is proportional to the current error signal, which is the difference between the desired setpoint and the actual value of the system being controlled.

Integral (I) Control: This term integrates the error signal over time, which helps to eliminate steady-state errors by continuously adjusting the output based on the accumulated error. The integral term helps to reduce any long-term deviations from the setpoint.

Derivative (D) Control: This term considers the rate of change of the error signal. It acts to dampen the system's response by anticipating future trends in the error signal.

PID Controller

In this lab, I chose to use a PID controller for several reasons.

1. Steady-State Accuracy: The integral term in a PID controller continuously adjusts the control signal to eliminate steady-state errors. This means that the system output eventually reaches and maintains the desired setpoint accurately, even in the presence of disturbances or uncertainties.

2. Damping Oscillations: The derivative action of a PID controller helps to dampen oscillations in the system's response, particularly during transient periods or when the system experiences sudden changes.

3. Preventing Integral Windup: The derivative term indirectly helps to prevent integral windup by limiting the integral action during rapid changes in the error signal.

4. Also, the gyroscope readings are relatively stable, which provides the foundation for implementing derivative control.

Task2: Range/Sampling time discussion

The sampling frequency of the IMU is relatively high, beyond 300 samples per second in my case. So I think there's no need for extrapolation, and I can simply put the control loop inside the myICM_dataReady function.

Task3: implementation

Here I first implemented turning control without Bluetooth.

The PID control code on the Artemis looks like this:

In each loop we wait for the IMU data to be ready, then get the time information and the new gyroscope data. After that we compute the three terms separately and add them together.

In the end, we have to write a function to translate the PID result into wheel control. For turning it is as follows:

Here you can see that there is a mapping between the PID output and the analogWrite range. The dead band for turning is larger than for driving straight.

Here is a video demonstrating basic PID turning control.

In this video the target angle is set to 90 degrees. As you can see, the car tries to hold 90 degrees when I try to disturb it.

Task4: Bluetooth and data collection

After that I added Bluetooth control.

I tuned the PID gains according to the strategy we discussed in class. First I got the following combination without a derivative term.

ble.send_command(CMD.SET_PID_GAIN, "0.3|0.01|0")

You can see that there is a large overshoot. We can add a derivative term to prevent this.

ble.send_command(CMD.SET_PID_GAIN, "0.4|0.01|0.1")

(5000)Task5: Wind-up and discussion

Windup happens when the system's output saturates or reaches its limits, but the integral action continues to accumulate error. This can lead to overshooting, instability, or prolonged settling time when the output eventually returns within the allowable range.

To address this problem, I simply clamp the integral term to a certain range, for example below 30. This fixes the issue while keeping the ability to reach steady state.

After addressing wind-up, I ran more tests at 30 degrees and 180 degrees to show the car's ability to make turns.

Further Discussion

To control the orientation while the robot is driving forward or backward, we can simply sum the control commands for straight-line motion and turning, with calibration factors to balance the contribution of each.


LAB7: Kalman Filter

Lab Objective

In this lab we will implement a Kalman Filter, which supplements our slowly sampled ToF values to make our control smoother.

Task1: Estimate drag and momentum

Before implementing the Kalman Filter, we first have to model the state space of our robot. Specifically, estimate the drag and momentum terms for our A and B matrices.

To get these parameters, we drive the robot towards a wall from a stationary position, record the acceleration and steady-speed phases, and analyze the speed graph.

Note that the floor conditions should remain similar between the experiment from which we model these parameters and the experiment we want to implement KF.

Because I did my position control on a carpet, I went back to the same place to model my state space.

The motor output is set at PWM 220 out of 255. I recorded data from ToF sensor and time stamps, then calculated the speed.

From the speed graph, we can read the steady-state speed, the speed at the 90% rise time, and the 90% rise time itself.

The steady-state speed is around 2.8 m/s, and the 90% rise time is around 0.65 s.

Using these data, we can calculate drag and momentum from these two equations:
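In the notation from lecture, with the input held constant at u, these are:

```latex
d = \frac{u}{\dot{x}_{ss}}, \qquad m = \frac{-d\, t_{0.9}}{\ln(0.1)}
```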

This gives d = 220/2800 = 0.0786 and m = -d*650/ln(0.1) = 0.0341.

Task2: Implement Kalman Filter

Initialize KF

From the first task we get our A, B, and C matrices, which we discretize in the following code:

Then we initialize process and sensor noise covariance matrices

Finally we can initialize the state

We already have the KF function from class; we simply call it in every iteration, as below.
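For reference, the filter step can be sketched in Python roughly as follows. The matrix values below are placeholders for illustration, not the exact ones from my robot:

```python
import numpy as np

# Discretized system matrices (placeholder values for illustration).
dt = 0.1
A = np.array([[0.0, 1.0], [0.0, -0.1]])   # state = [position, velocity]
B = np.array([[0.0], [1.0]])
Ad = np.eye(2) + dt * A                    # forward-Euler discretization
Bd = dt * B
C = np.array([[1.0, 0.0]])                 # we observe position (ToF)
sig_u = np.diag([100.0, 100.0])            # process noise covariance
sig_z = np.array([[400.0]])                # measurement noise covariance

def kf(mu, sigma, u, y):
    # Prediction step using the motor input u
    mu_p = Ad @ mu + Bd @ u
    sigma_p = Ad @ sigma @ Ad.T + sig_u
    # Update step using the ToF measurement y
    K = sigma_p @ C.T @ np.linalg.inv(C @ sigma_p @ C.T + sig_z)
    mu_new = mu_p + K @ (y - C @ mu_p)
    sigma_new = (np.eye(2) - K @ C) @ sigma_p
    return mu_new, sigma_new
```

When no new ToF reading is available, one can run only the prediction step and return (mu_p, sigma_p) directly.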

Now we get the output from Kalman Filter.

You can see that the KF output closely tracks the observed ToF results, but is smoother.

(5000)Task3: Faster Frequency

Even when we don't have a ToF reading, we can still get an updated prediction from the Kalman Filter.

In the following example, I insert a timestamp between each pair of ToF readings to increase the prediction frequency.

For each ToF & PWM reading, I update twice: once with the ToF measurement and once without. The output looks like this:


LAB8: Stunts!

Lab Objective

In this lab we will combine everything we've done up till now to do fast stunts. We can choose between position control (flip) and orientation control (drift).

Task: Drift!!!

In this lab I chose to do a drift, although technically it should not be called one.

It's basically a combination of driving towards the wall, doing a 180-degree turn, and speeding up again. However, because the car has built up momentum during the first phase, the 180-degree turn looks like a drift.

I also added some tricks in the end to make it more like a drift.

Here I first show the result of my basic setting of position control:

To get this, I combined several modules from previous labs, and constructed this pseudocode structure:

In order to get the sensors' data, the robot first collects and records readings in each loop. Then it selects the task according to the task flags and completes each task in sequence.

After completing all the tasks, I will collect data via bluetooth.

The sensors' data of this demo is shown below.

Note that the gyro data is updated in every loop, but ToF data is not available every loop. Simply waiting for ToF data in each loop would decrease my processing frequency, so instead I check the availability of ToF data every loop and record it only when it is available. That is why the ToF graph looks spiky: when there is no new data, the plot shows 0.

However, this basic setup is not very stable: the spin rate is too fast for the IMU to track, and failures like this one happen frequently.

Then I lowered the spinning speed a little. Furthermore, I realized that to make it look more like a drift, I don't have to use a zero-point turn: if I want the car to make a left drift, I can set the right wheels spinning forward and the left wheels spinning backward, but not necessarily at the same speed. I can use the left wheels mainly as a brake and the right ones to spin. This results in a drift-like turn.

The following video shows the result.

To show that this is reproducible, I did two more recordings.

That's it for my drift stunt!


LAB9: Mapping

Lab Objective

In this lab we will map an arena. We will first use the robot to scan the terrain at five positions using ToF sensors. Then complete the map construction through a series of transformations.

Part1: Orientation control

The manual provides us with three methods for controlling angles, namely open loop control, orientation control, and angular speed control.

I chose to use orientation control, which I could modify from previous labs. I set the angular control goal to 20 degrees, which ideally gives 18 sample points per revolution.

Because we don't have ground truth for the angular value (the angle is computed by integration), I didn't run the angular control continuously throughout the spin. Instead, I reset the control after each 20-degree turn.

Because the external environment is essentially unchanged, I believe this control strategy makes each rotation more independent and controllable.

Here is a sample video of the orientation control:

I initially planned to test the actual rotation results and adjust the angle for each rotation based on the actual number of rotations in one cycle. However, I found that this was not necessary, as the control of the rotations is already quite accurate.

Part2: Execute the Scan

Once the rotation control is determined, I can specify my scanning strategy. Below is the pseudocode for the scanning process.

By using notification_handler from previous labs, I got the ToF readings from the robot. Then I sanity checked the data by plotting them on a polar coordinate map:

The result is quite good in my opinion. Note that the drawn line overlaps itself in places because I collected data points over more than 360 degrees; this overlap indicates the reproducibility of the results.

Next, we need to transform the data into Cartesian coordinates for ease of merging.

In addition, recall that the ToF sensor is not located at the robot's center of rotation. For simplicity we assume the robot rotates in place, so we measure the offset from the sensor to the center of the robot and add it to the readings as compensation.

The compensation is set to 70 mm. Furthermore, I also corrected the initial angle of the robot.
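A minimal Python sketch of this conversion, assuming angles are measured counterclockwise from the robot's initial heading (variable names are mine):

```python
import numpy as np

SENSOR_OFFSET_MM = 70  # distance from ToF sensor to the rotation center

def scan_to_cartesian(angles_deg, readings_mm, start_angle_deg=0.0):
    """Convert one in-place scan from polar ToF readings to Cartesian
    points (mm) in the robot's frame, compensating for the sensor offset."""
    theta = np.radians(np.asarray(angles_deg, dtype=float) + start_angle_deg)
    r = np.asarray(readings_mm, dtype=float) + SENSOR_OFFSET_MM
    return r * np.cos(theta), r * np.sin(theta)
```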

Finally, I got the map in Cartesian coordinates:

Through the same process, I obtained maps for the other four sampling points; here are their four polar plots for a sanity check.

Part3: Merge the Maps

To begin with, the separate maps I've shown are in sequence inside this picture.

In order to merge these maps, we first need to determine the origin of one of the maps as the origin for the fused map. Then, we calculate the transformation matrices for the other four maps in order to merge them.

The second point is chosen to be the origin of the merged map. Because I maintained a consistent initial orientation of the robot during the scanning process, there is no need for rotation between these five maps, only translation.

The rotation matrix here should be an identity matrix, and the z term in translation is also 0.

We only need to count the number of floor tiles (1 ft per tile) between sampling points to obtain the corresponding x and y terms.
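As a sketch, the translation reduces to adding tile offsets (the function name and tile counts here are illustrative):

```python
import numpy as np

MM_PER_TILE = 304.8  # one 1 ft floor tile in millimeters

def translate_scan(x, y, tiles_x, tiles_y):
    """Shift one scan's Cartesian points (mm) into the merged frame.
    Since all scans share the same initial heading, a pure translation
    (identity rotation) is enough."""
    return (np.asarray(x, dtype=float) + tiles_x * MM_PER_TILE,
            np.asarray(y, dtype=float) + tiles_y * MM_PER_TILE)
```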

After translation, we got the merged map as following.

Part4: Line-Based Map

Finally, we can overlay the "ground truth" of walls and obstacles onto the map, which makes it useful in the following labs.

As you can see, the boundaries of the walls are clearly delineated, but there is a larger error in outlining the contours of the cardboard boxes in the arena. This may be due to differences in material of the obstacles, which could result in variations in the performance of the ToF sensors.


LAB10: Grid Localization using Bayes Filter

Lab Objective

In this lab we will implement grid localization using Bayes filter. The manual for this experiment provided us with a great framework and a lot of useful information.

Function1: Compute Control

Firstly, we have a function to compute the actual control result based on actual current pose and previous pose.

Note: pay attention to the argument order of the atan2 function; it takes (y, x).
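The idea can be sketched in Python roughly like this (my sketch, not the exact provided skeleton; poses are (x, y, theta_deg)):

```python
from math import atan2, degrees, hypot

def normalize_angle(a):
    # Wrap an angle in degrees to (-180, 180]
    while a > 180:
        a -= 360
    while a <= -180:
        a += 360
    return a

def compute_control(cur_pose, prev_pose):
    """Recover (rot1, trans, rot2) from two poses.
    Note atan2 takes (dy, dx) in that order."""
    dx = cur_pose[0] - prev_pose[0]
    dy = cur_pose[1] - prev_pose[1]
    rot1 = normalize_angle(degrees(atan2(dy, dx)) - prev_pose[2])
    trans = hypot(dx, dy)
    rot2 = normalize_angle(cur_pose[2] - prev_pose[2] - rot1)
    return rot1, trans, rot2
```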

Function2: Odom Motion Model

After we obtain observations of the motion that occurred, we can use the Gaussian distribution to calculate the probability of this motion actually occurring under the given control command.

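A sketch of this step in Python; the sigma values are illustrative, and "actual" stands for the (rot1, trans, rot2) recovered from the two poses by the compute-control step:

```python
from math import exp, pi, sqrt

def gaussian(x, mu, sigma):
    # Probability density of a 1D Gaussian
    return exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * sqrt(2 * pi))

def odom_motion_model(actual, u, rot_sigma=15.0, trans_sigma=0.1):
    """Probability that the motion `actual` occurred given the commanded
    control u = (rot1, trans, rot2), treating the three components as
    independent Gaussians."""
    rot1, trans, rot2 = actual
    return (gaussian(rot1, u[0], rot_sigma)
            * gaussian(trans, u[1], trans_sigma)
            * gaussian(rot2, u[2], rot_sigma))
```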

Function3: Prediction Step

Now we have enough information to update our predictions. For each grid, the new probability is the sum of the probabilities of moving to that grid from every other grid. This is why there are six nested loops in the code: the first three loops iterate over every grid, and the last three loops iterate over all grids from the previous time step to update the probabilities for this grid.

Note that we set a threshold here -- 0.0001. The threshold is used to skip calculations for some grids in order to improve computational speed. These grids with probabilities smaller than the threshold have minimal impact on the result.
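A simplified sketch of the idea, where a flat cell index stands in for the (x, y, theta) triple, so two loops replace the six:

```python
def prediction_step(bel, u, transitions, threshold=0.0001):
    """bel: dict mapping grid cell -> probability from the last step.
    transitions(prev, cur, u): motion-model probability of moving from
    cell prev to cell cur under control u."""
    bel_bar = {cur: 0.0 for cur in bel}
    for prev, p_prev in bel.items():
        if p_prev < threshold:  # skip negligible cells for speed
            continue
        for cur in bel_bar:
            bel_bar[cur] += transitions(prev, cur, u) * p_prev
    return bel_bar
```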

Function4: Sensor Model

In this step, we will obtain the probabilities of observations from the sensor, which will be used in the final step to correct our predictions.

Function5: Update Step

Finally, in the last step, we multiply the probabilities obtained from prediction by the probabilities from the sensor model, and then normalize them to obtain the final probability for each grid.
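Using the same flat-cell sketch as above, the update step reduces to a multiply and a normalize:

```python
def update_step(bel_bar, p_sensor):
    """Multiply each cell's predicted belief by its sensor-model
    probability, then normalize so the grid sums to 1."""
    bel = {cell: bel_bar[cell] * p_sensor[cell] for cell in bel_bar}
    total = sum(bel.values())
    return {cell: p / total for cell, p in bel.items()}
```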

Simulation Result

The completion of the simulation is depicted below. The green path represents ground truth, the blue path represents the robot's belief, and the red path represents odometry measurements.

You can see that the results of Bayes Filter look much better than pure odometry measurements.

Here I selected some output data of several steps.


LAB11: Localization on the real robot

Lab Objective

In this lab we will implement Lab10 on our real robot.

Code Base

In this lab, we've been provided with a complete localization codebase that implements the estimation of the robot's position using a Bayes Filter. All we have to do is write a function to allow this localization module to receive real observation data from the robot, and modify the corresponding code on Artemis to meet the relevant requirements.

To verify we have downloaded the codebase correctly, we first tested the localization function in simulation, here is the final plot of my test.

A little detour

The other parts of the experiment are relatively straightforward; but I encountered troubles in two parts.

The first one occurred during the setup of the Bluetooth connection. The code we brought over from the previous lab suddenly couldn't establish a connection between the computer and the robot's Bluetooth. After debugging for a while, TA helped us identify that it was an issue with some imported libraries. Thanks to Liam for quickly providing a solution!

The second issue arose during the exchange of information between the computer and the robot. My original implementation allowed the robot to receive commands, but the computer couldn't receive messages sent back from the robot. (I verified this by directly connecting the robot to the computer via Arduino's serial port.) I'll explain my solution in detail in the next section. I want to express my gratitude to Larry for spending a lot of time helping me debug this!

Perform Observations

A structure of the RealRobot class was given. Inside this class we need to modify this perform_observation_loop function.

We need to include (1) notification handler, (2) sending commands, and (3) receiving messages from the robot.

Here is the code of my implementation:

At the beginning, my code's while loop didn't include a wait statement; it only had "pass". Through debugging, we concluded that this was likely the reason why my computer couldn't receive messages sent back from the robot. Because this while loop was continuously looping without any delay, it essentially occupied all processes, preventing the notification handler from functioning properly, and thus I couldn't receive any information.

Afterward, I added "await asyncio.sleep(3)" within the while loop and defined the function as asynchronous (async), which resolved the issue.

In addition, the Arduino reports ToF data in millimeters, while (as I found through debugging) the codebase here uses meters, so before returning I divide the numpy array by 1000.
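Stripped of the BLE details, the working pattern looks like this (the names and the 18-sample count are from my setup; `readings` is the list the notification handler appends to):

```python
import asyncio

async def perform_observation_loop(readings, expected=18):
    # A bare `pass` busy-loop here starved the notification handler;
    # awaiting yields control so the BLE callback can keep appending.
    while len(readings) < expected:
        await asyncio.sleep(0.1)
    # The Artemis reports ToF in millimeters; the localization codebase
    # expects meters, so convert before returning.
    return [r / 1000 for r in readings]
```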

Experiment Results

Above are all the code changes required for this lab. Now, let's examine the localization results of the robot at four sample locations.

Here is a screenshot of the first sample, (-3 ft, -2 ft, 0 deg).

The output on the left indicates that my code is functioning properly. I also printed the TOF readings for the 18 sample locations. However, the localization results are not entirely accurate, indicating that there may be other bugs that need to be resolved. On the plot on the right, the green dots represent the ground truth, and the blue dots represent the estimation.

Similarly, here are plots for other three sample points.

Sample2, (0 ft,3 ft, 0 deg):

Sample3, (5 ft,-3 ft, 0 deg):

Sample4, (5 ft,3 ft, 0 deg):

Discussion

The estimation results obtained in this experiment show significant discrepancies from the ground truth. Possible reasons include unresolved unit-conversion issues in the code, significant errors in the robot's ToF readings, and accumulated rotation error at each 20-degree step.


LAB12: Path Planning and Execution

Lab Objective

Since this lab is quite open-ended, I chose to utilize another TOF sensor that I hadn't used before to implement a closed-loop bug algorithm.

Pseudocode

In this way, the robot executes a counterclockwise wall-following program, maintaining a distance of 90-110 millimeters from the wall on its right side. When it encounters a 90° corner, it performs an in-place left turn.
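The decision logic can be sketched in Python as follows; the thresholds are my illustrative choices, and the actual values live in my Arduino sketch:

```python
TARGET_MIN, TARGET_MAX = 90, 110  # mm band to keep from the right wall
FRONT_STOP = 200                  # mm at which a corner is assumed

def wall_follow_step(front_mm, right_mm):
    """One decision of the counterclockwise wall follower: returns
    'left_turn', 'steer_left', 'steer_right', or 'forward'."""
    if front_mm < FRONT_STOP:
        return 'left_turn'    # 90-degree in-place left turn at a corner
    if right_mm < TARGET_MIN:
        return 'steer_left'   # too close to the wall
    if right_mm > TARGET_MAX:
        return 'steer_right'  # drifting away from the wall
    return 'forward'
```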

Arduino implementation

Here is my Arduino code for this bug algorithm.

Unfortunately, due to time constraints, I wasn't able to test my program in the arena.

Acknowledgements

Thanks so much to Jonathan and all the TAs for their teaching and assistance throughout the semester, especially for providing us with ample time and guidance during lab sessions!