Junior CS Major
Junior ECE Major
Junior ECE Major
Junior ECE Major
Objective: This lab’s objective was to learn how to program the Arduino Uno through the Arduino IDE and to explore its functionality. We wrote simple programs for the Arduino, then assembled our robot and programmed it to perform basic autonomous driving.
Objective: In this lab we added a microphone and an IR sensor to our robot. In order to accomplish this, we added a digital filter and analog filters so that the microphone could detect 660 Hz and the IR sensor could distinguish between other robots and decoys.
Below is a picture of the circuit after we built it on the breadboard.
Since the open loop gain of the op amp is around 100, we determined that a gain of ~10 would be easy to implement, stable, and large enough for the Arduino to read. Scope FFTs pre-amplification and post-amplification are shown below.
We first ran a unit test to check that this code was working. Using the function generator, we plotted the data from the serial monitor in Excel to create a graph of the FFTs of the signals we care about: 6.08 kHz and 18 kHz. The line ADCSRA = 0xe5; changes the prescaler from 128 to 32, which lets us run the ADC clock at 500 kHz. We can then calculate the sample rate as (16 MHz / 32 prescaler) / 13 clock cycles ≈ 38 kHz, and the width of each bin as 38 kHz / 256 bins ≈ 148.4 Hz. This means we would ideally see the peak for 6.08 kHz in bin 41 and the peak for 18 kHz in bin 121, which is what is shown below.
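As a reference, here is a minimal sketch of that sampling setup (the same register writes appear in the detectRobot() code later on this page; the bin arithmetic is spelled out in the comments):

void setupFreeRunningADC() {
  TIMSK0 = 0;     // turn off timer0 for lower jitter
  ADCSRA = 0xe5;  // free-running mode, prescaler 32 -> 16 MHz / 32 = 500 kHz ADC clock
  ADMUX  = 0x40;  // AVcc reference, read from ADC0
  DIDR0  = 0x01;  // disable the digital input buffer on ADC0
  // Each conversion takes 13 ADC clocks, so the sample rate is
  // 500 kHz / 13 ≈ 38 kHz, and with 256 samples each FFT bin is
  // ≈ 38 kHz / 256 ≈ 148.4 Hz wide.
}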
We confirmed that the output voltage changed appropriately as the phototransistor was moved closer to the IR hat. This is the output when the IR sensor sensed the presence of the IR hat.
We first checked that the circuit amplified the signal. Below are the before and after amplification photos.
Using this circuit, we were able to confirm that our amplifier detected the signal from different distances.
From left to right: 4 inches away, 6 inches away, 8 inches away.
Looking at the FFT signal, we can see the harmonics as well.
By graphing our outputs and comparing them to those we saw when the code was running each process individually, we saw that the signals still peaked in the correct bins even when multiple signals were being detected. This is why we were still able to use the code below to detect the proper signals.
if (fft_log_out[44] >= 70) {   // peak from another robot's IR hat (6.08 kHz)
  Serial.println("ROBOT");
}
if (fft_log_out[121] >= 70) {  // peak from a decoy (18 kHz)
  Serial.println("DECOY");
}
if (fft_log_out[6] >= 70) {    // peak from the 660 Hz start tone
  Serial.println("SOUND");
}
Objective: Make the robot start on a 660Hz tone, navigate a maze autonomously, send maze information and update GUI.
By combining our bits this way, we were able to use only two bytes per transaction between sender and receiver. After this was developed, we began creating a virtual maze and robot, which we used to test the functionality of our code and, later on, the GUI. We set up a simple 9x9 array and used a for loop to move our virtual robot through it. As it moved, simple if conditions checked its location and, depending on where the robot was, introduced events such as walls being present or a change of direction. Using the serial monitor on the receiver side, we checked that the robot was driving in the directions we expected and that we were detecting the walls and treasures we had set up in the maze.
At this point, we were ready to set up the GUI. We started by taking every print statement out of the code and adding many switch-case statements to our receiver code. By ANDing the received bytes with different bitmasks, we could retrieve exactly the bits we wanted to check, and then we would shift these bits so that they were the least significant bits. After doing this, it was easy to work out the possible values we could receive and use the bitmasks we had come up with earlier to decode the bits. We used Arduino Strings to concatenate the strings we wanted to create, depending on the conditions in the switch-case statements. Finally, after decoding each byte one at a time, we put in a Serial.println() and printed our output to the GUI. Attached below is an example of using a switch-case statement to determine whether a robot is present in a given square. As can be seen, we AND the received byte with an 8-bit 1 to get rid of all the bits that don't carry information about whether a robot is present, and then, depending on this value, we concatenate different strings. At the end, we combine the string about the direction the robot is moving (which we turn into coordinates) with the robot string to form one complete statement to print to the GUI.
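Here is a hedged sketch of that decoding pattern (the mask value, shift amount, and names are illustrative, not our exact code):

const byte ROBOT_MASK  = 0x01;  // hypothetical mask for the robot-present bit
const byte ROBOT_SHIFT = 0;     // hypothetical amount to shift that field down

void printRobotStatus(byte received, String posString) {
  // AND with the mask to isolate the field, then shift it down to the LSB
  byte robotBit = (received & ROBOT_MASK) >> ROBOT_SHIFT;
  String robotString;
  switch (robotBit) {
    case 1:  robotString = "robot";    break;
    default: robotString = "no robot"; break;
  }
  // combine the position string (built from the direction bits) with the robot string
  Serial.println(posString + "," + robotString);
}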
After getting the GUI running, we changed our code so that we had a transmit script and a receive script. We knew we would have to separate the two because including the receive code on the robot's Arduino would simply waste space; we ended up freeing about 15% of the robot's dynamic memory by removing it. The most important change was updating the pipe setup for each of the Arduinos, which is accomplished by changing the role of the script. After doing this, our code was working separately.
Above is some of our line following code, one of the most important parts of the robot's code. As can be seen, the robot is able to steer and drive itself based on the values the line sensors detect. The radio code is included in a header so we only have to call the radiosetup and ping_out functions within our code to send information to the ground station.
void detectLeftWall() {
  // set the mux select bits to route the left wall sensor to the shared analog pin
  digitalWrite(s2, LOW);
  digitalWrite(s1, HIGH);
  digitalWrite(s0, LOW);
  read_wallL = analogRead(walls);  // read the selected sensor
}
detectFrontWall();
detectLeftWall();
detectRightWall();
Our 660 Hz detection runs the FFT code using analogRead and 128 samples. We see that the bin number is 10 and the threshold is around 50, but to be safe we drop the threshold in our detection code to 40. We run the detection in our setup function so that it occurs prior to anything else; doing it there also means it won't constantly be checked during our loop.
int mic = 0;
while (mic == 0) {  // block here until the 660 Hz tone is detected
  mic = detectMicrophone();
  Serial.println("Waiting for mic");
}
detectMicrophone runs the FFT code, using analogRead to read the microphone for a signal, and returns 1 if the bin we check is above the threshold.
Objective: In this lab, we are developing an FPGA module capable of detecting basic shapes from a camera input, and passing this information on to the Arduino. This device will be mounted on the robot to identify these shapes on the walls of the maze.
Figure 1: Hard-coded Cross
Figure 2: M9k and VGA Modules
Figure 3: Downsampler code
There was no communication occurring between the devices, so we checked both the FPGA and the Arduino. Originally, we thought that the distorted clock signal (more sinusoidal than square) coming from the FPGA was causing the issue, but after some testing we found the program was stopping after calling Wire.endTransmission(). Then, after some googling, we switched out our knock-off Arduino for a real one, and the two were immediately able to communicate. We then compared register values before and after writing them to verify that we could change them.
The color bars are somewhat correct. Here is the video output:
Objective: The objective of this milestone was to add line sensors to our robot to make it drive and follow a line, and to traverse a grid in a figure 8.
if (readR >= 800 && readL >= 800) {      // both sensors see the line: drive straight
  leftservo.write(135);
  rightservo.write(45);
}
else if (readR < 800 && readL >= 800) {  // right sensor off the line: stop the right wheel to steer right
  leftservo.write(135);
  rightservo.write(90);
}
else if (readR >= 800 && readL < 800) {  // left sensor off the line: stop the left wheel to steer left
  leftservo.write(90);
  rightservo.write(45);
}
void turn() {
  if (turnCount % 8 < 4) {  // first half of the figure 8: left turns
    turnCount++;
    leftservo.write(90);    // stop the left wheel
    rightservo.write(45);   // drive the right wheel to pivot left
    delay(1200);
  }
  else {                    // second half of the figure 8: right turns
    turnCount++;
    leftservo.write(135);   // drive the left wheel
    rightservo.write(90);   // stop the right wheel to pivot right
    delay(1200);
  }
}
Figure 1: Block Diagram
Figure 1 shows a high level circuit diagram of our current robot setup. As can be seen, the light sensors are attached to two of the analog pins on the arduino and the two servos are attached to two PWM pins. The light sensors are connected directly to the arduino pins because their wires are not long enough to reach the breadboard. The servos, which have much longer wires, are connected through our breadboard to help organize the wiring on the robot, ensuring the wires do not end up in the wheels of the robot.
Figure 2: Breadboard on the Robot
On the breadboard, we have the power and ground rails coming from the arduino 5V and GND pins. The light sensors and the servos get their power through these rails (the servo connections are in the bottom left corner and the light sensors are connected on the right side of Figure 2).
Objective: The objective of this milestone was to update our robot to circle a set of arbitrary walls through right-hand wall following and successfully avoid other robots.
int LRwalls = 195;  // threshold for detecting left/right walls
int Fwall = 100;    // threshold for detecting a front wall
// U-turn
if (read_wallF >= Fwall && read_wallL >= LRwalls && read_wallR >= LRwalls) {
turn(2);
}
// Left Turn
else if (read_wallF >= Fwall && read_wallL < LRwalls && read_wallR >= LRwalls) {
turn(0);
}
// Right Turn
else if (read_wallR < LRwalls) {
turn(1);
}
// Go forward
else {
leftservo.write(135);
rightservo.write(45);
}
robot = detectRobot();
if (robot == 1) {
  digitalWrite(7, HIGH);  // raise pin 7 to indicate a robot was detected
  Serial.println("ROBOT");
  leftservo.write(90);    // stop both wheels
  rightservo.write(90);
  delay(1000);            // wait for the other robot to pass
  digitalWrite(7, LOW);
}
else {
  Serial.println("no robot");
}
Robot detection is handled by detectRobot(). The code for robot detection is below (with the FFT code from lab 2 omitted):
int detectRobot() {
//default adc values
unsigned int default_timsk = TIMSK0;
unsigned int default_adcsra = ADCSRA;
unsigned int default_admux = ADMUX;
unsigned int default_didr = DIDR0;
//setup
TIMSK0 = 0; // turn off timer0 for lower jitter
ADCSRA = 0xe5; // set the adc to free running mode
ADMUX = 0x40; // use adc0
DIDR0 = 0x01; // turn off the digital input for adc0
  // ... run the FFT code from lab 2 here (omitted) ...
//checking the bin
if (fft_log_out[23] >= 70) {
TIMSK0 = default_timsk;
ADCSRA = default_adcsra;
ADMUX = default_admux;
DIDR0 = default_didr;
return 1;
}
else {
TIMSK0 = default_timsk;
ADCSRA = default_adcsra;
ADMUX = default_admux;
DIDR0 = default_didr;
return 0;
}
}
Objective: Create an algorithm that allows the robot to traverse a maze and update the GUI.
Materials: Fully assembled robot capable of moving, detecting walls, and line following.
struct node {
  bool visited;        // whether the robot has been to this square
  maze_direction dir;  // the direction we took to first reach this square
};
bool r_blocked;  // whether the square to the right is blocked
bool l_blocked;  // whether the square to the left is blocked
bool f_blocked;  // whether the square in front is blocked
int right = (m_direction + 1) % 4;  // directions are enumerated clockwise,
int left = (m_direction + 3) % 4;   // so +1 is a right turn and +3 a left turn
When the robot has nowhere new to go (the directions around it are blocked or already visited), it sets the backtracking variable to true and then proceeds to calculate how to move. The normal wall-following always sets this variable back to false so that the robot won't permanently think it's backtracking. To calculate how to backtrack, we use dir, which records how we originally got to that node. We also make sure not to update dir while we are backtracking, which is why the backtracking variable is necessary. We can then calculate the direction to move using similar algebra to before, and calculate the coordinates as above as well. From there we determine how to move, updating m_direction so that we can properly update the GUI. We also update the maze every time we hit an intersection.
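A simplified sketch of that backtracking decision (moveInDirection and the x/y coordinates are hypothetical stand-ins for the real traversal code):

if (f_blocked && l_blocked && r_blocked) {
  backtracking = true;
  // dir in the current node records how we originally entered this square,
  // so the opposite direction leads back the way we came
  int back = (maze[x][y].dir + 2) % 4;  // same direction algebra as before
  moveInDirection(back);                // hypothetical movement helper
} else {
  backtracking = false;  // normal wall-following always clears the flag
  // ... pick an unvisited neighbor as usual ...
}
// While backtracking we deliberately do not overwrite maze[x][y].dir,
// so the record of how we first reached each node is preserved.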
This shows that our robot is functioning. However, there is still a bug where it sometimes knows to turn but doesn't complete the turn, which can be seen at the point in the video where April sets the robot back on the correct path. Additionally, if one looks closely at the mapped maze, there is one wall the robot 'saw' that was not present in the big maze, and the robot missed the three walls that made it necessary to turn around in the small maze. The rest of the maze, however, was spot on.
Objective: Robot will be capable of detecting whether treasures are present, what color the treasures are, and what shape the treasures are.
Materials: FPGA, Arduino Uno, various wires, VGA adapter, VGA cables, monitor
The heart of this milestone is our image processor code. In this code, we determine the shape of the object when VGA_VSYNC_NEG goes low, indicating the end of a frame. At this point, we don’t need to read the incoming data and can quickly go through and figure out what the previous image was. We are able to figure out the color by comparing the number of blue pixels detected with the number of red pixels detected. Additionally, we make sure that the number of pixels detected is greater than a threshold, indicating we are probably seeing a treasure and not just a higher concentration of red or blue pixels. After doing this, we determine the shape of the object by comparing the concentration of pixels at the top, middle, and bottom of the image. We know that for a square all three concentrations should be equal, for a triangle the bottom should be greater than the middle which should be greater than the top, and for the diamond, the bottom should be less than the middle which is greater than the top. Depending on what we find, we determine the value of result which is a parallel connection to the arduino and tells the arduino which shape and color we have detected.
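Expressed as a hedged C-style sketch (the real logic is Verilog on the FPGA, and the names and threshold here are illustrative):

enum Shape { NONE, SQUARE, TRIANGLE, DIAMOND };

// Classify the shape from the pixel concentrations at the top, middle, and
// bottom of the frame, as described above. minPixels guards against
// classifying noise as a treasure.
Shape classifyShape(long top, long middle, long bottom, long minPixels) {
  if (top + middle + bottom < minPixels) return NONE;    // probably no treasure
  if (bottom > middle && middle > top) return TRIANGLE;  // widest at the bottom
  if (middle > bottom && middle > top) return DIAMOND;   // widest in the middle
  return SQUARE;                                         // roughly uniform
}
// Color is decided separately by comparing the red and blue pixel counts,
// and both results are packed into `result` on the parallel bus to the Arduino.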
Below is our image detection code:
We spent about 2 weeks trying to use our own code to run the camera and still had no luck. At this point we turned to Group 5’s code because we knew they had it working for their camera. After changing all the IO pins so that it should have worked for our setup, it still didn’t work at all. After another week, we found a single wire with poor connection to the breadboard. We were so far behind at this point that we simply continued working with Group 5’s code since it was nearly identical to our own but we knew it would function properly.
Below is a picture of our shape detection code working. In each picture is the view from the camera and a view of the serial monitor displaying information from the arduino telling the user what shape and color object it is looking at.
Here we highlight the design decisions we made for our final design and explain why we made these decisions.
Our final robot consisted of:
The robot has four levels, and the components are divided between them as shown below:
After several iterations of our robot’s packaging, we settled on a four-level design in which each level has a distinct purpose.

The bottom level contains the power banks and is the mounting point for the line sensors. Besides powering the robot, the battery helps maintain a low center of gravity and therefore stability (like in a Tesla). This fixed a previous issue where the robot would often tip over when it started moving. Additionally, by keeping an extra power bank in front of the center of mass at the bottom of the robot, we were able to keep the line sensors near the ground even when the front of the robot wanted to bounce, ensuring the line sensors read consistent values throughout the entire traversal. Mounting the line sensors on the bottom level also keeps them low to the ground and as accurate as possible, increasing turning and line-following reliability. The servos are also mounted to the bottom level, with standard sized wheels.

The second level contains the Arduino, a power bus, and the radio transmitter. The power bus is a two-column breadboard, with one column connected to the Arduino 5V and the other connected to Arduino ground. This gives us easy access to our power supply and makes wiring issues easy to diagnose. Having the Arduino and radio protected between the middle levels also reduces the likelihood of wiring failures or accidental shorts to the Arduino. The wall sensors are also mounted on this level so that they sit near the middle of the walls and out of the way of any wires that could obstruct them.

The third level contains a small breadboard holding all of our lab-made circuits. The placement of this breadboard allows easy access to the circuits, making debugging quick and easy, and wires to the power bus or Arduino can be routed through or around the “floor”. The circuits we made are highlighted below.

The fourth and top level holds the IR hat at 5.5" above the ground and is the mounting point for our IR detector. The IR detector is directional, so since we have it pointing out from the front of the robot (away from the IR hat behind it), we don't detect ourselves as another robot. The IR sensor is plugged into the circuitry on the third level of the robot, and the IR hat is plugged into a 9V battery attached to the bottom of this level.
All of the robot’s custom-made and exciting electronics are contained on the breadboard on the third level. A picture of the breadboard and a schematic of the two op amp circuits are shown below.
The op amp circuits amplify the outputs from the IR sensor and the microphone. We needed these amplifiers because the Arduino would have been unable to read the values coming directly out of the sensors, since their amplitudes were so small. We tuned the circuits so that the IR sensor output ranged from 0 to 3.3 V (easily readable by the FFT running on the Arduino) and the microphone output ranged from 0 to 2 V. The breadboard also contains a mux, which lets us use a single analog input on the Arduino for both the microphone and the IR detector. This freed the other analog inputs for other parts of the robot, and in exchange we only needed a single digital pin as the select bit for the mux (we had plenty of spare digital pins). At the beginning of a run, the mux routes the microphone into the Arduino so that we can detect the 660 Hz starting tone; after that, it is set to read the IR input for the rest of the run.
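A minimal sketch of that handoff (the pin numbers and select polarity here are assumptions):

const int MUX_SELECT = 4;   // digital pin driving the mux select line (hypothetical)
const int SENSOR_IN  = A0;  // the single analog input shared by mic and IR
// (pinMode(MUX_SELECT, OUTPUT) is set in setup())

void listenToMic() { digitalWrite(MUX_SELECT, LOW);  }  // route the microphone to A0
void listenToIR()  { digitalWrite(MUX_SELECT, HIGH); }  // route the IR detector to A0

// At the start of a run we call listenToMic() until the 660 Hz tone is
// detected, then listenToIR() for the rest of the run.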
On the day of the competition, we were not confident enough in our microphone start to use it, so we took this part out of the code. Instead, we had the robot start immediately after being plugged in, and we waited to plug it in until 5 seconds after the start time. We were concerned that the robot would either false start or never start because the 660 Hz tone was never detected. We were running the IR sensor; however, we only checked its value at intersections, as we assumed this would work (as it had in testing during the lab). On the day of competition, though, we collided with two robots: once when the other robot was to our front left, and once when neither robot saw the other in the middle of a square. For the first collision, our directional IR sensor could never pick up a signal that wasn't coming from directly in front of us. For the second, since the other robot was not in front of ours until we were already driving again and in the middle of a square, we were not running the detection code (as previously mentioned, we only ran it at intersections).
| bit 7 | bit 6 | bit 5 | bit 4 | bit 3 | bit 2 | bit 1 | bit 0 |
|---|---|---|---|---|---|---|---|
| North wall present | East wall present | West wall present | Move | North | East | South | West |
Move was a bit that indicated the robot moved, and the cardinal directions represented which direction our robot was moving in. We defined constants so that updating this information would only require bitwise OR-ing, as sketched below.
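A sketch of those constants, written out against the byte layout above (the names are illustrative):

#define NORTH_WALL (1 << 7)
#define EAST_WALL  (1 << 6)
#define WEST_WALL  (1 << 5)
#define MOVED      (1 << 4)
#define DIR_NORTH  (1 << 3)
#define DIR_EAST   (1 << 2)
#define DIR_SOUTH  (1 << 1)
#define DIR_WEST   (1 << 0)

// Updating the transaction byte is then just OR-ing:
byte payload = 0;
payload |= MOVED | DIR_SOUTH;  // the robot moved one square heading south
payload |= EAST_WALL;          // and saw a wall to the east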
To keep track of our maze, we defined a 9 by 9 array of nodes. node was a struct we defined to contain both a boolean for whether the node was already visited and the direction our robot took to get to that node. The latter was used in the backtracking portion of our algorithm. For every maze, the starting direction of our robot is always south and its starting location is (0,0).
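A minimal sketch of that representation (SOUTH stands in for whichever maze_direction value we used):

node maze[9][9];                     // one node per square of the maze

int x = 0, y = 0;                    // starting location is always (0,0)
maze_direction m_direction = SOUTH;  // starting direction is always south
maze[0][0].visited = true;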
The different setup functions, as well as setup(), initialized all the pins we used along with the servos. We found that we were lacking analog pins, so rather than disturbing code we already had working, we muxed the microphone and IR sensor onto a single analog input.
We had originally integrated the microphone fully, as well as a button to start our robot, but on the day of the competition we were not very confident in either, so we decided it was better to pull them out.
The detectMicrophone() function uses the FFT library, which outputs a value for each bin number. In lab 2 we iterated through 256 bins, but to save dynamic memory on the Arduino we reduced this to 128 bins. By printing out the values of each bin, we found that the 660 Hz tone now caused a peak in bin 11. To determine whether the tone is being played, we check if this bin contains a value over the threshold set at the beginning (70 in this case).
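A hedged sketch of detectMicrophone(), patterned on the detectRobot() code shown earlier (the register save/restore and the FFT call are omitted here as well):

int detectMicrophone() {
  // ...save the ADC registers, select the microphone through the mux,
  //    run the 128-sample FFT from lab 2, then restore the registers...
  if (fft_log_out[11] >= 70) {  // the 660 Hz tone peaks in bin 11
    return 1;                   // tone detected
  }
  return 0;                     // no tone
}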
Each state in our state machine simply performed one action, so we could have just called the functions instead of using a state machine.
stepPast() was necessary because our robot turns in place. If the body of the robot is not positioned over the intersection properly, the robot would never be able to detect the next line it was supposed to end up at.
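The idea in sketch form (the delay value here is illustrative, not our tuned number):

void stepPast() {
  leftservo.write(135);  // same forward speeds as our line-following code
  rightservo.write(45);
  delay(300);            // long enough for the body to clear the intersection
}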
detectRobot() used code very similar to lab 2's. Just like in lab 2, we ran the FFT code and used the numbers from the serial monitor to create a graph similar to the one below. We could not simply reuse the bin numbers from lab 2 because we changed the number of samples from 256 to 128. We observed that bin 23 peaked, and decided that when that bin read over 160, a robot was detected. Something that could have been improved upon was the timing of our detection: rather than detecting only at intersections, a polling style of detection that could alter our robot's path on the way to an intersection would have been better.
updateMaze was used to update the current location of our robot in our maze representation. updateBytes updates the byte with information that will then be sent to the base station.
sendRadio has been simplified a lot now that we only send one byte. Rather than sending four bytes, two of which were simply to set byte flags, we just have to call ping_out once and then wait until that byte is done sending. This means we spend significantly less time at each intersection than before. ping_out has stayed the same since lab 3, as its functionality has not changed at all. We also make sure to reset the byte that we send after we have successfully communicated with the base station.
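A rough sketch of the simplified flow, assuming ping_out() (the unchanged lab 3 helper) sends the current byte and blocks until the transaction completes; payload is a hypothetical name for the byte we build up:

void sendRadio() {
  ping_out();   // one call, one byte; we wait here until it finishes
  payload = 0;  // reset the transaction byte after successful communication
}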
For the base station, in order to keep track of the robot's location, it updates an int array with the proper x and y coordinates. Depending on these coordinates and the direction we send over, the base station is able to calculate where the robot is. It also places walls after decoding the byte sent over. The decoding was done through bit shifting and then having a switch statement on the possible integer values.
Our backtracking could also be improved: rather than retracing the path all the way back until the robot is no longer in a blocked area, calculating the nearest unvisited node and moving directly there would make our robot's traversal time much shorter.
Our robot does not contain a camera or, by extension, an FPGA. There were multiple reasons for this. First, after successfully setting up camera-Arduino-FPGA communication, our camera output was still noisy and hard to interpret even for a human. This was mainly due to wiring issues: the header wires were too loose in the breadboard sockets, leading to noisy images on the screen. We tried numerous times to attach the wires more securely but were still unable to get the system working well enough. Weighing the number of points we could gain by having the camera functioning against the number of points we could lose if the camera detected false treasures, we determined that the best course of action was to not use the camera or FPGA; treasure detection would have had to work at least 50% of the time to be worth the risk, and we were not confident we could get treasures right 50% of the time. Second, mounting the camera and FPGA would have been difficult with our packaging. Either we would have had to add another layer to the robot, raising the center of mass and making the robot unstable, or extend one of the layers out the front or back, again throwing off the center of mass. Third, it was unclear whether, even if we overcame the first two issues, the time spent on treasure detection would be worth it. The team decided that allocating effort to other reliability issues was a better use of our time near competition than integrating and optimizing treasure detection.
As previously mentioned, the only other part of the robot that we didn't have running on the competition day was the 660 Hz detection. This part was functioning properly the day before the competition, but this was in quiet conditions. We determined that with all the noise in Duffield on Robotics Day, the robot was very likely to start early. We considered lowering the gain on the amplifier circuit so that it would take more sound at 660 Hz to get the robot to start, but decided this was also risky because the robot might never start then. In the end, we took out this functionality and simply took the 5 second penalty.
Here are pictures of our robot at competition:
Below is a video of our robot traversing a maze the night before the competition. It ran much better that night because we had plenty of time to calibrate all of the sensors so that the robot was seeing all the walls and lines properly. As can be seen, the DFS algorithm worked perfectly and the robot backtracked as we expected. We mapped every square except the last one, where the robot came off the line for unknown reasons.
An offensive autonomous weapons ban will work because there is precedent for similar agreements working, and the benefits of the ban are clear and equal for everyone. This can be implemented in practice the same way that nations agreed not to pursue chemical and biological weapons: with international agreements that nations could sign onto after negotiations had occurred. The main stakeholders are the scientists and engineers who work on and contribute to the development of these weapons, as well as the economic and political leaders in the countries producing them. Secondary stakeholders are all other humans who can be affected by these weapons, whether by being targeted by them or by experiencing financial repercussions from their production in the form of taxes, job lay-offs, or even new jobs being created. Only scientists and engineers should be evaluating scientific products and creating comprehensible media to present to those who need it for further decision making, such as politicians or businesses who would use these weapons. Politicians or businessmen and women being involved in the evaluation process would likely taint it with their own biases, goals, and lack of technical knowledge and experience. It is unlikely that the integrity of these evaluations would be valued over the agendas of these politicians and business people. I think that these weapons being developed will only have positive effects on IPS research and development. The open letter on autonomous weapons mentions the possibility of a public backlash against autonomous weapons that curtails their potential benefits from being realized. I don’t think this is a reasonable outcome, as the general population should be capable of realizing that autonomous weapons are just one negative application of AI, and that there are broader and more positive uses for it that everyone will be willing to explore for their own good.
This website was updated on 12/4/2018 at 23:53, moving the information from a Google Doc that was linked to this website. Please contact us with any questions.