What is a Vision Sensor? - Python
Teacher Toolbox - The Purpose of this Page
This page will introduce students to what a Vision Sensor is and some of its capabilities. The students will then analyze part of an example project to see how the Vision Sensor can be used with VEXcode V5.
The Motivate Discussion questions at the bottom of the page can be completed as a class discussion or individually in the students' engineering notebooks.
Motivate Discussion
Q: What types of human jobs would benefit from having the help of a robot with a vision sensor?
A: Listen for human jobs that would benefit from the ability to see into environments and/or manipulate surroundings from remote distances (e.g., observing animals in the wild, disarming explosives, or performing robot-assisted surgery).
Q: Name a device and describe how it uses input, output, and process.
A: A possible answer could be a calculator that takes the sequences of numbers and mathematical operators a person inputs, processes those numbers and operations to calculate a result, and then outputs that result on a screen for the person.
Q: Why do you think a forever loop was used in the project shown?
A: A forever loop was used so that the Vision Sensor continuously takes snapshots and checks each one to see whether a red object has come into the sensor's view.
Description
The Vision Sensor allows your robot to collect visual data from a live feed. A live feed is a streaming transmission of what a video camera is capturing. The Vision Sensor is like a smart camera that can observe, select, adjust, and store colors and objects that appear in its visual field.
Capabilities:
- This sensor can be used for recognizing colors and color patterns.
- This sensor can be used to follow an object.
- This sensor can be used to collect information about the environment.
The Vision Sensor allows the robot to use visual input data from its environment. The project can then determine how the visual input data should affect the robot's behavior. For example, the robot could perform actions (output) such as spinning motors or displaying results on the LCD screen.
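The sense-process-act flow described above can be sketched in plain Python. Because the VEX runtime is only available on the robot, the sketch below uses a hypothetical `FakeVision` stand-in for the sensor and returns a description of the output behavior instead of spinning real motors:

```python
# A minimal sense -> process -> act sketch, assuming a stand-in sensor.
# `FakeVision` simulates take_snapshot(); on hardware, the real Vision
# Sensor would supply the detections.

class FakeVision:
    """Stand-in for a Vision Sensor: returns a list of detected objects."""
    def __init__(self, detections):
        self._detections = detections

    def take_snapshot(self, signature):
        # A real sensor would filter by the color signature;
        # here we simply return the canned detections.
        return self._detections

def choose_action(objects):
    """Process the visual input (snapshot) and decide on an output behavior."""
    if objects:
        return "spin motors toward object"
    return "keep searching"

vision = FakeVision(detections=["blue_block"])
print(choose_action(vision.take_snapshot("BLUEBOX")))  # spin motors toward object
```

The decision function is kept separate from the sensor so the same "process" step could drive different outputs, such as a motor command or a message on the LCD screen.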
The Vision Sensor can also capture a snapshot of what is in front of it and analyze it according to what the user is asking. For example, a user can gather data from the snapshot such as: What color is the object? Is an object detected at all? How large is the object (width and height)?
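As a sketch of the kind of data a snapshot can yield, the example below models a detected object with a small dataclass. The attribute names (`width`, `height`, `center_x`) are illustrative stand-ins, not the exact VEX API:

```python
from dataclasses import dataclass

# Illustrative stand-in for one detected object; real Vision Sensor
# objects expose similar size and position data.
@dataclass
class DetectedObject:
    color: str
    width: int     # size in pixels
    height: int
    center_x: int  # horizontal position in the camera frame

def describe(snapshot):
    """Answer the questions above: any object? What color? How large?"""
    if not snapshot:
        return "No object detected"
    obj = snapshot[0]
    return f"{obj.color} object, {obj.width}x{obj.height} px at x={obj.center_x}"

snapshot = [DetectedObject(color="Blue", width=40, height=30, center_x=160)]
print(describe(snapshot))  # Blue object, 40x30 px at x=160
print(describe([]))        # No object detected
```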
The robot can then make decisions based on this data. The partial example project below shows how this is done. Three colors are checked repeatedly after the project starts, and each color check is a separate event. Only the event that checks for blue is shown below. This stack has the robot print "Blue Object Found" if a blue object is detected, or "No Blue Object" otherwise. The check_red and check_green events (not shown) have similar stacks for deciding what to print on the screen.
# Library imports
from vex import *

# Begin project code
check_red = Event()
check_blue = Event()
check_green = Event()

def check_blue_callback():
    brain.screen.set_font(FontType.MONO40)
    brain.screen.clear_row(1)
    brain.screen.set_cursor(1, 1)
    # Take a snapshot with the Vision Sensor and store the vision data in a variable
    vision_5_objects = vision_5.take_snapshot(vision_5__BLUEBOX)
    # Check the variable to see if the snapshot detected any blue objects
    if vision_5_objects:
        brain.screen.print("Blue Object Found")
    else:
        brain.screen.print("No Blue Object")

# System event handlers
check_blue(check_blue_callback)

# Add a 15 ms delay to make sure events are registered correctly
wait(15, MSEC)

while True:
    check_blue.broadcast_and_wait()
    check_red.broadcast_and_wait()
    check_green.broadcast_and_wait()
    wait(0.1, SECONDS)
    wait(5, MSEC)