To run the fourth iteration, a good deal of firmware and software must be installed, including ROS (Robot Operating System), GStreamer, and code pulled from specific GitHub repositories. The complete details can be found here.
In the fourth and final integration iteration, the Raspberry Pi streams video over GStreamer to a laptop, which runs the classification algorithms on the video frames. The Raspberry Pi also acts as a server, receiving the classification result for each frame.
The laptop launches the interface between GStreamer and ROS, which converts the video stream from the Raspberry Pi into ROS topics. Then, using the Python package CV Bridge, the laptop "decodes" each image into an OpenCV object that can be classified. Once a frame is classified, the laptop acts as a client and sends the classification information back to the Raspberry Pi (whether any faces are present and whether any of them are smiling).
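The client/server exchange described above can be sketched with plain TCP sockets. This is a minimal, self-contained stand-in: in the real system the server runs on the Raspberry Pi, and the port selection and JSON message format here are illustrative assumptions, not the project's actual protocol.

```python
import json
import socket
import threading

# In the real system the server runs on the Raspberry Pi; here both ends
# run locally so the sketch is self-contained.
results = []

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))          # port 0: let the OS pick a free port
srv.listen(1)
port = srv.getsockname()[1]

def pi_server():
    # Stand-in for the server loop on the Raspberry Pi: accept one
    # connection and record the classification it receives.
    conn, _ = srv.accept()
    with conn:
        results.append(json.loads(conn.recv(1024).decode()))

def send_classification(faces, smiles):
    # Laptop side: act as a client and report the per-frame result.
    msg = json.dumps({"faces": faces, "smiles": smiles}).encode()
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
        cli.connect(("127.0.0.1", port))
        cli.sendall(msg)

t = threading.Thread(target=pi_server)
t.start()
send_classification(faces=2, smiles=1)
t.join()
srv.close()
print(results[0])  # {'faces': 2, 'smiles': 1}
```

In practice the laptop would send one such message per classified frame, and the Pi's server loop would keep accepting them.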
We also ran into problems powering all four servo motors from an Adafruit shield and a 12-volt, 5-amp power supply: the Arduino frequently restarted due to overheating. Since we did not need to draw that much current and wanted the whole system to be portable, we switched to a 4-AAA battery pack (providing 6 V) without a current regulator, so the peak current could still be supplied when all of the servos move at once. We powered the Raspberry Pi from a separate battery, a 4400 mAh rechargeable lithium-ion pack that outputs 5 volts at 1 amp. This separation avoided drawing high current from the Raspberry Pi, which can only safely supply 200 milliamps without risk of the system shutting down. It also prevented any overheating, even after 3 hours of use.
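A rough sanity check on this power budget can be worked through as follows; the per-servo stall current below is an assumed typical figure for small hobby servos, not a measured value for our hardware.

```python
# Rough power-budget check (illustrative numbers, not measured values).
servo_stall_a = 0.8            # assumed stall current per hobby servo, amps
n_servos = 4
peak_a = servo_stall_a * n_servos   # worst case: all four servos stall at once

pack_voltage = 4 * 1.5         # 4 AAA alkaline cells in series, volts
pi_supply_a = 1.0              # separate 5 V / 1 A battery for the Pi

print(pack_voltage)  # 6.0  -> matches the 6 V the servo pack provides
print(peak_a)        # 3.2  -> far above the ~0.2 A the Pi can safely supply
```

Under these assumptions, even a single stalled servo would exceed what the Raspberry Pi can source, which is the motivation for the separate battery packs.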
Iteration 3
In the third integration iteration, the Raspberry Pi recorded the video stream from the on-board Pi Camera module and detected faces and smiles in it. We used Python's built-in multi-threading. However, the Raspberry Pi we used has only one core (and CPython's Global Interpreter Lock runs only one thread at a time), so the multi-threading amounted to interleaving the threads on a single core rather than running them in parallel. In practice, the Raspberry Pi could either record video or classify frames, but not both at once, given the limits of its computing capabilities.
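The capture/classify split can be sketched as a producer/consumer pair of threads; on a single-core Pi these threads interleave rather than run in parallel, so heavy classification work stalls capture. This is a schematic stand-in (integers in place of camera frames, with arbitrary timings and queue size), not the project's actual code.

```python
import queue
import threading
import time

frames = queue.Queue(maxsize=8)   # bounded buffer between the two threads
results = []

def capture(n_frames):
    # Stand-in for the Pi Camera capture loop; each "frame" is just an index.
    for i in range(n_frames):
        frames.put(i)
        time.sleep(0.001)         # simulated capture interval
    frames.put(None)              # sentinel: no more frames

def classify():
    # Stand-in for face/smile detection on each frame.
    while True:
        frame = frames.get()
        if frame is None:
            break
        results.append((frame, "classified"))

producer = threading.Thread(target=capture, args=(10,))
consumer = threading.Thread(target=classify)
producer.start()
consumer.start()
producer.join()
consumer.join()
print(len(results))  # 10
```

With only one core, the interpreter switches between these two loops; if `classify` took longer than the capture interval, the bounded queue would fill and `frames.put` would block the capture thread, which is the behavior we observed.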
Iterations 1 and 2
In the first and second integration sessions, the Raspberry Pi was not used; we did not yet expect to be able to integrate it into a portable setup. We simply used PySerial to communicate between the Arduino and the computer. There were minor glitches, including the fact that data is sent to the Arduino as strings, which prevents direct comparison with the integer finite states. Instead, we compared against the integer states type-cast to strings.
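The string-comparison workaround can be illustrated on the laptop side of the serial link. The state names and helper functions below are hypothetical, for illustration only; the real code would write the payload with PySerial's `ser.write()`.

```python
# PySerial transmits bytes, so integer states must be compared via their
# string forms on both ends of the link. Hypothetical state table:
STATE_IDLE, STATE_SMILE, STATE_FACE = 0, 1, 2

def encode_state(state):
    # Laptop side: send the state as its ASCII string, e.g. ser.write(payload).
    return str(state).encode()

def matches(received, state):
    # Arduino-style comparison: compare the received token against the
    # string form of the integer state, not the integer itself.
    return received.decode() == str(state)

payload = encode_state(STATE_SMILE)
print(matches(payload, STATE_SMILE))  # True
print(payload == STATE_SMILE)         # False: bytes never equal an int
```

The second print shows the original glitch: comparing the received bytes directly against an integer state always fails, which is why both ends compare string representations instead.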