Timestamped Multi-Camera Capture

Capture (Multiple) Timestamped Videos Using USB Web Cameras

This project is deprecated. Please see this project.

Markerless motion tracking has become a popular tool in many behavioral neuroscience labs, granting the ability to track human or animal features in novel videos without the need to meticulously mark a feature on every frame. Researchers often use commercial ‘scientific’ webcams to obtain timestamped videos, usually recording from several cameras at once during behavioral experiments. These commercial systems can be expensive, uncustomizable, closed source, and often not as easy to implement as originally promised. This project seeks to address these issues by providing code that captures video across multiple USB webcams with every frame timestamped. The code must be run on a Linux-based operating system and has been tested on computers running Ubuntu 16.04 and 20.04. The program is written as a Python library, but is structured so that the average user can capture videos with minimal knowledge of Python programming. We provide an example Jupyter Notebook with documented examples of how to use the library effectively. Those familiar with Python will find the code commented and relatively easy to modify should that be desired.

Major Features

  • See all the cameras available to the computer. 3D analysis can be achieved when more than one camera is used.
  • Visualize each camera’s output and adjust manual lenses.
  • See camera settings available to each webcam in the software (ex: exposure), update the settings, and save for future use.
  • Capture videos:
    • Each video will output as an .avi video file.
    • Multithreading across the CPU is used to improve capture speed.
    • The software checks for duplicate frames.
  • Get timestamps:
    • Each frame is timestamped, and data is saved as a .csv file.
    • Timestamp data can be visualized to help detect any performance drops.
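
As a quick illustration of that last feature, the inter-frame intervals from a saved timestamp file can be plotted in a few lines of Python. The file name and the one-timestamp-per-row layout below are assumptions made for this sketch; ‘Visualize Timestamps Example.ipynb’ (described later) is the supported way to do this.

import csv
import matplotlib.pyplot as plt

# Assumed file name and layout: one timestamp (in seconds) per row.
timestamps = []
with open('camera0_timestamps.csv') as f:
    for row in csv.reader(f):
        timestamps.append(float(row[0]))

# Consecutive timestamps should sit near 1/framerate apart;
# spikes in this plot indicate dropped or delayed frames.
intervals = [later - earlier for earlier, later in zip(timestamps, timestamps[1:])]
plt.plot(intervals)
plt.xlabel('frame number')
plt.ylabel('inter-frame interval (s)')
plt.show()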

The Jupyter Notebook code provided below was written as an example for a laptop running Ubuntu with a webcam and a PlayStation Eye camera attached. The code is set to record only from the PS3Eye camera. Modifications to this setup (ex: recording from three PS3Eye cameras ‘simultaneously’) can be made by the user. You need not understand all the code, and we hope our documentation shows you how to quickly change the parameters to best suit your specific needs.

We have tested this code explicitly with a PS3Eye camera and a Logitech C270. Other basic webcams (ex: Logitech webcams) should work with this script, but note that settings (such as resolution, frame rate, and exposure) are unique to each webcam. The project also assumes that all cameras in use are of the same type; mixing and matching different webcam types is not supported.

To optimize performance, close all other programs on your computer and unnecessary tabs in your browser before running. The computational power required to process the videos can be a significant bottleneck.


Workflow:

  1. One-time installation
  2. Determine cameras detected by the system
  3. Initialize the cameras
  4. Load appropriate camera settings
  5. Turn on camera feed for visual check of angles and settings
  6. Capture videos
  7. Visualize framerate to get an idea of performance
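
Putting those steps together, the sketch below shows roughly how the workflow maps onto the MultiCam class from video_capture_py.py. Apart from load_cam_params, which is a documented method, the method names here are hypothetical placeholders; video_capture_example.ipynb demonstrates the real calls.

# Hypothetical workflow sketch; only MultiCam and load_cam_params are
# confirmed names. See video_capture_example.ipynb for the actual API.
from video_capture_py import MultiCam

mc = MultiCam()                                  # steps 2-3: detect and initialize cameras
mc.load_cam_params('default_ps3eye_params.txt')  # step 4: load camera settings
mc.show_feed()                                   # step 5 (hypothetical): check angles; press 'Q' to close
mc.capture()                                     # step 6 (hypothetical): record videos and timestamps
mc.plot_timestamps()                             # step 7 (hypothetical): visualize framerate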

Overview of files

  • video_capture_py.py : This is the Python code at the heart of the project, and contains the MultiCam class used for video capture.
  • video_capture_example.ipynb : A Jupyter Notebook that demonstrates how to capture timestamped video files with the included library.
  • Visualize Timestamps Example.ipynb : A Jupyter Notebook that demonstrates how to visualize recorded timestamps to observe frame drops/stutter.
  • gfd : A folder from Gilbert Francois’s Multithreaded video capture with OpenCV and Python project. This contains the code that allows for multithreading.
  • default_ps3eye_params.txt : Contains example parameters for the PS3Eye camera that can be loaded using the load_cam_params method. A useful starting point that can be modified with a text editor before loading in order to change camera parameters.
  • __pycache__ : A folder that can be ignored (Python’s compiled bytecode cache).

Installation:

If you don’t have Jupyter Notebook installed yet, we recommend getting it by installing Anaconda on your system. To install Anaconda, go to https://www.anaconda.com/products/individual and download it; the installer will end up in your Downloads folder. Then run the commands given below (you may need to change the second line to match the exact filename of the Anaconda version you downloaded). If you are new to using the terminal, know that it can auto-complete the names of directories and files in the current folder when you press “Tab”. So once you have navigated to the Downloads folder (see the ‘cd’ command below), you can type bash Ana and hit “Tab”; if no other files or folders start with ‘Ana’, the terminal will fill in the rest of the filename.

cd Downloads
bash Anaconda3-2021.05-Linux-x86_64.sh
cd ~

All code can be downloaded as a zip here. However, we recommend installing via the terminal (open one with Ctrl-Alt-T) as follows:

sudo apt-get update
sudo apt-get upgrade
sudo apt-get install v4l-utils
conda create --name camera_capture3 python=3.6
conda activate camera_capture3
pip install opencv-python==4.3.0.36
pip install PyQt5==5.15.2
pip install matplotlib
pip install jupyter
cd Documents
git clone https://gitlab.com/OptogeneticsandNeuralEngineeringCore/cameracode
jupyter notebook

Note that you may need to add the location of the “opencv-python” distribution package to your path, or else the “Import Camera Capture Code” section will fail to locate the module. The package will most likely be found in ‘/usr/local/lib/python3.8/dist-packages’, and the code already adds this directory to your path automatically. If that does not work and you receive an “ImportError: No module named ‘cv2’”, you may need to search your system for the package’s location.
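
If you do hit that ImportError, a minimal workaround is to add the directory to the search path before the import (using the path above, or whichever location your search turns up):

import sys
# Adjust this path to wherever opencv-python lives on your system.
sys.path.append('/usr/local/lib/python3.8/dist-packages')
import cv2
print(cv2.__version__)  # prints 4.3.0 if the pinned version loaded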


Run an Example to Capture Videos in Jupyter Notebook

[Screenshot: jupyter_notebook_pic.PNG]

Start a Jupyter Notebook with the command:

jupyter notebook

Your default browser will open with the filesystem tree, usually starting in your home directory unless you have changed your current working directory. You can open a Notebook by navigating to the location where you downloaded the file and clicking on it, then clicking the ‘Run’ button. ‘Run’ executes each cell of the Notebook sequentially, or you can skip around if you want. More information on using Jupyter Notebooks can be found here.

Navigate to and open ‘video_capture_example.ipynb’. Follow the example by reading the text and updating cells/sections as necessary. Run a cell/section by clicking the Run button at the top.

This code allows the user to determine cameras detected by the system, initialize the cameras, load appropriate camera settings, turn on the camera feed for visual check of angles and settings, capture videos, and visualize video framerates based on created timestamps files. See the Jupyter Notebook (.ipynb) for more information.
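
The Notebook handles camera detection for you, but if you want to sanity-check which cameras the system sees, plain OpenCV can probe the video device indices. This is a generic sketch, not the project’s own detection code (v4l2-ctl --list-devices, from the v4l-utils package installed above, reports similar information):

import cv2

# Probe the first few /dev/video indices; each working webcam should
# open and return a frame.
for index in range(4):
    cap = cv2.VideoCapture(index)
    if cap.isOpened():
        ok, _ = cap.read()
        print(f"Camera {index}: {'frame received' if ok else 'opened, but no frame'}")
    cap.release()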


Tips for Live Feed Windows

Note that if running the code from Jupyter Notebook, the live video camera feeds may appear in windows ‘behind’ the Notebook browser page. Click on a live feed window to bring it to the front, then press the ‘Q’ key while in the viewing window to stop the video feed and recording. Note that you cannot simply close it with the “X” button as with a normal window; doing so will just close and reopen the video capture window.
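
This behavior follows from how OpenCV display loops commonly work: the window is redrawn every frame and the keyboard is polled with cv2.waitKey, so only a key press, not the window manager’s close button, breaks the loop. A generic sketch of the pattern (not the project’s exact code):

import cv2

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    cv2.imshow('live feed', frame)
    # Clicking "X" only destroys the window; imshow recreates it on the
    # next pass. Pressing 'q' is what actually exits the loop.
    if (cv2.waitKey(1) & 0xFF) == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()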

Troubleshooting and Contacting Us

The following is one common error: AttributeError: 'NoneType' object has no attribute 'copy'

This error indicates that communication with the webcam has been interrupted and is not being reestablished. The most reliable solution is to unplug the webcams, plug them back in, and (!) restart the Jupyter Notebook kernel.
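
For context, the error arises from OpenCV’s read() call: when communication drops, read() returns None in place of a frame, and a subsequent frame.copy() raises the AttributeError. A minimal illustration of the failure mode:

import cv2

cap = cv2.VideoCapture(0)
ok, frame = cap.read()
# When the USB connection is interrupted, read() returns ok == False and
# frame is None, so any later frame.copy() raises the AttributeError above.
if not ok or frame is None:
    raise RuntimeError("No frame from webcam; replug it and restart the kernel")
snapshot = frame.copy()
cap.release()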

If the code breaks on you, you can try emailing us. Alternatively, you are more than welcome to take a look inside the accompanying code and see if you can find the problem. We tried to make the code readable, so finding the problem yourself could be easier than waiting for us to email you back when we can find the time. See the ONE Core website for more information.

Special thanks to Aidan Armstrong.


ONE Core Acknowledgement

Please acknowledge the ONE Core facility in your publications. An appropriate wording would be:

“The Optogenetics and Neural Engineering (ONE) Core at the University of Colorado School of Medicine provided engineering support for this research. The ONE Core is part of the NeuroTechnology Center, funded in part by the School of Medicine and by the National Institute of Neurological Disorders and Stroke of the National Institutes of Health under award number P30NS048154.”