Consider moving rig testing and calibration workflows to reusable acquisition package #308

Open
glopesdev opened this issue Aug 31, 2023 · 2 comments
Labels: discussion (Requires discussion), proposal (Proposal for a feature implementation), quality control (Quality control of data acquisition)

glopesdev commented Aug 31, 2023

The growing number and complexity of foraging arena components require the development of self-contained hardware and system integration tests to ensure both the correct functioning and the calibration of rig elements, e.g. feeder delivery and torque measurements, RFID testing, camera calibration, etc.

These tests cut across the different experimental protocols, since they are associated with quality control of the entire arena structure. As such, it makes sense to have some kind of reusable, pluggable module that can be included in any protocol.

Considerations:

  • These tests include not just data acquisition protocols, but also analysis notebooks and/or criteria for validating correctness.
  • Unlike software unit tests, these tests require interaction with physical systems, so they are unfortunately not amenable to CI workflows per se.
  • They need to be linked and versioned together with the experimental protocol, so researchers can verify which tests were used to calibrate the rig at any point in time.
  • Ideally they would be easily runnable on different arena configurations by changing a few parameters, rather than keeping redundant copies of the test structure (see the sketch after this list).

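To illustrate the last three points, here is a minimal sketch of what a parameterized, versioned calibration criterion could look like. The criterion, column names, threshold, and file layout are assumptions for illustration only, not an existing aeon API: the check is parameterized by arena configuration, and the verdict is stored next to the commit hash of the test code that produced it.

```python
# Hypothetical sketch of a parameterized, versioned calibration criterion.
# Column names, threshold, and output layout are illustrative assumptions.
import json
import subprocess
from pathlib import Path

import pandas as pd

FEEDER_TOLERANCE_G = 0.05  # assumed acceptance threshold on mean pellet weight error


def feeder_delivery_ok(deliveries: pd.DataFrame, expected_weight_g: float) -> bool:
    """Pass if the mean delivered weight is within tolerance of the expected weight."""
    return abs(deliveries["weight_g"].mean() - expected_weight_g) <= FEEDER_TOLERANCE_G


def record_result(output_dir: Path, arena_id: str, passed: bool) -> None:
    """Store the verdict together with the commit hash of the test code that produced it."""
    commit = subprocess.check_output(["git", "rev-parse", "HEAD"], text=True).strip()
    report = {"arena": arena_id, "passed": passed, "test_commit": commit}
    output_dir.mkdir(parents=True, exist_ok=True)
    (output_dir / f"feeder_qc_{arena_id}.json").write_text(json.dumps(report, indent=2))
```

Running the same criterion on a different arena then only means changing `arena_id` and `expected_weight_g`, rather than copying the whole test.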
Proposals:

Note

The options below are not mutually exclusive, and combinations might be advantageous; e.g. reusable acquisition testing modules could be included in the running environments powering the aeon tests repo.

New experiments branch (tests)

Pros:

  • Follows the existing structure for running aeon "experiments": just switch to the testing branch and run the workflows
  • Fastest to implement: it's what we have already
  • Can store everything without waiting for further developments, including notebooks, custom scripts, etc.

Cons:

  • No guarantee that tests are synchronized with the actual experiments that are running
  • Harder to version the provenance of tests (who stores the commit hash or version of the test used to calibrate feeders for a specific experiment?)

New aeon package (Aeon.Testing)

Pros:

  • Easy to version and distribute reusable modules (included directly in the Bonsai environment for each experiment)
  • The association between experiments and tests is explicit: workflows in each branch explicitly link to and parameterize the reusable test modules for the exact hardware running on the rigs

Cons:

  • Still requires some association between tests and rig hardware configuration: Bonsai does not yet have a concept of "runnable" packages, so we cannot currently deploy these workflows as self-contained applications
  • Unclear how to deploy notebook / Python analysis together with the reusable modules

New aeon repository (aeon_tests)

Pros:

  • Reusable testing modules can be entire workflows that run in self-contained mode
  • Since it is a repository, it can contain notebooks and artifacts in multiple languages

Cons:

  • Needs a way to explicitly associate the tests with a specific experiment and arena configuration

Include tests together with hardware module repositories

Pros:

  • Separation of concerns: each hardware module worries about its own tests

Cons:

  • Sometimes the development repos for certain components are outside of our control, or the integration might be aeon-specific, so we might not be able to impose a reasonable test in the component repo itself
  • We still need some way to connect the test results with the specific arena configuration / hardware: results should be documented and versioned per arena and per experiment

@jkbhagatio commented:

Comments from #308 (closed as duplicate):

from @RoboDoig

aeon_experiments should have a standard structure for tracking benchmarking and calibration experiments related to the main aeon experiments.

The proposal is to have a branch off main called 'rig-qc'. For specific benchmarking experiments, a new branch is created off 'rig-qc'. The folder structure for benchmark workflows/analysis/data is as follows (a scaffolding sketch follows the list):

workflows

  • tests
  • Readme (general readme for how to set up benchmark folder)
  • bonsai (contains env for the benchmark)
  • analysis
  • qc-workflows
  • Readme (purpose of benchmark, data location on ceph etc.)
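To make the proposed layout concrete, here is a minimal scaffolding sketch. The folder names are taken from the list above, but the nesting (per-benchmark folders under workflows/tests) and the Readme file names are assumptions about how the flat list maps to actual folders, so treat this as a sketch rather than the agreed structure.

```python
# Hypothetical sketch: scaffold the proposed rig-qc benchmark layout.
# The nesting (per-benchmark folders under workflows/tests) is an assumption.
from pathlib import Path


def scaffold_benchmark(repo_root: Path, benchmark_name: str) -> None:
    """Create the assumed benchmark folder skeleton under the repo root."""
    tests = repo_root / "workflows" / "tests"
    tests.mkdir(parents=True, exist_ok=True)
    (tests / "Readme.md").touch()  # general readme: how to set up a benchmark folder
    bench = tests / benchmark_name
    # bonsai/ holds the environment for the benchmark; analysis/ and qc-workflows/ hold the rest
    for sub in ("bonsai", "analysis", "qc-workflows"):
        (bench / sub).mkdir(parents=True, exist_ok=True)
    (bench / "Readme.md").touch()  # purpose of benchmark, data location on ceph, etc.


scaffold_benchmark(Path("aeon_experiments"), "feeder-calibration")
```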

from @jkbhagatio

Actually, if we're not enforcing this for QC stuff via the LogController, and if we end up having and keeping many branches off of 'rig-qc' that never get merged back in, I may be inclined to go back to having all QC stuff in a separate repo. Let's discuss further.

@jkbhagatio commented:

Every QC project / folder should have at a minimum:

  • Bonsai folder (includes Bonsai.config, etc.)
  • Analysis folder
    • Analysis notebooks
    • Static reference to the data used for this QC analysis
  • Bonsai workflows

This would roughly follow the structure in the root of the 'aeon_experiments' repo:
[screenshot: root folder structure of the 'aeon_experiments' repo]
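As a sketch of how this minimum structure could be checked automatically, something along the following lines could accompany each QC project. Bonsai.config and the *.bonsai extension are standard Bonsai artifacts; the data_reference.txt name and the exact folder names are assumptions for illustration.

```python
# Hypothetical sketch: check that a QC project folder has the minimum contents listed above.
# data_reference.txt is an assumed name for the "static reference to the data used".
from pathlib import Path


def check_qc_project(root: Path) -> list[str]:
    """Return a list of problems; an empty list means the minimum structure is present."""
    problems = []
    if not (root / "bonsai" / "Bonsai.config").is_file():
        problems.append("missing bonsai/Bonsai.config")
    analysis = root / "analysis"
    if not analysis.is_dir() or not any(analysis.glob("*.ipynb")):
        problems.append("no analysis notebooks in analysis/")
    if not (analysis / "data_reference.txt").is_file():
        problems.append("missing static reference to the data used for this QC analysis")
    if not any(root.glob("*.bonsai")):
        problems.append("no Bonsai workflows in the project root")
    return problems
```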
