Demo of Mice Behavior Recognition

Comparison with commercial software and human performance

                                              our system    CleverSys commercial system    Human
  ‘Set B’ (about 1.6 hours of video)            77.3 %*              60.9 %               71.6 %+
  ‘full database’ (over 10 hours of video)      78.3 %               61.0 %                 N/A

*The demo shows the system's predictions on two videos, one recorded during the day (left) and one at night (right).

+The demo shows two sets of labels from two annotators; where the two annotations agree, only one label is shown.

Two videos are shown: day (left) and night (right).
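
The accuracies in the table can be read as frame-by-frame agreement between the system's predicted labels and human annotation. Below is a minimal sketch of such an agreement computation; the label names and the toy sequences are hypothetical and are not the actual evaluation code of the system.

```python
# Sketch: frame-wise agreement between two per-frame label sequences.
# Label names and sequences below are made up for illustration only.

def frame_agreement(predicted, reference):
    """Fraction of frames on which the two label sequences agree."""
    assert len(predicted) == len(reference), "sequences must cover the same frames"
    matches = sum(p == r for p, r in zip(predicted, reference))
    return matches / len(reference)

if __name__ == "__main__":
    # Hypothetical per-frame labels for a short clip.
    system    = ["rest", "rest", "eat", "eat", "groom", "groom", "walk"]
    annotator = ["rest", "rest", "eat", "rear", "groom", "groom", "walk"]
    print(f"agreement: {frame_agreement(system, annotator):.1%}")  # -> 85.7%
```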

The main sources of confusion for human annotators arose from:

  • Limited resolution and viewpoint:
    • eat vs. rear: When a mouse stands against the back wall of the cage (rearing), it looks very similar to a mouse reaching for the foodhopper (eating), because in both cases the mouse’s head appears to touch the foodhopper when viewed from the front of the cage (where the camera is placed).
    • micro-movement vs. grooming: When the mouse sits with its back to the camera while grooming, it appears to move only its head slowly and is therefore annotated as “micro-movement”.
  • Ambiguity of actions
    • micro-movement vs. walk: Small movements of a mouse’s limbs (micro-movement) sometimes result in a slow change of position and are therefore annotated as “walking”.
    • grooming vs. eating: Chewing (eating) usually follows retrieval of food from the foodhopper (eating). If this temporal association is neglected, the appearance of chewing (rearing up with the fore-limbs sweeping across the face) does look similar to grooming. Some annotators apparently assign the most plausible category to each frame independently, without considering this temporal context (a simple illustration appears after this list).
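
As the last point suggests, per-frame appearance alone can be ambiguous, and temporal context helps disambiguate. The sketch below illustrates one simple heuristic: re-label an ambiguous “groom”-looking segment as eating when it closely follows food retrieval from the hopper. This is only an illustrative example, not the system’s actual temporal model; the window size and label names are assumptions.

```python
# Illustrative heuristic only: use temporal context to disambiguate labels.
# If an ambiguous "groom"-looking run starts shortly after the mouse was seen
# reaching the foodhopper ("eat"), relabel the run as "eat" (chewing).

def relabel_with_context(labels, window=30):
    """Relabel 'groom' runs as 'eat' when preceded by 'eat' within `window` frames."""
    out = list(labels)
    i = 0
    while i < len(out):
        if out[i] == "groom":
            # Find the extent of this grooming run.
            j = i
            while j < len(out) and out[j] == "groom":
                j += 1
            # Check whether eating occurred shortly before the run started.
            recent = out[max(0, i - window):i]
            if "eat" in recent:
                out[i:j] = ["eat"] * (j - i)
            i = j
        else:
            i += 1
    return out

if __name__ == "__main__":
    frames = ["eat"] * 5 + ["groom"] * 4 + ["rest"] * 3
    print(relabel_with_context(frames, window=10))
    # The 'groom' run right after eating is re-labeled as 'eat' (chewing).
```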

The system also applies to a natural home-cage environment (with bedding and nesting material in the cage).

The motion-based, trainable system can also be trained to recognize complex mouse behaviors involving interaction with an object.