Does anyone have experience annotating images at a frame level (e.g., weather/lighting) rather than at the track or detection level? If so, any advice on how to configure this without having to enter these values repeatedly for individual detections?
There are utility pipelines to help with annotating full-frame events: they populate fixed time intervals with full-frame (empty) annotations, which can then be assigned their types. They are located in the utility pipeline dropdown (empty frame labels 1fr, 10fr, 100fr, etc…). These full-frame labels can also be split with the track split function if the specified event occurs within a larger fixed time interval. There are also training pipelines for full-frame events in the training pipeline selector (train_frame_classifier_ …).
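For anyone who wants to pre-populate full-frame labels outside the GUI utilities, here is a minimal Python sketch of the same idea: one full-frame (whole-image) annotation per fixed frame interval, with a placeholder type to be edited later. The CSV column layout, file name, and defaults below are assumptions, not the tool's official format, so adjust them to match whatever annotation schema your project actually imports.

```python
import csv

def write_frame_labels(out_path, num_frames, interval=100,
                       frame_size=(1920, 1080), default_label="unlabeled"):
    """Write one full-frame annotation every `interval` frames (assumed CSV layout)."""
    width, height = frame_size
    with open(out_path, "w", newline="") as f:
        writer = csv.writer(f)
        # Header comment describing the assumed columns.
        writer.writerow(["# track_id", "image_or_video_id", "frame",
                         "tl_x", "tl_y", "br_x", "br_y",
                         "confidence", "length", "class", "class_confidence"])
        for track_id, frame in enumerate(range(0, num_frames, interval), start=1):
            # Full-frame box covering the whole image; the class is assigned later
            # during annotation (e.g., a weather or lighting label).
            writer.writerow([track_id, "", frame,
                             0, 0, width, height,
                             1.0, -1, default_label, 1.0])

if __name__ == "__main__":
    # Example: a 1000-frame clip with one full-frame label every 100 frames,
    # analogous to the "empty frame labels 100fr" utility pipeline.
    write_frame_labels("frame_labels.csv", num_frames=1000, interval=100)
```

The interval trades effort against resolution: a coarse interval (100fr) keeps labeling fast, and individual labels can still be split where a condition changes mid-interval, which mirrors the track split workflow described above.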