Details about mobilenet-ssd fine-tuning

Hi

Can I know the strategy of transfer learning used to fine-tune MobileNet SSD for object detection? Which layers are locked and which are not?

Also, are you using data augmentation behind the scenes while training? I can't find a data augmentation option anywhere for object detection.

Last question: can I create a processing block that is used during training only and disabled in the resulting (inference) model? I'm thinking of adding a block that adds noise/rotation, etc. to images during training only (in case data augmentation is not really there).

Hi @yahyatawil!

Our current object detection implementation is pretty simple—we’re working on a more sophisticated training process and more options around architecture. For now, here are the answers:

Can I know the strategy of transfer learning used to fine-tune MobileNet SSD for object detection? Which layers are locked and which are not?

We retrain both the box regression and classification heads of the model; we don’t touch the MobileNetV2 feature extractor.
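As an illustrative sketch (not Edge Impulse's actual training code) of the freeze-the-backbone pattern described above, here is how it looks in Keras. The head below is a placeholder stand-in; a real SSD head has separate box-regression and class-prediction branches.

```python
# Sketch of the freeze-the-backbone transfer learning pattern:
# the MobileNetV2 feature extractor is locked, and only the
# (placeholder) head receives gradient updates during training.
import tensorflow as tf


def build_frozen_backbone_model(num_classes=3, input_shape=(96, 96, 3)):
    backbone = tf.keras.applications.MobileNetV2(
        input_shape=input_shape, include_top=False, weights=None)
    backbone.trainable = False  # feature extractor stays locked

    # Placeholder head: in a real SSD these would be box-regression
    # and classification branches attached to several feature maps.
    head = tf.keras.Sequential([
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(num_classes),
    ])

    inputs = tf.keras.Input(shape=input_shape)
    outputs = head(backbone(inputs, training=False))
    return tf.keras.Model(inputs, outputs)
```

Because `backbone.trainable = False`, only the head's weights appear in the model's trainable variables; the feature extractor's weights are left untouched during fine-tuning.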

Also, are you using data augmentation behind the scenes while training? I can't find a data augmentation option anywhere for object detection.

We don’t currently do any augmentation—this is definitely on our list to add, in addition to adding hooks so it can be customized in Expert Mode.

Last question: can I create a processing block that is used during training only and disabled in the resulting (inference) model? I'm thinking of adding a block that adds noise/rotation, etc. to images during training only (in case data augmentation is not really there).

The best way to do this currently is just to materialize the augmentations in your dataset before uploading it using a custom script—for example, you might create 10 variants of each original sample. If you created the dataset in Studio you can export it, create the augmentations, and upload it again. Not as efficient as doing the augmentations during training, but it should be relatively easy.
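As a minimal sketch of the materialize-before-upload approach, assuming Pillow is installed (the paths, prefix scheme, and variant count here are just examples). Only photometric jitter is used, so each copy can reuse its original's bounding boxes unchanged; geometric transforms like rotation or flips would also require transforming the boxes.

```python
# Sketch: write N photometric variants of each image before uploading.
# Brightness/contrast jitter leaves bounding boxes valid; rotation or
# flipping would require rewriting the box coordinates as well.
import random
from pathlib import Path

from PIL import Image, ImageEnhance


def materialize_variants(src_path, out_dir, n_variants=10):
    """Save n_variants jittered copies of src_path into out_dir.

    Returns the new file paths so the labels file can be updated to
    reference them (each copy keeps the original's bounding boxes).
    """
    out_dir = Path(out_dir)
    out_dir.mkdir(parents=True, exist_ok=True)
    img = Image.open(src_path).convert("RGB")
    new_paths = []
    for i in range(n_variants):
        variant = ImageEnhance.Brightness(img).enhance(random.uniform(0.7, 1.3))
        variant = ImageEnhance.Contrast(variant).enhance(random.uniform(0.8, 1.2))
        dst = out_dir / f"aug{i}_{Path(src_path).name}"
        variant.save(dst)
        new_paths.append(dst)
    return new_paths
```

Running this over every exported image and then pointing the labels file at the new names gives you the "10 variants per sample" dataset described above.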

Hope this helps! Expect a lot of improvements to our object detection pipeline in the next few months :slight_smile:

Warmly,
Dan


Thanks, Daniel, for the detailed reply. It is really helpful.

@dansitu

I am writing a script to augment my exported EI project dataset, which will then be uploaded back to Edge Impulse.

Can I upload the modified .labels file with the augmented images so that EI Studio recognizes the bounding boxes automatically, or should I annotate the augmented images again?

@yahyatawil You can just edit the labels file and keep it in the same folder as the jpg files; it will then be picked up automatically by the CLI (this only works using the CLI).
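A sketch of editing the labels file, assuming the exported bounding_boxes.labels is JSON with a "boundingBoxes" object keyed by filename (check your own export for the exact schema; the prefix naming is just an example):

```python
# Sketch: give each augmented copy the same bounding boxes as its
# original. Assumes the .labels file maps filenames to lists of boxes
# under a "boundingBoxes" key; verify against your own export.
import json


def duplicate_boxes(labels_path, prefixes=("aug0_", "aug1_")):
    with open(labels_path) as f:
        labels = json.load(f)
    boxes = labels["boundingBoxes"]
    for name in list(boxes):  # list() so we can add keys while iterating
        for prefix in prefixes:
            boxes[prefix + name] = boxes[name]
    with open(labels_path, "w") as f:
        json.dump(labels, f)
```

The prefixes here should match however the augmented image files were renamed, so that each entry in the labels file points at a file that actually exists in the folder.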

I did the augmentation for the exported dataset. I added a prefix to each file in the directory, then updated the JSON file through a script with the new image names (I copied each image's bounding box info over to its augmented copy in the JSON file).

I now have the new JSON file, but I get a "Failed to upload bounding_boxes.json Missing protected header" error from edge-impulse-uploader.

However, as a small test I uploaded one new image through edge-impulse-uploader, manually added that image to the JSON file, and uploaded it, and that did work.

Below are the original .labels file and the modified .json file; to me they are identical (comparing them in JSON viewers). I even removed the spaces and newlines (\n) from the .json file to match the .labels one.

Any help?

@yahyatawil You should not upload the JSON file. You should have a folder structure with:

  1. yourfile.jpg
  2. bounding_boxes.labels, referencing yourfile.jpg

And then call:

edge-impulse-uploader yourfile.jpg

and that should be it.

This is the structure you should already get from the Dashboard > Export tab in your project.
