FOMO label format

I am trying to feed grayscale and depth data into the FOMO network to see how it behaves. My idea was to manipulate X_train_features.npy to fuse in the depth data, and to write the labels (generated through the EI labeler) into y_train.npy.
But I am now running into the issue that the format of my labels is wrong: it expects a “samples” key (I used the info.labels file).
So which format does y_train.npy have to be in? And do you think it could work to fuse the depth data and run FOMO?
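For reference, here is roughly what I mean by “fusing” — just a sketch with made-up shapes, assuming X_train_features.npy stores one flattened H×W×C window per row:

```python
import numpy as np

# Sketch of the fusing idea: stack a grayscale channel and a depth
# channel per pixel, then flatten each sample back into one row, the
# way X_train_features.npy stores flattened windows.
# Shapes are illustrative only (3 samples of 96x96).
H, W, N = 96, 96, 3
gray = np.random.rand(N, H, W).astype(np.float32)   # grayscale, 0..1
depth = np.random.rand(N, H, W).astype(np.float32)  # depth, normalised 0..1

fused = np.stack([gray, depth], axis=-1)  # (N, H, W, 2)
X_train_features = fused.reshape(N, -1)   # (N, H*W*2), one row per sample
print(X_train_features.shape)             # (3, 18432)

np.save('X_train_features.npy', X_train_features)
```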

Thanks in advance

Hi Bohnj,

If you run any unmodified FOMO project, that will give an example of how the npy files are expected to be formatted.

Do you have an example of the file you’re trying to construct?

Cheers, Mat

Thank you for your answer.
I am trying to train on my local machine with my own dataset (without uploading the data to EI every time I change anything).
I labeled all my data in EI, which was a very nice experience, and downloaded the label files from it.
If I understood correctly, it is possible to use data which is not yet split or shuffled (splitting is done by default if you download the training block locally). I am trying to figure out how to set up the labels file without the sampleId labels, which should be generated automatically when the data is split.

You do need the sampleIds to be present in y_true, since they are required to join back to the ground-truth x values…
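For reference, a minimal hand-written sketch of that label structure — a top-level "samples" list, one entry per sample, keyed by sampleId (the values are copied from the example output further down this thread):

```python
import json

# Minimal sketch of the expected label structure: a top-level "samples"
# list with a sampleId and bounding boxes per entry.
y_train = {
    'version': 1,
    'samples': [
        {'sampleId': 713162941,
         'boundingBoxes': [
             {'label': 1, 'x': 32, 'y': 52, 'w': 13, 'h': 12},
         ]},
    ],
}
print(json.dumps(y_train, indent=1))
```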

Here’s an example of using the API to get some of the required metadata to join things, since the lookup from sampleId to the original X image filename isn’t immediate. This should all run in a notebook, Colab, etc.

hope it helps!


# based on

import getpass
import json

import numpy as np
import requests

PROJECT_ID = 316474
DSP_ID = 3
EI_USER = "your_user_name"
EI_PASSWORD = getpass.getpass()

set up API cookies

headers = {
    'accept': "application/json",
    'content-type': "application/json"
}
url = ""  # API login endpoint
body = {
    'username': EI_USER,
    'password': EI_PASSWORD
}
response = requests.request('POST', url, headers=headers, json=body)

cookies = dict(jwt=response.json()['token'])

fetch the assets available for download

url = f"{PROJECT_ID}/downloads"
response = requests.request('GET', url, headers=headers, cookies=cookies)
downloads = json.loads(response.content.decode())
[{'name': 'Image training data',
  'type': 'NPY file',
  'size': '3 windows',
  'link': '/v1/api/316474/dsp-data/3/x/training'},
 {'name': 'Image training labels',
  'type': 'JSON file',
  'size': '3 windows',
  'link': '/v1/api/316474/dsp-data/3/y/training'},
 {'name': 'Object detection model',
  'type': 'TensorFlow Lite (float32)',
  'size': '83 KB',
  'link': '/v1/api/316474/learn-data/5/model/tflite-float'},
 {'name': 'Object detection model',
  'type': 'TensorFlow Lite (int8 quantized)',
  'size': '55 KB',
  'link': '/v1/api/316474/learn-data/5/model/tflite-int8'},
 {'name': 'Object detection model',
  'type': 'TensorFlow SavedModel',
  'size': '187 KB',
  'link': '/v1/api/316474/learn-data/5/model/tflite-savedmodel'},
 {'name': 'Object detection model',
  'type': 'Keras h5 model',
  'size': '90 KB',
  'link': '/v1/api/316474/learn-data/5/model/tflite-h5'}]

get x.npy

# get training data as a numpy array
SET = 'x/training'
url = f"{PROJECT_ID}/dsp-data/{DSP_ID}/{SET}"
response = requests.request('GET', url, headers=headers, cookies=cookies)
with open('foo', 'wb') as f:
    f.write(response.content)
x = np.load('foo')
x.shape
(3, 27648)

get y data, as json

# get y data as json
import json

SET = 'y/training'
url = f"{PROJECT_ID}/dsp-data/{DSP_ID}/{SET}"
response = requests.request('GET', url, headers=headers, cookies=cookies)

y = json.loads(response.content.decode())
{'version': 1,
 'samples': [{'sampleId': 713162941,
   'boundingBoxes': [{'label': 1, 'x': 32, 'y': 52, 'w': 13, 'h': 12},
    {'label': 1, 'x': 51, 'y': 52, 'w': 13, 'h': 12},
    {'label': 1, 'x': 55, 'y': 33, 'w': 13, 'h': 13}]},
  {'sampleId': 713162940,
   'boundingBoxes': [{'label': 1, 'x': 22, 'y': 58, 'w': 13, 'h': 12}]},
  {'sampleId': 713162939,
   'boundingBoxes': [{'label': 1, 'x': 40, 'y': 31, 'w': 13, 'h': 13},
    {'label': 1, 'x': 30, 'y': 54, 'w': 13, 'h': 12},
    {'label': 1, 'x': 68, 'y': 55, 'w': 15, 'h': 14}]}]}

fetch corresponding sampleids via metadata

# fetch corresponding sample ids via metadata
url = f"{PROJECT_ID}/dsp/{DSP_ID}/metadata"
response = requests.request('GET', url, headers=headers, cookies=cookies)
metadata = response.json()
sample_ids = [d['id'] for d in metadata['includedSamples']]
assert len(sample_ids) == x.shape[0]

map sampleids back to filenames

# map back from sample_ids to filenames
sample_id_to_filename = {}
for sample_id in sample_ids:
  url = f"{PROJECT_ID}/raw-data/{sample_id}"
  response = requests.request('GET', url, headers=headers, cookies=cookies)
  sample_orig_fname = response.json()['sample']['filename']
  sample_id_to_filename[sample_id] = sample_orig_fname
{713162941: 'cubes.jpg.23im597e.ingestion-6797d84bf-dn6tb',
 713162940: 'cubes.jpg.23ima5a3.ingestion-6797d84bf-mkqlt',
 713162939: 'cubes.jpg.23imaso8.ingestion-6797d84bf-mkqlt'}
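Putting it together, the join itself is then just a dict lookup. A self-contained sketch reusing the values from the snippets above (some boxes truncated for brevity); that the rows of x follow the includedSamples order is my assumption, which the earlier length assert also leans on:

```python
# Join x row indices to filenames and bounding boxes via sampleId.
# Assumes x rows are in the same order as metadata['includedSamples'].
sample_ids = [713162941, 713162940, 713162939]

y = {'version': 1,
     'samples': [
         {'sampleId': 713162941,
          'boundingBoxes': [{'label': 1, 'x': 32, 'y': 52, 'w': 13, 'h': 12}]},
         {'sampleId': 713162940,
          'boundingBoxes': [{'label': 1, 'x': 22, 'y': 58, 'w': 13, 'h': 12}]},
         {'sampleId': 713162939,
          'boundingBoxes': [{'label': 1, 'x': 40, 'y': 31, 'w': 13, 'h': 13}]}]}

sample_id_to_filename = {
    713162941: 'cubes.jpg.23im597e.ingestion-6797d84bf-dn6tb',
    713162940: 'cubes.jpg.23ima5a3.ingestion-6797d84bf-mkqlt',
    713162939: 'cubes.jpg.23imaso8.ingestion-6797d84bf-mkqlt'}

# index boxes by sampleId, then walk x rows in order
boxes_by_id = {s['sampleId']: s['boundingBoxes'] for s in y['samples']}
for row_idx, sid in enumerate(sample_ids):
    print(row_idx, sample_id_to_filename[sid], boxes_by_id[sid])
```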