Malformed node or string error from NN classifier

Hello Edgy Impulsers :slight_smile:
A little bit of background: I am using EI for an entomology project. As a trial, I labelled some bee and wasp pics (~600). Object detection worked flawlessly, but the NN classifier throws a "malformed node or string" error at me. I am not doing anything fancy: only two labels, and the NN has a single dense hidden layer and takes Raw images as input. Any help to understand why EI is unhappy with my settings would be very welcome.
Full error message below.
```
Job started
Splitting data into training and validation sets...
Traceback (most recent call last):
  File "/home/train.py", line 252, in <module>
    main_function()
  File "/home/train.py", line 113, in main_function
    mode, RANDOM_SEED, dir_path, 0.2)
  File "./resources/libraries/ei_tensorflow/training.py", line 21, in split_and_shuffle_data
    X = np.load(os.path.join(dir_path, 'X_train_features.npy'), mmap_mode='r')
  File "/usr/local/lib/python3.6/dist-packages/numpy/lib/npyio.py", line 437, in load
    return format.open_memmap(file, mode=mmap_mode)
  File "/usr/local/lib/python3.6/dist-packages/numpy/lib/format.py", line 855, in open_memmap
    shape, fortran_order, dtype = _read_array_header(fp, version)
  File "/usr/local/lib/python3.6/dist-packages/numpy/lib/format.py", line 583, in _read_array_header
    d = safe_eval(header)
  File "/usr/local/lib/python3.6/dist-packages/numpy/lib/utils.py", line 1007, in safe_eval
    return ast.literal_eval(source)
  File "/usr/lib/python3.6/ast.py", line 85, in literal_eval
    return _convert(node_or_string)
  File "/usr/lib/python3.6/ast.py", line 66, in _convert
    in zip(node.keys, node.values))
  File "/usr/lib/python3.6/ast.py", line 65, in <genexpr>
    return dict((_convert(k), _convert(v)) for k, v
  File "/usr/lib/python3.6/ast.py", line 84, in _convert
    raise ValueError('malformed node or string: ' + repr(node))
ValueError: malformed node or string: <_ast.Name object at 0x7fa07e8c80f0>
```
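Digging into the traceback: the failure happens while NumPy parses the header of X_train_features.npy, which it evaluates with ast.literal_eval. The minimal sketch below (the header strings are made up for illustration) reproduces the exact error, which suggests the features file was written with a corrupted or truncated header:

```python
import ast

# ast.literal_eval only accepts Python literals. A well-formed .npy
# header is a dict literal and parses fine:
good = "{'descr': '<f4', 'fortran_order': False, 'shape': (600, 307200)}"
print(ast.literal_eval(good))

# If the header is corrupted so that a bare name appears where a
# literal should be, _convert() hits an ast.Name node and raises
# exactly the error in the traceback above:
bad = "{'descr': dtype, 'fortran_order': False, 'shape': (600, 307200)}"
try:
    ast.literal_eval(bad)
except ValueError as err:
    print(err)  # ValueError: malformed node or string: <... Name object ...>
```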

Update:
After rerunning the whole pipeline, I still get an error at the classifier stage, but a different one. The mystery thickens…

```
Creating job... OK (ID: 1287636)

Copying features from processing blocks...
Copying features from DSP block...
Copying features from DSP block OK
Copying features from processing blocks OK

Job started
Splitting data into training and validation sets...
Splitting data into training and validation sets OK
Traceback (most recent call last):
  File "/home/train.py", line 252, in <module>
    main_function()
  File "/home/train.py", line 121, in main_function
    SPECIFIC_INPUT_SHAPE)
  File "./resources/libraries/ei_tensorflow/training.py", line 185, in get_datasets
    train_dataset = get_dataset_standard(X_train, Y_train)
  File "./resources/libraries/ei_tensorflow/training.py", line 118, in get_dataset_standard
    output_shapes=(tf.TensorShape(X_values[0].shape), tf.TensorShape(Y_values[0].shape)))
AttributeError: 'dict' object has no attribute 'shape'

Application exited with code 1 (Error)

Job failed (see above)
```
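If I read the traceback right, the failure is now in the dataset construction: training.py derives output_shapes from X_values[0].shape, which only works when each sample is a NumPy array. A rough, runnable sketch of the failing pattern (names borrowed from the traceback; the real Edge Impulse code surely differs in detail):

```python
import numpy as np
import tensorflow as tf

# Toy stand-ins for the features the DSP block should produce:
# one flat float array per image, one one-hot label per image.
X_values = np.random.rand(10, 320 * 320).astype(np.float32)
Y_values = np.eye(2, dtype=np.float32)[np.random.randint(0, 2, 10)]

def generator():
    for x, y in zip(X_values, Y_values):
        yield x, y

# Mirrors training.py line 118: output_shapes is derived from the
# first sample, so X_values[0] must be an ndarray. If the copied
# features deserialize to dicts instead, .shape raises the
# AttributeError seen above.
dataset = tf.data.Dataset.from_generator(
    generator,
    output_types=(tf.float32, tf.float32),
    output_shapes=(tf.TensorShape(X_values[0].shape),
                   tf.TensorShape(Y_values[0].shape)))

for x, y in dataset.take(1):
    print(x.shape, y.shape)  # (102400,) (2,)
```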

Hello @Thomas_Launey,

From what I see in your project 46696, you are training your Neural Network on the raw data. Unfortunately, your images do not all have the same height and width. You would need to preprocess your data with an Image processing block before passing the generated features to the NN.
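A quick way to check for this locally before uploading (the dataset path below is just an example):

```python
from pathlib import Path
from PIL import Image

# List every distinct (width, height) in the dataset; more than one
# entry means the raw pixel vectors have different lengths and
# cannot be stacked into a single training matrix.
sizes = {Image.open(p).size for p in Path("dataset").rglob("*.jpg")}
print(sizes)
```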

Regards,

Hello Louis,
Thank you for helping me out again. In "Create impulse" I did set Image width and Image height to 320 and used the resize mode "Fit longest axis". I assumed that this would rescale each image and pad the shorter dimension (?), as in the sketch below. Regarding "From what I see in your project 46696": I am not sure we are looking at the same one; the one I am trying to build is named "Horny_Hornet" (we study insect mating…).
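For reference, my mental model of "Fit longest axis" is this (an assumption on my part; the platform's actual padding colour and placement may differ):

```python
import numpy as np
from PIL import Image

def fit_longest_axis(img: Image.Image, target: int = 320) -> np.ndarray:
    """Scale so the longest side equals `target`, then pad the rest."""
    scale = target / max(img.size)
    new_w, new_h = round(img.width * scale), round(img.height * scale)
    resized = img.resize((new_w, new_h), Image.BILINEAR)
    canvas = Image.new("RGB", (target, target))  # black padding
    canvas.paste(resized, ((target - new_w) // 2, (target - new_h) // 2))
    return np.asarray(canvas)  # always (320, 320, 3)
```

Thanks a lot. Thomas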

@Thomas_Launey For object detection we currently don't support custom classification blocks (these should actually be hidden in the interface), so only Impulse + Object Detection (Images) is supported.

We'll be fixing this in the future, but currently Raw data + Classifier is not supported for this data. You can do it in a non-object-detection project.

@Jan, @Louis, thank you very much for the very responsive support. I see improvements and new features added to EI almost daily. Keep up the very good work. Cheers

@janjongboom @louis
I briefly saw MobileNet V2 96x96 in the object detection options (?). That would be perfect for my insect pics. Was it a test on your side, or something you plan to add in the future? The future is exciting then :-). Cheers, Thomas

@Thomas_Launey, yeah, we're working on smaller object detection models, but they'll have a new architecture, so I can't give a release date yet :slight_smile:

Oooh, this sounds very good for the future of my project. We get videos of flying insects in the field (literally), so the ROIs are often quite small. We will step up the effort to label a (much) bigger training set, to be ready when you release the new detection model :smile: Cheers

@Thomas_Launey Yeah, that sounds right up our alley. Look at this work by @matkelcey from our ML team, on which some of the new work is based: https://matpalm.com/blog/counting_bees/

@janjongboom Thank you very much for the link to Mat Kelcey's work. The hardware and software are very close to what we are trying to achieve. For the inference part, we are actually considering using a hardware accelerator to get video-rate detection (in our case, the Coral stick).
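In case it helps others, the runtime side we have in mind is roughly this sketch (the model filename is a placeholder, and the model would first need to be compiled for the Edge TPU):

```python
import numpy as np
import tflite_runtime.interpreter as tflite

# Load a TFLite model with the Edge TPU delegate (Coral USB stick).
interpreter = tflite.Interpreter(
    model_path="model_edgetpu.tflite",  # placeholder filename
    experimental_delegates=[tflite.load_delegate("libedgetpu.so.1")])
interpreter.allocate_tensors()

inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

# Feed one frame shaped/typed as the model expects, then read back
# the raw output tensor. Real pre/post-processing depends on the model.
frame = np.zeros(inp["shape"], dtype=inp["dtype"])
interpreter.set_tensor(inp["index"], frame)
interpreter.invoke()
print(interpreter.get_tensor(out["index"]))
```

Again, thank you for the help; this goes way beyond traditional "support" :smile: Cheers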

Sounds exciting! I actually did some work with an accelerator back then, the Movidius Neural Compute Stick, but never fully integrated it (the code is still there on an abandoned branch). I was going to port it to the Coral but never got back to it.

If you are able to share some general stats on your project, e.g. image sizes, variance in light levels, and what you expect from the model output, that'd be handy; it helps us plan and configure base models to suit these kinds of use cases as we build the tech out. I'd love to build a model like this that works on any hive in the world. Totally doable, just needs the data to back it :smiley:
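Even a quick script like this would cover the basics (paths are hypothetical):

```python
from pathlib import Path
import numpy as np
from PIL import Image, ImageStat

# Gather per-image size and brightness across the whole dataset.
sizes, brightness = [], []
for p in Path("hive_footage").rglob("*.jpg"):  # hypothetical path
    img = Image.open(p)
    sizes.append(img.size)
    brightness.append(ImageStat.Stat(img.convert("L")).mean[0])

print("distinct sizes:", set(sizes))
b = np.array(brightness)
print(f"brightness mean {b.mean():.1f}, std {b.std():.1f} (0-255 scale)")
```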


@matkelcey Dear Mat,
We would be very happy to pursue this with you. We have a side project on hornet predation strategies in which we plan to track both hornets and bees in front of beehives. This implies a high frame rate (video rate or better) and on-site data reduction. Your approach of detecting the centre of mass is very appealing to us. If we can combine it with apparent size, this may give us a 3D coordinate estimate for bees and hornets (taking orientation into account); a back-of-the-envelope version is sketched below. To simplify detection, we are building a standard imaging "box" in the space in front of a row of hives, delimited by uniform green panels (to avoid false positives from graminids and dandelions :smile:). We are currently collecting still pics and videos with RPi4 + V2 cams from hives in different locations, with different orientations, sun angles and times of day. I can give you access to our repository.
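The apparent-size idea, made concrete with a simple pinhole model (all constants are illustrative assumptions, not calibrated values):

```python
# Pinhole-camera sketch of "centre of mass + apparent size -> 3D".
FOCAL_PX = 1000        # assumed focal length in pixels (camera-specific)
BEE_LENGTH_M = 0.013   # assumed real body length of a bee

def estimate_xyz(cx, cy, apparent_px, img_w=1920, img_h=1080):
    """Back-project a detection (pixel centre + apparent length) to metres."""
    z = FOCAL_PX * BEE_LENGTH_M / apparent_px  # depth from similar triangles
    x = (cx - img_w / 2) * z / FOCAL_PX        # lateral offset
    y = (cy - img_h / 2) * z / FOCAL_PX        # vertical offset
    return x, y, z

print(estimate_xyz(cx=1100, cy=500, apparent_px=40))  # ~0.33 m away
```

Cheers, Thomas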

Hi again! Looping back to this thread, as we've just launched FOMO v1, which would be a good first step for this project. Would love to hear whether you think it'll fit, and if not, what we'd need to add to get it going!

On the topic of 3D localisation, I reckon another option would be to add a second camera (if it's at all possible), at least to start with. There are some fusion tricks we can do to estimate depth then. Longer term it'd be better with a single camera, but I've found a stereo setup can really kick-start things.
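The core of that fusion is just the classic disparity-to-depth relation; a toy sketch (baseline and focal length are made-up values):

```python
# Classic stereo relation: depth Z = f * B / d, where f is the focal
# length in pixels, B the baseline between cameras, d the disparity.
FOCAL_PX = 1000    # assumed focal length in pixels
BASELINE_M = 0.10  # assumed distance between the two cameras

def depth_from_disparity(x_left_px, x_right_px):
    disparity = x_left_px - x_right_px  # same detection matched in both frames
    return FOCAL_PX * BASELINE_M / disparity

print(depth_from_disparity(620, 580))  # 40 px disparity -> 2.5 m
```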

Cheers, Mat.