I’ve been working on an image classification project (24610). All of my images are 96x96 pixels, and I wanted to downscale them to 48x48 using the “Crop” option in the input block. For some reason, “Crop” was not showing up, so I chose “Fit shortest axis” instead, curious about how the block would process the images. In the image block below “Create Impulse”, I can see (in RGB mode) that the processed images are smaller and lower quality, but they have not been cropped.

How are the blocks resizing the images? I would like to recreate the same resizing in an Arduino sketch. I had seen the “Crop” option in other projects and wanted to use it here, although the neural network still produced good results with “Fit shortest axis” selected.
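In case it helps frame the question: I don’t know which interpolation the block actually uses, but since my images are square, “Fit shortest axis” presumably reduces to a plain resize. A minimal nearest-neighbor downscale, which is just one guess at the method and only a sketch in plain C++ I could drop into an Arduino sketch (`resize_nearest` is my own hypothetical helper, not an Edge Impulse API), would look something like this:

```cpp
#include <cstdint>

// Hypothetical helper: downscale an image by nearest-neighbor sampling,
// one channel at a time (call once per R/G/B plane, or with channels=1
// for grayscale). The exact interpolation Edge Impulse uses may differ
// (e.g. bilinear or area averaging).
void resize_nearest(const uint8_t *src, int src_w, int src_h,
                    uint8_t *dst, int dst_w, int dst_h) {
  for (int y = 0; y < dst_h; y++) {
    int sy = y * src_h / dst_h;      // nearest source row
    for (int x = 0; x < dst_w; x++) {
      int sx = x * src_w / dst_w;    // nearest source column
      dst[y * dst_w + x] = src[sy * src_w + sx];
    }
  }
}
```

For a 96x96 source and a 48x48 destination this samples every second pixel, which would match the “smaller and lower quality” look I’m seeing, but I’d like to confirm the real algorithm so my on-device preprocessing matches the studio exactly.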