Expert Mode: Adding Extra Layers

I have noticed that classification vision models make it easy to add extra layers without worrying about writing the Keras code correctly. Also, power users can click the three dots (...) to get into Keras expert mode.
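To make that concrete: once you are in expert mode you are editing ordinary Keras code, so an extra layer is a one-line edit. Below is a minimal sketch of the idea, a generic Keras Sequential rather than the exact code Edge Impulse generates, with the input length and class count as placeholders:

```python
# A minimal sketch only -- NOT the exact code Edge Impulse generates.
# input_length and classes are placeholders for your project's values.
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout

input_length = 33   # placeholder: length of one preprocessed sample
classes = 3         # placeholder: number of output labels

model = Sequential([
    Dense(20, activation='relu', input_shape=(input_length,)),
    Dense(10, activation='relu'),
    Dropout(0.25),   # <-- the extra layer typed in by hand
    Dense(classes, activation='softmax'),
])
model.compile(optimizer='adam',
              loss='categorical_crossentropy',
              metrics=['accuracy'])
```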

I have also noticed that FOMO with bounding boxes does not have that simplified way to add layers:

  1. Is the simplified addition of layers coming soon?

  2. Does anyone have links to tutorials on how best to use “Expert Mode” for any data input (especially custom inputs from other types of sensors)? I am not having much luck taking my limited TensorFlow.js and Keras background and doing anything useful on Edge Impulse with “Expert Keras Mode”.

If I can, I would like to make a few short tutorials about ways you could use Expert Keras Mode. The post about the RC cars, when finished, would be a good advanced tutorial, but I should start with some baby steps, perhaps using 2 raw-data pressure sensors.

Hello @Rocksetta,

  1. When using pre-trained models (FOMO, MobileNet, KWS), we don’t provide that simplification. You will need to go through the expert mode if you want to modify things.
    For FOMO, @matkelcey wrote some tips recently: FOMO: Object detection for constrained devices - Edge Impulse Documentation

  2. That’s indeed something we want to add to our documentation but it has not been done yet.

If I can, I would like to make a few short tutorials about ways you could use Expert Keras Mode

Please feel free to work on that if you would like; we would be happy to promote those videos too and/or reference them.

Regards,

Louis

Good point. We do want to add the simplified layer addition for FOMO too, but there are a couple of things I need to do before that, so expert mode is your best bet. Feel free to ask questions; expert mode allows a lot of hacking. Mat

I work best with code, so I have a request for people to submit expert-mode Keras code either here or at my GitHub repository in the folder expert-keras-mode (fork it and make a PR); it is probably easier just to post the code here.

If anyone submits anything, it would be nice to add a one-or-more-line explanation using # comments covering the inputs, the outputs, what the code does or is trying to do, and what was changed.

The point of this is to see what other people have done and to identify the changes, so that I can help students understand how to adapt Edge Impulse models to their specific needs.

For example, an easy Keras model might be used for this situation:

I have a 6 servo motor robot arm. It might be fun to send the angle for each servo to a model to determine if that position is do-able. The microcontroller would feed the model all 6 angles and the model would suggest if it can do the movement or not, based on what objects are placed around the robot arm.
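Something like the sketch below is what I have in mind. Everything here (layer sizes, the fake data, the toy labelling rule) is made up just to show the shapes, using the kind of # comments I am asking submitters for:

```python
# Inputs:  6 servo angles, scaled to 0-1 before they reach the model.
# Output:  1 sigmoid unit -> probability that the pose is do-able.
# Change:  nothing; this is a whole (hypothetical) model, small enough
#          for a microcontroller.
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

model = Sequential([
    Dense(16, activation='relu', input_shape=(6,)),  # 6 joint angles in
    Dense(8, activation='relu'),
    Dense(1, activation='sigmoid'),                  # do-able: yes/no
])
model.compile(optimizer='adam', loss='binary_crossentropy',
              metrics=['accuracy'])

# Fake data just to show the shapes: 1 = reachable, 0 = blocked.
X = np.random.rand(256, 6).astype('float32')
y = (X.sum(axis=1) < 3.0).astype('float32')  # toy rule standing in for labels
model.fit(X, y, epochs=5, batch_size=32, verbose=0)
print(model.predict(X[:1]))  # probability the first pose is do-able
```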

while a harder one might be the RC car mentioned in this thread, which I expect to work on this summer, hopefully with some guidance from @matkelcey.

I kind of enjoy going from complete ignorance, to some form of understanding. This should be a good project for that.

P.S. I have been invited to join a tinyML working group based out of Harvard University with a few other Edge Impulse Experts. The TinyMLedu Team - TinyMLedu

regarding the arm: there are very good algorithms already for this kind of joint-space vs operational-space trajectory planning; we can just solve this analytically. “robotics: modelling, planning and control” by siciliano et al. is my favourite reference for this kind of stuff.
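to show what i mean by analytically, here is the textbook closed-form inverse kinematics for a 2-link planar arm (a toy sketch of mine, not code from the book; a real 6-joint arm needs the full treatment):

```python
# closed-form inverse kinematics for a 2-link planar arm:
# given a target (x, y) and link lengths l1, l2, return the two joint
# angles directly -- no learned model needed.
import math

def two_link_ik(x, y, l1=1.0, l2=1.0):
    # law of cosines gives the elbow angle
    c2 = (x * x + y * y - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    if abs(c2) > 1:
        raise ValueError("target out of reach")
    theta2 = math.acos(c2)  # elbow-down solution
    theta1 = math.atan2(y, x) - math.atan2(l2 * math.sin(theta2),
                                           l1 + l2 * math.cos(theta2))
    return theta1, theta2

print(two_link_ik(1.2, 0.5))  # joint angles in radians
```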

so if you want to make it an ML problem you’ve got to introduce some uncertainty that means you need a model, e.g. noisy sensors, or failing actuators, or something like that. my favourite reference for that kind of modelling is “probabilistic robotics” by thrun et al.

both are really great reads.

mat

@matkelcey

After a quick search, this is the PDF for “Probabilistic Robotics” by Thrun,

and then I had a hard time with “Robotics: Modelling, Planning and Control” by Siciliano et al. because of the various academic journals.

Side note: last I checked, it would cost about $1000 to access my Dad’s university-funded ~1950 Arctic climate research, temperature and salinity data he collected while dog sledding around Baffin Island. With our climate issues, it seems strange to have research like this locked up.

So with a little digging I found this amazing GitHub repository with both articles and a lot more HERE

The direct links are:

“Probabilistic Robotics” by Thrun

“Robotics: Modelling, Planning and Control” by Siciliano et al.

Of the 23 articles, does anyone else see anything I should look at?

I love systems that can be solved in multiple ways. It reminds me of years ago trying to teach matrix math by showing how it can be used to solve simultaneous equations here. The nice thing with simultaneous equations is that most high school students already know how to solve them.
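For anyone who wants the matrix version spelled out, here is a tiny sketch (my own example numbers, not from the original lesson):

```python
# Solving simultaneous equations with matrix math:
#   2x + 3y = 8
#    x -  y = -1
# written as A @ [x, y] = b and solved in one call.
import numpy as np

A = np.array([[2.0, 3.0],
              [1.0, -1.0]])
b = np.array([8.0, -1.0])

x, y = np.linalg.solve(A, b)
print(x, y)  # 1.0 2.0
```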

Possibly the same with my 6-axis robot arm: if there are traditional ways for a robot to solve motion in space, then doing the same thing with an Edge Impulse machine learning model would be very interesting, and we could then compare whether it is better or worse, etc.

With the dual-core Portenta H7 it would be very interesting to have a vision model on one core directing the pincher and the 6-axis arm model on the other core deciding whether the suggested motion is allowable.

By the way, I presented this ML summary to the Harvard tinyML group this morning. It is made entirely from one GitHub README.md file and an audio file; the rest is all JavaScript.

https://hpssjellis.github.io/my-robotics-machine-learning-teaching-lightning-talk-pecha-kucha/

Pecha Kucha presentations are 15 slides at 20 seconds per slide, making them a 5-minute presentation.

Just click the button when you load the above link.

i feel straight-up analytical motion planning would be a waste for a neural network; it’s hard to justify a non-linear probabilistic model against closed-form solutions. but motion planning under the guidance of a vision model is different: that’s hard, definitely has elements of uncertainty, and is very useful in the real world, because an external camera is much cheaper than proprioceptive sensors. i worked on this particular class of problems for quite a while and it’s hard, and still unsolved. which makes it very interesting :smiley:

I actually don’t think either of these is a journal item; they are just commercial textbooks (i.e. the writing wasn’t funded as research).