External Data Sources, custom data format fails

I have a lot of data logged from a bunch of instrumented machines, updated continuously, on Google Cloud Storage.

The format is proprietary: each data file from one sensor holds 10 seconds of time-series data zipped into NPZ format, containing a file header and a 500_000-item data array.

I am able to connect to my Google bucket by adding it as a data source, but the data explorer step fails, I assume because of my custom data storage format.

However, the log suggests that I have not yet created an impulse, which I cannot do before uploading any data :slight_smile:

Checking if data explorer is rendered...
Checking if data explorer is rendered OK

Refreshing data explorer...
Scheduling job in cluster...
Creating data explorer job in Edge Impulse failed Data explorer preset not set, and no configured impulse
Job started
Application exited with code 1
Job started
Checking if data explorer is rendered...
Checking if data explorer is rendered OK

Refreshing data explorer...
Creating data explorer job in Edge Impulse failed Data explorer preset not set, and no configured impulse
Application exited with code 1
Job started
Checking if data explorer is rendered...
Checking if data explorer is rendered OK

Refreshing data explorer...
Creating data explorer job in Edge Impulse failed Data explorer preset not set, and no configured impulse
Application exited with code 1

Job failed (see above)

So my two questions are:

  1. Will there be an option/feature to load custom data formats, e.g. by providing a loader snippet?
  2. Am I missing something regarding creating an impulse before connecting an external data source?

Hi @morten_ece_au,

  1. This feature is already available but for enterprise customers only (read more about our custom transformation blocks here).
  2. Your setup is correct, but indeed you need to import data first. Do you have a way to transform your NPZ format to JSON/CSV directly on your bucket before importing it into your project?

Aurelien

Thanks for the reply @aurel - actually I was hoping to be able to transform npz->json/csv using e.g. a transformation block, or a conversion module on the loader every time a data fetch was done. I will find out if we can do it on the bucket. I have an enterprise account, so transformation blocks are available.
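For anyone hitting the same issue, here is a minimal sketch of such an NPZ-to-CSV conversion in Python. The archive key names (`header`, `data`) and the derived sample rate are assumptions about the proprietary format, so adjust them to your own files:

```python
import csv
import io

import numpy as np


def npz_to_csv(npz_bytes: bytes) -> str:
    """Convert one sensor NPZ file (header + data array) to CSV text.

    Assumes the archive stores metadata under the key 'header' and the
    time-series samples under 'data' -- replace with your real keys.
    """
    with np.load(io.BytesIO(npz_bytes)) as npz:
        header = npz["header"]  # metadata; could go into a sidecar JSON
        data = npz["data"]

    out = io.StringIO()
    writer = csv.writer(out)
    # 10 s of data over the array length gives the sample interval; a
    # timestamp column lets the importer treat this as time-series data.
    sample_interval_ms = 10_000 / len(data)
    writer.writerow(["timestamp", "value"])
    for i, value in enumerate(data):
        writer.writerow([round(i * sample_interval_ms, 3), value])
    return out.getvalue()
```

In a transformation block this would run once per fetched file, writing the resulting CSV back to the bucket for import.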
