I'm working on a school assignment for machine learning and I've run into a hitch. My DataFrame has six columns, including subject_id, task_code, data_lw, t_start, and t_end.
The problem starts with the data_lw column. Every row of this column holds a nested array with three columns and somewhere between 1000 and 1600 rows of data.
Here is an example of what a single element in the column looks like:
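To make the shape concrete, here is a toy stand-in for one data_lw element (the array contents and the row count of 1200 are made up for illustration; only the three-column structure matches my data):

```python
import numpy as np

# Hypothetical illustration: a single data_lw element is a 2-D array
# with three columns; the number of rows varies per element.
rng = np.random.default_rng(0)
n_rows = 1200  # in my data this varies between 1000 and 1600
element = rng.normal(size=(n_rows, 3))

print(element.shape)  # (1200, 3)
```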
The issue I'm having is with extracting the features. I extracted the features before by doing this:
import pandas as pd
import tsfel

# Statistical-domain feature configuration
cfg = tsfel.get_features_by_domain('statistical')

feature_list = []
for signal in df['data_lw']:  # each element is one multi-column time series
    features = tsfel.time_series_features_extractor(cfg, signal)
    feature_list.append(features)
    print(features)
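After the loop I stack the per-signal results into one table with pd.concat. A minimal sketch, using dummy one-row DataFrames as stand-ins for what the extractor returns per signal (the column names here are invented for illustration):

```python
import pandas as pd

# Stand-ins for the one-row feature DataFrames collected in the loop
feature_list = [
    pd.DataFrame({'0_Mean': [0.1], '1_Mean': [0.2]}),
    pd.DataFrame({'0_Mean': [0.3], '1_Mean': [0.4]}),
]

# One row per signal, one column per feature
features = pd.concat(feature_list, ignore_index=True)

print(features.shape)  # (2, 2)
```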
However, I ended up with 120 features in total, split into three groups of 40, one group per column. My guess is that it averaged everything together row by row until it was done. When I told my professor about this, they said I should be getting around 300 features per row, so I'm wondering how to go about that.