from nltk.tokenize import RegexpTokenizer

# \w+ matches runs of word characters; r'w+' would only match literal 'w's
tokenizer = RegexpTokenizer(r'\w+')

# dataset is assumed to be a pandas DataFrame with a 'text' column of strings
dataset['text'] = dataset['text'].apply(tokenizer.tokenize)
dataset['text'].head()
I am getting an error when I run this code. What is the solution?
I was expecting each row of dataset['text'] to become a list of word tokens.
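
For context, here is a minimal sketch of the behavior I expect, using a made-up two-row DataFrame (the column contents below are illustrative; my real dataset is loaded elsewhere):

import pandas as pd
from nltk.tokenize import RegexpTokenizer

# Illustrative data standing in for the real dataset
dataset = pd.DataFrame({'text': ["Hello, world!", "NLTK makes tokenizing easy."]})

tokenizer = RegexpTokenizer(r'\w+')
dataset['text'] = dataset['text'].apply(tokenizer.tokenize)
print(dataset['text'].head())
# 0                     [Hello, world]
# 1    [NLTK, makes, tokenizing, easy]
# Name: text, dtype: object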