Segments a given text into tokens (usually words, numbers, punctuation marks, ...). Works best with English text.
A stream with a string property that contains the text to tokenize.
Simply assign the text output of the previous stream to the tokenizer input.
Adds a list property to the stream that contains all tokens of the corresponding text.
(text: "Hi, how are you?")
(text: "Hi, how are you?", tokens: ["Hi", ",", "how", "are", "you", "?"])
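The transformation shown in the example can be sketched as follows. This is a minimal illustration, not the component's actual implementation; it assumes a simple regex-based tokenizer that splits the text into word sequences and single punctuation characters, and models the stream record as a plain dictionary:

```python
import re

def tokenize(text):
    # Match runs of word characters (words, numbers) or any single
    # non-space, non-word character (punctuation marks).
    return re.findall(r"\w+|[^\w\s]", text)

# Hypothetical stream record before and after tokenization
record = {"text": "Hi, how are you?"}
record["tokens"] = tokenize(record["text"])
print(record["tokens"])  # → ['Hi', ',', 'how', 'are', 'you', '?']
```

Note that this sketch treats every punctuation character as its own token, matching the example above; a production tokenizer would typically also handle contractions, abbreviations, and Unicode word boundaries.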