regexp-stream-tokenizer

jamesramsay · 2.2k · MIT · 0.2.2

A regular expression (RegExp) stream tokenizer.

Keywords: streams, through, through2, tokenizer

regexp-stream-tokenizer


This is a simple regular expression based tokenizer for streams.

IMPORTANT: If you return null from your function, the stream will end there.

IMPORTANT: Only supports object mode streams.


var tokenizer = require("regexp-stream-tokenizer");

var words = tokenizer(/\w+/g);

// Sink receives tokens: 'The', 'quick', 'brown', 'fox', 'jumps', 'over', 'the', 'lazy', 'dog'
words.write('The quick brown fox jumps over the lazy dog');
words.pipe(sink);

// Separators are excluded by default, but can be included
var wordsAndSeparators = tokenizer({ separator: true }, /\w+/g);

// Sink receives tokens: 'The', ' ', 'quick', ' ', 'brown', ' ', 'fox', ' ', 'jumps', ' ', 'over', ...
wordsAndSeparators.write('The quick brown fox jumps over the lazy dog');
wordsAndSeparators.pipe(sink);
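To see why `{ separator: true }` produces alternating tokens and separators, here is a conceptual sketch in plain JavaScript (no streams, and not this package's implementation): matching `/\w+/g` yields only the tokens, while splitting on a capturing group of the complement keeps the separators interleaved.

```javascript
// Plain-JavaScript sketch of token-only vs. token-plus-separator output.
const input = 'The quick brown fox';

// Token-only, like the default behaviour.
const tokensOnly = input.match(/\w+/g);
// ['The', 'quick', 'brown', 'fox']

// Splitting on a capturing group keeps the separators, like { separator: true }.
const withSeparators = input.split(/(\W+)/).filter(function (s) {
  return s.length > 0; // drop empty strings split() can add at the edges
});
// ['The', ' ', 'quick', ' ', 'brown', ' ', 'fox']

console.log(tokensOnly);
console.log(withSeparators);
```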

API

require("regexp-stream-tokenizer")([options,] regexp)

Create a stream.Transform instance with objectMode: true that will tokenize the input stream using the regexp.

var Tx = require("regexp-stream-tokenizer").ctor([options,] regexp)

Create a reusable stream.Transform TYPE that can be called via new Tx or Tx() to create an instance.

Arguments

  • options
    • excludeZBS (boolean): defaults to true.
    • token (boolean|string|function): defaults to true.
    • separator (boolean|string|function): defaults to false.
    • leaveBehind (string|Array): optionally provides pseudo-lookbehind support.
    • all other through2 options.
  • regexp (RegExp): the regular expression used to tokenize the stream.