7.3.34. tokenize¶
7.3.34.1. Summary¶
tokenize command tokenizes text with the specified tokenizer.
It is useful for debugging tokenization.
7.3.34.2. Syntax¶
tokenize command has required parameters and optional parameters.
tokenizer and string are required parameters. Others are optional:
tokenize tokenizer
string
[normalizer=null]
[flags=NONE]
[mode=ADD]
[token_filters=NONE]
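For example, a call that spells out every parameter in positional order looks like the following. This is only a sketch (the optional values shown are NormalizerAuto plus the defaults, and output is omitted):
tokenize TokenBigram "Fulltext Search" NormalizerAuto NONE ADD NONE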
7.3.34.3. Usage¶
Here is a simple example.
Execution example:
tokenize TokenBigram "Fulltext Search"
# [
# [
# 0,
# 1337566253.89858,
# 0.000355720520019531
# ],
# [
# {
# "position": 0,
# "value": "Fu"
# },
# {
# "position": 1,
# "value": "ul"
# },
# {
# "position": 2,
# "value": "ll"
# },
# {
# "position": 3,
# "value": "lt"
# },
# {
# "position": 4,
# "value": "te"
# },
# {
# "position": 5,
# "value": "ex"
# },
# {
# "position": 6,
# "value": "xt"
# },
# {
# "position": 7,
# "value": "t "
# },
# {
# "position": 8,
# "value": " S"
# },
# {
# "position": 9,
# "value": "Se"
# },
# {
# "position": 10,
# "value": "ea"
# },
# {
# "position": 11,
# "value": "ar"
# },
# {
# "position": 12,
# "value": "rc"
# },
# {
# "position": 13,
# "value": "ch"
# },
# {
# "position": 14,
# "value": "h"
# }
# ]
# ]
This example uses only the required parameters: tokenizer is TokenBigram and string is "Fulltext Search". The command returns the tokens generated by tokenizing "Fulltext Search" with the TokenBigram tokenizer. It doesn't normalize "Fulltext Search".
7.3.34.4. Parameters¶
This section describes all parameters. Parameters are categorized.
7.3.34.4.1. Required parameters¶
There are two required parameters, tokenizer and string.
7.3.34.4.1.1. tokenizer¶
It specifies the tokenizer name. tokenize command uses the tokenizer that is named tokenizer.
See Tokenizers for details about built-in tokenizers.
Here is an example that uses the built-in TokenTrigram tokenizer.
Execution example:
tokenize TokenTrigram "Fulltext Search"
# [
# [
# 0,
# 1337566253.89858,
# 0.000355720520019531
# ],
# [
# {
# "position": 0,
# "value": "Ful"
# },
# {
# "position": 1,
# "value": "ull"
# },
# {
# "position": 2,
# "value": "llt"
# },
# {
# "position": 3,
# "value": "lte"
# },
# {
# "position": 4,
# "value": "tex"
# },
# {
# "position": 5,
# "value": "ext"
# },
# {
# "position": 6,
# "value": "xt "
# },
# {
# "position": 7,
# "value": "t S"
# },
# {
# "position": 8,
# "value": " Se"
# },
# {
# "position": 9,
# "value": "Sea"
# },
# {
# "position": 10,
# "value": "ear"
# },
# {
# "position": 11,
# "value": "arc"
# },
# {
# "position": 12,
# "value": "rch"
# },
# {
# "position": 13,
# "value": "ch"
# },
# {
# "position": 14,
# "value": "h"
# }
# ]
# ]
If you want to use other tokenizers, you need to register an additional tokenizer plugin with the register command. For example, you can use a KyTea based tokenizer by registering tokenizers/kytea.
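A sketch of that flow follows. The plugin path tokenizers/kytea comes from the text above, but the tokenizer name TokenKytea is an assumption; check the plugin's documentation for the exact name it registers:
register tokenizers/kytea
tokenize TokenKytea "これはペンです"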
7.3.34.4.1.2. string¶
It specifies the string that you want to tokenize.
If you want to include spaces in string, you need to quote string with single quotes (') or double quotes (").
Here is an example that uses spaces in string.
Execution example:
tokenize TokenBigram "Groonga is a fast fulltext earch engine!"
# [
# [
# 0,
# 1337566253.89858,
# 0.000355720520019531
# ],
# [
# {
# "position": 0,
# "value": "Gr"
# },
# {
# "position": 1,
# "value": "ro"
# },
# {
# "position": 2,
# "value": "oo"
# },
# {
# "position": 3,
# "value": "on"
# },
# {
# "position": 4,
# "value": "ng"
# },
# {
# "position": 5,
# "value": "ga"
# },
# {
# "position": 6,
# "value": "a "
# },
# {
# "position": 7,
# "value": " i"
# },
# {
# "position": 8,
# "value": "is"
# },
# {
# "position": 9,
# "value": "s "
# },
# {
# "position": 10,
# "value": " a"
# },
# {
# "position": 11,
# "value": "a "
# },
# {
# "position": 12,
# "value": " f"
# },
# {
# "position": 13,
# "value": "fa"
# },
# {
# "position": 14,
# "value": "as"
# },
# {
# "position": 15,
# "value": "st"
# },
# {
# "position": 16,
# "value": "t "
# },
# {
# "position": 17,
# "value": " f"
# },
# {
# "position": 18,
# "value": "fu"
# },
# {
# "position": 19,
# "value": "ul"
# },
# {
# "position": 20,
# "value": "ll"
# },
# {
# "position": 21,
# "value": "lt"
# },
# {
# "position": 22,
# "value": "te"
# },
# {
# "position": 23,
# "value": "ex"
# },
# {
# "position": 24,
# "value": "xt"
# },
# {
# "position": 25,
# "value": "t "
# },
# {
# "position": 26,
# "value": " e"
# },
# {
# "position": 27,
# "value": "ea"
# },
# {
# "position": 28,
# "value": "ar"
# },
# {
# "position": 29,
# "value": "rc"
# },
# {
# "position": 30,
# "value": "ch"
# },
# {
# "position": 31,
# "value": "h "
# },
# {
# "position": 32,
# "value": " e"
# },
# {
# "position": 33,
# "value": "en"
# },
# {
# "position": 34,
# "value": "ng"
# },
# {
# "position": 35,
# "value": "gi"
# },
# {
# "position": 36,
# "value": "in"
# },
# {
# "position": 37,
# "value": "ne"
# },
# {
# "position": 38,
# "value": "e!"
# },
# {
# "position": 39,
# "value": "!"
# }
# ]
# ]
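Single quotes work the same way when string contains spaces. For example (a sketch; output omitted):
tokenize TokenBigram 'Groonga is a fast fulltext search engine!'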
7.3.34.4.2. Optional parameters¶
There are several optional parameters.
7.3.34.4.2.1. normalizer¶
It specifies the normalizer name. tokenize command uses the normalizer that is named normalizer. The normalizer is important for N-gram family tokenizers such as TokenBigram.
The normalizer detects the character type of each character while normalizing. N-gram family tokenizers use these character types while tokenizing.
Here is an example that doesn't use a normalizer.
Execution example:
tokenize TokenBigram "Fulltext Search"
# [
# [
# 0,
# 1337566253.89858,
# 0.000355720520019531
# ],
# [
# {
# "position": 0,
# "value": "Fu"
# },
# {
# "position": 1,
# "value": "ul"
# },
# {
# "position": 2,
# "value": "ll"
# },
# {
# "position": 3,
# "value": "lt"
# },
# {
# "position": 4,
# "value": "te"
# },
# {
# "position": 5,
# "value": "ex"
# },
# {
# "position": 6,
# "value": "xt"
# },
# {
# "position": 7,
# "value": "t "
# },
# {
# "position": 8,
# "value": " S"
# },
# {
# "position": 9,
# "value": "Se"
# },
# {
# "position": 10,
# "value": "ea"
# },
# {
# "position": 11,
# "value": "ar"
# },
# {
# "position": 12,
# "value": "rc"
# },
# {
# "position": 13,
# "value": "ch"
# },
# {
# "position": 14,
# "value": "h"
# }
# ]
# ]
All alphabetic characters are tokenized two characters at a time. For example, Fu is a token.
Here is an example that uses a normalizer.
Execution example:
tokenize TokenBigram "Fulltext Search" NormalizerAuto
# [
# [
# 0,
# 1337566253.89858,
# 0.000355720520019531
# ],
# [
# {
# "position": 0,
# "value": "fulltext"
# },
# {
# "position": 1,
# "value": "search"
# }
# ]
# ]
Consecutive alphabetic characters are tokenized as one token. For example, fulltext is a token.
If you want to tokenize two characters at a time even with a normalizer, use TokenBigramSplitSymbolAlpha.
Execution example:
tokenize TokenBigramSplitSymbolAlpha "Fulltext Search" NormalizerAuto
# [
# [
# 0,
# 1337566253.89858,
# 0.000355720520019531
# ],
# [
# {
# "position": 0,
# "value": "fu"
# },
# {
# "position": 1,
# "value": "ul"
# },
# {
# "position": 2,
# "value": "ll"
# },
# {
# "position": 3,
# "value": "lt"
# },
# {
# "position": 4,
# "value": "te"
# },
# {
# "position": 5,
# "value": "ex"
# },
# {
# "position": 6,
# "value": "xt"
# },
# {
# "position": 7,
# "value": "t"
# },
# {
# "position": 8,
# "value": "se"
# },
# {
# "position": 9,
# "value": "ea"
# },
# {
# "position": 10,
# "value": "ar"
# },
# {
# "position": 11,
# "value": "rc"
# },
# {
# "position": 12,
# "value": "ch"
# },
# {
# "position": 13,
# "value": "h"
# }
# ]
# ]
All alphabetic characters are tokenized two characters at a time, and they are normalized to lowercase. For example, fu is a token.
7.3.34.4.2.2. flags¶
It specifies tokenization customization options. You can specify multiple options separated by "|". For example, NONE|ENABLE_TOKENIZED_DELIMITER.
Here are the available flags.
Flag | Description
---|---
NONE | Just ignored.
ENABLE_TOKENIZED_DELIMITER | Enables the tokenized delimiter. See Tokenizers for details about the tokenized delimiter.
Here is an example that uses ENABLE_TOKENIZED_DELIMITER.
Execution example:
tokenize TokenDelimit "Fulltext Seacrch" NormalizerAuto ENABLE_TOKENIZED_DELIMITER
# [
# [
# 0,
# 1337566253.89858,
# 0.000355720520019531
# ],
# [
# {
# "position": 0,
# "value": "full"
# },
# {
# "position": 1,
# "value": "text sea"
# },
# {
# "position": 2,
# "value": "crch"
# }
# ]
# ]
TokenDelimit tokenizer is one of the tokenizers that support the tokenized delimiter. ENABLE_TOKENIZED_DELIMITER enables the tokenized delimiter.
The tokenized delimiter is a special character that indicates a token border: U+FFFE. That code point is not assigned to any character, so it never appears in normal strings, which makes it a good character for this purpose. If ENABLE_TOKENIZED_DELIMITER is enabled, the target string is treated as an already tokenized string, and the tokenizer just splits it at each tokenized delimiter.
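Note that the string in the example above contains invisible U+FFFE characters between "Full", "text Sea" and "crch", which is why those substrings become the tokens. Written with a visible placeholder for the delimiter, the command is effectively the following, where ⟨U+FFFE⟩ stands for the literal (undisplayable) delimiter character:
tokenize TokenDelimit "Full⟨U+FFFE⟩text Sea⟨U+FFFE⟩crch" NormalizerAuto ENABLE_TOKENIZED_DELIMITER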
7.3.34.4.2.3. mode¶
It specifies the tokenization mode. If ADD is specified, the text is tokenized by the rule used when adding a document. If GET is specified, the text is tokenized by the rule used when searching for a document. If the mode is omitted, the text is tokenized in ADD mode.
The default mode is ADD.
Here is an example of the ADD mode.
Execution example:
tokenize TokenBigram "Fulltext Search" --mode ADD
# [
# [
# 0,
# 1408017697.66886,
# 0.00126171112060547
# ],
# [
# {
# "value": "Fu",
# "position": 0
# },
# {
# "value": "ul",
# "position": 1
# },
# {
# "value": "ll",
# "position": 2
# },
# {
# "value": "lt",
# "position": 3
# },
# {
# "value": "te",
# "position": 4
# },
# {
# "value": "ex",
# "position": 5
# },
# {
# "value": "xt",
# "position": 6
# },
# {
# "value": "t ",
# "position": 7
# },
# {
# "value": " S",
# "position": 8
# },
# {
# "value": "Se",
# "position": 9
# },
# {
# "value": "ea",
# "position": 10
# },
# {
# "value": "ar",
# "position": 11
# },
# {
# "value": "rc",
# "position": 12
# },
# {
# "value": "ch",
# "position": 13
# },
# {
# "value": "h",
# "position": 14
# }
# ]
# ]
The last alphabetic character is tokenized as a one-character token. In ADD mode the trailing partial N-gram is also generated, so that searches can match text at the very end of a document.
Here is an example of the GET mode.
Execution example:
tokenize TokenBigram "Fulltext Search" --mode GET
# [
# [
# 0,
# 1408017732.62883,
# 0.000665903091430664
# ],
# [
# {
# "value": "Fu",
# "position": 0
# },
# {
# "value": "ul",
# "position": 1
# },
# {
# "value": "ll",
# "position": 2
# },
# {
# "value": "lt",
# "position": 3
# },
# {
# "value": "te",
# "position": 4
# },
# {
# "value": "ex",
# "position": 5
# },
# {
# "value": "xt",
# "position": 6
# },
# {
# "value": "t ",
# "position": 7
# },
# {
# "value": " S",
# "position": 8
# },
# {
# "value": "Se",
# "position": 9
# },
# {
# "value": "ea",
# "position": 10
# },
# {
# "value": "ar",
# "position": 11
# },
# {
# "value": "rc",
# "position": 12
# },
# {
# "value": "ch",
# "position": 13
# }
# ]
# ]
The last alphabetic characters are tokenized as one two-character token. In GET mode the trailing one-character token is not generated, because a search query doesn't need it to match the index.
7.3.34.4.2.4. token_filters¶
It specifies the token filter names. tokenize command uses the token filters that are named by token_filters.
See Token filters for details about token filters.
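For example, the following sketch applies a stemming token filter. It assumes the token_filters/stem plugin is available and that it registers a filter named TokenFilterStem; output is omitted:
register token_filters/stem
tokenize TokenBigram "Searching" NormalizerAuto --token_filters TokenFilterStem
With stemming, an inflected token such as searching would typically be reduced to its stem (search).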
7.3.34.5. Return value¶
tokenize command returns the tokenized tokens. Each token has some attributes in addition to the token itself. More attributes may be added in the future:
[HEADER, tokens]
HEADER
See Output format about HEADER.
tokens
tokens is an array of tokens. Each token is an object that has the following attributes.
Name | Description
---|---
value | Token itself.
position | The N-th token.
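For example, the body of the first Usage example above begins like this; each element of the array is one token object:
[
  {"position": 0, "value": "Fu"},
  {"position": 1, "value": "ul"},
  ...
]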