search_expand_cjk($matches)

Splits CJK (Chinese, Japanese, Korean) text into tokens.

The Search module matches exact words, where a word is defined as a sequence of characters delimited by spaces or punctuation. CJK languages, however, are written as long strings of characters not split up into words. So in order to allow search matching, we split up CJK text into tokens consisting of consecutive, overlapping sequences of characters whose length is equal to the 'search_minimum_word_size' setting. This tokenizing is only done if the 'search_overlap_cjk' setting is TRUE.
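For illustration, with a minimum word size of 2 the string '東京都' becomes the overlapping bigrams ' 東京 京都 '. The standalone sketch below reproduces the windowing scheme using PHP's mbstring and PCRE functions in place of the module's helpers; the function name and the $min default are illustrative, not part of the Backdrop API.

function cjk_ngrams_sketch($str, $min = 2) {
  // Strings no longer than the minimum word size stay whole.
  if (mb_strlen($str) <= $min) {
    return ' ' . $str . ' ';
  }
  // Split into individual characters (UTF-8 safe).
  $chars = preg_split('//u', $str, -1, PREG_SPLIT_NO_EMPTY);
  $tokens = ' ';
  // Emit every consecutive window of $min characters.
  for ($i = 0; $i <= count($chars) - $min; $i++) {
    $tokens .= implode('', array_slice($chars, $i, $min)) . ' ';
  }
  return $tokens;
}

echo cjk_ngrams_sketch('東京都'); // Prints ' 東京 京都 '.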

Parameters

$matches: This function is a callback for preg_replace_callback(), which is called from search_simplify(). So, $matches is an array of regular expression matches, and $matches[0] contains the matched text: a string of CJK characters to tokenize.
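The call site looks roughly like the sketch below; the 'search_overlap_cjk' config key and the PREG_CLASS_CJK character-class constant are assumed to follow the module's naming and should be treated as illustrative.

// Inside search_simplify(): every maximal run of CJK characters is
// matched and handed to search_expand_cjk() as $matches[0].
if (config_get('search.settings', 'search_overlap_cjk')) {
  $text = preg_replace_callback('/[' . PREG_CLASS_CJK . ']+/u', 'search_expand_cjk', $text);
}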

Return value

Tokenized text, starting and ending with a space character.
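For example, calling the function directly the way preg_replace_callback() would, and assuming search_minimum_word_size is 3:

// $matches[0] holds the matched run of CJK characters.
$tokens = search_expand_cjk(array(0 => '中华人民共和国'));
// $tokens === ' 中华人 华人民 人民共 民共和 共和国 '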

File

modules/search/search.module, line 539
Enables site-wide keyword searching.

Code

function search_expand_cjk($matches) {
  $min = config_get('search.settings', 'search_minimum_word_size');
  $str = $matches[0];
  $length = backdrop_strlen($str);
  // If the text is no longer than the minimum word size, don't tokenize it.
  if ($length <= $min) {
    return ' ' . $str . ' ';
  }
  $tokens = ' ';
  // Build a FIFO queue of characters.
  $chars = array();
  for ($i = 0; $i < $length; $i++) {
    // Add the next character off the beginning of the string to the queue.
    $current = backdrop_substr($str, 0, 1);
    $str = substr($str, strlen($current));
    $chars[] = $current;
    if ($i >= $min - 1) {
      // Make a token of $min characters, and add it to the token string.
      $tokens .= implode('', $chars) . ' ';
      // Shift out the first character in the queue.
      array_shift($chars);
    }
  }
  return $tokens;
}
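
To see the FIFO queue at work, trace the loop for $matches[0] = '京都市' with $min = 2:

// $i = 0: $chars = ['京']        queue not yet full, no token
// $i = 1: $chars = ['京', '都']  -> token '京都', shift -> ['都']
// $i = 2: $chars = ['都', '市']  -> token '都市', shift -> ['市']
// Result: ' 京都 都市 '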