From b3e8ab80d95a2aed2c561ac9166a9b1a7036b47b Mon Sep 17 00:00:00 2001
From: deogar
Copyright © 2001-2015 Andrew Aksyonoff
Copyright © 2008-2015 Sphinx Technologies Inc, http://sphinxsearch.com
+
Sphinx initial author (and a benevolent dictator ever since):
Andrew Aksyonoff, http://shodan.ru
-
+
Past and present employees of Sphinx Technologies Inc who should be
noted for their work on Sphinx (in alphabetical order):
- People who contributed to Sphinx and their contributions (in no particular order):
+ People who contributed to Sphinx and their contributions (in no particular order):
Robert "coredev" Bengtsson (Sweden), initial version of PostgreSQL data source Len Kranendonk, Perl API Dmytro Shteflyuk, Ruby API
(excluding title and content, which are full-text fields) as
attributes, indexing them, and then using API calls to
set up filtering, sorting, and grouping. Here is an example.
-
sql_attr_uint = forum_id
sql_attr_timestamp = post_date
...
-
Obviously, that's not much of a difference for a 2,000-row table,
but when it comes to indexing a 10-million-row MyISAM table,
ranged queries might be of some help.
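A minimal sketch of what such a ranged setup could look like in sphinx.conf (the documents table and the step size are assumptions for illustration; $start and $end are the macros that Sphinx substitutes on every step):
    sql_query_range = SELECT MIN(id), MAX(id) FROM documents
    sql_range_step  = 1000
    sql_query       = \
        SELECT id, title, content FROM documents \
        WHERE id>=$start AND id<=$end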
-
+
The difference between post-query and post-index query is that post-query
is run immediately when Sphinx has received all the documents, but further indexing
may still fail for some other reason. On the contrary,
@@ -2252,7 +2252,7 @@
// SphinxQL
mysql_query ( "SELECT ... OPTION ranker=sph04" );
-
+
Legacy matching modes automatically select a ranker as follows:
SPH_MATCH_ALL uses SPH_RANK_PROXIMITY ranker; SPH_MATCH_ANY uses SPH_RANK_MATCHANY ranker;
SPH_SORT_RELEVANCE is equivalent to sorting by "@weight DESC, @id ASC" in extended sorting mode,
SPH_SORT_ATTR_ASC is equivalent to "attribute ASC, @weight DESC, @id ASC",
and SPH_SORT_ATTR_DESC to "attribute DESC, @weight DESC, @id ASC" respectively.
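For reference, the same choices can be made explicitly through the API; a rough PHP sketch (the 'price' attribute is a placeholder, not part of the original example):
    $cl->SetRankingMode ( SPH_RANK_PROXIMITY ); // the ranker SPH_MATCH_ALL picks implicitly
    $cl->SetSortMode ( SPH_SORT_EXTENDED, "price ASC, @weight DESC, @id ASC" ); // same ordering as SPH_SORT_ATTR_ASC on 'price'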
-
+
In SPH_SORT_TIME_SEGMENTS mode, attribute values are split into so-called
time segments, and then sorted by time segment first, and by relevance second.
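A minimal PHP sketch of enabling this mode, reusing the post_date attribute from the forum example above (the attribute choice is illustrative):
    $cl->SetSortMode ( SPH_SORT_TIME_SEGMENTS, "post_date" );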
@@ -6917,7 +6917,7 @@
(Section 9.4.5, “SetGeoAnchor”) are now internally implemented using
this computed expressions mechanism, using magic names '@expr' and '@geodist'
respectively.
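A hedged PHP sketch of how the computed '@geodist' value can then be used; the attribute names, anchor coordinates, and the 10 km radius are assumptions for illustration:
    $cl->SetGeoAnchor ( "lat_radians", "long_radians", $my_lat, $my_lon ); // anchor point, in radians
    $cl->SetFilterFloatRange ( "@geodist", 0.0, 10000.0 ); // keep matches within roughly 10 km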
-
because to fix it, we need to be able either to reproduce and fix the bug,
or to deduce what's causing it from the information that you provide.
So here are some instructions on how to do that.
- Nothing special to say here. Here is the
+ Nothing special to say here. Here is the
<a href="http://sphinxsearch.com/bugs">link</a>. Create a new
ticket and describe your bug in detail so that both you and the developers can
save time. In case of crashes we can sometimes get enough information to fix the problem from the
backtrace. Sphinx tries to write a crash backtrace to its log file. It may look like
this:
that the binary is not stripped. Our official binary packages should be fine.
(That, or we have the symbols stored.) However, if you manually build Sphinx
from the source tarball, do not run the strip utility on that
binary, and/or do not let your build/packaging system do that! To fix your bug, developers often need to reproduce it on their machines.
To do this they need your sphinx.conf, index files, binlog (if present),
sometimes data to index (like SQL tables or XMLpipe2 data files), and queries.
@@ -8325,7 +8325,7 @@
and "127.0.0.1" will force TCP/IP usage. Refer to
MySQL manual
for more details.
-
Optional, default is 3306 for
SQL user to use when connecting to sql_host.
Mandatory, no default value.
Applies to SQL source types (
SQL user password to use when connecting to sql_host.
Mandatory, no default value.
Applies to SQL source types (
SQL database (in MySQL terms) to use after connecting and to perform further queries within.
Mandatory, no default value.
Applies to SQL source types (
On Linux, it would typically be
both in theory and in practice. However, enabling compression on 100 Mbps links
may improve indexing time significantly (an improvement of up to 20-30% of the total
indexing time has been reported). Your mileage may vary.
-
ODBC DSN (Data Source Name) specifies the credentials (host, user, password, etc)
to use when connecting to ODBC data source. The format depends on specific ODBC
driver used.
-
-
by default it builds with 32-bit IDs support but
it will automatically switch to a variant that matches keywords
in those fields, computes a sum of matched payloads multiplied
by field weights, and adds that sum to the final rank.
-
exactly equal to
-
over the network when sending queries. (Because that might be too much
of an impact when the K-list is huge.) You will need to set up
separate per-server K-lists in that case.
-
such bitfields are packed together in 32-bit chunks in
Multi-value (there might be multiple attributes declared), optional.
Applies to SQL source types (
Note that unlike sql_attr_uint,
these values are signed.
Introduced in version 0.9.9-rc1.
-
and UNIX_TIMESTAMP() in MySQL will not return anything expected.
If you only need to work with dates, not times, consider the TO_DAYS()
function in MySQL instead.
-
One important use of the float attributes is storing latitude
and longitude values (in radians), for further use in query-time
geosphere distance calculations.
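A minimal sphinx.conf sketch of that setup, assuming the source table stores coordinates in degrees (the column and table names are illustrative):
    sql_query      = SELECT id, title, RADIANS(lat) AS lat_radians, RADIANS(lon) AS long_radians FROM places
    sql_attr_float = lat_radians
    sql_attr_float = long_radians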
-
RANGE-QUERY is the SQL query used to fetch min and max ID values, similar to 'sql_query_range'
-
declared using
You can read more on JSON attributes in
http://sphinxsearch.com/blog/2013/08/08/full-json-support-in-trunk/.
-
-
value but does not full-text index it. In some cases it might be desired to both full-text
index the column and store it as an attribute.
Author
Team
Contributors
Author
Example sphinx.conf part:
+
Example sphinx.conf part:
...
sql_query = SELECT id, title, content, \
author_id, forum_id, post_date FROM my_forum_posts
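A plausible continuation of that fragment declares the non-text columns as attributes, matching the directives shown earlier (the exact attribute list is an assumption here):
    sql_attr_uint      = author_id
    sql_attr_uint      = forum_id
    sql_attr_timestamp = post_date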
@@ -1022,7 +1022,7 @@
Author
Example application code (in PHP):
+
Example application code (in PHP):
// only search posts by author whose ID is 123
$cl->SetFilter ( "author_id", array ( 123 ) );
@@ -1258,7 +1258,7 @@
Author
sql_query_post
vs. sql_query_post_index
sql_query_post
vs. sql_query_post_index
Author
Legacy matching modes rankers
Legacy matching modes rankers
Author
SPH_SORT_TIME_SEGMENTS mode
SPH_SORT_TIME_SEGMENTS mode
Author
Example:
+
Example:
$cl->SetSelect ( "*, @weight+(user_karma+ln(pageviews))*0.1 AS myweight" );
$cl->SetSelect ( "exp_years, salary_gbp*{$gbp_usd_rate} AS salary_usd,
IF(age>40,1,0) AS over40" );
@@ -8059,10 +8059,10 @@
Author
Bug-tracker
Bug-tracker
Crashes
Crashes
@@ -8109,7 +8109,7 @@
Author
strip
utility on that
-binary, and/or do not let your build/packaging system do that!
Uploading your data
Uploading your data
Author
mssql
type is currently only available on Windows.
odbc
type is available both on Windows natively and on
Linux through UnixODBC library.
-Example:
+
Example:
type = mysql
Author
Example:
+
Example:
sql_host = localhost
Author
mysql
source type and 5432 for pgsql
type.
Applies to SQL source types (mysql
, pgsql
, mssql
) only.
Note that it depends on sql_host setting whether this value will actually be used.
-Example:
+
Example:
sql_port = 3306
Author
mysql
, pgsql
, mssql
) only.
-Example:
+
Example:
sql_user = test
Author
mysql
, pgsql
, mssql
) only.
-Example:
+
Example:
sql_pass = mysecretpassword
Author
mysql
, pgsql
, mssql
) only.
-Example:
+
Example:
sql_db = test
Author
/var/lib/mysql/mysql.sock
.
On FreeBSD, it would typically be /tmp/mysql.sock
.
Note that it depends on sql_host setting whether this value will actually be used.
-Example:
+
Example:
sql_sock = /tmp/mysql.sock
Author
Example:
+
Example:
mysql_connect_flags = 32 # enable compression
Author
indexer
and MySQL. The details on creating
the certificates and setting up MySQL server can be found in
MySQL documentation.
-Example:
+
Example:
mysql_ssl_cert = /etc/ssl/client-cert.pem
mysql_ssl_key = /etc/ssl/client-key.pem
mysql_ssl_ca = /etc/ssl/cacert.pem
@@ -8440,7 +8440,7 @@
Author
Example:
+
Example:
odbc_dsn = Driver={Oracle ODBC Driver};Dbq=myDBName;Uid=myUsername;Pwd=myPassword
Author
sql_query_pre = SET SESSION query_cache_type=OFF
Example:
+
Example:
sql_query_pre = SET NAMES utf8
sql_query_pre = SET SESSION query_cache_type=OFF
Author
--enable-id64
option
to configure
allows building with 64-bit document and word ID support.
-Example:
+
Example:
sql_query = \
SELECT id, group_id, UNIX_TIMESTAMP(date_added) AS date_added, \
title, content \
@@ -8573,7 +8573,7 @@
Author
Example:
+
Example:
sql_joined_field = \
tagstext from query; \
SELECT docid, CONCAT('tag',tagid) FROM tags ORDER BY docid ASC
@@ -8606,7 +8606,7 @@
Author
$start
or $end
from your query.
The example in Section 3.8, “Ranged queries” illustrates that; note how it
uses greater-or-equal and less-or-equal comparisons.
-Example:
+
Example:
sql_query_range = SELECT MIN(id),MAX(id) FROM documents
Example:
+
Example:
sql_range_step = 1000
Author
Example:
+
Example:
sql_query_killlist = \
SELECT id FROM documents WHERE updated_ts>=@last_reindex UNION \
SELECT id FROM documents_deleted WHERE deleted_ts>=@last_reindex
@@ -8695,7 +8695,7 @@
Author
.spa
attribute data file. Bit size settings are ignored if using
inline storage.
-Example:
+
Example:
sql_attr_uint = group_id
sql_attr_uint = forum_id:9 # 9 bits for forum_id
Author
mysql
, pgsql
, mssql
) only.
Equivalent to sql_attr_uint declaration with a bit count of 1.
-Example:
+
Example:
sql_attr_bool = is_deleted # will be packed to 1 bit
Author
Example:
+
Example:
sql_attr_bigint = my_bigint_id
Author
Example:
+
Example:
# sql_query = ... UNIX_TIMESTAMP(added_datetime) AS added_ts ...
sql_attr_timestamp = added_ts
Author
Example:
+
Example:
sql_attr_float = lat_radians
sql_attr_float = long_radians
Author
Example:
+
Example:
sql_attr_multi = uint tag from query; SELECT id, tag FROM tags
sql_attr_multi = bigint tag from ranged-query; \
SELECT id, tag FROM tags WHERE id>=$start AND id<=$end; \
@@ -8809,7 +8809,7 @@
Author
sql_attr_string
will not be full-text
indexed; you can use sql_field_string
directive for that.
-Example:
+
Example:
sql_attr_string = title # will be stored but will not be indexed
Author
Example:
+
Example:
sql_attr_json = properties
Author
sql_column_buffers = <colname>=<size>[K|M] [, ...]
Example:
+
Example:
sql_query = SELECT id, mytitle, mycontent FROM documents
sql_column_buffers = mytitle=64K, mycontent=10M
Author
sql_field_string
lets you do
exactly that. Both the field and the attribute will be named the same.
-Example:
+
sql_field_string = title # will be both indexed and stored
in size are skipped. Any errors during the file loading (IO errors, missed limits, etc.) will be reported as indexing warnings and will not terminate the indexing early. No content will be indexed for such files. -
+Example:
sql_file_field = my_file_path # load and index files referred to by my_file_path
For instance, updates on a helper table that permanently change the last successfully indexed ID should not be run from the post-fetch query; they should be run from the post-index query instead. -
+Example:
sql_query_post = DROP TABLE my_tmp_table
expanded to the maximum document ID that was actually fetched from the database during indexing. If no documents were indexed, $maxid will be expanded to 0. -
+Example:
sql_query_post_index = REPLACE INTO counters ( id, val ) \
    VALUES ( 'max_indexed_id', $maxid )
database server. It causes the indexer to sleep for the given amount of milliseconds once per ranged query step. This sleep is unconditional, and is performed before the fetch query. -
+Example:
sql_ranged_throttle = 1000 # sleep for 1 sec before each query step
@@ -8963,7 +8963,7 @@
Author
Specifies a command that will be executed and whose output will be parsed for documents. Refer to Section 3.9, “xmlpipe2 data source” for the specific format description. -
Example:
+Example:
xmlpipe_command = cat /home/sphinx/test.xml
@@ -8971,7 +8971,7 @@
@@ -8984,7 +8984,7 @@
Author
xmlpipe field declaration. Multi-value, optional. Applies to
xmlpipe2
source type only. Refer to Section 3.9, “xmlpipe2 data source”. -Example:
+Example:
xmlpipe_field = subject
xmlpipe_field = content
Author
Makes the specified XML element indexed as both a full-text field and a string attribute. Equivalent to <sphinx:field name="field" attr="string"/> declaration within the XML file. -
Example:
+Example:
xmlpipe_field_string = subject@@ -8993,7 +8993,7 @@Author
Multi-value, optional. Applies to
xmlpipe2
source type only. Syntax fully matches that of sql_attr_uint. -Example:
+Example:
xmlpipe_attr_uint = author_id@@ -9002,7 +9002,7 @@Author
Multi-value, optional. Applies to
xmlpipe2
source type only. Syntax fully matches that of sql_attr_bigint. -Example:
+Example:
xmlpipe_attr_bigint = my_bigint_id@@ -9011,7 +9011,7 @@Author
Multi-value, optional. Applies to
xmlpipe2
source type only. Syntax fully matches that of sql_attr_bool. -Example:
+Example:
xmlpipe_attr_bool = is_deleted # will be packed to 1 bit@@ -9020,7 +9020,7 @@Author
Multi-value, optional. Applies to
xmlpipe2
source type only. Syntax fully matches that of sql_attr_timestamp. -Example:
+Example:
xmlpipe_attr_timestamp = published@@ -9029,7 +9029,7 @@@@ -9044,7 +9044,7 @@Author
Multi-value, optional. Applies to
xmlpipe2
source type only. Syntax fully matches that of sql_attr_float. -Example:
+Example:
xmlpipe_attr_float = lat_radians
xmlpipe_attr_float = long_radians
Author
that will constitute the MVA will be extracted, similar to how sql_attr_multi parses SQL column contents when 'field' MVA source type is specified. -
Example:
+Example:
xmlpipe_attr_multi = taglist@@ -9058,7 +9058,7 @@Author
that will constitute the MVA will be extracted, similar to how sql_attr_multi parses SQL column contents when 'field' MVA source type is specified. -
Example:
+Example:
xmlpipe_attr_multi_64 = taglist@@ -9070,7 +9070,7 @@Author
This setting declares a string attribute tag in xmlpipe2 stream. The contents of the specified tag will be parsed and stored as a string value. -
Example:
+Example:
xmlpipe_attr_string = subject@@ -9083,7 +9083,7 @@Author
XML tag are to be treated as a JSON document and stored into a Sphinx index for later use. Refer to Section 12.1.24, “sql_attr_json” for more details on the JSON attributes. -
Example:
+Example:
xmlpipe_attr_json = properties@@ -9100,7 +9100,7 @@Author
UTF8 fixup feature lets you avoid that. When fixup is enabled, Sphinx will preprocess the incoming stream before passing it to the XML parser and replace invalid UTF-8 sequences with spaces. -
Example:
+Example:
xmlpipe_fixup_utf8 = 1@@ -9114,7 +9114,7 @@Author
authentication when connecting to MS SQL Server. Note that when running
searchd
as a service, account user can differ from the account you used to install the service. -Example:
+Example:
mssql_winauth = 1@@ -9128,7 +9128,7 @@@@ -9143,7 +9143,7 @@Author
using standard zlib algorithm (called deflate and also implemented by
gunzip
). When indexing on a different box than the database, this lets you offload the database, and save on network traffic. The feature is only available if zlib and zlib-devel were both available during build time. -Example:
+Example:
unpack_zlib = col1
unpack_zlib = col2
Author
using modified zlib algorithm used by MySQL COMPRESS() and UNCOMPRESS() functions. When indexing on a different box than the database, this lets you offload the database, and save on network traffic. The feature is only available if zlib and zlib-devel were both available during build time. -
Example:
+Example:
unpack_mysqlcompress = body_compressed
unpack_mysqlcompress = description_compressed
@@ -9159,7 +9159,7 @@
Author
data can not go over the buffer size. This option lets you control the buffer size, both to limit
indexer
memory use, and to enable unpacking of really long data fields if necessary. -Example:
+Example:
unpack_mysqlcompress_maxsize = 1M@@ -9187,7 +9187,7 @@Author
Index type setting lets you choose the needed type. By default, plain local index type will be assumed. -
Example:
+Example:
type = distributed-
Example:
+Example:
source = srcpart1
source = srcpart2
source = srcpart3
@@ -9275,7 +9275,7 @@
Author
.sps
stores string attribute data.
-
+Example:
path = /var/data/test1@@ -9299,7 +9299,7 @@Author
However, such cases are infrequent, and docinfo defaults to "extern". Refer to Section 3.3, “Attributes” for in-depth discussion and RAM usage estimates. -
Example:
+Example:
docinfo = inline@@ -9322,7 +9322,7 @@Author
from the root account, or otherwise be granted enough privileges. If mlock() fails, a warning is emitted, but the index continues working. -
Example:
+Example:
mlock = 1@@ -9451,7 +9451,7 @@Author
a matching entry in the dictionary, stemmers will not be applied at all. Or in other words, wordforms can be used to implement stemming exceptions. -
Example:
+Example:
morphology = stem_en, libstemmer_sv@@ -9529,7 +9529,7 @@Author
top-speed worst-case searches (CRC dictionary), or only slightly impact indexing time but sacrifice worst-case searching time when the prefix expands into very many keywords (keywords dictionary). -
Example:
+Example:
dict = keywords@@ -9567,7 +9567,7 @@Author
PRE, TABLE, TBODY, TD, TFOOT, TH, THEAD, TR, and UL.
Both sentences and paragraphs increment the keyword position counter by 1. -
Example:
+Example:
index_sp = 1@@ -9598,7 +9598,7 @@Author
in a document. Once indexed, zones can then be used for matching with the ZONE operator, see Section 5.3, “Extended query syntax”. -
Example:
+Example:
index_zones = h*, th, title
Versions earlier than 2.1.1-beta only provided this feature for plain
@@ -9619,7 +9619,7 @@
Author
exactly as long as specified will be stemmed. So in order to avoid stemming 3-character keywords, you should specify 4 for the value. For more finely grained control, refer to wordforms feature. -
Example:
+Example:
min_stemming_len = 4@@ -9657,7 +9657,7 @@@@ -9727,7 +9727,7 @@Author
of the index, sorted by the keyword frequency, see
--buildstops
and--buildfreqs
switch in Section 7.1, “indexer
command reference”. Top keywords from that dictionary can usually be used as stopwords. -Example:
+Example:
stopwords = /usr/local/sphinx/data/stopwords.txt
stopwords = stopwords-ru.txt stopwords-en.txt
Author
s02e02 > season 2 episode 2
s3 e3 > season 3 episode 3
-
+Example:
wordforms = /usr/local/sphinx/data/wordforms.txt
wordforms = /usr/local/sphinx/data/alternateforms.txt
wordforms = /usr/local/sphinx/private/dict*.txt
@@ -9758,7 +9758,7 @@
Author
time it makes no sense to embed a 100 MB wordforms dictionary into a tiny delta index. So there needs to be a size threshold, and
embedded_limit
is that threshold. -Example:
+Example:
embedded_limit = 32K@@ -9838,7 +9838,7 @@Author
during indexing and searching respectively. Therefore, to pick up changes in the file it's required to reindex and restart
searchd
. -Example:
+Example:
exceptions = /usr/local/sphinx/data/exceptions.txt@@ -9848,7 +9848,7 @@Author
Only those words that are not shorter than this minimum will be indexed. For instance, if min_word_len is 4, then 'the' won't be indexed, but 'they' will be. -
Example:
+Example:
min_word_len = 4@@ -9916,7 +9916,7 @@Author
Starting with 2.2.3-beta, aliases "english" and "russian" are allowed at control character mapping. -
Example:
+Example:
# default are English and Russian letters
charset_table = 0..9, A..Z->a..z, _, a..z, \
    U+410..U+42F->U+430..U+44F, U+430..U+44F, U+401->U+451, U+451
@@ -9939,7 +9939,7 @@
Author
The syntax is the same as for charset_table, but it's only allowed to declare characters, and not allowed to map them. Also, the ignored characters must not be present in charset_table. -
Example:
+Example:
ignore_chars = U+AD@@ -9969,7 +9969,7 @@Author
$cl->Query ( "( keyword | keyword* ) other keywords" );-
Example:
+Example:
min_prefix_len = 3@@ -9991,7 +9991,7 @@Author
There's no automatic way to rank perfect word matches higher in an infix index, but the same tricks as with prefix indexes can be applied. -
Example:
+Example:
min_infix_len = 3@@ -10027,7 +10027,7 @@Author
and intentionally forbidden in that case. If required, you can still limit the length of a substring that you search for in the application code. -
Example:
+Example:
max_substring_len = 12@@ -10041,7 +10041,7 @@Author
page contents. prefix_fields specifies what fields will be prefix-indexed; all other fields will be indexed in normal mode. The value format is a comma-separated list of field names. -
Example:
+Example:
prefix_fields = url, domain@@ -10051,7 +10051,7 @@Author
Similar to prefix_fields, but lets you limit infix-indexing to given fields. -
Example:
+Example:
infix_fields = url, domain@@ -10091,7 +10091,7 @@Author
good results, thanks to phrase based ranking: it will pull closer phrase matches (which in case of N-gram CJK words can mean closer multi-character word matches) to the top. -
Example:
+Example:
ngram_len = 1@@ -10103,7 +10103,7 @@Author
this list defines characters, sequences of which are subject to N-gram extraction. Words comprised of other characters will not be affected by N-gram indexing feature. The value format is identical to charset_table. -
Example:
+Example:
ngram_chars = U+3000..U+2FA1F@@ -10129,7 +10129,7 @@Author
Phrase boundary condition will be raised if and only if such character is followed by a separator; this is to avoid abbreviations such as S.T.A.L.K.E.R or URLs being treated as several phrases. -
Example:
+Example:
phrase_boundary = ., ?, !, U+2026 # horizontal ellipsis@@ -10139,7 +10139,7 @@Author
On phrase boundary, current word position will be additionally incremented by this number. See phrase_boundary for details. -
Example:
+Example:
phrase_boundary_step = 100@@ -10176,7 +10176,7 @@Author
There are no restrictions on tag names; ie. everything that looks like a valid tag start, or end, or a comment will be stripped. -
Example:
+Example:
html_strip = 1@@ -10187,7 +10187,7 @@Author
Specifies HTML markup attributes whose contents should be retained and indexed even though other HTML markup is stripped. The format is per-tag enumeration of indexable attributes, as shown in the example below. -
Example:
+Example:
html_index_attrs = img=alt,title; a=title;@@ -10203,7 +10203,7 @@Author
The value is a comma-separated list of element (tag) names whose contents should be removed. Tag names are case insensitive. -
Example:
+Example:
html_remove_elements = style, script@@ -10226,7 +10226,7 @@@@ -10282,7 +10282,7 @@Author
local indexes (refer to Section 12.2.31, “agent” for the details). However, that creates redundant CPU and network load, and
dist_threads
is now strongly suggested instead. -Example:
+Example:
local = chunk1
local = chunk2
Author
(ie. sequentially or in parallel too) depends solely on the agent configuration (ie. dist_threads directive). Master has no remote control over that. -
Example:
+Example:
# config on box2
# sharding an index over 3 servers
agent = box2:9312:chunk2
@@ -10297,7 +10297,7 @@
Author
# sharding an index over 3 servers
agent = box1:9312:chunk2
agent = box2:9312:chunk3
-
Agent mirrors
+
Agent mirrors
New syntax added in 2.1.1-beta lets you define so-called agent mirrors that can be used interchangeably when processing a search query. Master server keeps track of mirror status (alive or dead) and response times, and does @@ -10337,7 +10337,7 @@
Author
in order to have some statistics and at least check, whether the remote host is still alive. ha_ping_interval defaults to 1000 msec. Setting it to 0 disables pings and statistics will only be accumulated based on actual queries. -
Example:
+Example:
# sharding index over 4 servers total
# in just 2 chunks but with 2 failover mirrors for each chunk
# box1, box2 carry chunk1 as local
@@ -10371,7 +10371,7 @@
Author
in workers=threads mode. In other modes, simple non-persistent connections (i.e., one connection per operation) will be used, and a warning will show up in the console. -
Example:
+Example:
agent_persistent = remotebox:9312:index2@@ -10390,7 +10390,7 @@Author
Also, all network errors on blackhole agents will be ignored. The value format is completely identical to regular agent directive. -
Example:
+Example:
agent_blackhole = testbox:9312:testindex1,testindex2@@ -10403,7 +10403,7 @@Author
successfully. If the timeout is reached but connect() does not complete, and retries are enabled, retry will be initiated. -
Example:
+Example:
agent_connect_timeout = 300@@ -10418,7 +10418,7 @@Author
a remote agent equals the sum of
agent_connection_timeout
andagent_query_timeout
. Queries will not be retried if this timeout is reached; a warning will be produced instead. -Example:
+Example:
agent_query_timeout = 10000 # our query can be long, allow up to 10 sec@@ -10438,7 +10438,7 @@Author
This directive does not affect
indexer
in any way, it only affectssearchd
. -Example:
+Example:
preopen = 1@@ -10469,7 +10469,7 @@Author
This directive does not affect
searchd
in any way, it only affectsindexer
. -Example:
+Example:
inplace_enable = 1@@ -10481,7 +10481,7 @@Author
This directive does not affect
searchd
in any way, it only affectsindexer
. -Example:
+Example:
inplace_hit_gap = 1M@@ -10493,7 +10493,7 @@Author
This directive does not affect
searchd
in any way, it only affectsindexer
. -Example:
+Example:
inplace_docinfo_gap = 1M@@ -10505,7 +10505,7 @@Author
This directive does not affect
searchd
in any way, it only affectsindexer
. -Example:
+Example:
inplace_reloc_factor = 0.1@@ -10517,7 +10517,7 @@Author
This directive does not affect
searchd
in any way, it only affectsindexer
. -Example:
+Example:
inplace_write_factor = 0.1@@ -10531,7 +10531,7 @@Author
enables exact form operator in the query language to work. This impacts the index size and the indexing time. However, searching performance is not impacted at all. -
Example:
+Example:
index_exact_words = 1@@ -10542,7 +10542,7 @@Author
This directive does not affect
searchd
in any way, it only affectsindexer
. -Example:
+Example:
overshort_step = 1@@ -10553,7 +10553,7 @@Author
This directive does not affect
searchd
in any way, it only affectsindexer
. -Example:
+Example:
stopword_step = 1@@ -10588,7 +10588,7 @@Author
hitless, "simon says hello world" will be converted to ("simon says" & hello & world) query, matching all documents that contain "hello" and "world" anywhere in the document, and also "simon says" as an exact phrase. -
Example:
+Example:
hitless_words = all@@ -10621,7 +10621,7 @@Author
This directive does not affect
indexer
in any way, it only affectssearchd
. -Example:
+Example:
expand_keywords = 1@@ -10660,7 +10660,7 @@@@ -10709,7 +10709,7 @@Author
so that multiple different blended characters could be normalized into just one base form. This is useful when indexing multiple alternative Unicode codepoints with equivalent glyphs. -
Example:
+Example:
blend_chars = +, &, U+23
blend_chars = +, &->+ # 2.0.1 and above
Author
Default behavior is to index the entire token, equivalent to
blend_mode = trim_none
. -Example:
+Example:
blend_mode = trim_tail, skip_pure@@ -10729,7 +10729,7 @@Author
hence, specifying 512 MB limit and only inserting 3 MB of data should result in allocating 3 MB, not 512 MB.
-
Example:
+Example:
rt_mem_limit = 512M@@ -10743,7 +10743,7 @@Author
in INSERT statements without an explicit list of inserted columns will have to be in the same order as configured.
-
Example:
+Example:
rt_field = author
rt_field = title
rt_field = content
@@ -10754,7 +10754,7 @@
Author
Multi-value (an arbitrary number of attributes is allowed), optional. Declares an unsigned 32-bit attribute. Introduced in version 1.10-beta. -
Example:
+Example:
rt_attr_uint = gid@@ -10763,7 +10763,7 @@Author
Multi-value (there might be multiple attributes declared), optional. Declares a 1-bit unsigned integer attribute. Introduced in version 2.1.2-release. -
Example:
+Example:
rt_attr_bool = available@@ -10772,7 +10772,7 @@Author
Multi-value (an arbitrary number of attributes is allowed), optional. Declares a signed 64-bit attribute. Introduced in version 1.10-beta. -
Example:
+Example:
rt_attr_bigint = guid@@ -10781,7 +10781,7 @@Author
Multi-value (an arbitrary number of attributes is allowed), optional. Declares a single precision, 32-bit IEEE 754 format float attribute. Introduced in version 1.10-beta. -
Example:
+Example:
rt_attr_float = gpa@@ -10790,7 +10790,7 @@Author
Declares the UNSIGNED INTEGER (unsigned 32-bit) MVA attribute. Multi-value (ie. there may be more than one such attribute declared), optional. Applies to RT indexes only. -
Example:
+Example:
rt_attr_multi = my_tags@@ -10799,7 +10799,7 @@Author
Declares the BIGINT (signed 64-bit) MVA attribute. Multi-value (ie. there may be more than one such attribute declared), optional. Applies to RT indexes only. -
Example:
+Example:
rt_attr_multi_64 = my_wide_tags@@ -10807,7 +10807,7 @@Author
Timestamp attribute declaration. Multi-value (an arbitrary number of attributes is allowed), optional. Introduced in version 1.10-beta. -
Example:
+Example:
rt_attr_timestamp = date_added@@ -10815,7 +10815,7 @@Author
String attribute declaration. Multi-value (an arbitrary number of attributes is allowed), optional. Introduced in version 1.10-beta. -
Example:
+Example:
rt_attr_string = author@@ -10825,7 +10825,7 @@Author
Introduced in version 2.1.1-beta.
Refer to Section 12.1.24, “sql_attr_json” for more details on the JSON attributes. -
Example:
+Example:
rt_attr_json = properties@@ -10839,11 +10839,11 @@Author
index. Essentially, this directive controls how exactly master does the load balancing between the configured mirror agent nodes. As of 2.1.1-beta, the following strategies are implemented: -
Simple random balancing
ha_strategy = random+
Simple random balancing
ha_strategy = randomThe default balancing mode. Simple linear random distribution among the mirrors. That is, equal selection probability are assigned to every mirror. Kind of similar to round-robin (RR), but unlike RR, does not impose a strict selection order. -
Adaptive randomized balancing
+
Adaptive randomized balancing
The default simple random strategy does not take mirror status, error rate, and, most importantly, actual response latencies into account. So to accommodate for heterogeneous clusters and/or temporary spikes in agent node load, we have @@ -10887,7 +10887,7 @@
Author
ha_strategy = noerrorsLatency-weighted probabilities, but mirrors with worse errors/success ratio are excluded from the selection. -
Round-robin balancing
ha_strategy = roundrobinSimple round-robin selection, that is, selecting the 1st mirror +
Round-robin balancing
ha_strategy = roundrobinSimple round-robin selection, that is, selecting the 1st mirror in the list, then the 2nd one, then the 3rd one, etc, and then repeating the process once the last mirror in the list is reached. Unlike with the randomized strategies, RR imposes a strict querying order (1, 2, 3, .., @@ -10913,7 +10913,7 @@
Author
to index a current word pair or not.
bigram_freq_words
lets you define a list of such keywords. -Example:
+Example:
bigram_freq_words = the, a, you, i@@ -10953,7 +10953,7 @@Author
For most use cases,
both_freq
would be the best mode, but your mileage may vary. -Example:
+Example:
bigram_freq_words = both_freq@@ -10980,7 +10980,7 @@Author
and its extension towards multiple fields, called BM25F. They require per-document length and per-field lengths, respectively. Hence the additional directive. -
Example:
+Example:
index_field_lengths = 1@@ -11017,7 +11017,7 @@Author
installed in the system and Sphinx must be configured and built with the
--with-re2
switch. Binary packages should come with RE2 built in. -Example:
+Example:
# index '13-inch' as '13inch'
regexp_filter = \b(\d+)\" => \1inch
@@ -11041,7 +11041,7 @@
Author
stopwords_unstemmed directive fixes that issue. When it's enabled, stopwords are applied before stemming (and therefore to the original word forms), and the tokens are stopped when token == stopword. -
Example:
+Example:
stopwords_unstemmed = 1@@ -11076,14 +11076,14 @@Author
first, then converting those to .idf format using
--buildidf
, then merging all .idf files across the cluster using
. Refer to Section 7.4, “indextool
command reference” for more information. -Example:
+Example:
global_idf = /usr/local/sphinx/var/global.idfRLP context configuration file. Mandatory if RLP is used. Added in 2.2.1-beta. -
Example:
+Example:
rlp_context = /home/myuser/RLP/rlp-context.xml@@ -11102,7 +11102,7 @@Author
Note that this option also affects RT indexes. When it is enabled, all attribute updates will be disabled, and all disk chunks of RT indexes will behave as described above. However, inserting and deleting documents from RT indexes is still possible with ondisk_attrs enabled. -
Possible values:
@@ -11146,7 +11146,7 @@Author
and the database server can time out. You can resolve that either by raising timeouts on the SQL server side or by lowering
mem_limit
. -Example:
+Example:
mem_limit = 256M
# mem_limit = 262144K # same, but in KB
# mem_limit = 268435456 # same, but in bytes
@@ -11169,7 +11169,7 @@
Author
(that's mostly limited by disk heads seek time). Limiting indexing I/O to a fraction of that can help reduce search performance degradation caused by indexing. -
Example:
+Example:
max_iops = 40@@ -11185,14 +11185,14 @@Author
by max_iops setting. At the time of this writing, all I/O calls should be under 256 KB (default internal buffer size) anyway, so
max_iosize
values higher than 256 KB should not affect anything. -Example:
+Example:
max_iosize = 1048576Maximum allowed field size for XMLpipe2 source type, bytes. Optional, default is 2 MB. -
Example:
+Example:
max_xmlpipe2_field = 8M@@ -11206,7 +11206,7 @@Author
mem_limit. Note that several (currently up to 4) buffers for different files will be allocated, proportionally increasing the RAM usage. -
Example:
+Example:
write_buffer = 4M@@ -11226,7 +11226,7 @@Author
(for example) 2 MB in size, but
max_file_field_buffer
value is 128 MB, peak buffer usage would still be only 2 MB. However, files over 128 MB would be entirely skipped. -Example:
+Example:
max_file_field_buffer = 128M@@ -11329,7 +11329,7 @@Author
makes all connections to that port bypass the thread pool and always forcibly create a new dedicated thread. That's useful for managing in case of a severe overload when the daemon would either stall or not let you connect via a regular port. -
Examples:
+Examples:
listen = localhost
listen = localhost:5000
listen = 192.168.0.1:5000
@@ -11356,7 +11356,7 @@
Author
You can also use 'syslog' as the file name. In this case the events will be sent to the syslog daemon. To use the syslog option, Sphinx must be configured with '--with-syslog' at build time. -
Example:
+Example:
log = /var/log/searchd.log@@ -11369,7 +11369,7 @@Author
In this case all search queries will be sent to the syslog daemon with LOG_INFO priority, prefixed with '[query]' instead of a timestamp. To use the syslog option, Sphinx must be configured with '--with-syslog' at build time. -
Example:
+Example:
query_log = /var/log/query.log@@ -11385,7 +11385,7 @@Author
on the fly, using
SET GLOBAL query_log_format=sphinxql
syntax. Refer to Section 5.9, “searchd
query log formats” for more discussion and format details. -Example:
+Example:
query_log_format = sphinxql@@ -11393,14 +11393,14 @@Author
Network client request read timeout, in seconds. Optional, default is 5 seconds.
searchd
will forcibly close the client connections which fail to send a query within this timeout. -Example:
+Example:
read_timeout = 1Maximum time to wait between requests (in seconds) when using persistent connections. Optional, default is five minutes. -
Example:
+Example:
client_timeout = 3600@@ -11435,7 +11435,7 @@Author
Thus, in thread_pool mode it makes little sense to raise max_children much higher than the amount of CPU cores. Usually that will only hurt CPU contention and decrease the general throughput. -
Example:
+Example:
max_children = 10@@ -11465,7 +11465,7 @@Author
of
searchd
; to stopsearchd
; to notify it that it should rotate the indexes. Can also be used for different external automation scripts. -Example:
+Example:
pid_file = /var/run/searchd.pid@@ -11504,7 +11504,7 @@Author
memory usage during the rotation (because both old and new copies of
.spa/.spi/.spm
data need to be in RAM while preloading new copy). Average usage stays the same. -Example:
+Example:
seamless_rotate = 1@@ -11527,14 +11527,14 @@Author
They also make
searchd
use more file handles. In most scenarios it's therefore preferred and recommended to preopen indexes. -Example:
+Example:
preopen_indexes = 1Whether to unlink .old index copies on successful rotation. Optional, default is 1 (do unlink). -
Example:
+Example:
unlink_old = 0@@ -11550,7 +11550,7 @@Author
between those intervals is set with
attr_flush_period
, in seconds.It defaults to 0, which disables the periodic flushing, but flushing will still occur at normal shut-down. -
Example:
+Example:
attr_flush_period = 900 # persist updates to disk every 15 minutes@@ -11560,7 +11560,7 @@Author
Only used for internal sanity checks, does not directly affect RAM use or performance. Optional, default is 8M. Introduced in version 0.9.9-rc1. -
Example:
+Example:
max_packet_size = 32M@@ -11578,7 +11578,7 @@Author
In the meantime, MVA updates are intended to be used as a measure to quickly catchup with latest changes in the database until the next index rebuild; not as a persistent storage mechanism. -
Example:
+Example:
mva_updates_pool = 16M@@ -11587,7 +11587,7 @@Author
Only used for internal sanity checks, does not directly affect RAM use or performance. Optional, default is 256. Introduced in version 0.9.9-rc1. -
Example:
+Example:
max_filters = 1024@@ -11596,7 +11596,7 @@Author
Only used for internal sanity checks, does not directly affect RAM use or performance. Optional, default is 4096. Introduced in version 0.9.9-rc1. -
Example:
+Example:
max_filter_values = 16384@@ -11610,7 +11610,7 @@Author
fail with "connection refused" message. listen_backlog directive controls the length of the connection queue. Non-Windows builds should work fine with the default value. -
Example:
+Example:
listen_backlog = 20@@ -11622,7 +11622,7 @@Author
two associated read buffers (one for document list and one for hit list). This setting lets you control their sizes, increasing per-query RAM use, but possibly decreasing IO time. -
Example:
+Example:
read_buffer = 1M@@ -11638,7 +11638,7 @@Author
unhinted read size, but raising it for smaller lists. It will not affect RAM use because read buffer will be already allocated. So it should be not greater than read_buffer. -
Example:
+Example:
read_unhinted = 32K@@ -11649,7 +11649,7 @@Author
Makes searchd perform a sanity check of the amount of the queries submitted in a single batch when using multi-queries. Set it to 0 to skip the check. -
Example:
+Example:
max_batch_queries = 256@@ -11660,7 +11660,7 @@Author
Limits RAM usage of a common subtree optimizer (see Section 5.11, “Multi-queries”). At most this much RAM will be spent to cache document entries per each query. Setting the limit to 0 disables the optimizer. -
Example:
+Example:
subtree_docs_cache = 8M@@ -11671,7 +11671,7 @@Author
Limits RAM usage of a common subtree optimizer (see Section 5.11, “Multi-queries”). At most this much RAM will be spent to cache keyword occurrences (hits) per each query. Setting the limit to 0 disables the optimizer. -
Example:
+Example:
subtree_hits_cache = 16M@@ -11704,7 +11704,7 @@Author
does not suffer from overheads of creating a new thread per every new connection and managing a lot of parallel threads. As of 2.3.1, we still retain workers=threads for the transition period, but thread pool is scheduled to become the only MPM mode. -
Example:
+Example:
workers = thread_pool@@ -11735,7 +11735,7 @@@@ -11806,7 +11806,7 @@Author
Up to
dist_threads
threads are created to process those files. That speeds up snippet extraction when the total amount of document data to process is significant (hundreds of megabytes). -Example:
+Example:
index dist_test
{
    type = distributed
@@ -11774,7 +11774,7 @@
Author
Otherwise, the default path, which in most cases is the same as working folder, may point to the folder with no write access (for example, /usr/local/var/data). In this case, the searchd will not start at all. -
Example:
+Example:
binlog_path = # disable logging
binlog_path = /var/data # /var/data/binlog.001 etc will be created
Author
cases, the default hybrid mode 2 provides a nice balance of speed and safety, with full RT index data protection against daemon crashes, and some protection against hardware ones. -
Example:
+Example:
binlog_flush = 1 # ultimate safety, low speed@@ -11818,7 +11818,7 @@Author
A new binlog file will be forcibly opened once the current binlog file reaches this limit. This achieves a finer granularity of logs and can yield more efficient binlog disk usage under certain borderline workloads. -
Example:
+Example:
binlog_max_log_size = 16M@@ -11842,7 +11842,7 @@Author
This might be useful, for instance, when the document storage locations (be those local storage or NAS mountpoints) are inconsistent across the servers. -
Example:
+Example:
snippets_file_prefix = /mnt/common/server1/@@ -11854,7 +11854,7 @@Author
Specifies the default collation used for incoming requests. The collation can be overridden on a per-query basis. Refer to Section 5.12, “Collations” section for the list of available collations and other details. -
Example:
+Example:
collation_server = utf8_ci@@ -11865,7 +11865,7 @@Author
Specifies the libc locale, affecting the libc-based collations. Refer to Section 5.12, “Collations” section for the details. -
Example:
+Example:
collation_libc_locale = fr_FR@@ -11877,7 +11877,7 @@@@ -11897,7 +11897,7 @@Author
Specifies the trusted directory from which the UDF libraries can be loaded. Requires workers = thread to take effect. -
Example:
+Example:
workers = threads
plugin_dir = /usr/local/sphinx/lib
Author
mysql_version_string
directive and havesearchd
report a different version to clients connecting over MySQL protocol. (By default, it reports its own version.) -Example:
+Example:
mysql_version_string = 5.0.37@@ -11912,7 +11912,7 @@Author
periodic flush checks, and eligible RAM chunks can get saved, enabling consequential binlog cleanup. See Section 4.4, “Binary logging” for more details. -
Example:
+Example:
rt_flush_period = 3600 # 1 hour@@ -11938,7 +11938,7 @@Author
with up to 250 levels, 150K for up to 700 levels, etc. If the stack size limit is not met,
searchd
fails the query and reports the required stack size in the error message. -Example:
+Example:
thread_stack = 256K@@ -11955,7 +11955,7 @@Author
of such expansions. Setting
expansion_limit = N
restricts expansions to no more than N of the most frequent matching keywords (per each wildcard in the query). -Example:
+Example:
expansion_limit = 16@@ -11971,7 +11971,7 @@Author
process that monitors the main server process, and automatically restarts the latter in case of abnormal termination. Watchdog is enabled by default. -
Example:
+Example:
watchdog = 0 # disable watchdog@@ -11984,7 +11984,7 @@Author
If you load UDF functions, but Sphinx crashes, when it gets (automatically) restarted, your UDF and global variables will no longer be available; using persistent state helps a graceful recovery with no such surprises. -
Example:
+Example:
sphinxql_state = uservars.sql@@ -12000,7 +12000,7 @@Author
by this directive.
To disable pings, set ha_ping_interval to 0. -
Example:
+Example:
ha_ping_interval = 0@@ -12025,7 +12025,7 @@Author
They can be inspected using SHOW AGENT STATUS statement. -
Example:
+Example:
ha_period_karma = 120@@ -12036,7 +12036,7 @@Author
when all of them are busy). This directive limits that number. It affects the number of connections to each agent's host, across all distributed indexes.
It is reasonable to set the value equal to or less than the max_children option of the agents. -
Example:
+Example:
persistent_connections_limit = 29 # assume that each host of agents has max_children = 30 (or 29).@@ -12049,7 +12049,7 @@Author
RT optimization activity will not generate more disk iops (I/Os per second) than the configured limit. Modern SATA drives can perform up to around 100 I/O operations per second, and limiting rt_merge_iops can reduce search performance degradation caused by merging. -
Example:
+Example:
rt_merge_iops = 40@@ -12066,7 +12066,7 @@Author
limit. Thus, it is guaranteed that all the optimization activity will not generate more than (rt_merge_iops * rt_merge_maxiosize) bytes of disk I/O per second. -
Example:
+Example:
rt_merge_maxiosize = 1M@@ -12116,7 +12116,7 @@Author
is somewhat more error prone.) It is not necessary to specify all 4 costs at once, as the missed one will take the default values. However, we strongly suggest to specify all of them, for readability. -
Example:
+Example:
predicted_time_costs = doc=128, hit=96, skip=4096, match=128
@@ -12130,7 +12130,7 @@
Author
flushing attributes and updating the binlog, and that requires some time. searchd --stopwait will wait up to shutdown_time seconds for the daemon to finish its jobs. A suitable time depends on your index size and load. -
Example:
+Example:
shutdown_timeout = 5 # wait for up to 5 seconds@@ -12198,7 +12198,7 @@Author
the base dictionary path. File names are hardcoded and specific to a given lemmatizer; the Russian lemmatizer uses ru.pak dictionary file. The dictionaries can be obtained from the Sphinx website. -
Example:
+Example:
lemmatizer_base = /usr/local/share/sphinx/dicts/@@ -12211,7 +12211,7 @@Author
By default, JSON format errors are ignored (
ignore_attr
) and the indexer tool will just show a warning. Setting this option tofail_index
will rather make indexing fail at the first JSON format error. -Example:
+Example:
on_json_attr_error = ignore_attr@@ -12225,7 +12225,7 @@Author
of strings; if the option is 0, such values will be indexed as strings. This conversion applies to any data source, that is, JSON attributes originating from either SQL or XMLpipe2 sources will all be affected. -
Example:
+Example:
json_autoconv_numbers = 1@@ -12239,21 +12239,21 @@Author
will be automatically brought to lower case when indexing. This conversion applies to any data source, that is, JSON attributes originating from either SQL or XMLpipe2 sources will all be affected. -
Example:
+Example:
json_autoconv_keynames = lowercasePath to the RLP root folder. Mandatory if RLP is used. Added in 2.2.1-beta. -
Example:
+Example:
rlp_root = /home/myuser/RLPRLP environment configuration file. Mandatory if RLP is used. Added in 2.2.1-beta. -
Example:
+Example:
rlp_environment = /home/myuser/RLP/rlp-environment.xml@@ -12262,7 +12262,7 @@Author
Do not set this value to more than 10 MB, because Sphinx splits large documents into 10 MB chunks before passing them to the RLP. This option has effect only if
morphology = rlp_chinese_batched
is specified. Added in 2.2.1-beta. -Example:
+Example:
rlp_max_batch_size = 100k@@ -12270,7 +12270,7 @@Author
Maximum number of documents batched before processing them by the RLP. Optional, default is 50. This option has effect only if
morphology = rlp_chinese_batched
is specified. Added in 2.2.1-beta. -Example:
+Example:
rlp_max_batch_docs = 100
-Major features
- +
added query cache
Major features
-
added query cache
added thread pool mode, and the respective workers = thread_pool, max_children, net_workers, queue_max_length directives
added vip suffixes to listener protocols (sphinx_vip, mysql41_vip)
Removals
- +
removed fork and prefork modes
Removals
-
removed fork and prefork modes
removed
prefork_rotation_throttle
directiveMinor features
- +
added RELOAD PLUGINS SphinxQL statement
Minor features
-
added RELOAD PLUGINS SphinxQL statement
added FLUSH ATTRIBUTES SphinxQL statement
Bug fixes
- +
fixed #2167,
--keep_attrs
did not work with--rotate
Bug fixes
-Minor features
- +
added #2112, string equal comparison support for IF() function (for JSON and string attributes)
Minor features
-Bug fixes
- +
fixed #2158, crash at RT index after morphology changed to AOT after index was created
Bug fixes
fixed #2158, crash at RT index after morphology changed to AOT after index was created
fixed #2155, stopwords got missed on disk chunk save at RT index
fixed #2151, agents statistics missed in case of huge amount of agents
- @@ -12350,7 +12350,7 @@
fixed #2139, escape all special characters in JSON result set, according to RFC 4627
Bug fixes
fixed snippets crash with blend chars at the beginning of a string
-Bug fixes
- +
fixed #2104, ALL()/ANY()/INDEXOF() support for distributed indexes
Bug fixes
fixed #2104, ALL()/ANY()/INDEXOF() support for distributed indexes
fixed #2102, show agent status misses warnings from agents
fixed #2100, crash of
indexer
while loading stopwords with tokenizer plugin- @@ -12359,9 +12359,9 @@
fixed #2098, arbitrary JSON subkeys and IS NULL for distributed indexes
Bug fixes
indexation of duplicate documents
-New minor features
- +
added OPTION rand_seed which affects ORDER BY RAND()
New minor features
-
added OPTION rand_seed which affects ORDER BY RAND()
Bug fixes
- +
fixed #2042,
indextool
fails with field mask on 32+ fieldsBug fixes
fixed #2042,
indextool
fails with field mask on 32+ fieldsfixed #2031, wrong encoding with UnixODBC/Oracle source
fixed #2056, several bugs in RLP tokenizer
- @@ -12372,12 +12372,12 @@
fixed #2054, SHOW THREADS hangs if queries in prefork mode
Bug fixes
fixed MySQL protocol response when daemon maxed out
-New major features
- +
added ALTER RTINDEX rt1 RECONFIGURE which allows to change RT index settings on the fly
New major features
-
added ALTER RTINDEX rt1 RECONFIGURE which allows to change RT index settings on the fly
added SHOW INDEX idx1 SETTINGS statement
added ability to specify several destination forms for the same source wordform (as a result, N:M mapping is now available)
added blended chars support to exceptions
New minor features
- +
New minor features
added FACTORS() alias for PACKEDFACTORS() function
added
LIMIT
clause for the FACET keyword- @@ -12386,11 +12386,11 @@
added JSON-formatted output to
PACKEDFACTORS()
functionNew minor features
-added
searchd
configuration keys agent_connect_timeout, agent_query_timeout, agent_retry_count and agent_retry_delayGROUPBY() function now returns strings for string attributes
Optimizations and removals
- +
optimized json_autoconv_numbers option speed
Optimizations and removals
-
optimized json_autoconv_numbers option speed
optimized tokenizing with exceptions on
fixed #1970, speeding up ZONE and ZONESPAN operators
Bug fixes
- +
fixed #2027, slow queries to multiple indexes with large kill-lists
Bug fixes
fixed #2027, slow queries to multiple indexes with large kill-lists
fixed #2022, blend characters of matched word must not be outside of snippet passage
- @@ -12403,7 +12403,7 @@
fixed #2018, different wildcard behaviour in RT and plain indexes
Bug fixes
fixed cpu time logging for cases where work is done in child threads or agents
-New features
- +
added #1920, charset_table aliases
New features
added #1920, charset_table aliases
added #1887, filtering over string attributes
- @@ -12412,10 +12412,10 @@
added #1689, GROUP BY JSON attributes
New features
-Optimizations and removals
- +
improved speed of concurrent insertion in RT indexes
Optimizations and removals
-
improved speed of concurrent insertion in RT indexes
removed max_matches config key
Bug fixes
- +
Bug fixes
fixed #1942, crash in SHOW THREADS command
fixed #1922, crash on snippet generation for queries with duplicated words
- @@ -12431,7 +12431,7 @@
Bug fixes
fixed template index removing on rotation
-New features
- +
added #1604, CALL KEYWORDS can show now multiple lemmas for a keyword
New features
added #1604, CALL KEYWORDS can show now multiple lemmas for a keyword
added ALTER TABLE DROP COLUMN
added ALTER for JSON/string/MVA attributes
- @@ -12443,7 +12443,7 @@
added REMAP() function which surpasses SetOverride() API
New features
-
added position shift operator to phrase operator
added possibility to add user-defined rankers (via plugins)
Optimizations, behavior changes, and removals
- +
changed #1797, per-term statistics report (expanded terms fold to their respective substrings)
Optimizations, behavior changes, and removals
changed #1797, per-term statistics report (expanded terms fold to their respective substrings)
changed default thread_stack value to 1M
changed local directive in a distributed index which takes now a list (eg.
local=shard1,shard2,shard3
)- @@ -12460,7 +12460,7 @@
deprecated SetMatchMode() API call
Optimizations, behavior changes, and removals
<removed support for client versions 0.9.6 and below
-Major new features
- added ALTER TABLE that can add attributes to disk and RT indexes on the fly
+Major new features
- added ALTER TABLE that can add attributes to disk and RT indexes on the fly
- added ATTACH support for non-empty RT target indexes
- added Chinese segmentation with RLP (Rosette Linguistics platform) support
- added English, German lemmatization support
@@ -12475,19 +12475,19 @@Major new features
added table functions mechanism, and REMOVE_REPEATS() table function-- added support for arbitrary expressions in WHERE for DELETE queries
Ranking related features
- added OPTION local_df=1, an option to aggregate IDFs over local indexes (shards)
+Ranking related features
-
- added OPTION local_df=1, an option to aggregate IDFs over local indexes (shards)
- added UDF XXX_reinit() method to reload UDFs with
workers=prefork
- added comma-separated syntax to OPTION
idf
,tfidf_unnormalized
andtfidf_normalized
flags- added
lccs
,wlccs
,exact_order
,min_gaps
, andatc
ranking factors- added
sphinx_get_XXX_factors()
, a faster interface to access PACKEDFACTORS() in UDFs- added support for exact_hit, exact_order field factors when using more than 32 fields (exact_hit, exact_order)
Instrumentation features
- added DESCRIBE and --dumpheader support for tokencount attributes (generated by index_field_lengths=1 directive)
+Instrumentation features
-
- added DESCRIBE and --dumpheader support for tokencount attributes (generated by index_field_lengths=1 directive)
- added RT index query profile, percentages, totals to SHOW PROFILE
- added
predicted_time
,dist_predicted_time
,fetched_docs
,fetched_hits
counters to SHOW META- added
total_tokens
anddisk_bytes
counters to SHOW INDEX STATUSGeneral features
- added ALL(), ANY() and INDEXOF() functions for JSON subarrays
+General features
- added ALL(), ANY() and INDEXOF() functions for JSON subarrays
- added MIN_TOP_WEIGHT(), MIN_TOP_SORTVAL() functions
- added TOP() aggregate function to expression ranker
- added a check for duplicated tail hit positions in indextool --check
@@ -12503,7 +12503,7 @@General features
-added string filter support in distributed queries, SphinxAPI, SphinxQL query log
- added support for mixed distributed and local index queries (SELECT * FROM dist1,dist2,local3), and
index_weights
option for that caseOptimizations, behavior changes, and removals
- optimized JSON attributes access (1.12x to 2.0x+ total query speedup depending on the JSON data)
+Optimizations, behavior changes, and removals
-
- optimized JSON attributes access (1.12x to 2.0x+ total query speedup depending on the JSON data)
- optimized SELECT (1.02x to 3.5x speedup, depending on index schema size)
- optimized UPDATE (up to 3x faster on big updates)
- optimized away internal threads table mutex contention with
@@ -12514,7 +12514,7 @@workers=threads
and 1000s of threadsOptimizations, behavior changes, and removals
<- disallowed dashes in index names in API requests (just like in SphinxQL)
- removed legacy
xmlpipe
data source v1,compat_sphinxql_magics
directive,SetWeights()
SphinxAPI call, and SPH_SORT_CUSTOM SphinxAPI modeBug fixes
- fixed #1734, unquoted literal in json subscript could cause a crash, returns 'unknown column' now.
+Bug fixes
- fixed #1734, unquoted literal in json subscript could cause a crash, returns 'unknown column' now.
- fixed #1683, under certain conditions stopwords were not taken into account in RT indexes
- fixed #1648, #1644, when using AOT lemmas with snippet generation, not all the forms got highlighted
- fixed #1549, OPTION
@@ -12529,7 +12529,7 @@idf=tfidf_normalized
was ignored for distributed queriesBug fixes
fixed a crash while creating indexes with sql_joined_field
-Bug fixes
- +
fixed #1994, parsing of empty JSON arrays
Bug fixes
fixed #1994, parsing of empty JSON arrays
fixed #1987, handling of index_exact_words with AOT morphology and infixes on
fixed #1984, teaching HTML parser to handle hex numbers
- @@ -12537,7 +12537,7 @@
fixed #1983, master and agents networking issue
Bug fixes
- fixed #1933, quorum operator works incorrectly if its number is an exception
- fixed #1932, daemon index recovery after failed rotation
- fixed #1923, crash at indexer with dict=keywords
- fixed #1682, field end modifier does not work with words containing blended chars
Bug fixes
- fixed #1917, field limit propagation outside of group
- fixed #1915, exact form passes to index skipping stopwords filter
- fixed #1905, multiple lemmas at the end of a field
- fixed #1903, indextool check mode for hitless indexes and indexes with a large amount of documents
- fixed the Quick Tour documentation chapter
Bug fixes
- fixed #1857, crash in Arabic stemmer
- fixed #1875, crash on adding documents with long words in a dict=keywords index with morphology and infixes enabled
- fixed #1876, crash on words with large codepoints and infix searches
- fixed #1880, crash on multiquery with one incorrect query
- fixed index corruption in UPDATE queries with non-existent attributes
Bug fixes
- fixed #1848, infixes and morphology clash
- fixed #1823, indextool fails to handle indexes with lemmatizer morphology
- fixed #1799, crash in queries to distributed indexes with GROUP BY on multiple values
- fixed #1718, expand_keywords option lost in disk chunks of RT indexes
- fixed a network protocol issue which resulted in libmysqlclient timeouts on big Sphinx responses
Bug fixes
- fixed #1778, indexes with more than 255 attributes
- fixed #1777, ORDER BY WEIGHT()
- fixed #1796, missing results in queries with quorum operator on indexes with some lemmatizer
- fixed #1780, incorrect results while querying indexes with wordforms, some lemmatizer and enable_star=1
- fixed the --with-re2 check
Bug fixes
- fixed #1753, path to re2 sources could not be set using --with-re2; added --with-re2-libs and --with-re2-includes options to configure
- fixed #1739, erroneous conversion of RAM chunk into disk chunk when loading an id32 index with an id64 binary
- fixed #1738, unlinking RAM chunk when converting it to disk chunk
- fixed #1710, unable to filter by attributes created by index_field_lengths=1
- fixed a crash while querying an index with lemmatizer and wordforms
New features
- added FLUSH RAMCHUNK statement
- added SHOW PLAN statement
- added support for GROUP BY on multiple attributes
- added BM25F() function to SELECT expressions (now works with the expression based ranker)
Optimizations
- optimized JSON attributes (up to 5-20% faster SELECTs using JSON objects)
- optimized xmlpipe2 indexing (up to 9 times faster on some schemas)
Bug fixes
- fixed #1684, COUNT(DISTINCT smth) with implicit GROUP BY returns the correct value now
- fixed #1672, exact token AOT vs lemma (indexer skips exact form of token that passed AOT through tokenizer)
- fixed #1659, failure while loading an empty infix dictionary with dict=keywords
- fixed #1638, force explicit JSON type conversion for aggregate functions
- fixed TOP_COUNT usage in misc/suggest and updated it to PHP 5.3 and UTF-8
Major new features
- added query profiling (SET PROFILING=1 and SHOW PROFILE statements)
- added an AOT-based Russian lemmatizer (morphology={lemmatize_ru | lemmatize_ru_all}, lemmatizer_base, and lemmatizer_cache directives)
- added wordbreaker, a tool to split compounds into individual words
- added JSON attributes support (sql_attr_json, on_json_attr_error, json_autoconv_numbers, json_autoconv_keynames directives)
- added wildcards support to dict=keywords (e.g. "t?st*")
- added substring search support (min_infix_len=2 and above) to dict=keywords (see the configuration sketch below)
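A minimal sphinx.conf sketch combining several of the features above (keywords dictionary with substring search, Russian lemmatization); the index name, source, and paths are hypothetical, and lemmatizer_base is assumed to sit in the common section:

common
{
    lemmatizer_base = /usr/local/share/sphinx/dicts
}

index myindex
{
    source        = mysrc
    path          = /var/data/myindex
    dict          = keywords
    min_infix_len = 2
    morphology    = lemmatize_ru_all
}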
New features
- added a --checkconfig switch to indextool to check the config file for correctness (bug #1395)
- added global IDF support (global_idf directive, OPTION global_idf)
- added "term1 term2 term3"/0.5 quorum fraction syntax (bug #1372; see the example after this list)
- added an option to apply stopwords before morphology, the stopwords_unstemmed directive
- added support for up to 255 keywords in the quorum operator (bug #1030)
- added multi-threaded agent querying (bug #1000)
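A brief SphinxQL example of the quorum fraction syntax and per-query global IDF; the index name and terms are placeholders:

SELECT id FROM myindex WHERE MATCH('"term1 term2 term3"/0.5') OPTION global_idf=1;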
New SphinxQL features
- added SHOW INDEX indexname STATUS statement
- added LIKE clause support to multiple SHOW xxx statements
- added SNIPPET() function
- added GROUP_CONCAT() aggregate function
- added SHOW VARIABLES WHERE variable_name='xxx' syntax
- added TRUNCATE RTINDEX statement (see the examples below)
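A few of the statements above in SphinxQL form; the index names (myindex, rt) and the variable name are placeholders:

SHOW INDEX myindex STATUS;
SHOW VARIABLES WHERE variable_name='autocommit';
SHOW TABLES LIKE 'my%';
TRUNCATE RTINDEX rt;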
Major behavior changes and optimizations
- changed that UDFs are now allowed in fork/prefork modes via the sphinxql_state startup script
- changed that compat_sphinxql_magics now defaults to 0
- changed that small enough exceptions, wordforms, and stopwords files are now embedded into the index header
- changed that rt_mem_limit can now be over 2 GB (bug #1059)
- optimized filtering and scan in several frequent cases (single-value, 2-arg, 3-arg WHERE clauses)
Bug fixes
- fixed #1778, SENTENCE and PARAGRAPH operators and infix stars clash
- fixed #1774, stack overflow on parsing large expressions
- fixed #1744, daemon failed to write to log files bigger than 4 GB
- fixed #1705, expression ranker handling of indexes with more than 32 fields
- fixed rt_flush_period, a less strict internal check and more frequent flushes overall
Bug fixes
- fixed #1655, special characters like ()?* were not processed correctly by exceptions
- fixed #1651, CREATE FUNCTION can now be used with the BIGINT return type
- fixed #1649, incorrect warning message (about statistics mismatch) was returned when mixing wildcards and regular keywords
- fixed #1603, passing MVA64 arguments to non-MVA functions caused unpredicted behavior and crashes (now explicitly forbidden)
- added a warning for missing stopwords, exceptions, and wordforms files on index load and in indextool --check
Bug fixes
- fixed #1515, log strings over 2 KB were clipped when query_log_format=plain
- fixed #1514, RT index disk chunk lost attribute updates on daemon restart
- fixed #1512, crash while formatting log messages
- fixed #1511, crash on indexing a PostgreSQL data source with MVA attributes
- fixed #1405, BETWEEN with mixed int and float values
Bug fixes
- fixed #1475, memory leak in the expression parser
- fixed #1457, error messages over 2 KB were clipped
- fixed #1454, searchd did not display an error message when the binlog path did not exist
- fixed #1441, SHOW META in a query batch was returning the last non-batch error
- added more debug info about failed index loading
Bug fixes
- fixed #1322, J connector seems to be broken in rel20, but works in trunk
- fixed #1321, 'set names utf8' passes, but 'set names utf-8' does not because of the syntax error on '-'
- fixed #1318, unhandled float comparison operators in filters
- fixed #1317, FD leaks on threaded seamless rotation
- fixed x64 configurations for libstemmer
Bug fixes
- fixed #1258, xmlpipe2 refused to index indexes with docinfo=inline
- fixed #1257, legacy groupby modes vs dist_threads could occasionally return wrong search results (race condition)
- fixed #1253, missing single-word query performance optimization (simplified ranker) vs prefix-expanded keywords vs dict=keywords
- fixed #1252, COUNT(*) vs dist_threads could occasionally crash (race condition)
- fixed missing command-line switches documentation
Bug fixes
- fixed #605, pack vs mysql compress
- fixed #783, #862, #917, #985, #990, #1032 documentation bugs
- fixed #885, bitwise AND/OR were not available via API
- fixed #984, crash on indexing data with the MAGIC_CODE_ZONE symbol
- fixed #1120, negative total_found, docs and hits counters on huge indexes
Bug fixes
- fixed #1031, SphinxQL parsing syntax for MVA in INSERT and REPLACE statements
- fixed #1027, stalls on attribute update under high-concurrency load
- fixed #1026, daemon crash on malformed API command
- fixed #1021, max_children option was ignored with workers=threads
- fixed crash log for 'fork' and 'prefork' workers
Major new features
- added keywords dictionary (dict=keywords) support to RT indexes
- added MVA, index_exact_words support to RT indexes (#888)
- added MVA64 (a set of BIGINTs) support to both disk and RT indexes (rt_attr_multi_64 directive)
- added an expression-based ranker, and a number of new ranking factors
- added WHERE clause support to the UPDATE statement
- added bigint, float, and MVA attribute support to the UPDATE statement (see the example below)
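A hedged sketch of the extended UPDATE statement; the index, attribute names, and values are hypothetical (tags is assumed to be an MVA attribute):

UPDATE myindex SET price=9.99, big_id=12345678901, tags=(1,2,3) WHERE group_id=5 AND id<1000;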
New features
- added support for up to 256 searchable fields (was up to 32 before)
- added FIBONACCI() function to expressions
- added load_files_scattered option to snippets
- added implicit attribute type promotions in multi-index result sets (#939)
- improved sentence extraction (handles salutations and starting initials better now)
- changed max_filter_values sanity check to 10M values
New SphinxQL features
- added FLUSH RTINDEX statement
- added dist_threads directive (parallel processing), load_files, load_files_scattered, and batch syntax (multiple documents) support to the CALL SNIPPETS statement (see the example below)
- added OPTION comment='...' support to the SELECT statement (#944)
- added SHOW VARIABLES statement
- added complete SphinxQL error logging (all errors are logged now, not just SELECTs)
- improved SELECT statement syntax, made expression aliases optional
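A hedged illustration of the CALL SNIPPETS batch syntax and the SELECT comment option; the file names, index name, and option values are illustrative only:

CALL SNIPPETS (('doc1.txt', 'doc2.txt'), 'myindex', 'test query', 1 AS load_files);
SELECT id FROM myindex WHERE MATCH('test') OPTION comment='monitoring probe';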
Bug fixes
- fixed #982, empty binlogs prevented an upgraded daemon from starting up
- fixed #978, libsphinxclient build failed on sparc/sparc64 Solaris
- fixed #977, eliminated (most) compiler warnings
- fixed #969, broken expression MVA/string argument type check prevented IF(IN(mva..)) and other valid expressions from working
- fixed that field/zone conditions were not propagated to expanded keywords with dict=keywords
New general features
- added remapping support to the blend_chars directive
- added multi-threaded snippet batches support (requires a batch sent via API, dist_threads, and load_files)
- added collations (collation_server, collation_libc_locale directives)
- added support for sorting and grouping on string attributes (ORDER BY, GROUP BY, WITHIN GROUP ORDER BY)
- added id32 index support in id64 binaries (EXPERIMENTAL)
- added SphinxSE support for DELETE and REPLACE on SphinxQL tables
New SphinxQL features
- added new, more SQL compliant SphinxQL syntax, and a compat_sphinxql_magics directive
- added CRC32(), DAY(), MONTH(), YEAR(), YEARMONTH(), YEARMONTHDAY() functions
- added reverse_scan=(0|1) option to SELECT
- added SphinxQL multi-query support
- added DESCRIBE, SHOW TABLES statements
New command-line switches
- added --print-queries switch to indexer that dumps SQL queries it runs
- added --sighup-each switch to indexer that rotates indexes one by one
- added --strip-path switch to searchd that skips file paths embedded in the index(-es)
- added --dumpconfig switch to indextool that dumps an index header in sphinx.conf format
Major changes and optimizations
- changed default preopen_indexes value to 1
- optimized English stemmer (results in 1.3x faster snippets and indexing with morphology=stem_en)
- optimized snippets, 1.6x general speedup
- optimized const-list parsing in SphinxQL
- optimized full-document highlighting CPU/RAM use
- optimized binlog replay (improved performance on K-list update)
Bug fixes
- fixed #767, joined fields vs ODBC sources
- fixed #757, wordforms shared by indexes with different settings
- fixed #733, loading of indexes in formats prior to v.14
- fixed #763, occasional snippets failures
- fixed default ID range (that filtered out all 64-bit values) in Java and Python APIs
Indexing
- added support for 64-bit document and keyword IDs, and an --enable-id64 switch to configure
- added support for floating point attributes
- added support for bitfields in attributes, the sql_attr_bool directive and a bit-width part in the sql_attr_uint directive
- added support for multi-valued attributes (MVA; see the source sketch below)
- improved ordinals sorting; now runs in fixed RAM
- improved handling of documents with zero/NULL ids, now skipping them instead of aborting
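A hedged sphinx.conf source sketch of the attribute types above; the column names and the 9-bit width are illustrative (the bit-width syntax is attrname:bits):

source mysrc
{
    # SQL connection settings omitted for brevity
    sql_attr_bool  = is_deleted
    sql_attr_float = price
    sql_attr_uint  = forum_id:9
    sql_attr_multi = uint tag from field
}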
Search daemon
- added an option to unlink old index files on successful rotation, the unlink_old directive
- added an option to keep index files open at all times (fixes subtle races on rotation), the preopen and preopen_indexes directives
- added an option to profile searchd disk I/O, the --iostats command-line option
- added an option to rotate indexes seamlessly (fully avoids query stalls), the seamless_rotate directive
- added Windows --rotate support
- improved log timestamping, now with millisecond precision
Querying
- added extended engine V2 (faster, cleaner, better; SPH_MATCH_EXTENDED2 mode)
- added ranking modes support (V2 engine only; SetRankingMode() API call)
- added quorum searching support to the query language (V2 engine only; example: "any three of all these words"/3; see also the SphinxQL example below)
- added query escaping support to the query language, and the EscapeString() API call
- added an optional limit on query time, the SetMaxQueryTime() API call
- added an optional limit on found matches count (4th argument to the SetLimits() API call, the so-called 'cutoff')
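The quorum syntax above is part of the extended query language, so it can also be used verbatim inside MATCH() in present-day SphinxQL; the index name is a placeholder:

SELECT id FROM myindex WHERE MATCH('"any three of all these words"/3');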
APIs and SphinxSE
- added pure C API (libsphinxclient)
- added Ruby API (thanks to Dmytro Shteflyuk)
- added Java API
- added SphinxSE support for MVAs (use varchar), floats (use float), 64-bit docids (use bigint)
- added SphinxSE options "floatrange", "geoanchor", "fieldweights", "indexweights", "maxquerytime", "comment", "host" and "port"; and support for "expr:CLAUSE"
- improved SphinxSE max query size (using MySQL condition pushdown), up to 256K now
General
- added scripting (shebang syntax) support to config files (example: #!/usr/bin/php in the first line)
- added unified config handling and validation to all programs
- added unified documentation
- added .spec file for RPM builds

diff --git a/doc/sphinx.txt b/doc/sphinx.txt
index 69d51bb5..af9d81e1 100644
--- a/doc/sphinx.txt
+++ b/doc/sphinx.txt
@@ -1,5 +1,5 @@
-Sphinx 2.3.1-beta reference manual
-==================================
+Sphinx 2.3.2-dev reference manual
+=================================
 Free open-source SQL full-text search engine
 ============================================
diff --git a/doc/sphinx.xml b/doc/sphinx.xml
index 8f4fa105..f047e63e 100644
--- a/doc/sphinx.xml
+++ b/doc/sphinx.xml
@@ -5,7 +5,7 @@
 ]>
-Sphinx 2.3.1-beta reference manual
+Sphinx 2.3.2-dev reference manual
 Free open-source SQL full-text search engine
diff --git a/mysqlse/ha_sphinx.cc b/mysqlse/ha_sphinx.cc
index 505309d3..271f29e0 100644
--- a/mysqlse/ha_sphinx.cc
+++ b/mysqlse/ha_sphinx.cc
@@ -154,7 +154,7 @@ void sphUnalignedWrite ( void * pPtr, const T & tVal )
 #define SPHINXSE_MAX_ALLOC (16*1024*1024)
 #define SPHINXSE_MAX_KEYWORDSTATS 4096
-#define SPHINXSE_VERSION "2.3.1-beta"
+#define SPHINXSE_VERSION "2.3.2-dev"
 // FIXME? the following is cut-n-paste from sphinx.h and searchd.cpp
 // cut-n-paste is somewhat simpler that adding dependencies however..
diff --git a/src/sphinx.h b/src/sphinx.h
index bc1d429f..0e892312 100644
--- a/src/sphinx.h
+++ b/src/sphinx.h
@@ -196,10 +196,10 @@ inline const DWORD * STATIC2DOCINFO ( const DWORD * pAttrs ) { return STATIC2DOC
 #include "sphinxversion.h"
 #ifndef SPHINX_TAG
-#define SPHINX_TAG "-beta"
+#define SPHINX_TAG "-dev"
 #endif
-#define SPHINX_VERSION "2.3.1" SPHINX_BITS_TAG SPHINX_TAG " (" SPH_SVN_TAGREV ")"
+#define SPHINX_VERSION "2.3.2" SPHINX_BITS_TAG SPHINX_TAG " (" SPH_SVN_TAGREV ")"
 #define SPHINX_BANNER "Sphinx " SPHINX_VERSION "\nCopyright (c) 2001-2015, Andrew Aksyonoff\nCopyright (c) 2008-2015, Sphinx Technologies Inc (http://sphinxsearch.com)\n\n"
 #define SPHINX_SEARCHD_PROTO 1
 #define SPHINX_CLIENT_VERSION 1
- Sphinx 2.3.1-beta reference manual +Sphinx 2.3.2-dev reference manual Free open-source SQL full-text search engine diff --git a/mysqlse/ha_sphinx.cc b/mysqlse/ha_sphinx.cc index 505309d3..271f29e0 100644 --- a/mysqlse/ha_sphinx.cc +++ b/mysqlse/ha_sphinx.cc @@ -154,7 +154,7 @@ void sphUnalignedWrite ( void * pPtr, const T & tVal ) #define SPHINXSE_MAX_ALLOC (16*1024*1024) #define SPHINXSE_MAX_KEYWORDSTATS 4096 -#define SPHINXSE_VERSION "2.3.1-beta" +#define SPHINXSE_VERSION "2.3.2-dev" // FIXME? the following is cut-n-paste from sphinx.h and searchd.cpp // cut-n-paste is somewhat simpler that adding dependencies however.. diff --git a/src/sphinx.h b/src/sphinx.h index bc1d429f..0e892312 100644 --- a/src/sphinx.h +++ b/src/sphinx.h @@ -196,10 +196,10 @@ inline const DWORD * STATIC2DOCINFO ( const DWORD * pAttrs ) { return STATIC2DOC #include "sphinxversion.h" #ifndef SPHINX_TAG -#define SPHINX_TAG "-beta" +#define SPHINX_TAG "-dev" #endif -#define SPHINX_VERSION "2.3.1" SPHINX_BITS_TAG SPHINX_TAG " (" SPH_SVN_TAGREV ")" +#define SPHINX_VERSION "2.3.2" SPHINX_BITS_TAG SPHINX_TAG " (" SPH_SVN_TAGREV ")" #define SPHINX_BANNER "Sphinx " SPHINX_VERSION "\nCopyright (c) 2001-2015, Andrew Aksyonoff\nCopyright (c) 2008-2015, Sphinx Technologies Inc (http://sphinxsearch.com)\n\n" #define SPHINX_SEARCHD_PROTO 1 #define SPHINX_CLIENT_VERSION 1