Thread pool settings
OpenSearch uses several thread pools to manage memory consumption and handle different types of operations efficiently. Thread pools can be configured to optimize performance based on your cluster’s workload patterns.
To learn more about static and dynamic settings, see Configuring OpenSearch.
Node processor settings
OpenSearch automatically detects the number of available processors and configures thread pools accordingly. You can override this detection:
node.processors (Static, integer): Explicitly sets the number of processors that OpenSearch uses for thread pool sizing calculations. This is useful when running multiple OpenSearch instances on the same host or when automatic processor detection is incorrect. When set, thread pool sizes are calculated based on this value instead of the detected processor count. Default is the number of automatically detected processors.
Thread pool types
OpenSearch supports the following thread pool types. Each type supports different parameters.
Fixed thread pools
Fixed thread pools maintain a constant number of threads and use a queue for pending requests.
OpenSearch supports the following fixed thread pools:
- get: For document retrieval operations
- analyze: For Analyze API requests
- write: For indexing, deletion, update, and bulk operations
- force_merge: For force merge operations
- search: For search operations
- search_throttled: For throttled search operations
Fixed thread pools support the following settings:
- thread_pool.<pool_name>.size (Static, integer): Sets the number of threads in the thread pool. The thread count remains constant regardless of workload.
- thread_pool.<pool_name>.queue_size (Static, integer): Controls the size of the queue for pending requests when all threads are busy. Set to -1 for an unbounded queue. When the queue is full, new requests are rejected. Default varies by thread pool type.
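As a rough illustration of how fixed-pool defaults relate to the processor count, the following sketch mirrors the commonly documented defaults for the write and search pools. The formulas are assumptions based on recent OpenSearch versions; check the defaults for your version.

```python
# Sketch of how fixed thread pool defaults are typically derived from the
# allocated processor count. The formulas below are assumptions modeled on
# the documented write and search pool defaults, not the actual source code.

def default_write_pool(processors: int) -> dict:
    # write pool: one thread per allocated processor, queue of 10000
    return {"size": processors, "queue_size": 10000}

def default_search_pool(processors: int) -> dict:
    # search pool: (processors * 3 / 2) + 1 threads, queue of 1000
    return {"size": (processors * 3) // 2 + 1, "queue_size": 1000}

print(default_search_pool(8))  # → {'size': 13, 'queue_size': 1000}
```

With 8 processors, the search pool would default to 13 threads under this formula, which is why overriding node.processors changes pool sizes across the board.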
Scaling thread pools
Scaling thread pools dynamically adjust the number of threads based on workload.
OpenSearch supports the following scaling thread pools:
- generic: For general background operations, such as node discovery
- snapshot: For snapshot and restore operations
- warmer: For index warming operations
- refresh: For index refresh operations
- flush: For flush and fsync operations
- management: For cluster management operations
- fetch_shard_started: For shard state operations
- fetch_shard_store: For shard store operations
Scaling thread pools support the following settings:
- thread_pool.<pool_name>.core (Static, integer): Sets the minimum number of threads to keep in the pool, even when idle.
- thread_pool.<pool_name>.max (Static, integer): Sets the maximum number of threads that can be created in the pool.
- thread_pool.<pool_name>.keep_alive (Static, time unit): Determines how long idle threads are kept in the pool before being terminated. Threads above the core size are terminated after this period of inactivity.
Fork-join thread pool
Introduced 3.2
Fork-join thread pools use the Java ForkJoinPool to provide efficient parallelism for workloads that benefit from work stealing and task splitting. This is useful for compute-intensive operations. In OpenSearch, fork-join thread pools support features that rely on parallel computation, such as the OpenSearch jVector plugin, which accelerates index builds.
Fork-join thread pools support the following settings:
- thread_pool.<pool_name>.parallelism (Static, integer): Sets the target parallelism level (number of worker threads) for the pool. Typically, this value matches the number of available processors but can be tuned for specific workloads.
- thread_pool.<pool_name>.async_mode (Static, Boolean): If set to true, uses asynchronous mode for fork-join pool scheduling.
- thread_pool.<pool_name>.queue_size (Static, integer): Sets the size of the submission queue for tasks. Set to -1 for an unbounded queue.
Example configurations
To configure a fixed thread pool, update the configuration file as follows:
thread_pool:
write:
size: 30
queue_size: 1000
To configure a scaling thread pool, update the configuration file as follows:
thread_pool:
warmer:
core: 1
max: 8
keep_alive: 2m
To configure a fork-join thread pool, update the configuration file as follows:
thread_pool:
fork_join:
parallelism: 8
To set a custom processor count, update the configuration file as follows:
node.processors: 8
Thread pool timing settings
OpenSearch supports the following thread pool timing settings:
thread_pool.estimated_time_interval (Static, time unit): Sets the time interval for updating the cached time values used by thread pools and other time-sensitive operations. This setting controls how frequently OpenSearch refreshes its internal time cache, reducing the overhead of frequent system time calls. A smaller interval provides more accurate time measurements but increases CPU overhead from more frequent updates. Setting this to 0 disables caching and calls system time directly on each request (typically used for testing). Default is 200ms. Minimum is 0ms.
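For example, to shorten the cached-time refresh interval, you could add the following to opensearch.yml (100ms is an arbitrary illustrative value, not a recommendation):

```yml
thread_pool.estimated_time_interval: 100ms
```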
Cluster-level thread pool settings
OpenSearch supports cluster-level dynamic settings that allow you to override thread pool configurations across all nodes in the cluster:
- cluster.thread_pool.generic.max (Dynamic, integer): Sets the maximum size of the generic thread pool across all nodes in the cluster. This overrides the thread pool configuration specified in opensearch.yml. The generic thread pool handles lightweight operations and background tasks. Use this setting to adjust thread pool sizes dynamically without restarting nodes.
- cluster.thread_pool.snapshot.max (Dynamic, integer): Sets the maximum size of the snapshot thread pool across all nodes in the cluster. This overrides the default thread pool configuration for snapshot operations. The snapshot thread pool handles snapshot creation and restoration. Use this setting to adjust snapshot concurrency during high-load periods.
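Because these settings are dynamic, you can update them at runtime through the Cluster Settings API. The following request is a sketch; the value 8 is illustrative:

```json
PUT _cluster/settings
{
  "persistent": {
    "cluster.thread_pool.generic.max": 8
  }
}
```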
Best practices
- Monitor thread pool usage: Use the Nodes Stats API to monitor thread pool metrics.
- Avoid over-provisioning: Setting thread pool sizes too high can lead to memory pressure and context switching overhead.
- Consider workload patterns: Adjust thread pool sizes based on your cluster’s specific read/write patterns.
- Test configuration changes: Always test thread pool modifications in a non-production environment first.
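The Nodes Stats API mentioned above reports per-pool metrics such as active threads, queue depth, and rejected requests. For example:

```json
GET _nodes/stats/thread_pool
```

A sustained, nonzero rejected count for a pool is usually the signal that its size or queue_size needs tuning.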