[Bugfix] [Tiered Caching] Make ehcache disk cache size setting be ByteSizeValue rather than long #13902

Open · wants to merge 5 commits into base: main
Changes from 3 commits
@@ -28,6 +28,7 @@
import org.opensearch.common.settings.Settings;
import org.opensearch.common.unit.TimeValue;
import org.opensearch.common.util.FeatureFlags;
import org.opensearch.core.common.unit.ByteSizeValue;
import org.opensearch.env.NodeEnvironment;
import org.opensearch.index.cache.request.RequestCacheStats;
import org.opensearch.index.query.QueryBuilders;
@@ -102,7 +103,7 @@ private Settings defaultSettings(long sizeInBytes, TimeValue expirationTime) {
EhcacheDiskCacheSettings.getSettingListForCacheType(CacheType.INDICES_REQUEST_CACHE)
.get(DISK_MAX_SIZE_IN_BYTES_KEY)
.getKey(),
-sizeInBytes
+new ByteSizeValue(sizeInBytes)
)
.put(
EhcacheDiskCacheSettings.getSettingListForCacheType(CacheType.INDICES_REQUEST_CACHE)
@@ -12,6 +12,7 @@
import org.opensearch.common.cache.CacheType;
import org.opensearch.common.settings.Setting;
import org.opensearch.common.unit.TimeValue;
import org.opensearch.core.common.unit.ByteSizeValue;

import java.util.HashMap;
import java.util.Map;
@@ -102,9 +103,9 @@ public class EhcacheDiskCacheSettings {
/**
* Disk cache max size setting.
*/
-public static final Setting.AffixSetting<Long> DISK_CACHE_MAX_SIZE_IN_BYTES_SETTING = Setting.suffixKeySetting(
+public static final Setting.AffixSetting<ByteSizeValue> DISK_CACHE_MAX_SIZE_IN_BYTES_SETTING = Setting.suffixKeySetting(
EhcacheDiskCache.EhcacheDiskCacheFactory.EHCACHE_DISK_CACHE_NAME + ".max_size_in_bytes",
-(key) -> Setting.longSetting(key, DEFAULT_CACHE_SIZE_IN_BYTES, NodeScope)
+(key) -> Setting.memorySizeSetting(key, new ByteSizeValue(DEFAULT_CACHE_SIZE_IN_BYTES), NodeScope)
Collaborator @jainankitk commented (Jul 22, 2024):

Will this cause any issue with the upgrade path, where older nodes have the setting as a long type while new nodes have it as a ByteSizeValue type? Probably not, given it is NodeScope, but I wanted to double-confirm this.

@sohami - Thinking again on this, it will break for old nodes. Once the setting is persisted as a memorySizeSetting in the cluster state, old nodes will not be able to apply the latest cluster state, and the cluster manager will kick those nodes out of the cluster.
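The failure mode described in this comment can be illustrated with a standalone sketch (hypothetical code, not OpenSearch's actual setting machinery): a node that still registers the setting as a long can only parse bare digits, so a unit-suffixed value rendered by a ByteSizeValue-typed setting cannot be applied by it.

```java
// Hypothetical illustration of the mixed-version failure mode: an older node
// that registers the setting as a long parses the raw value with Long.parseLong,
// so a value written with a unit suffix (as a ByteSizeValue renders it) is rejected.
public class UpgradeMismatch {

    // Old node's view: the setting is a plain long, digits only.
    static long parseAsOldNode(String persistedValue) {
        return Long.parseLong(persistedValue);
    }

    // Whether an old node could successfully apply the persisted value.
    static boolean oldNodeCanApply(String persistedValue) {
        try {
            parseAsOldNode(persistedValue);
            return true;
        } catch (NumberFormatException e) {
            return false;
        }
    }
}
```

Here "1048576" would still parse on an old node, while "1gb" would not, which is the incompatibility being flagged.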

Contributor (author) commented:

I've pushed a new change so that there's a new ByteSizeValue setting "max_size" rather than "max_size_in_bytes", which has been deprecated. If the new setting is present, it uses that value. If not, it looks for the deprecated setting and uses that. If neither is present, it uses the default value.

I'm not totally sure this complexity is worth it; it might just be better to keep the long setting as it was before. Let me know what you think @jainankitk @sohami
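The fallback order the author describes can be sketched roughly as follows. This is a hypothetical stand-in, not the PR's actual code: a plain Map substitutes for OpenSearch's Settings, the default value is a placeholder, and the minimal parser only mimics ByteSizeValue's unit handling.

```java
import java.util.Locale;
import java.util.Map;

// Hypothetical sketch of the described fallback: prefer the new "max_size"
// setting, fall back to the deprecated "max_size_in_bytes" setting, then to a
// default. Names and the Map-based settings stand-in are illustrative only.
public class MaxSizeResolver {

    // Placeholder default for illustration only (1gb).
    static final long DEFAULT_CACHE_SIZE_IN_BYTES = 1024L * 1024 * 1024;

    static long resolveMaxSizeInBytes(Map<String, String> settings) {
        String preferred = settings.get("max_size");           // new setting wins
        if (preferred != null) {
            return parseBytes(preferred);
        }
        String deprecated = settings.get("max_size_in_bytes"); // deprecated fallback
        if (deprecated != null) {
            return parseBytes(deprecated);
        }
        return DEFAULT_CACHE_SIZE_IN_BYTES;                    // neither present
    }

    // Minimal byte-size parser: "<number><unit>" with unit in {b, k, kb, m, mb, g, gb}.
    // A bare number with no unit is rejected, mirroring memory-size setting behavior.
    static long parseBytes(String value) {
        String v = value.trim().toLowerCase(Locale.ROOT);
        String[][] units = { { "kb", "1024" }, { "mb", "1048576" }, { "gb", "1073741824" },
                             { "k", "1024" }, { "m", "1048576" }, { "g", "1073741824" }, { "b", "1" } };
        for (String[] unit : units) {
            if (v.endsWith(unit[0])) {
                String digits = v.substring(0, v.length() - unit[0].length()).trim();
                return Long.parseLong(digits) * Long.parseLong(unit[1]);
            }
        }
        throw new IllegalArgumentException("failed to parse size, no unit in [" + value + "]");
    }
}
```

The deprecated key only being consulted when the new key is absent is what keeps existing node configurations working during the transition.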

);

/**
@@ -35,6 +35,7 @@
import org.opensearch.common.settings.Settings;
import org.opensearch.common.unit.TimeValue;
import org.opensearch.common.util.io.IOUtils;
import org.opensearch.core.common.unit.ByteSizeValue;

import java.io.File;
import java.io.IOException;
@@ -654,6 +655,11 @@ private V deserializeValue(ByteArrayWrapper binary) {
return valueSerializer.deserialize(binary.value);
}

// For testing
long getMaxWeightInBytes() {
return maxWeightInBytes;
}

/**
* Factory to create an ehcache disk cache.
*/
@@ -701,7 +707,7 @@ public <K, V> ICache<K, V> create(CacheConfig<K, V> config, CacheType cacheType,
.setWeigher(config.getWeigher())
.setRemovalListener(config.getRemovalListener())
.setExpireAfterAccess((TimeValue) settingList.get(DISK_CACHE_EXPIRE_AFTER_ACCESS_KEY).get(settings))
-.setMaximumWeightInBytes((Long) settingList.get(DISK_MAX_SIZE_IN_BYTES_KEY).get(settings))
+.setMaximumWeightInBytes(((ByteSizeValue) settingList.get(DISK_MAX_SIZE_IN_BYTES_KEY).get(settings)).getBytes())
.setSettings(settings)
.build();
}
@@ -28,6 +28,8 @@
import org.opensearch.core.common.bytes.BytesArray;
import org.opensearch.core.common.bytes.BytesReference;
import org.opensearch.core.common.bytes.CompositeBytesReference;
import org.opensearch.core.common.unit.ByteSizeUnit;
import org.opensearch.core.common.unit.ByteSizeValue;
import org.opensearch.env.NodeEnvironment;
import org.opensearch.test.OpenSearchSingleNodeTestCase;

@@ -128,7 +130,7 @@ public void testBasicGetAndPutUsingFactory() throws IOException {
EhcacheDiskCacheSettings.getSettingListForCacheType(CacheType.INDICES_REQUEST_CACHE)
.get(DISK_MAX_SIZE_IN_BYTES_KEY)
.getKey(),
-CACHE_SIZE_IN_BYTES
+new ByteSizeValue(CACHE_SIZE_IN_BYTES)
)
.put(
EhcacheDiskCacheSettings.getSettingListForCacheType(CacheType.INDICES_REQUEST_CACHE)
@@ -882,6 +884,107 @@ public void testStatsTrackingDisabled() throws Exception {
}
}

public void testCacheSizeSetting() throws Exception {
// Cache size setting should be a ByteSizeValue or parseable as bytes, not a long
MockRemovalListener<String, String> removalListener = new MockRemovalListener<>();
try (NodeEnvironment env = newNodeEnvironment(Settings.EMPTY)) {
// First try various valid options for the cache size setting
List<Settings> validSettings = new ArrayList<>();
List<Long> expectedCacheSizes = List.of(
(long) CACHE_SIZE_IN_BYTES,
new ByteSizeValue(10, ByteSizeUnit.GB).getBytes(),
new ByteSizeValue(1, ByteSizeUnit.GB).getBytes(),
new ByteSizeValue(50000000, ByteSizeUnit.BYTES).getBytes(),
EhcacheDiskCacheSettings.DEFAULT_CACHE_SIZE_IN_BYTES
); // The expected size of the cache produced by each of the settings in validSettings

// Should be able to pass a ByteSizeValue directly
validSettings.add(
getSettingsExceptSize(env).put(
EhcacheDiskCacheSettings.getSettingListForCacheType(CacheType.INDICES_REQUEST_CACHE)
.get(DISK_MAX_SIZE_IN_BYTES_KEY)
.getKey(),
new ByteSizeValue(CACHE_SIZE_IN_BYTES)
).build()
);

// Should also be able to pass strings which can be parsed as bytes
List<String> validSettingStrings = List.of("10GB", "1G", "50000000B");
for (String validString : validSettingStrings) {
validSettings.add(
getSettingsExceptSize(env).put(
EhcacheDiskCacheSettings.getSettingListForCacheType(CacheType.INDICES_REQUEST_CACHE)
.get(DISK_MAX_SIZE_IN_BYTES_KEY)
.getKey(),
validString
).build()
);
}

// Passing in settings missing a size value should give us the default
validSettings.add(getSettingsExceptSize(env).build());
assertEquals(validSettings.size(), expectedCacheSizes.size());

ICache.Factory ehcacheFactory = new EhcacheDiskCache.EhcacheDiskCacheFactory();

for (int i = 0; i < validSettings.size(); i++) {
ICache<String, String> ehcacheTest = ehcacheFactory.create(
new CacheConfig.Builder<String, String>().setValueType(String.class)
.setKeyType(String.class)
.setRemovalListener(removalListener)
.setKeySerializer(new StringSerializer())
.setValueSerializer(new StringSerializer())
.setDimensionNames(List.of(dimensionName))
.setWeigher(getWeigher())
.setSettings(validSettings.get(i))
.build(),
CacheType.INDICES_REQUEST_CACHE,
Map.of()
);
assertEquals((long) expectedCacheSizes.get(i), ((EhcacheDiskCache<String, String>) ehcacheTest).getMaxWeightInBytes());
ehcacheTest.close();
}

// Next try an invalid one and show we can't construct the disk cache
assertThrows(IllegalArgumentException.class, () -> {
ICache<String, String> ehcacheTest = ehcacheFactory.create(
new CacheConfig.Builder<String, String>().setValueType(String.class)
.setKeyType(String.class)
.setRemovalListener(removalListener)
.setKeySerializer(new StringSerializer())
.setValueSerializer(new StringSerializer())
.setDimensionNames(List.of(dimensionName))
.setWeigher(getWeigher())
.setSettings(
getSettingsExceptSize(env).put(
EhcacheDiskCacheSettings.getSettingListForCacheType(CacheType.INDICES_REQUEST_CACHE)
.get(DISK_MAX_SIZE_IN_BYTES_KEY)
.getKey(),
"1000000"
).build()
)
.build(),
CacheType.INDICES_REQUEST_CACHE,
Map.of()
);
});
}
}

private Settings.Builder getSettingsExceptSize(NodeEnvironment env) {
return Settings.builder()
.put(
EhcacheDiskCacheSettings.getSettingListForCacheType(CacheType.INDICES_REQUEST_CACHE).get(DISK_STORAGE_PATH_KEY).getKey(),
env.nodePaths()[0].indicesPath.toString() + "/request_cache"
)
.put(
EhcacheDiskCacheSettings.getSettingListForCacheType(CacheType.INDICES_REQUEST_CACHE)
.get(DISK_LISTENER_MODE_SYNC_KEY)
.getKey(),
true
);
}

private List<String> getRandomDimensions(List<String> dimensionNames) {
Random rand = Randomness.get();
int bound = 3;