From c1d192f511a5d8fd91a7f846e9761f31b3ff990b Mon Sep 17 00:00:00 2001
From: Zezhi Shao <864453277@qq.com>
Date: Tue, 3 Sep 2024 16:25:57 +0800
Subject: [PATCH] temp cmt

---
 tutorial/scaler_design.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tutorial/scaler_design.md b/tutorial/scaler_design.md
index 185313d..a4fced2 100644
--- a/tutorial/scaler_design.md
+++ b/tutorial/scaler_design.md
@@ -14,7 +14,7 @@ For example, a Z-Score Scaler reads the raw data and computes the mean and stand
 
 The Scaler functions after the data is extracted from the dataset. The data is first normalized by the Scaler before being passed to the model for training. After the model processes the data, the Scaler denormalizes the output before it is passed to the runner for loss calculation and metric evaluation.
 
-> **Note:**
+> [!IMPORTANT]
 > In traditional time series analysis, normalization often occurs during data preprocessing, as was the case in earlier versions of BasicTS. However, this approach is not scalable. Adjustments like changing input/output lengths, applying different normalization methods (e.g., individual normalization for each time series), or altering the training/validation/test split ratios would require re-preprocessing the data. To overcome this limitation, BasicTS adopts an "instant normalization" approach, where data is normalized each time it is extracted.
 
 ```python
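
The hunk context above describes the Z-Score Scaler and BasicTS's "instant normalization" flow (fit statistics once, normalize each sample as it is extracted, denormalize model output before loss/metric computation). A minimal sketch of that idea follows; the class and method names (`ZScoreScaler`, `transform`, `inverse_transform`) are illustrative assumptions, not the actual BasicTS API:

```python
# Illustrative sketch of "instant normalization"; not the BasicTS implementation.
import numpy as np

class ZScoreScaler:
    def __init__(self, train_data: np.ndarray):
        # Statistics are computed once, from the training split only.
        # Changing input/output lengths or split ratios then only means
        # re-fitting the scaler, not re-preprocessing the stored raw data.
        self.mean = train_data.mean()
        self.std = train_data.std()

    def transform(self, x: np.ndarray) -> np.ndarray:
        # Applied each time a sample is extracted, before it reaches the model.
        return (x - self.mean) / self.std

    def inverse_transform(self, x: np.ndarray) -> np.ndarray:
        # Applied to the model output, before loss calculation and metrics.
        return x * self.std + self.mean
```

Because `inverse_transform` undoes `transform` exactly, losses and metrics are computed in the original data scale, which is what the runner expects.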